From 205d9a0daaf47d6ffe0251d0a63512df5198f843 Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Sun, 19 Apr 2020 22:18:35 +0800 Subject: [PATCH 001/178] Update 20200401 The ins and outs of high-performance computing as a service.md --- ...high-performance computing as a service.md | 39 +++++++++++++++---- 1 file changed, 32 insertions(+), 7 deletions(-) diff --git a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md b/sources/talk/20200401 The ins and outs of high-performance computing as a service.md index 0b0a1ac39b..45d258e08c 100644 --- a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md +++ b/sources/talk/20200401 The ins and outs of high-performance computing as a service.md @@ -7,84 +7,109 @@ [#]: via: (https://www.networkworld.com/article/3534725/the-ins-and-outs-of-high-performance-computing-as-a-service.html) [#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/) -The ins and outs of high-performance computing as a service +The ins and outs of high-performance computing as a service 高性能计算即服务的来龙去脉 ====== HPC services can be a way to meet expanding supercomputing needs, but depending on the use case, they’re not necessarily better than on-premises supercomputers. Dell EMC - +HPC服务可以满足不断扩展的超级计算需求,但根据使用情况,它们不一定比本地超级计算机更好。 戴尔EMC Electronics on missiles and military helicopters need to survive extreme conditions. Before any of that physical hardware can be deployed, defense contractor McCormick Stevenson Corp. simulates the real-world conditions it will endure, relying on finite element analysis software like Ansys, which requires significant computing power. +导弹和军用直升机上的电子设备需要在极端条件下生存。 在部署任何物理硬件之前,国防承包商麦考密克·史蒂文森公司(McCormick Stevenson Corp.)都依赖于像Ansys这样的有限元素分析软件来模拟它会承受的现实条件,该软件需要强大的计算能力。 Then one day a few years ago, it unexpectedly ran up against its computing limits. - +几年前的一天,它出乎意料地超出了计算极限。 [10 of the world's fastest supercomputers][1] "We had some jobs that would have overwhelmed the computers that we had in office," says Mike Krawczyk, principal engineer at McCormick Stevenson. "It did not make economic or schedule sense to buy a machine and install software." Instead, the company contracted with Rescale, which could sell them cycles on a supercomputer-class system for a tiny fraction of what they would've spent on new hardware. +麦考密克·史蒂文森(McCormick Stevenson)的首席工程师迈克·克劳奇奇(Mike Krawczyk)说:“我们从事的某些工作会使我们在办公室使用的计算机不堪重负。” “购买机器并安装软件在经济上或计划上都不合理。”取而代之的是,该公司与Rescale签约,可以在超级计算机级系统上向他们出售自行车的周期,而这只花费了他们在新硬件上花费的一小部分。 McCormick Stevenson had become an early adopter in a market known as supercomputing as a service or high-performance computing (HPC) as a service – two terms that are closely related. HPC is the application of supercomputers to computationally complex problems, while supercomputers are those computers at the cutting edge of processing capacity, according to the National Institute for Computational Sciences. +麦考密克·史蒂文森(McCormick Stevenson)已成为市场上的早期采用者,该市场被称为超级计算即服务或高性能计算(HPC)即服务–这两个紧密相关的术语。根据国家计算科学研究所的说法,HPC是超级计算机在计算复杂问题上的应用,而超级计算机是处理能力最先进的那些计算机。 Whatever it's called, these services are upending the traditional supercomputing market and bringing HPC power to customers who could never afford it before. But it's no panacea, and it's definitely not plug-and-play – at least not yet. 
+无论以何种方式称呼,这些服务都在颠覆传统的超级计算市场,并将HPC功能带给以前买不起的客户。但这不是万能药,而且绝对不是即插即用的,至少现在还没有。 -### HPC services in practice +### HPC services in practice HPC服务实践 From the end user's perspective, HPC as a service resembles the batch-processing model that dates back to the early mainframe era. "We create an Ansys batch file and send that up, and after it runs, we pull down the result files and import them locally here," Krawczyk says. +从最终用户的角度来看,HPC即服务类似于可追溯到大型机早期的批处理模型。 “我们创建一个Ansys批处理文件并将其发送出去,然后运行它,我们将结果文件下拉并在此处本地导入,” Krawczyk说。 Behind the scenes, cloud providers are running the supercomputing infrastructure in their own data centers – though that doesn't necessarily imply the sort of cutting-edge hardware you might be visualizing when you hear "supercomputer." As Dave Turek, Vice President of Technical Computing at IBM OpenPOWER, explains it, HPC services at their core are "a collection of servers that are strung together with an interconnect. You have the ability to invoke this virtual computing infrastructure that allows you to bring a lot of different servers to work together in a parallel construct to solve the problem when you present it." - +在幕后,云提供商正在其自己的数据中心中运行超级计算基础结构,尽管这不一定意味着您在听到“超级计算机”时可能会看到的最先进的硬件。正如IBM OpenPOWER技术计算副总裁Dave Turek解释说的那样,HPC服务的核心是“由互连串在一起的服务器的集合。您可以调用此虚拟计算基础结构,使您能够当您提出问题时,许多不同的服务器可以并行构造在一起以解决问题。” [][2] Sounds simple in theory. But making it viable in practice required some chipping away at technical problems, according to Theo Lynn, Professor of Digital Business at Dublin City University. What differentiates ordinary computing from HPC is those interconnects – high-speed, low-latency, and expensive – so those needed to be brought to the world of cloud infrastructure. Storage performance and data transport also needed to be brought up to a level at least in the same ballpark as on-prem HPC before HPC services could be viable. +理论上听起来很简单。都柏林城市大学数字业务教授西奥·林恩(Theo Lynn)表示,但要使其在实践中可行,需要解决一些技术问题。普通计算与HPC的区别在于那些互连-高速,低延迟和昂贵-因此需要将这些互连引入云基础架构领域。在HPC服务可行之前,还至少需要将存储性能和数据传输提升到与本地HPC相同的水平。 But Lynn says that some of the innovations that have helped HPC services take off have been more institutional than technological. In particular, "we are now seeing more and more traditional HPC applications adopting cloud-friendly licensing models – a barrier to adoption in the past." +但是林恩说,一些帮助高性能计算服务起飞的创新比技术更具有制度性。特别是,“我们现在看到越来越多的传统HPC应用程序采用云友好型许可模式-过去是采用这种模式的障碍。” And the economics have also shifted the potential customer base, he says. "Cloud service providers have opened up the market more by targeting low-end HPC buyers who couldn’t afford the capex associated with traditional HPC and opening up the market to new users. As the markets open up, the hyperscale economic model becomes more and more feasible, costs start coming down." +他说,经济也改变了潜在的客户群。 “云服务提供商通过针对那些负担不起与传统HPC相关的资本支出的低端HPC买家,并向新用户开放市场,进一步开放了市场。随着市场的开放,超大规模经济模型变得越来越多,更可行,成本开始下降。” -Avoid on-premises CAPEX** +Avoid on-premises CAPEX** 避免内部资本支出** ** HPC services are attractive to private-sector customers in the same fields where traditional supercomputing has long held sway. These include sectors that rely heavily on complex mathematical modeling, including defense contractors like McCormick Stevenson, along with oil and gas companies, financial services firms, and biotech companies. Dublin City University's Lynn adds that loosely coupled workloads are a particularly good use case, which meant that many early adopters used it for 3D image rendering and related applications. 
+在传统超级计算长期占据主导地位的相同领域,HPC服务对私营部门客户具有吸引力。这些行业包括严重依赖复杂数学模型的行业,包括麦考密克·史蒂文森(McCormick Stevenson)等国防承包商,以及石油和天然气公司,金融服务公司和生物技术公司。都柏林城市大学的Lynn补充说,松散耦合的工作负载是一个特别好的用例,这意味着许多早期采用者将其用于3D图像渲染和相关应用程序。 But when does it make sense to consider HPC services over on-premises HPC? For hhpberlin, a German company that simulates smoke propagation in and fire damage to structural components of buildings, the move came as it outgrew its current resources. +但是,何时在本地HPC上考虑HPC服务才有意义?对于德国的hhpberlin公司,该公司模拟烟雾在建筑物中的传播和火灾对建筑物结构部件的破坏,此举是因为它超出了其现有资源。 "For several years, we had run our own small cluster with up to 80 processor cores," says Susanne Kilian, hhpberlin's scientific head of numerical simulation. "With the rise in application complexity, however, this constellation has increasingly proven to be inadequate; the available capacity was not always sufficient to handle projects promptly." +hhpberlin数值模拟的科学负责人Susanne Kilian说:“几年来,我们一直在运行自己的小型集群,该集群具有多达80个处理器内核。” “但是,随着应用程序复杂性的提高,这种架构已经越来越不足够;可用容​​量并不总是足够迅速地处理项目。” But just spending money on a new cluster wasn't an ideal solution, she says: "In view of the size and administrative environment of our company, the necessity of constant maintenance of this cluster (regular software and hardware upgrades) turned out to be impractical. Plus, the number of required simulation projects is subject to significant fluctuations, such that the utilization of the cluster was not really predictable. Typically, phases with very intensive use alternate with phases with little to no use." By moving to an HPC service model, hhpberlin shed that excess capacity and the need to pay up front for upgrades. +她说:“但是,仅仅花钱买一个新的集群并不是一个理想的解决方案:鉴于我们公司的规模和管理环境,持续维护该集群(定期进行软件和硬件升级)的必要性非常明显。另外,所需的模拟项目的数量会出现很大的波动,因此群集的使用情况并不是真正可预测的。通常,使用率很高的阶段与很少使用或不使用的阶段交替出现。”通过转换为HPC服务模式,hhpberlin消除了过剩的容量,无需支付升级费用。 IBM's Turek explains the calculus that different companies go through while assessing their needs. For a biosciences startup with 30 people, "you need computing, but you really can't afford to have 15% of your staff dedicated to it. It's just like you might also say you don't want to have on-staff legal representation, so you'll get that as a service as well." For a bigger company, though, it comes down to weighing the operational expense of an HPC service against the capacity expense of buying an in-house supercomputer or HPC cluster. +IBM的Turek解释了不同公司在评估其需求时所经历的计算过程。对于拥有30名员工的生物科学初创公司来说,“您需要计算,但您实在负担不起15%的员工专心致志。这就像您可能还说过,您不想拥有在职法律代表,因此您也可以将其作为服务获得。”但是,对于一家较大的公司而言,归结为权衡HPC服务的运营费用与购买内部超级计算机或HPC集群的容量费用。 So far, those are the same sorts of arguments you'd have over adopting any cloud service. But the opex vs. capex dilemma can be weighted towards the former by some of the specifics of the HPC market. Supercomputers aren't commodity hardware like storage or x86 servers; they're very expensive, and technological advances can swiftly render them obsolete. As McCormick Stevenson's Krawczyk puts it, "It's like buying a car: as soon as you drive off the lot it starts to depreciate." And for many companies –especially larger and less nimble ones – the process of buying a supercomputer can get hopelessly bogged down. "You're caught up in planning issues, building issues, construction issues, training issues, and then you have to execute an RFP," says IBM's Turek. "You have to work through the CIO. You have to work with your internal customers to make sure there's continuity of service. It's a very, very complex process and not something that a lot of institutions are really excellent at executing." 
+到目前为止,这些都是您采用任何云服务时都会遇到的相同类型的争论。但是,可以通过HPC市场的某些细节将运营支出与资本支出的困境加权为前者。超级计算机不是诸如存储或x86服务器之类的商用硬件;它们非常昂贵,技术进步会很快使其过时。正如麦考密克·史蒂文森(McCormick Stevenson)的克拉维奇(Krawczyk)所说,“这就像在买车:开车一走,它就会开始贬值。”对于许多公司,尤其是规模较大,灵活性较差的公司,购买超级计算机的过程可能会陷入无望的泥潭。 IBM的Turek说:“您陷入了计划问题,建筑问题,施工问题,培训问题,然后必须执行RFP。” “您必须通过CIO进行工作。您必须与内部客户合作以确保服务的连续性。这是一个非常非常复杂的过程,并不是很多机构在执行方面都非常出色。” Once you choose to go down the services route for HPC, you'll find you get many of the advantages you expect from cloud services, particularly the ability to pay only for HPC power when you need it, which results in an efficient use of resources. Chirag Dekate, Senior Director and Analyst at Gartner, says bursty workloads, when you have short-term needs for high-performance computing, are a key use case driving adoption of HPC  services. +选择了HPC的服务路线后,您会发现您将从云服务中获得了许多期望,特别是仅在需要时才需要为HPC功能付费的能力,从而可以有效利用资源。 Gartner高级总监兼分析师Chirag Dekate表示,当您对高性能计算有短期需求时,突发性工作负载是推动HPC服务采用的关键用例。 "In the manufacturing industry, you tend to have a high peak of HPC activity around the product design stage," he says. "But once the product is designed, HPC resources are less utilized during the rest of the product-development cycle." In contrast, he says, "when you have large, long-running jobs, the economics of the cloud wear down." +他说:“在制造业中,在产品设计阶段,HPC活动往往会达到很高的峰值。” “但是,一旦产品设计完成,在其余产品开发周期中,HPC资源的利用率就会降低。” 相比之下,他说:“当您拥有大量长期运行的工作时,云的经济就会逐渐减弱。” With clever system design, you can integrate those HPC-services bursts of activity with your own in-house conventional computing. Teresa Tung, managing director in Accenture Labs, gives an example: "Accessing HPC via APIs makes it seamless to mix with traditional computing. A traditional AI pipeline might have its training done on a high-end supercomputer at the stage when the model is being developed, but then the resulting trained model that runs predictions over and over would be deployed on other services in the cloud or even devices at the edge." +通过巧妙的系统设计,您可以将这些HPC服务突发事件与您自己的内部常规计算集成在一起。 埃森哲实验室常务董事董德丽举了一个例子:“通过API访问HPC可以无缝地与传统计算混合。在模型构建阶段,传统的AI管道可能会在高端超级计算机上进行培训。 开发出来的软件,但是最终生成的经过反复训练的模型将部署在云中的其他服务上,甚至部署在边缘设备上。” ### It's not for all use cases** ** Use of HPC services lends itself to batch-processing and loosely-coupled use cases. That ties into a common HPC downside: data transfer issues. High-performance computing by its very nature often involves huge data sets, and sending all that information over the internet to a cloud service provider is no simple thing. "We have clients I talk to in the biotech industry who spend $10 million a month on just the data charges," says IBM's Turek. +HPC服务的使用适合批处理和松散耦合的用例。这与HPC的常见缺点有关:数据传输问题。从本质上讲,高性能计算通常涉及庞大的数据集,而将所有这些信息通过Internet发送到云服务提供商并不是一件容易的事。 IBM的Turek说:“我们与生物技术行业的客户交流,他们每月仅在数据费用上就花费1000万美元。” And money isn't the only potential problem. Building a workflow that makes use of your data can challenge you to work around the long times required for data transfer. "When we had our own HPC cluster, local access to the simulation results already produced – and thus an interactive interim evaluation — was of course possible at any time," says hhpberlin's Kilian. "We're currently working on being able to access and evaluate the data produced in the cloud even more efficiently and interactively at any desired time of the simulation without the need to download large amounts of simulation data." 
+金钱并不是唯一的潜在问题。在构建使用您的数据的工作流时,您可能不得不想办法绕开漫长的数据传输时间。hhpberlin 的 Kilian 说:“当我们拥有自己的 HPC 集群时,当然可以随时在本地访问已经产生的仿真结果,从而进行交互式的中间评估。”“我们目前正在努力做到:在模拟过程中的任意时刻,都能更高效、更交互地访问和评估云中生成的数据,而无需下载大量的模拟数据。”

Mike Krawczyk cites another stumbling block: compliance issues. Any service a defense contractor uses needs to be complaint with the International Traffic in Arms Regulations (ITAR), and McCormick Stevenson went with Rescale in part because it was the only vendor they found that checked that box. While more do today, any company looking to use cloud services should be aware of the legal and data-protection issues involved in living on someone else's infrastructure, and the sensitive nature of many of HPC's use cases makes this doubly true for HPC as a service.
+Mike Krawczyk 提到了另一个绊脚石:合规性问题。国防承包商使用的任何服务都必须符合《国际武器贸易条例》(ITAR)的要求,麦考密克·史蒂文森(McCormick Stevenson)之所以选择 Rescale,部分原因是它是他们当时找到的唯一满足这一要求的供应商。如今这样的供应商虽然多了一些,但任何想使用云服务的公司都应该意识到,把业务放在别人的基础设施上会涉及法律和数据保护问题,而许多 HPC 用例的敏感性使得这一点对 HPC 即服务来说尤为重要。

In addition, the IT governance that HPC services require goes beyond regulatory needs. For instance, you'll need to keep track of whether your software licenses permit cloud use – especially with specialized software packages written to run on an on-premises HPC cluster. And in general, you need to keep track of how you use HPC services, which can be a tempting resource, especially if you've transitioned from in-house systems where staff was used to having idle HPC capabilities available. For instance, Ron Gilpin, senior director and Azure Platform Services global lead at Avanade, suggests dialing back how many processing cores you use for tasks that aren't time sensitive. "If a job only needs to be completed in an hour instead of ten minutes," he says, "that might use 165 processors instead of 1,000, a savings of thousands of dollars."
+此外,HPC 服务所需的 IT 治理不止于满足监管要求。例如,您需要搞清楚您的软件许可证是否允许在云中使用——尤其是那些专门为本地 HPC 集群编写的软件包。总的来说,您需要跟踪自己是如何使用 HPC 服务的,因为它是一种很有诱惑力的资源,尤其是当您刚从内部系统过渡过来时——在内部系统中,员工已经习惯了随时都有空闲的 HPC 算力可用。例如,Avanade 高级总监兼 Azure 平台服务全球负责人 Ron Gilpin 建议,对时间不敏感的任务可以调低所使用的处理核心数量。他说:“如果一项工作只需要在一个小时而不是十分钟内完成,那么它可以只用 165 个处理器而不是 1000 个,从而节省数千美元。”

### A premium on HPC skills**

**

One of the biggest barriers to HPC adoption has always been the unique in-house skills it requires, and HPC services don't magically make that barrier vanish. "Many CIOs have migrated a lot of their workloads into the cloud and they have seen cost savings and increased agility and efficiency, and believe that they can achieve similar results in HPC ecosystems," says Gartner's Dekate. "And a common misperception is that they can somehow optimize human resource cost by essentially moving away from system admins and hiring new cloud experts who can solve their HPC workloads."
+一直以来,采用 HPC 的最大障碍之一就是它所需要的独特的内部技能,而 HPC 服务并不能神奇地让这种障碍消失。Gartner 的 Dekate 表示:“许多 CIO 已经把大量工作负载迁移到了云中,看到了成本的节省、敏捷性和效率的提升,因此相信在 HPC 生态中也能取得类似的结果。”“一个常见的误解是,他们以为只要不再依赖系统管理员,转而聘请能搞定其 HPC 工作负载的新的云专家,就能以某种方式优化人力成本。”

"But HPC is not one of the main enterprise environments," he says. "You're dealing with high-end compute nodes interconnected with high-bandwidth, low-latency networking stacks, along with incredibly complicated application and middleware stacks. Even the filesystem layers in many cases are unique to HPC environments. Not having the right skills can be destabilizing."
+他说:“但是 HPC 并不是主流的企业环境。”“您要面对的是高端计算节点,它们通过高带宽、低延迟的网络栈互连在一起,上面还运行着无比复杂的应用程序和中间件栈。在许多情况下,甚至连文件系统层都是 HPC 环境所独有的。没有合适的技能,整个环境可能会变得不稳定。”

But supercomputing skills are in shortening supply, something Dekate refers to as the workforce "greying," in the wake of a generation of developers going to splashy startups rather than academia or the more staid firms where HPC is in use. As a result, vendors of HPC services are doing what they can to bridge the gap. IBM's Turek says that many HPC vets will always want to roll their own exquisitely fine-tuned code and will need specialized debuggers and other tools to help them do that for the cloud. But even HPC newbies can make calls to code libraries built by vendors to exploit supercomputing's parallel processing. And third-party software providers sell turnkey software packages that abstract away much of HPC's complication.
+然而,超级计算方面的人才却日益短缺,Dekate 将其称为劳动力的“老龄化”,因为新一代开发人员更多地去了风头正劲的初创公司,而不是学术界或仍在使用 HPC 的那些较为传统的公司。因此,HPC 服务的供应商正在尽其所能来弥合这一差距。IBM 的 Turek 表示,许多 HPC 老手总是希望自己编写精心调优的代码,并且需要专门的调试器和其他工具来帮助他们在云中做到这一点。但即使是 HPC 新手,也可以调用供应商构建的代码库来利用超级计算的并行处理能力。第三方软件提供商还出售交钥匙的软件包,屏蔽了 HPC 的大部分复杂性。

Accenture's Tung says the sector needs to lean further into this in order to truly prosper. "HPCaaS has created dramatically impactful new capability, but what needs to happen is making this easy to apply for the data scientist, the enterprise architect, or the software developer," she says. "This includes easy to use APIs, documentation, and sample code. It includes user support to answer questions. It’s not enough to provide an API; that API needs to be fit-for-purpose. For a data scientist this should likely be in Python and easily change out for the frameworks she is already using. The value comes from enabling these users who ultimately will have their jobs improved through new efficiencies and performance, if only they can access the new capabilities." If vendors can pull that off, HPC services might truly bring supercomputing to the masses.
+埃森哲的董德丽表示,这个行业需要在这方面进一步投入,才能真正繁荣起来。她说:“HPCaaS 已经创造出了影响巨大的新能力,但接下来要做的,是让数据科学家、企业架构师和软件开发人员能够轻松地用上它。”“这包括易于使用的 API、文档和示例代码,也包括解答问题的用户支持。仅仅提供 API 是不够的,API 还需要切合用途。对数据科学家来说,它很可能应该是 Python 的,并且能方便地替换成她已经在使用的框架。它的价值在于:只要这些用户能够用上这些新能力,他们的工作最终就能通过新的效率和性能得到改善。”如果供应商能做到这一点,HPC 服务也许真的能把超级计算带给大众。

Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind.
- +加入[Facebook] [3]和[LinkedIn] [4]上的Network World社区,以评论最重要的主题。 -------------------------------------------------------------------------------- via: https://www.networkworld.com/article/3534725/the-ins-and-outs-of-high-performance-computing-as-a-service.html From e199633a1ce24e9a1a18526c07a69bf979643a05 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 27 Apr 2020 18:22:08 +0800 Subject: [PATCH 002/178] PRF @lxbwolf --- ...orkspaces with Great Teeming Workspaces.md | 1153 ++--------------- 1 file changed, 88 insertions(+), 1065 deletions(-) diff --git a/translated/tech/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md b/translated/tech/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md index db5f0bf1ca..683f3dc313 100644 --- a/translated/tech/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md +++ b/translated/tech/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md @@ -1,1136 +1,159 @@ [#]: collector: (lujun9972) [#]: translator: (lxbwolf) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Manage complex Git workspaces with Great Teeming Workspaces) [#]: via: (https://opensource.com/article/20/2/git-great-teeming-workspaces) [#]: author: (Daniel Gryniewicz https://opensource.com/users/dang) -使用 Great Teeming Workspaces 管理复杂的 Git 工作空间 +使用 GTWS 管理复杂的 Git 工作空间 ====== -GTWS 是一系列脚本,这些脚本能让我们在开发环境中管理一个有不同项目和不同版本的工程时变得更简单。 -![Coding on a computer][1] -[GTWS][2] 是一个 Git 的复杂工作空间管理工具包,可以让我们在开发环境中管理一个有不同项目和不同版本的工程时变得更简单。 +> GTWS 是一系列脚本,它使我们在开发环境中管理不同的项目和项目的各个版本变得很容易。 -有点像 Python 的 [venv][3],但不是为 Python 语言准备的。GTWS 用来处理多项目的多个版本的工作空间。你可以很容易地创建、更新、进入和离开工作空间,每个项目或版本的组合(最多)有一个本地的 origin,用来与 upstream 同步 — 其余的所有工作空间都从本地的 origin 更新。 +![](https://img.linux.net.cn/data/attachment/album/202004/27/182149xh9s7kb5bkf5875b.jpg) + +[Great Teeming Workspaces][2](GTWS)是一个 Git 的复杂工作空间管理工具包,它使我们在开发环境中管理不同的项目和项目的各个版本变得很容易。 + +有点像 Python 的 [venv][3],但不是为 Python 语言准备的。GTWS 用来管理多个项目的多个版本的工作空间。你可以很容易地创建、更新、进入和离开工作空间,每个项目或版本的组合(最多)有一个本地的 origin,用来与 upstream 同步 — 其余的所有工作空间都从本地的 origin 更新。 ### 部署 ``` -${GTWS_ORIGIN}/<project>/<repo>[/<version>] -${GTWS_BASE_SRCDIR}/<project>/<version>/<workspacename>/{<repo>[,<repo>...]} +${GTWS_ORIGIN}//[/] +${GTWS_BASE_SRCDIR}////{[,...]} ``` -源目录的每一级(包括全局的 home 目录)可以包含一个 **.gtwsrc** 文件,这个文件中维护与当前级相关的设置和 bash 代码。每一级的配置会覆盖上一级。 +源代码目录的每一级(包括全局的家目录)可以包含一个 `.gtwsrc` 文件,这个文件中维护与当前级相关的设置和 bash 代码。每一级的配置会覆盖上一级。 ### 安装 用下面的命令检出 GTWS: ``` -`git clone https://github.com/dang/gtws.git` +git clone https://github.com/dang/gtws.git ``` -配置你的 **${HOME}/.gtwsrc**。它应该包含 **GTWS_ORIGIN**,也可以再包含 **GTWS_SETPROMPT**。 +配置你的 `${HOME}/.gtwsrc`。它应该包含 `GTWS_ORIGIN`,也可以再包含 `GTWS_SETPROMPT`。 把仓库目录加到环境变量中: ``` -`export PATH="${PATH}:/path/to/gtws` +export PATH="${PATH}:/path/to/gtws ``` ### 配置 -通过级联 **.gtwsrc** 文件来进行配置。它从根目录向下遍历,会执行在每级目录中找到的 **.gtwsrc** 文件。下级目录的文件会覆盖上一级。 +通过级联 `.gtwsrc` 文件来进行配置。它从根目录向下遍历,会执行在每级目录中找到的 `.gtwsrc` 文件。下级目录的文件会覆盖上一级。 -在你最上层的文件 **~/.gtws/.gtwsrc** 中进行如下设置: +在你最上层的文件 `~/.gtws/.gtwsrc` 中进行如下设置: - * **GTWS_BASE_SRCDIR:** 所有项目源文件目录树的 base。默认为 **$HOME/src**。 - * **GTWS_ORIGIN:** 指定 origin git 目录树的路径。默认为 **$HOME/origin**。 - * **GTWS_SETPROMPT:** 可选配置。如果配置了这个参数,shell 提示符会有工作空间的名字。 - * **GTWS_DEFAULT_PROJECT:** 不指定项目或项目未知时默认的项目名。如果不指定,使用命令行时必须指明项目。 - * **GTWS_DEFAULT_PROJECT_VERSION:** 检出的默认版本。默认为 **master**。 + * `GTWS_BASE_SRCDIR`:所有项目源文件目录树的基目录。默认为 `$HOME/src`。 + * `GTWS_ORIGIN`: 指定 origin git 目录树的路径。默认为 `$HOME/origin`。 + * `GTWS_SETPROMPT`: 可选配置。如果配置了这个参数,shell 提示符会有工作空间的名字。 + 
* `GTWS_DEFAULT_PROJECT`: 不指定项目或项目未知时默认的项目名。如果不指定,使用命令行时必须指明项目。 + * `GTWS_DEFAULT_PROJECT_VERSION`: 检出的默认版本。默认为 `master`。 在每个项目的根目录进行以下设置: - * **GTWS_PROJECT:** 项目的名字(和 base 目录)。 - * **gtws_project_clone:** 这个函数用于克隆一个项目的指定版本。如果未定义,它会假定项目的 origin 对每一个版本都有一个单独的目录,这样会导致克隆一堆 Git 仓库。 - * **gtws_project_setup:** 在克隆完所有的仓库后,可以选择是否调用这个函数,调用后可以对项目进行必要的配置,如在 IDE 中配置工作空间。 + * `GTWS_PROJECT`: 项目的名字(和基目录)。 + * `gtws_project_clone`: 这个函数用于克隆一个项目的指定版本。如果未定义,它会假定项目的 origin 对每一个版本都有一个单独的目录,这样会导致克隆一堆 Git 仓库。 + * `gtws_project_setup`: 在克隆完所有的仓库后,可以选择是否调用这个函数,调用后可以对项目进行必要的配置,如在 IDE 中配置工作空间。 在项目版本级进行以下设置: - * **GTWS_PROJECT_VERSION:** 项目的版本。用于正确地从 origin 拉取代码。类似 Git 中的分支名字。 + * `GTWS_PROJECT_VERSION:` 项目的版本。用于正确地从 origin 拉取代码。类似 Git 中的分支名字。 下面这些参数可以在目录树的任意地方进行配置,如果能生效,它们可以被重写多次: - * **GTWS_PATH_EXTRA:** 这些是工作空间中加到路径后的额外的路径元素。 - * **GTWS_FILES_EXTRA:** 这些是不在版本控制内,但应该在工作空间中被检出的额外的文件。这些文件包括 **.git/info/exclude**,每个文件都与仓库的 base 相关联。 + * `GTWS_PATH_EXTRA`: 这些是工作空间中加到路径后的额外的路径元素。 + * `GTWS_FILES_EXTRA`: 这些是不在版本控制内,但应该在工作空间中被检出的额外的文件。这些文件包括 `.git/info/exclude`,每个文件都与仓库的基目录相关联。 ### origin 目录 -**GTWS_ORIGIN** (大部分脚本中)指向拉取和推送的原始 Git 检出目录。 +`GTWS_ORIGIN` (大部分脚本中)指向拉取和推送的原始 Git 检出目录。 -**${GTWS_ORIGIN}** 部署: +`${GTWS_ORIGIN}` 部署: - * **/<project>** - * 这是一个项目的仓库 base。 - * 如果指定了 **gtws_project_clone**,你可以配置任意的部署路径。 - * 如果没有指定 **gtws_project_clone**,这个路径下必须有个名为 **git** 的子目录,且 **git** 目录下有一系列用来克隆的裸 Git 仓库。 + * `/` + * 这是一个项目的仓库的基目录。 + * 如果指定了 `gtws_project_clone`,你可以配置任意的部署路径。 + * 如果没有指定 `gtws_project_clone`,这个路径下必须有个名为 `git` 的子目录,且 `git` 目录下有一系列用来克隆的裸 Git 仓库。 ### 工作流示例 -假设你有一个项目名为 **Foo**,它的 upstream 为 **github.com/foo/foo.git**。这个仓库有个名为 **bar** 的子模块,它的 upstream 是 **github.com/bar/bar.git**。Foo 项目在 master 分支开发,使用稳定版本的分支。 +假设你有一个项目名为 `Foo`,它的 upstream 为 `github.com/foo/foo.git`。这个仓库有个名为 `bar` 的子模块,它的 upstream 是 `github.com/bar/bar.git`。Foo 项目在 master 分支开发,使用稳定版本的分支。 为了能在 Foo 中使用 GTWS,你首先要配置目录结构。本例中假设你使用默认的目录结构。 - * 配置你最上层的 **.gtwsrc**: - * **cp ${GTWS_LOC}/examples/gtwsrc.top ~/.gtwsrc** - * 根据需要修改 **~/.gtwsrc**。 + * 配置你最上层的 `.gtwsrc`: + * `cp ${GTWS_LOC}/examples/gtwsrc.top ~/.gtwsrc` + * 根据需要修改 `~/.gtwsrc`。 * 创建顶级目录: - * **mkdir -p ~/origin ~/src** + * `mkdir -p ~/origin ~/src` * 创建并配置项目目录: - * **mkdir -p ~/src/foo** -**cp ${GTWS_LOC}/examples/gtwsrc.project ~/src/foo/.gtwsrc** - * 根据需要修改 **~/src/foo/.gtwsrc**。 + * `mkdir -p ~/src/foo` + + `cp ${GTWS_LOC}/examples/gtwsrc.project ~/src/foo/.gtwsrc` + * 根据需要修改 `~/src/foo/.gtwsrc`。 * 创建并配置 master 版本目录: - * **mkdir -p ~/src/foo/master** -**cp ${GTWS_LOC}/examples/gtwsrc.version ~/src/foo/master/.gtwsrc** - * 根据需要修改 **~/src/foo/master/.gtwsrc**。 + * `mkdir -p ~/src/foo/master` + + `cp ${GTWS_LOC}/examples/gtwsrc.version ~/src/foo/master/.gtwsrc` + * 根据需要修改 `~/src/foo/master/.gtwsrc`。 * 进入版本目录并创建一个临时工作空间来配置镜像: - * **mkdir -p ~/src/foo/master/tmp** -**cd ~/src/foo/master/tmp -git clone --recurse-submodules git://github.com/foo/foo.git -cd foo -gtws-mirror -o ~/origin -p foo**(译注:这个地方原文有误,不加 `-s` 参数会报错) - * 上面命令会创建 **~/origin/foo/git/foo.git** 和 **~/origin/foo/submodule/bar.git**。 + * `mkdir -p ~/src/foo/master/tmp` + + `cd ~/src/foo/master/tmp` + + `git clone --recurse-submodules git://github.com/foo/foo.git` + + `cd foo` + + `gtws-mirror -o ~/origin -p foo`(译注:这个地方原文有误,不加 `-s` 参数会报错) + * 上面命令会创建 `~/origin/foo/git/foo.git` 和 `~/origin/foo/submodule/bar.git`。 * 以后的克隆操作会从这些 origin 而不是 upstream 克隆。 * 现在可以删除工作空间了。 -到现在为止,Foo 的 master 分支的工作可以结束了。假设你现在想修复一个 bug,名为 **bug1234**。你可以脱离你当前的工作空间为修复这个 bug 单独创建一个工作空间,之后在新创建的工作空间中开发。 +到现在为止,Foo 的 master 分支的工作可以结束了。假设你现在想修复一个 bug,名为 
`bug1234`。你可以脱离你当前的工作空间为修复这个 bug 单独创建一个工作空间,之后在新创建的工作空间中开发。 * 进入版本目录,创建一个新的工作空间: - * **cd ~/src/foo/master -mkws bug1234** - * 上面的命令创建了 **bug1234/**,在这个目录下检出了 Foo(和它的子模块 **bar**),并创建了 **build/foo** 来构建它。 + * `cd ~/src/foo/master` + + `mkws bug1234` + * 上面的命令创建了 `bug1234/`,在这个目录下检出了 Foo(和它的子模块 `bar`),并创建了 `build/foo` 来构建它。 * 有两种方式进入工作空间: - * **cd ~/src/foo/master/bug1234 -startws** -或者 -**cd ~/src/foo/master/** -**startws bug1234** - * 上面的命令在 bug1234 工作空间中开启了一个子 shell。这个 shell 有 GTWS 的环境和你在各级 **.gtwsrc** 文件中设置的环境。它也把你工作空间的 base 路径加入到了 CD,因此你可以从 base 路径 **cd** 到相关的目录中。 - * 现在你可以修复 bug1234了,构建、测试、提交你的修改。当你可以把代码推送到 upstream 时,执行下面的命令: -**cd foo -wspush**  - * **wspush** 会把代码推送到与你工作空间相关的分支 — 先推送到本地的 origin,再推送到 upstream。 + * `cd ~/src/foo/master/bug1234` + + `startws` + + 或者 + + `cd ~/src/foo/master/` + + `startws bug1234` + * 上面的命令在 `bug1234` 工作空间中开启了一个子 shell。这个 shell 有 GTWS 的环境和你在各级 `.gtwsrc` 文件中设置的环境。它也把你工作空间的基目录加入到了 CD,因此你可以从 base 路径 `cd` 到相关的目录中。 + * 现在你可以修复 `bug1234` 了,构建、测试、提交你的修改。当你可以把代码推送到 upstream 时,执行下面的命令: + + `cd foo` + + `wspush`  + * `wspush` 会把代码推送到与你工作空间相关的分支 — 先推送到本地的 origin,再推送到 upstream。 * 当 upstream 有修改时,你可以用下面的命令同步到本地: -**git sync** - * 上面的命令调用了 GTWS 的 **git-sync** 脚本,会从本地 origin 更新代码。使用下面的命令来更新本地的 origin: -**git sync -o**  - * 上面的命令会更新你本地的 origin 和子模块的镜像,然后用那些命令来更新你的检出仓库的代码。**git-sync** 也有一些其他的很好的工鞥。 + + `git sync` + * 上面的命令调用了 GTWS 的 `git-sync` 脚本,会从本地 origin 更新代码。使用下面的命令来更新本地的 origin: + + `git sync -o`  + * 上面的命令会更新你本地的 origin 和子模块的镜像,然后用那些命令来更新你的检出仓库的代码。`git-sync` 也有一些其他的很好的工鞥。 * 当要结束工作空间中的工作时,直接退出 shell: -**exit** + + `exit` * 你可以在任何时间重复进入工作空间,也可以在同一时间在相同的工作空间中开多个 shell。 - * 当你不需要某个工作空间时,你可以使用 **rmws** 来删除它,或者直接删除它的目录树。 - * 还有一个脚本 **tmws** 使用 tmux 进入工作空间,能创建一系列的窗口/窗格,这完美契合我的工作流。你可以根据你自己的需求来修改它。 + * 当你不需要某个工作空间时,你可以使用 `rmws` 来删除它,或者直接删除它的目录树。 + * 还有一个脚本 `tmws` 使用 tmux 进入工作空间,能创建一系列的窗口/窗格,这完美契合我的工作流。你可以根据你自己的需求来修改它。 -### 脚本内容 - -``` -#!/bin/bash -# Functions for gtws -# - -GTWS_LOC=$(readlink -f $(dirname "${BASH_SOURCE[0]}")) -export GTWS_LOC - -# if is_interactive; then echo "interactive" fi -# -# Check for an interactive shell -is_interactive() { -        case $- in -                *i*) -                        # Don't die in interactive shells -                        return 0 -                        ;; -                *) -                        return 1 -                        ;; -        esac -} - -# if can_die; then exit -# -# Check to see if it's legal to exit during die -can_die() { -        if (( BASH_SUBSHELL > 0 )); then -                debug_print "\t\tbaby shell; exiting" -                return 0 -        fi -        if ! is_interactive; then -                debug_print "\t\tNot interactive; exiting" -                return 0 -        fi -        debug_print "\t\tParent interactive; not exiting" -        return 1 -} - -# In a function: -# command || die "message" || return 1 -# Outside a function: -# command || die "message" -# -# Print a message and exit with failure -die() { -        echo -e "Failed: $1" >&2 -        if [ ! 
-z "$(declare -F | grep "GTWScleanup")" ]; then -                GTWScleanup -        fi -        if can_die; then -                exit 1 -        fi -        return 1 -} - -# Alternativess for using die properly to handle both interactive and script useage: -# -# Version 1: -# -#testfunc() { -#       command1 || die "${FUNCNAME}: command1 failed" || return 1 -#       command2 || die "${FUNCNAME}: command2 failed" || return 1 -#       command3 || die "${FUNCNAME}: command3 failed" || return 1 -#} -# -# Version 2: -# -#testfunc() { -#       ( -#               command1 || die "${FUNCNAME}: command1 failed" -#               command2 || die "${FUNCNAME}: command2 failed" -#               command3 || die "${FUNCNAME}: command3 failed" -#       ) -#       return $? -#} -# -# Optionally, the return can be replaced with this: -#       local val=$? -#       [[ "${val}" == "0" ]] || die -#       return ${val} -# This will cause the contaning script to abort - -# usage "You need to provide a frobnicator" -# -# Print a message and the usage for the current script and exit with failure. -usage() { -        local myusage; -        if [ -n "${USAGE}" ]; then -                myusage=${USAGE} -        else -                myusage="No usage given" -        fi -        local me; -        if [ -n "${ME}" ]; then -                me=${ME} -        else -                me=$(basename $0) -        fi -        if [ -n "$1" ]; then -                echo "$@" -        fi -        echo "" -        if [ -n "${DESCRIPTION}" ]; then -                echo -e "${me}: ${DESCRIPTION}" -                echo "" -        fi -        echo "Usage:" -        echo "${me} ${myusage}" -        if [ -n "${LONGUSAGE}" ]; then -                echo -e "${LONGUSAGE}" -        fi -        exit 1 -} - -# debug_print "Print debug information" -# -# Print debug information based on GTWS_VERBOSE -debug_print() { -        if [ -n "${GTWS_VERBOSE}" ]; then -                echo -e "${GTWS_INDENT}$@" >&2 -        fi -} - -# debug_trace_start -# -# Start tracing all commands -debug_trace_start() { -        if [ -n "${GTWS_VERBOSE}" ]; then -                set -x -        fi -} - -# debug_trace_stop -# -# Stop tracing all commands -debug_trace_stop() { -        set +x -} - -# cmd_exists ${cmd} -# -# Determine if a command exists on the system -function cmd_exists { -        which $1 > /dev/null 2>&1 -        if [ "$?" == "1" ]; then -                die "You don't have $1 installed, sorry" || return 1 -        fi -} - -# is_git_repo ${dir} -# -# return success if ${dir} is in a git repo, or failure otherwise -is_git_repo() { -        debug_print "is_git_repo $1" -        if [[ $1 == *:* ]]; then -                debug_print "    remote; assume good" -                return 0 -        elif [ ! -d "$1" ]; then -                debug_print "    fail: not dir" -                return 1 -        fi -        cd "$1" -        git rev-parse --git-dir >/dev/null 2>&1 -        local ret=$? -        cd - > /dev/null -        debug_print "    retval: $ret" -        return $ret -} - -# find_git_repo ${basedir} ${repo_name} repo_dir -# -# Find the git repo for ${repo_name} in ${basedir}.  It's one of ${repo_name} -# or ${repo_name}.git -# -# Result will be in the local variable repo_dir  Or: -# -# repo_dir=$(find_git_repo ${basedir} ${repo_name}) -# -function find_git_repo { -        local basedir=$1 -        local repo_name=$2 -        local __resultvar=$3 -        local try="${basedir}/${repo_name}" - -        if ! 
is_git_repo "${try}" ; then -                try=${try}.git -        fi - -        is_git_repo "${try}" || die "${repo_name} in ${basedir} is not a git repository" || return 1 - -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'$try'" -        else -                echo "$try" -        fi -} - -# git_top_dir top -# -# Get the top level of the git repo contaning PWD, or return failure; -# -# Result will be in local variable top  Or: -# -# top = $(git_top_dir) -# -# Result will be in local variable top -function git_top_dir { -        local  __resultvar=$1 -        local __top="$(git rev-parse --show-toplevel 2>/dev/null)" - -        if [ -z "${__top}" ]; then -                die "${PWD} is not a git repo" || return 1 -        fi -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'$__top'" -        else -                echo "$__top" -        fi -} - -# is_git_rebase -# -# return success if git repo is in a rebase -is_git_rebase() { -        debug_print "is_git_rebase $1" -        (test -d "$(git rev-parse --git-path rebase-merge)" || \ -                test -d "$(git rev-parse --git-path rebase-apply)" ) -        local ret=$? -        debug_print "    retval: $ret" -        return $ret -} - -# is_docker -# -# return success if process is running inside docker -is_docker() { -        debug_print "is_docker" -        grep -q docker /proc/self/cgroup -        return $? -} - -# is_gtws -# -# return success if process is running inside a workspace -is_gtws() { -        if [ -n "${GTWS_WS_GUARD}" ]; then -                return 0 -        fi -        return 1 -} - -function gtws_rcp { -        rsync --rsh=ssh -avzS --progress --ignore-missing-args --quiet "$@" -} - -function gtws_cpdot { -        local srcdir=$1 -        local dstdir=$2 - -        debug_print "${FUNCNAME} - ${srcdir} to ${dstdir}" -        if [ -d "${srcdir}" ] && [ -d "${dstdir}" ]; then -                shopt -s dotglob -                cp -a "${srcdir}"/* "${dstdir}"/ -                shopt -u dotglob -        fi -} - -# gtws_find_dockerfile dockerfile -# -# Result will be in local variable dockerfile  Or: -# -# dockerfile = $(gtws_find_dockerfile) -# -# Result will be in local variable dockerfile -# -# Get the path to the most-specific Dockerfile -function gtws_find_dockerfile { -        local  __resultvar=$1 -        local __dir="${GTWS_WSPATH}" -        local __file="Dockerfile" - -        debug_print "${FUNCNAME} - trying ${__dir}/${__file}" -        if [ ! -f "${__dir}/${__file}" ]; then -                # Version dir -                __dir=$(dirname "${__dir}") -                debug_print "${FUNCNAME} - trying ${__dir}/${__file}" -        fi -        if [ ! -f "${__dir}/${__file}" ]; then -                # Project dir -                __dir=$(dirname "${__dir}") -                debug_print "${FUNCNAME} - trying ${__dir}/${__file}" -        fi -        if [ ! -f "${__dir}/${__file}" ]; then -                # Top level, flavor -                __dir="${GTWS_LOC}/dockerfiles" -                __file="Dockerfile-${FLAVOR}" -                debug_print "${FUNCNAME} - trying ${__dir}/${__file}" -        fi -        if [ ! -f "${__dir}/${__file}" ]; then -                # Top level, base -                __dir="${GTWS_LOC}/dockerfiles" -                __file="Dockerfile-base" -                debug_print "${FUNCNAME} - trying ${__dir}/${__file}" -        fi -        if [ ! 
-f "${__dir}/${__file}" ]; then -                die "Could not find a Dockerfile" || return 1 -        fi - -        debug_print "${FUNCNAME} - found ${__dir}/${__file}" -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'${__dir}/${__file}'" -        else -                echo "$__dir" -        fi -} - -# gtws_smopvn ${GTWS_SUBMODULE_ORIGIN:-${GTWS_ORIGIN}} ${GTWS_PROJECT} ${GTWS_PROJECT_VERSION} ${GTWS_WSNAME} smopvn -# -# Result will be in local variable smopvn.  Or: -# -# smopvn = $(gtws_smopvn ${GTWS_SUBMODULE_ORIGIN:-${GTWS_ORIGIN}} ${GTWS_PROJECT} ${GTWS_PROJECT_VERSION} ${GTWS_WSNAME}) -# -# Result will be in local variable smovpn -# -# Get the path to submodules for this workspace -function gtws_smopvn { -        local origin=$1 -        local project=$2 -        local version=$3 -        local name=$4 -        local  __resultvar=$5 -        local __smopv="${origin}/${project}/submodule" - -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'$__smopv'" -        else -                echo "$__smopv" -        fi -} - -# gtws_opvn ${GTWS_ORIGIN} ${GTWS_PROJECT} ${GTWS_PROJECT_VERSION} ${GTWS_WSNAME} opvn -# -# Result will be in local variable opvn.  Or: -# -# opvn = $(gtws_opvn ${GTWS_ORIGIN} ${GTWS_PROJECT} ${GTWS_PROJECT_VERSION} ${GTWS_WSNAME}) -# -# Result will be in local variable opvn. -# -# Get the path to git repos for this workspace -function gtws_opvn { -        local origin=$1 -        local project=$2 -        local version=$3 -        local name=$4 -        local  __resultvar=$5 -        local __opv="${origin}/${project}/${version}" - -        if [[ $__opv == *:* ]]; then -                __opv="${__opv}/${name}" -                debug_print "remote; using opvn $__opv" -        elif [ ! -d "${__opv}" ]; then -                __opv="${origin}/${project}/git" -                if [ ! 
-d "${__opv}" ]; then -                        die "No opvn for ${origin} ${project} ${version}" || return 1 -                fi -        fi -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'$__opv'" -        else -                echo "$__opv" -        fi -} - -# gtws_submodule_url ${submodule} url -# -# Result will be in local variable url  Or: -# -# url = $(gtws_submodule_url ${submodule}) -# -# Result will be in local variable url -# -# Get the URL for a submodule -function gtws_submodule_url { -        local sub=$1 -        local  __resultvar=$2 -        local __url=$(git config --list | grep "submodule.*url" | grep "\<${sub}\>" | cut -d = -f 2) - -        if [ -z "${__url}" ]; then -                local rpath=${PWD} -                local subsub=$(basename "${sub}") -                cd "$(dirname "${sub}")" -                debug_print "${FUNCNAME} trying ${PWD}" -                __url=$(git config --list | grep submodule | grep "\<${subsub}\>" | cut -d = -f 2) -                cd "${rpath}" -        fi - -        debug_print "${FUNCNAME} $sub url: $__url" -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'$__url'" -        else -                echo "$__url" -        fi -} - -# gtws_submodule_mirror ${smopv} ${submodule} ${sub_sub_basename} mloc -# -# Result will be in local variable mloc  Or: -# -# mloc = $(gtws_submodule_mirror ${smopv} ${submodule} ${sub_sub_basename}) -# -# Result will be in local variable mloc -# -# Get the path to a local mirror of the submodule, if it exists -function gtws_submodule_mirror { -        local smopv=$1 -        local sub=$2 -        local sub_sub=$3 -        local  __resultvar=$4 -        local __mloc="" -        local url=$(gtws_submodule_url ${sub}) -        if [ -n "${url}" ]; then -                local urlbase=$(basename ${url}) -                # XXX TODO - handle remote repositories -                #if [[ ${smopv} == *:* ]]; then -                ## Remote SMOPV means clone from that checkout; I don't cm -                #refopt="--reference ${smopv}/${name}/${sub}" -                if [ -d "${smopv}/${urlbase}" ]; then -                        __mloc="${smopv}/${urlbase}" -                fi -        fi - -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'$__mloc'" -        else -                echo "$__mloc" -        fi -} - -# gtws_submodule_paths subpaths -# -# Result will be in local variable subpaths  Or: -# -# subpaths = $(gtws_submodule_paths) -# -# Result will be in local variable subpaths -# -# Get the paths to submodules in a get repo.  Does not recurse -function gtws_submodule_paths { -        local  __resultvar=$1 -        local __subpaths=$(git submodule status | sed 's/^ *//' | cut -d ' ' -f 2) - -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'$__subpaths'" -        else -                echo "$__subpaths" -        fi -} - -# gtws_submodule_clone [<base-submodule-path>] [<sub-sub-basename>] -# -# This will set up all the submodules in a repo.  
Should be called from inside -# the parent repo -function gtws_submodule_clone { -        local smopv=$1 -        local sub_sub=$2 -        local sub_paths=$(gtws_submodule_paths) -        local rpath="${PWD}" - -        if [ -z "${smopv}" ]; then -                smopv=$(gtws_smopvn "${GTWS_SUBMODULE_ORIGIN:-${GTWS_ORIGIN}}" "${GTWS_PROJECT}" "${GTWS_PROJECT_VERSION}" "${GTWS_WSNAME}") -        fi -        git submodule init || die "${FUNCNAME}: Failed to init submodules" || return 1 -        for sub in ${sub_paths}; do -                local refopt="" -                local mirror=$(gtws_submodule_mirror "${smopv}" "${sub}" "${sub_sub}") -                debug_print "${FUNCNAME} mirror: ${mirror}" -                if [ -n "${mirror}" ]; then -                        refopt="--reference ${mirror}" -                fi -                git submodule update ${refopt} "${sub}" -                # Now see if there are recursive submodules -                cd "${sub}" -                gtws_submodule_clone "${smopv}/${sub}_submodule" "${sub}" || return 1 -                cd "${rpath}" -        done -} - -# gtws_repo_clone <base-repo-path> <repo> <branch> [<base-submodule-path>] [<target-directory>] -function gtws_repo_clone { -        local baserpath=${1%/} -        local repo=$2 -        local branch=$3 -        local basesmpath=$4 -        local rname=${5:-${repo%.git}} -        local rpath="${baserpath}/${repo}" -        local origpath=${PWD} - -        if [[ ${rpath} != *:* ]]; then -                if [ ! -d "${rpath}" ]; then -                        rpath="${rpath}.git" -                fi -        fi -        if [ -z "${basesmpath}" ]; then -                basesmpath="${baserpath}" -        fi -        debug_print "${FUNCNAME}: cloning ${baserpath} - ${repo} : ${branch} into ${GTWS_WSNAME}/${rname} submodules: ${basesmpath}" - -        # Main repo -        #git clone --recurse-submodules -b "${branch}" "${rpath}" || die "failed to clone ${rpath}:${branch}" || return 1 -        git clone -b "${branch}" "${rpath}" ${rname} || die "${FUNCNAME}: failed to clone ${rpath}:${branch}" || return 1 - -        # Update submodules -        cd "${rname}" || die "${FUNCNAME}: failed to cd to ${rpath}" || return 1 -        gtws_submodule_clone "${basesmpath}" || return 1 -        cd "${origpath}" || die "${FUNCNAME}: Failed to cd to ${origpath}" || return 1 - -        # Copy per-repo settings, if they exist -        gtws_cpdot "${baserpath%/git}/extra/repo/${rname}" "${origpath}/${rname}" - -        # Extra files -        for i in ${GTWS_FILES_EXTRA}; do -                local esrc= - -                IFS=':' read -ra ARR <<< "$i" -                if [ -n "${ARR[1]}" ]; then -                        dst="${rname}/${ARR[1]}" -                else -                        dst="${rname}/${ARR[0]}" -                fi - -                if [ -n "${GTWS_REMOTE_IS_WS}" ]; then -                        esrc="${baserpath}/${dst}" -                else -                        esrc="${baserpath%/git}" -                fi - -                gtws_rcp "${esrc}/${ARR[0]}" "${dst}" -        done -} - -# gtws_project_clone_default ${GTWS_ORIGIN} ${GTWS_PROJECT} ${GTWS_PROJECT_VERSION} ${GTWS_WSNAME} [${SUBMODULE_BASE}] -# -# Clone a version of a project into ${GTWS_WSPATH} (which is the current working directory).  
This is the default version of this that clones <origin>/<project>/<version>/* -function gtws_project_clone_default { -        local origin=$1 -        local project=$2 -        local version=$3 -        local name=$4 -        local basesmpath=$5 -        local opv=$(gtws_opvn "${origin}" "${project}" "${version}" "${name}") -        local wspath=${PWD} -        local repos= -        local -A branches - -        if [ -z "${GTWS_PROJECT_REPOS}" ]; then -                for i in "${opv}"/*; do -                        repos="$(basename $i) $repos" -                        branches[$i]=${version} -                done -        else -                for i in ${GTWS_PROJECT_REPOS}; do -                        IFS=':' read -ra ARR <<< "$i" -                        repos="${ARR[0]} $repos" -                        if [ -n "${ARR[1]}" ]; then -                                branches[${ARR[0]}]=${ARR[1]} -                        else -                                branches[${ARR[0]}]=${version} -                        fi -                done -        fi - -        if [ -z "${basesmpath}" ] || [ ! -d "${basesmpath}" ]; then -                basesmpath="${opv}" -        fi - -        for repo in ${repos}; do -                gtws_repo_clone "${opv}" "${repo}" "${branches[${repo}]}" "${basesmpath}" -        done - -        # Copy per-WS settings, if they exist -        gtws_cpdot "${opv%/git}/extra/ws" "${wspath}" -} - -# gtws_repo_setup ${wspath} ${repo_path} -# -# The project can define gtws_repo_setup_local taking the same args to do -# project-specific setup.  It will be called last. -# -# Post-clone setup for an individual repo -function gtws_repo_setup { -        local wspath=$1 -        local rpath=$2 -        local savedir="${PWD}" - -        if [ ! -d "${rpath}" ]; then -                return 0 -        fi - -        cd "${rpath}/src" 2>/dev/null \ -                || cd ${rpath} \ -                || die "Couldn't cd to ${rpath}" || return 1 - -        maketags ${GTWS_MAKETAGS_OPTS} > /dev/null 2> /dev/null & - -        cd ${wspath} || die "Couldn't cd to ${wspath}" || return 1 - -        mkdir -p "${wspath}/build/$(basename ${rpath})" - -        cd "${savedir}" - -        if [ -n "$(declare -F | grep "\<gtws_repo_setup_local\>")" ]; then -                gtws_repo_setup_local "${wspath}" "${rpath}" \ -                        || die "local repo setup failed" || return 1 -        fi -} - -# gtws_project_setup${GTWS_WSNAME} ${GTWS_ORIGIN} ${GTWS_PROJECT} ${GTWS_PROJECT_VERSION} -# -# The project can define gtws_project_setup_local taking the same args to do -# project-specific setup.  It will be called last. 
-# -# Post clone setup of a workspace in ${GTWS_WSPATH} (which is PWD) -function gtws_project_setup { -        local wsname=$1 -        local origin=$2 -        local project=$3 -        local version=$4 -        local wspath=${PWD} -        local opv=$(gtws_opvn "${origin}" "${project}" "${version}" "placeholder") - -        for i in "${wspath}"/*; do -                gtws_repo_setup "${wspath}" "${i}" -        done - -        mkdir "${wspath}"/install -        mkdir "${wspath}"/chroots -        mkdir "${wspath}"/patches - -        if [ -n "$(declare -F | grep "\<gtws_project_setup_local\>")" ]; then -                gtws_project_setup_local "${wsname}" "${origin}" "${project}" \ -                        "${version}" || die "local project setup failed" || return 1 -        fi -} - -# load_rc /path/to/workspace -# -# This should be in the workspace-level gtwsrc file -# Recursively load all RC files, starting at / -function load_rc { -        local BASE=$(readlink -f "${1}") -        # Load base RC first -        debug_print "load_rc: Enter + Top: ${BASE}" -        source "${HOME}"/.gtwsrc -        while [ "${BASE}" !=  "/" ]; do -                if [ -f "${BASE}"/.gtwsrc ]; then -                        load_rc "$(dirname ${BASE})" -                        debug_print "\tLoading ${BASE}/.gtwsrc" -                        source "${BASE}"/.gtwsrc -                        return 0 -                fi -                BASE=$(readlink -f $(dirname "${BASE}")) -        done -        # Stop at / - -        return 1 -} - -# clear_env -# -# Clear the environment of GTWS_* except for the contents of GTWS_SAVEVARS. -# The default values for GTWS_SAVEVARS are below. -function clear_env { -        local savevars=${GTWS_SAVEVARS:-"LOC PROJECT PROJECT_VERSION VERBOSE WSNAME"} -        local verbose="${GTWS_VERBOSE}" -        debug_print "savevars=$savevars" - -        # Reset prompt -        if [ -n "${GTWS_SAVEPS1}" ]; then -                PS1="${GTWS_SAVEPS1}" -        fi -        if [ -n "${GTWS_SAVEPATH}" ]; then -                export PATH=${GTWS_SAVEPATH} -        fi -        unset LD_LIBRARY_PATH -        unset PYTHONPATH -        unset PROMPT_COMMAND -        unset CDPATH -        unset SDIRS - -        # Save variables -        for i in ${savevars}; do -                SRC=GTWS_${i} -                DST=SAVE_${i} -                debug_print "\t $i: ${DST} = ${!SRC}" -                eval ${DST}=${!SRC} -        done - -        # Clear GTWS evironment -        for i in ${!GTWS*} ; do -                if [ -n "${verbose}" ]; then -                        echo -e "unset $i" >&2 -                fi -                unset $i -        done - -        # Restore variables -        for i in ${savevars}; do -                SRC=SAVE_${i} -                DST=GTWS_${i} -                if [ -n "${verbose}" ]; then -                        echo -e "\t $i: ${DST} = ${!SRC}" >&2 -                fi -                if [ -n "${!SRC}" ]; then -                        eval export ${DST}=${!SRC} -                fi -                unset ${SRC} -        done -} - -# save_env ${file} ${nukevars} -# -# Save the environment of GTWS_* to the give file, except for the variables -# given to nuke.  The default values to nuke are given below. 
-function save_env { -        local fname=${1} -        local nukevars=${2:-"SAVEPATH ORIGIN WS_GUARD LOC SAVEPS1"} -        debug_print "nukevars=$nukevars" - -        for i in ${!GTWS*} ; do -                for j in ${nukevars}; do -                        if [ "${i}" == "GTWS_${j}" ]; then -                                debug_print "skipping $i" -                                continue 2 -                        fi -                done -                debug_print "saving $i" -                echo "export $i=\"${!i}\"" >> "${fname}" -        done -} - -# gtws_tmux_session_name ${PROJECT} ${VERSION} ${WSNAME} sesname -# -# Result will be in local variable sesname  Or: -# -# sesname = $(gtws_tmux_session_name ${PROJECT} ${VERSION} ${WSNAME}) -# -# Result will be in local variable sesname -# -# Get the tmux session name for a given workspace -function gtws_tmux_session_name { -        local project=$1 -        local version=$2 -        local wsname=$3 -        local  __resultvar=$4 -        local sesname="${project//./_}/${version//./_}/${wsname//./_}" - -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'$sesname'" -        else -                echo "$sesname" -        fi -} - -# gtws_tmux_session_info ${SESSION_NAME} running attached -# -# Determine if a session is running, and if it is attached -# -# Result will be in local variables running and attached -# -# Test with: -# if $running ; then -#       echo "is running" -# fi - -function gtws_tmux_session_info { -        local ses_name=$1 -        local  __result_running=$2 -        local  __result_attached=$3 - -        local __num_ses=$(tmux ls | grep "^${ses_name}" | wc -l) -        local __attached=$(tmux ls | grep "^${ses_name}" | grep attached) - -        echo "$ses_name ses=${__num_ses}" - -        if [[ "$__result_running" ]]; then -                if [ "${__num_ses}" != "0" ]; then -                        eval $__result_running="true" -                else -                        eval $__result_running="false" -                fi -        fi -        if [[ "$__result_attached" ]]; then -                if [ -n "${__attached}" ]; then -                        eval $__result_attached="true" -                else -                        eval $__result_attached="false" -                fi -        fi -} - -# gtws_tmux_kill ${BASENAME} -# -# Kill all sessiont matching a pattern -function gtws_tmux_kill { -        local basename=$1 -        local old_sessions=$(tmux ls 2>/dev/null | fgrep "${basename}" | cut -f 1 -d:) -        for session in ${old_sessions}; do -                tmux kill-session -t "${session}" -        done -} - -# gtws_tmux_cleanup -# -# Clean up defunct tmux sessions -function gtws_tmux_cleanup { -        local old_sessions=$(tmux ls 2>/dev/null | egrep "^[0-9]{14}.*[0-9]+\\)$" | cut -f 1 -d:) -        for session in ${old_sessions}; do -                tmux kill-session -t "${session}" -        done -} - -# gtws_tmux_attach ${SESSION_NAME} -# -# Attach to a primary session.  It will remain after detaching. -function gtws_tmux_attach { -        local ses_name=$1 - -        tmux attach-session -t "${ses_name}" -} - -# gtws_tmux_slave ${SESSION_NAME} -# -# Create a secondary session attached to the primary session.  It will exit it -# is detached. 
-function gtws_tmux_slave { -        local ses_name=$1 - -        # Session is is date and time to prevent conflict -        local session=`date +%Y%m%d%H%M%S` -        # Create a new session (without attaching it) and link to base session -        # to share windows -        tmux new-session -d -t "${ses_name}" -s "${session}" -        # Attach to the new session -        gtws_tmux_attach "${session}" -        # When we detach from it, kill the session -        tmux kill-session -t "${session}" -} - -function cdorigin() { -        if [ -n "$(declare -F | grep "gtws_project_cdorigin")" ]; then -                gtws_project_cdorigin $@ -        else -                gtws_cdorigin $@ -        fi -} - -function gtws_get_origin { -        local opv=$1 -        local target=$2 -        local __origin= -        local  __resultvar=$3 - -        # If it's a git repo with a local origin, use that. -        __origin=$(git config --get remote.origin.url) -        if [ ! -d "${__origin}" ]; then -                __origin="${__origin}.git" -        fi -        if [ ! -d "${__origin}" ]; then -                # Try to figure it out -                if [ ! -d "${opv}" ]; then -                        die "No opv for $target" || return 1 -                fi -                find_git_repo "${opv}" "${target}" __origin || return 1 -        fi - -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'$__origin'" -        else -                echo "$__origin" -        fi -} - -function gtws_cdorigin() { -        local opv=$(gtws_opvn "${GTWS_ORIGIN}" "${GTWS_PROJECT}" "${GTWS_PROJECT_VERSION}" "${GTWS_WSNAME}") -        local gitdir="" -        local target="" -        if [ -n "$1" ]; then -                target="$@" -        else -                git_top_dir gitdir || return 1 -                target=$(basename $gitdir) -        fi - -        gtws_get_origin $opv $target origin || return 1 -        cd "${origin}" -} - -# Copy files to another machine in the same workspace -function wsrcp { -        local target="${!#}" -        local length=$(($#-1)) -        local base=${PWD} - -        if [ -z "${1}" -o -z "${2}" ]; then -                echo "usage: ${FUNCNAME} <path> [<path>...] 
<target>" -                return 1 -        fi - -        for path in "${@:1:$length}"; do -                gtws_rcp "${path}" "${target}:${base}/${path}" -        done -} - -# Override "cd" inside the workspace to go to GTWS_WSPATH by default -function cd { -        if [ -z "$@" ]; then -                cd "${GTWS_WSPATH}" -        else -                builtin cd $@ -        fi -} - -# Generate diffs/interdiffs for changes and ship to WS on other boxes -function gtws_interdiff { -        local targets=$@ -        local target= -        local savedir=${PWD} -        local topdir=$(git_top_dir) -        local repo=$(basename ${topdir}) -        local mainpatch="${GTWS_WSPATH}/patches/${repo}-full.patch" -        local interpatch="${GTWS_WSPATH}/patches/${repo}-incremental.patch" - -        if [ -z "${targets}" ]; then -                echo "Usage: ${FUNCNAME} <targethost>" -                die "Must give targethost" || return 1 -        fi -        cd "${topdir}" -        if [ -f "${mainpatch}" ]; then -                git diff | interdiff "${mainpatch}" - > "${interpatch}" -        fi -        git diff > "${mainpatch}" -        for target in ${targets}; do -                gtws_rcp "${mainpatch}" "${interpatch}" \ -                        "${target}:${GTWS_WSPATH}/patches" -        done -        cd "${savedir}" -} - -function gtws_debug { -        local cmd=$1 -        if [ -z "${cmd}" ]; then -                echo "Must give a command" -                echo -                die "${FUNCNAME} <cmd-path>" || return 1 -        fi -        local cmdbase=$(basename $cmd) -        local pid=$(pgrep "${cmdbase}") - -        ASAN_OPTIONS="abort_on_error=1" cgdb ${cmd} ${pid} -} - -# remote_cmd "${target}" "${command}" output -# -# Result will be in local variable output  Or: -# -# output = $(remote_cmd "${target}" "${command}") -# -# Result will be in local variable output -# -# Run a command remotely and capture sdtout.  Make sure to quote the command -# appropriately. -remote_cmd() { -        local target=$1 -        local cmd=$2 -        local  __resultvar=$3 -        local output= - -        if [ -z "${GTWS_VERBOSE}" ]; then -                output=$(ssh "${target}" "${cmd}" 2>/dev/null) -        else -                output=$(ssh "${target}" "${cmd}") -        fi -        local ret=$? 
- -        if [[ "$__resultvar" ]]; then -                eval $__resultvar="'$output'" -        else -                echo "${output}" -        fi -        return ${ret} -} -``` -------------------------------------------------------------------------------- @@ -1139,7 +162,7 @@ via: https://opensource.com/article/20/2/git-great-teeming-workspaces 作者:[Daniel Gryniewicz][a] 选题:[lujun9972][b] 译者:[lxbwolf](https://github.com/lxbwolf) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From ea687cb5ad34f0b2ce476096fa3767e783d0a110 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 27 Apr 2020 18:23:04 +0800 Subject: [PATCH 003/178] PUB @lxbwolf https://linux.cn/article-12156-1.html --- ...ge complex Git workspaces with Great Teeming Workspaces.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md (99%) diff --git a/translated/tech/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md b/published/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md similarity index 99% rename from translated/tech/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md rename to published/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md index 683f3dc313..9ae4ee5618 100644 --- a/translated/tech/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md +++ b/published/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (lxbwolf) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12156-1.html) [#]: subject: (Manage complex Git workspaces with Great Teeming Workspaces) [#]: via: (https://opensource.com/article/20/2/git-great-teeming-workspaces) [#]: author: (Daniel Gryniewicz https://opensource.com/users/dang) From 060d673bb066fc5da5a58b18b7488ab3844e030d Mon Sep 17 00:00:00 2001 From: Brooke Lau Date: Mon, 27 Apr 2020 20:41:28 +0800 Subject: [PATCH 004/178] APL --- sources/tech/20200425 Inlining optimisations in Go.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200425 Inlining optimisations in Go.md b/sources/tech/20200425 Inlining optimisations in Go.md index 3784d6868b..0e29d0d178 100644 --- a/sources/tech/20200425 Inlining optimisations in Go.md +++ b/sources/tech/20200425 Inlining optimisations in Go.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (lxbwolf) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From cad96ad4d31b19ad60adbfd86f3e9f6a43fd172e Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 27 Apr 2020 22:01:57 +0800 Subject: [PATCH 005/178] PRF MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @CrazyShipOne 恭喜你,完成了第一篇翻译贡献! 
--- ...d to know about open source ad blockers.md | 29 ++++++++++--------- 1 file changed, 15 insertions(+), 14 deletions(-) diff --git a/translated/tech/20200424 What you need to know about open source ad blockers.md b/translated/tech/20200424 What you need to know about open source ad blockers.md index 981a892257..670a16fce6 100644 --- a/translated/tech/20200424 What you need to know about open source ad blockers.md +++ b/translated/tech/20200424 What you need to know about open source ad blockers.md @@ -1,36 +1,37 @@ [#]: collector: (lujun9972) [#]: translator: (CrazyShipOne) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (What you need to know about open source ad blockers) [#]: via: (https://opensource.com/article/20/4/ad-blockers) [#]: author: (Joshua Pearce https://opensource.com/users/jmpearce) -关于开源广告拦截器你需要知道的事 +开源的广告拦截器不但节能,而且能拯救生命! ====== -三个开源广告拦截器针对“无广告拦截器"的情况进行了测试。 -![Browser of things][1] +> 三个开源广告拦截器与“无广告拦截器”对照组进行了测试。 -一项为了调查免费开源广告拦截器节能情况的[研究][2],意外地发现了互联网广告正在浪费你大量的时间。 +![](https://img.linux.net.cn/data/attachment/album/202004/27/220109b86sidn56sn6inoh.jpg) -更重要的是,研究结果表明了你可以挽救回失去的时间。这项研究评估发现,使用 [uBlock Origin][3](一个开源免费的广告拦截器)的情况下平均每个网民一年可以节约超过 100 个小时的时间。 uBlock Origin 是测试中最节能的广告拦截器,不过其他的广告拦截器也为网民节省了时间,能源以及金钱。 +一项旨在调查自由开源的广告拦截器节能的情况的[研究][2],意外地发现了互联网广告正在浪费你大量的时间。 + +更重要的是,研究结果表明了你可以挽救回这些失去的时间。这项研究评估发现,使用 [uBlock Origin][3](一个开源免费的广告拦截器)的情况下平均每个网民一年可以节约超过 100 个小时的时间。uBlock Origin 是测试中最有效的广告拦截器,不过其他的广告拦截器也为网民节省了时间、能源以及金钱。 ![Ad blocker screen comparison][4] -在研究结果中,[AdBlock+][5] 减少了11 % 的页面加载时间,[Privacy Badger][6] 减少了22 %,[uBlock Origin][3] 减少了28 %。对于单独一个页面来说这个时间並不算多,但是网民们浏览页面时常常花费半数以上的时间在网站上快速点击,并且通常在一个页面停留少于 15 秒。鉴于这种情况,加载广告的额外时间增加了许多。 +在研究结果中,[AdBlock+][5] 减少了 11% 的页面加载时间,[Privacy Badger][6] 减少了 22%,[uBlock Origin][3] 减少了 28%。对于单独一个页面来说这个时间並不算多,但是网民们一半以上的浏览时间都是在网站间快速跳转,通常在一个页面停留少于 15 秒。鉴于这种情况,加载广告的额外时间加起来就很多了。 -发布于 _Technologies_ 杂志上的 _[Energy Conservation with Open Source Ad Blockers][7]_ 最初旨在解决日益增长的能源消耗问题。由于全球网民每日花费超过 6.5 小时上网冲浪,互联网相关用电量正在快速地增加。比如,美国自 2000 年以来网上冲浪时间已经翻倍至几乎一周24小时的时间。开源广告拦截器通过消灭网上冲浪和观看视频时产生的广告,具有节约能源和时间的潜力。 +发布于 Technologies 杂志上的《[用开源的广告拦截器节能][7]》一文最初旨在解决日益增长的能源消耗问题。随着全球网民每日上网时间超过 6.5 小时,与互联网相关的用电量正在快速地增加。以美国人为例,自 2000 年他们的上网时间已经增加了一倍,几乎达到每周 24 小时。开源广告拦截器通过消灭上网和观看视频时产生的广告,潜在地减少了时间,从而减少用电。 -在研究过程中,三个开源广告拦截器针对“无广告拦截器”情况进行了测试。一系列全世界最具代表性访问最频繁网站的页面加载时间被记录在案,其中包括搜索引擎(Google,Yahoo,Bing),信息网站(Weather.com,Wikipedia)以及新闻门户(CNN,Fox,New York Times)。除此之外,研究还分析了观看流行与非流行视频内容时广告所花费的时间。这部分研究由于缺乏 Youtube 上流行和非流行内容观看比例的数据而非常具有挑战性。每个视频浪费的广告时间可以从 0.06 % 上升到惊人的 21 %。而且,这还只是浏览器上记录的加载广告而失去的时间。 +在研究过程中,对三个开源广告拦截器与“无广告拦截器”对照组进行了测试。研究人员记录了浏览全球访问量最大的网站的页面加载时间,其中包括网络搜索(谷歌、雅虎、必应)、信息(Weather.com、维基百科)和新闻网站(CNN、福克斯、纽约时报)。除此之外,研究还分析了观看流行与非流行视频内容时广告所花费的时间。这部分研究由于缺乏 Youtube 上流行和非流行内容观看比例的数据而非常具有挑战性。每个视频浪费在广告观看上的时间可以从 0.06% 到惊人的 21% 不等。而且,这还只是浏览器上记录的加载广告而失去的时间。 -总的来说,研究结果表明加载广告所浪费的能源并不是最重要的。由于运行计算机所需的大量电能来自于造成空气污染和人类减寿的煤炭,研究分析了广告拦截器挽救美国人生命的潜在可能性。结果是令人震惊的:如果每一个美国人使用开源广告拦截器,所节约的能源每年将会拯救超过36个美国人的生命 +总的来说,研究结果表明加载广告所浪费的能源并不是是小事。由于运行电脑所使用的大量电力仍然来自于煤炭,而煤炭会造成空气污染和过早死亡,因此该研究分析了广告拦截器拯救美国人生命的潜力。(LCTT 译注:由于这项研究是美国人完成的,所以这里仅提及了美国人,但是同理可推至全球。)结果是令人震惊的:如果美国人都使用开源广告拦截器,每年节约的能源将会拯救超过 36 个美国人的生命。 -电能即金钱,所以削减广告也可以为消费者节约钱财。在美国,如果所有的网民都在他们的电脑上开启 [Privacy Badger][8],美国人每年可以节约超过 9100 万美元。全球的研究结果则更令人吃惊。uBlock Origin 每年可以为全球消费者节约 18 亿美元。 +电能即金钱,所以削减广告也可以为消费者节约钱财。在美国,如果所有的网民都在他们的电脑上开启 [Privacy Badger][8],美国人每年可以节约超过 9100 万美元。在全球范围内,调查研究的结果则更令人吃惊。uBlock Origin 每年可以为全球消费者节约 18 亿美元。 -这项研究开始于人们因为新冠肺炎大流行而被迫居家之前,因此所有的数据都可以被认为是保守的估算。整体来说,研究发现了开源广告拦截器是一项潜在的可以节约能源的有效技术。 
+这项研究始于人们因为新冠肺炎大流行而被迫居家之前,因此所有的数据都可以被认为是保守的估算。整体来说,研究发现了开源广告拦截器是一项潜在的节能技术。 -虽然免费的开源广告拦截器可以节约能源并且对自然环境友好,但是可能你只是因为它们可以拦截烦人的广告以节省时间而使用它们。 +虽然自由开源的广告拦截器可以节约能源,对自然环境也有好处,但你可能主要是为了拦截恼人的广告和节省自己的时间而使用它们。 -------------------------------------------------------------------------------- @@ -39,7 +40,7 @@ via: https://opensource.com/article/20/4/ad-blockers 作者:[Joshua Pearce][a] 选题:[lujun9972][b] 译者:[CrazyShipOne](https://github.com/CrazyShipOne) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1dc9b3607e1530751eaca54b5a993100c6452f6a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 27 Apr 2020 22:03:12 +0800 Subject: [PATCH 006/178] PUB MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @CrazyShipOne 本文首发地址: https://linux.cn/article-12157-1.html 你的 LCTT 专页: https://linux.cn/lctt/CrazyShipOne 请注册以领取 LCCN:https://lctt.linux.cn/ --- ...424 What you need to know about open source ad blockers.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200424 What you need to know about open source ad blockers.md (98%) diff --git a/translated/tech/20200424 What you need to know about open source ad blockers.md b/published/20200424 What you need to know about open source ad blockers.md similarity index 98% rename from translated/tech/20200424 What you need to know about open source ad blockers.md rename to published/20200424 What you need to know about open source ad blockers.md index 670a16fce6..c28cbc6080 100644 --- a/translated/tech/20200424 What you need to know about open source ad blockers.md +++ b/published/20200424 What you need to know about open source ad blockers.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (CrazyShipOne) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12157-1.html) [#]: subject: (What you need to know about open source ad blockers) [#]: via: (https://opensource.com/article/20/4/ad-blockers) [#]: author: (Joshua Pearce https://opensource.com/users/jmpearce) From 909b8c9d284298ac227520a33c91e11991d572b2 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 28 Apr 2020 00:52:47 +0800 Subject: [PATCH 007/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200428=20Lubunt?= =?UTF-8?q?u=2020.04=20Review:=20Lightweight,=20Minimalistic,=20Polished?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200428 Lubuntu 20.04 Review- Lightweight, Minimalistic, Polished.md --- ...ew- Lightweight, Minimalistic, Polished.md | 154 ++++++++++++++++++ 1 file changed, 154 insertions(+) create mode 100644 sources/tech/20200428 Lubuntu 20.04 Review- Lightweight, Minimalistic, Polished.md diff --git a/sources/tech/20200428 Lubuntu 20.04 Review- Lightweight, Minimalistic, Polished.md b/sources/tech/20200428 Lubuntu 20.04 Review- Lightweight, Minimalistic, Polished.md new file mode 100644 index 0000000000..c6857014ef --- /dev/null +++ b/sources/tech/20200428 Lubuntu 20.04 Review- Lightweight, Minimalistic, Polished.md @@ -0,0 +1,154 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Lubuntu 20.04 Review: Lightweight, Minimalistic, Polished) +[#]: via: (https://itsfoss.com/lubuntu-20-04-review/) +[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/) + +Lubuntu 20.04 Review: Lightweight, 
Minimalistic, Polished
======

_**Lubuntu 20.04 LTS is significantly different from its previous LTS version. It aims to give you a more polished experience rather than just focusing on older computers. Read more about it as I review Lubuntu 20.04.**_

### Lubuntu 20.04 Review: First LTS release with LXQt

I have been using Lubuntu 20.04 since a few days before the release. I usually dwell in the Arch world with Manjaro and the Cinnamon desktop, so using Lubuntu was a pleasant change for me.

Here’s what I have noticed and felt about Lubuntu 20.04.

#### Bye bye LXDE, Hello LXQt!

For a long time, [Lubuntu][1] relied on [LXDE][2] to provide a lightweight Linux experience. It now uses the LXQt desktop environment.

[LXDE][3] is based on GTK (the libraries used by GNOME), and more specifically on GTK+ 2, which feels dated in 2020. Dissatisfied with GTK+ 3, LXDE developer Hong Jen Yee decided to port the entire desktop to Qt (the libraries used by KDE). LXDE, its Qt port, and the [Razor-qt][4] project were combined to form [LXQt][5], although today LXDE and LXQt coexist as separate projects.

Since the LXDE developer himself is focusing on LXQt, it makes no sense for Lubuntu to stick with a desktop environment whose last stable release was more than three years ago.

Lubuntu 18.04 is the last version with [LXDE][3]. Fortunately, it is a long-term support edition, and it will be supported officially by the Lubuntu team till 2021.

![][6]

#### Not exclusively for older machines

The definition of an “older machine” has changed, and Lubuntu 18.04 is the last 32-bit version. Nowadays, even a 10-year-old machine comes with at least 2 GB of RAM and a dual-core 64-bit processor.

Accordingly, the [Lubuntu Team will no longer provide minimum system requirements and will no longer primarily focus on older hardware][7], although LXQt is still a lightweight, classic, yet polished and feature-rich desktop environment.

The first Lubuntu release with LXQt was 18.10, giving the developers three standard releases to perfect the LXQt desktop before the Lubuntu 20.04 LTS release, which is a good development strategy.

#### Not the regular Ubiquity, Lubuntu 20.04 uses the Calamares installer

![Lubuntu 20.04 Calamares Installer][8]

A fresh installation begins with the new [Calamares][9] installer, in place of the Ubiquity installer that other [official Ubuntu flavors][10] use.

The whole process is done in approximately 10 minutes, slightly faster than in previous Lubuntu releases.

As the .iso comes with the essential applications pre-installed, you can get your system fully configured pretty quickly, too.

#### No upgrade from Lubuntu 18.04 to Lubuntu 20.04

Normally, you can [upgrade Ubuntu from one LTS to another LTS release][11]. But the Lubuntu team advises against upgrading from Lubuntu 18.04 to 20.04. They recommend a fresh install, and rightly so.

Lubuntu 18.04 used the LXDE desktop, while 20.04 uses LXQt. Due to the extensive changes in the desktop environments, upgrading to 20.04 from 18.04 will result in a broken system.

#### **More KDE and Qt applications**

![][12]

Here are some of the applications that are available by default in this new release. As you can see, not all of them are lightweight, and most of them are Qt-based.

Even the software center used is KDE’s Discover instead of Ubuntu’s GNOME software center.
+ + * Ark – archive manager + * Bluedevil – bluetooth connector + * Discover Software Center – package management system + * FeatherPad – text editor + * FireFox – web browser + * K3b – CD/DVD burner + * Kcalc – calculator + * KDE partition manager – partition manager + * LibreOffice – Office suite (Qt interface version) + * LXimage-Qt – image viewer and screenshot tool + * Muon – package manager + + + * Noblenote – note taker + * PCManFM-Qt – File manager + * Qlipper – clipboard manager + * qPDFview – PDF viewer + * PulseAudio – audio controller + * Qtransmission – bittorrent client (Qt interface version) + * Quassel – IRC client + * ScreenGrab – Screenshot creator + * Skanlite – scanning + * Startup Disk Creator – USB boot disk maker + * Trojita – email client + * VLC – media player + * [MPV video player][13] + + + +#### Testing Lubuntu 20.04 LTS + +Boot times on the LXQt version of Lubuntu are under a minute, booting from an SSD though. + +LXQt currently requires slightly more memory than the Gtk+ v2-based LXDE, but the alternative Gtk+ v3 toolkit would also have required more memory. + +After a reboot the system runs approximately at a very low of 340 MB for the modern standards, 100 MB more than LXDE. + +![htop running on Lubuntu 20.04][14] + +LXQt is not only for users with an older hardware but also for those who are seeking a simple and classic experience at their new machine. + +The desktop layout looks similar to KDE’s Plasma desktop, don’t you think? + +![Lubuntu 20.04 Desktop][15] + +There’s an application menu in the lower-left corner, a taskbar for pinned and active applications, and a system tray in the lower-right corner. + +Lubuntu in its LXQt version can be easily customized and everything is in the menu under preferences, with most key items under LXQt Settings. + +It is worth-mentioning that LXQt uses the popular [Openbox window manager][16] by default. + +Like the last three releases, 20.04 LTS comes with a default dark theme Lubuntu Arc, but it is quick and easy to change it if it doesn’t suit your taste. + +In daily use, Lubuntu 20.04 has proven to me completely trouble-free as every Ubuntu flavour in fact. + +#### Conclusion + +Lubuntu team has successfully made the transition to a modern, still lightweight and minimal desktop environment. LXDE looks like abandoned and it is a good thing to move away to an active project. + +I hope that Lubuntu 20.04 makes you as much enthusiastic as I am, and if so don’t hesitate to let me know at the comments below. Stay tuned! 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/lubuntu-20-04-review/ + +作者:[Dimitrios Savvopoulos][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/dimitrios/ +[b]: https://github.com/lujun9972 +[1]: https://lubuntu.me/ +[2]: https://github.com/lxde +[3]: https://lxde.org/ +[4]: https://web.archive.org/web/20160220061334/http://razor-qt.org/ +[5]: https://lxqt.org/ +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/Lubuntu-20-04-review.jpg?ssl=1 +[7]: https://itsfoss.com/lubuntu-no-more-old-distro/ +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/lubuntu-20-04-installer.jpg?ssl=1 +[9]: https://calamares.io/ +[10]: https://itsfoss.com/which-ubuntu-install/ +[11]: https://itsfoss.com/upgrade-ubuntu-version/ +[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/Lubuntu-20.04.gif?ssl=1 +[13]: https://itsfoss.com/mpv-video-player/ +[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/htop.jpg?fit=800%2C629&ssl=1 +[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/Lubuntu-20.04-desktop.jpg?fit=800%2C450&ssl=1 +[16]: https://en.wikipedia.org/wiki/Openbox From a9b137c8958749f4bcd0879a01ef6f0848374cbb Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 28 Apr 2020 00:55:09 +0800 Subject: [PATCH 008/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200428=20Using?= =?UTF-8?q?=20Files=20and=20Folders=20on=20Desktop=20Screen=20in=20Ubuntu?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md --- ...and Folders on Desktop Screen in Ubuntu.md | 100 ++++++++++++++++++ 1 file changed, 100 insertions(+) create mode 100644 sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md diff --git a/sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md b/sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md new file mode 100644 index 0000000000..40105e3aed --- /dev/null +++ b/sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md @@ -0,0 +1,100 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using Files and Folders on Desktop Screen in Ubuntu) +[#]: via: (https://itsfoss.com/add-files-on-desktop-ubuntu/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Using Files and Folders on Desktop Screen in Ubuntu +====== + +_**This beginner tutorial discusses a few difficulties you may face while adding files and folders on the desktop screen on Ubuntu.**_ + +I know a few people who are habitual of putting all the important/frequently used files on the desktop screen for quick access. + +![][1] + +I am not a fan of a cluttered desktop screen but I can imagine that it might actually be helpful to some people. + +For the past few releases, it has been difficult to add files on the desktop screen in Ubuntu’s default GNOME desktop. It’s not really Ubuntu’s fault. + +The [GNOME][2] developers thinks that there is no place for icons and files on the desktop screen. There is no need of putting files on the desktop when you can easily search for it in the menu. And that’s part true. 
+ +This is why the newer version of [GNOME’s File Manager Nautilus][3] doesn’t support icons and files on the desktop very well. + +That said, it’s not impossible to add files and folders on the desktop. Let me show you how you can still use it. + +### Adding files and folders on the desktop screen in Ubuntu + +![][4] + +I am using Ubuntu 20.04 in this tutorial. The steps may or may not vary for other Ubuntu versions. + +#### Add the files and folders to the “Desktop folder” + +If you open the file manager, you should see an entry called Desktop in the left sidebar or in the folders list. This folder represents your desktop screen (in a way). + +![Desktop folder can be used to add files to the desktop screen][5] + +Anything you add to this folder will be reflected on the desktop screen. + +![Anything added to the Desktop folder will be reflected on the desktop screen][6] + +If you delete files from this ‘Desktop folder’, it will be removed from the desktop screen as well. + +#### Drag and drop files to desktop screen doesn’t work + +Now, if you try to drag and drop files from the file manager on the desktop, it won’t work. It’s not a bug, it’s a feature that irks a lot of people. + +A workaround would be to open two instances of the file manager. Open Desktop folder in one of them and then drag and drop files to this folder and they will be added on the desktop. + +I know that’s not ideal but you don’t have a lot of choices here. + +#### You cannot use Ctrl+C and Ctrl+V to copy-paste on the desktop, use the right click menu + +To add salt to injury, you cannot use Ctrl+V the famous keyboard shortcut to paste files on the desktop screen. + +But you can still use the right click context menu and select Paste from there to put the copied files on the desktop. You can even create new folders this way. + +![Right click menu can be used for copy-pasting files to desktop][7] + +Does it make sense? Not to me but that’s how it is in Ubuntu 20.04. + +#### You cannot delete files and folder using the Delete key, use the right click menu again + +What’s worse is that you cannot use the delete key or shift delete key to remove files from the desktop screen. But you can still right click on the files or folders and select “Move to trash” to delete the file. + +![Delete files from desktop using right click][8] + +Alright, so now you know that at least there is a way to add files on the desktop with some restrictions. But it doesn’t end here unfortunately. + +You cannot search for files with their names on the desktop screen. Normally, if you start typing ‘abc’, files starting with ‘abc’ are highlighted. You don’t get it here. + +I don’t know why so many restrictions have been put on adding files on the desktop. Thankfully, I don’t use it a lot otherwise I have been way too frustrated. + +If interested, you may read about [adding application shortcut on the desktop in Ubuntu][9] as well. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/add-files-on-desktop-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/files-on-desktop-ubuntu.jpg?ssl=1 +[2]: https://www.gnome.org/ +[3]: https://wiki.gnome.org/Apps/Files +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/adding-files-desktop-ubuntu.png?ssl=1 +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/desktop-folder-ubuntu.png?ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/adding-files-desktop-screen-ubuntu.jpg?ssl=1 +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/adding-new-files-ubuntu-desktop.jpg?ssl=1 +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/delete-files-from-desktop-ubuntu.jpg?ssl=1 +[9]: https://itsfoss.com/ubuntu-desktop-shortcut/ From 8522d8fd9a834046262f65c769c6a3490abe3677 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 28 Apr 2020 00:55:58 +0800 Subject: [PATCH 009/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200427=20How=20?= =?UTF-8?q?to=20secure=20your=20Linux=20email=20services=20with=20SSL/TLS?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200427 How to secure your Linux email services with SSL-TLS.md --- ... your Linux email services with SSL-TLS.md | 209 ++++++++++++++++++ 1 file changed, 209 insertions(+) create mode 100644 sources/tech/20200427 How to secure your Linux email services with SSL-TLS.md diff --git a/sources/tech/20200427 How to secure your Linux email services with SSL-TLS.md b/sources/tech/20200427 How to secure your Linux email services with SSL-TLS.md new file mode 100644 index 0000000000..163e6b017f --- /dev/null +++ b/sources/tech/20200427 How to secure your Linux email services with SSL-TLS.md @@ -0,0 +1,209 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to secure your Linux email services with SSL/TLS) +[#]: via: (https://opensource.com/article/20/4/securing-linux-email) +[#]: author: (Marc Skinner https://opensource.com/users/marc-skinner) + +How to secure your Linux email services with SSL/TLS +====== +Protect your Linux email services by understanding security +certificates. +![email or newsletters via inbox and browser][1] + +Traditionally, email services send data in an unprotected way—whether you are sending emails via SMTP or receiving them via IMAP or POP, the defaults are in cleartext. With more online applications enforcing encryption and the general consensus to protect your data, it's best to secure your email services with a Secure Sockets Layer/Transport Layer Security (SSL/TLS) security certificate. + +First, a quick review of email services and protocols. Email is sent via a service called Simple Mail Transport Protocol (SMTP) using TCP port 25. This protocol sends emails from server to server based on DNS mail exchanger (MX) record lookups. Once an email is on the email server, it is retrieved using one of two services: Internet Message Access Protocol (IMAP) using port TCP 143, or Post Office Protocol (POP3) using port TCP 110. 
All of these services, by default, send your email and authentication to/from these services in plain text—thus, it's very unprotected! + +To protect the email data and authentication, these services have added a security feature in which they can utilize an SSL/TLS certificate to wrap the data flow and communication with encryption. How SSL/TLS encryption secures information is beyond the scope of this article, but [Bryant Son's internet security article][2] covers it in great detail. At a high level, SSL/TLS encryption is a public/private encryption algorithm. + +By adding these security features into the services, they can listen on new TCP ports: + +Service | Default TCP Port | SSL/TLS Port +---|---|--- +SMTP | 25 | 587 +IMAP | 143 | 993 +POP3 | 110 | 995 + +### Generate SSL/TLS certificates + +SSL/TLS certificates can be generated for free using tools like [OpenSSL][3], or they can be purchased for a range of prices from public certificate authorities (CAs). In the past, generating your own certificate was easy and worked in most cases, but with the increasing demand for better security, most email clients don't trust self-generated SSL/TLS certificates without a manual exception. + +If your use case is private or for testing, then saving money with a self-generated certificate makes sense. But if you're rolling this out to a large group or have paying customers, then you're better served by purchasing a certificate from a public, trusted company that sells them. + +In either case, the process to start requesting a new certificate is to use the OpenSSL tooling on your Linux system to create a certificate signing request (CSR): + + +``` +`$ openssl req -new -newkey rsa:2048 -nodes -keyout mail.mydomain.key -out mail.mydomain.csr` +``` + +This command will create a new CSR and private key at the same time for the service you are trying to secure. The process will ask you a number of questions associated with the certificate: location details, server fully qualified domain name (FQDN), email contact information, etc. Once you have filled out the information, the key and CSR will be generated. + +#### If you generate your own certificate + +If you want to generate your own certificate, you must create your own [root CA][4] before issuing the CSR command above. You can create your own root CA with: + + +``` +`$ openssl genrsa -des3 -out myCA.key 2048` +``` + +It will prompt you to add a passphrase. Please give it a secure passphrase and don't lose it—this is your private root CA key, and as the name states, it's the root of all trust in your certificates. + +Next, generate the root CA certificate: + + +``` +`$ openssl req -x509 -new -nodes -key myCA.key -sha256 -days 1825 -out myCA.pem` +``` + +After answering a few more questions, you will generate a root CA certificate with a five-year lifespan. + +Using the CSR file from the steps above, you can request a new certificate to be generated and signed by the root CA you just created: + + +``` +`$ openssl x509 -req -in mail.mydomain.csr -CA myCA.pem -CAkey myCA.key -CAcreateserial -out mail.mydomain.pem -days 1825 -sha256` +``` + +Enter your private root CA key passphrase to create and sign the certificate. + +Now you have the two files needed to configure your email services for enhanced security: the private key file, **mail.mydomain.key**, and the public certificate file, **mail.mydomain.pem**. 
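
Before moving on, it's worth a quick sanity check that the certificate and private key actually belong together; mixing up files from different CSR runs is a common reason services refuse to start. Here is a minimal check with standard OpenSSL commands, assuming the RSA key and the file names from the examples above:

```
# The two hashes printed below must be identical; if they differ,
# the certificate was not issued for this private key.
$ openssl x509 -noout -modulus -in mail.mydomain.pem | openssl md5
$ openssl rsa -noout -modulus -in mail.mydomain.key | openssl md5

# Optionally, review the certificate's subject, issuer, and validity dates:
$ openssl x509 -noout -text -in mail.mydomain.pem
```

If the hashes differ, regenerate the certificate from the correct CSR before going any further.
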

#### If you purchase a certificate

If you purchase a certificate from a vendor, it will ask you to upload that CSR to its system, as it is used as the input to generate the SSL/TLS certificate. The certificate will be accessible as a file (such as **mail.mydomain.pem**). Many SSL vendors also require you to download an intermediate certificate. If this is the case, you must combine the two certificate files into one, so the email service can process them both in combination. You can append a third-party intermediate certificate to your certificate with:


```
`$ cat gd_bundle-g2-g1.crt >> mail.mydomain.pem`
```

Appending with `>>` matters here: redirecting the combined output back onto **mail.mydomain.pem** with `>` would truncate the original certificate before `cat` reads it.

Notice that the output's file extension is **.pem**, which stands for Privacy-Enhanced Mail.

Now you have the two files you need to configure your email services for enhanced security: the private key file, **mail.mydomain.key**, and the public combined certificate file, **mail.mydomain.pem**.

### Create a safe directory for your files

Whether you created your own key or bought one from a vendor, create a safe, root-owned directory for the two files you created above. An example workflow to create a safe place would be:


```
$ mkdir /etc/pki/tls
$ chown root:root /etc/pki/tls
$ chmod 700 /etc/pki/tls
```

Make sure to set the permissions on your files after you copy them into **/etc/pki/tls** with:


```
`$ chmod 600 /etc/pki/tls/*`
```

### Configure your SMTP and IMAP services

Next, configure both the SMTP and the IMAP services to use the new security certificates. The programs used in this example for SMTP and IMAP are **postfix** and **dovecot**.

Edit **/etc/postfix/main.cf** in your preferred text editor. Add the following lines:


```
smtpd_use_tls = yes
smtpd_tls_cert_file = /etc/pki/tls/mail.mydomain.pem
smtpd_tls_key_file = /etc/pki/tls/mail.mydomain.key
```

### Customize your config

The following options allow you to disable/enable different ciphers, protocols, etc.:


```
smtpd_tls_eecdh_grade = strong
smtpd_tls_protocols= !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_protocols= !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtpd_tls_mandatory_ciphers = high
smtpd_tls_security_level=may
smtpd_tls_ciphers = high
tls_preempt_cipherlist = yes
smtpd_tls_mandatory_exclude_ciphers = aNULL, MD5 , DES, ADH, RC4, PSD, SRP, 3DES, eNULL
smtpd_tls_exclude_ciphers = aNULL, MD5 , DES, ADH, RC4, PSD, SRP, 3DES, eNULL
smtp_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
smtp_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
```

Edit **/etc/dovecot/dovecot.conf** by adding these three lines:


```
ssl = required
ssl_cert = </etc/pki/tls/mail.mydomain.pem
ssl_key = </etc/pki/tls/mail.mydomain.key
```

Add the following options to disable/enable different ciphers, protocols, and more (I'll leave understanding and considering these up to you):


```
ssl_cipher_list = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:ALL:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!RC4:!SSLv2
ssl_prefer_server_ciphers = yes
ssl_protocols = !SSLv2 !SSLv3 !TLSv1 !TLSv1.1
ssl_min_protocol = TLSv1.2
```

### Set context for SELinux

If your Linux distribution has SELinux enabled, set the correct SELinux context for your new certificate files.
+ +For Postfix SELinux: + + +``` +`$ chcon -u system_u -t cert_t mail.mydomain.*` +``` + +For Dovecot SELinux: + + +``` +`$ chcon -u system_u -t dovecot_cert_t mail.mydomain.*` +``` + +Restart both services and connect with your updated email client configurations. Some email clients will auto-detect the new port numbers; others will require you to update them. + +### Test your setup + +Quickly test from the command line with **openssl** and the **s_client** plugin: + + +``` +$ openssl s_client -connect mail.mydomain.com:993 +$ openssl s_client -starttls imap -connect mail.mydomain.com:143 +$ openssl s_client -starttls smtp -connect mail.mydomain.com:587 +``` + +These test commands will show a plethora of data about the connection, certificate, cipher, session, and protocol you're using. This is not only a good way to validate that the new configuration is working but also to confirm you're using the appropriate certificate and security settings you defined in the **postfix** or **dovecot** configuration files. + +Stay secure! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/securing-linux-email + +作者:[Marc Skinner][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/marc-skinner +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/newsletter_email_mail_web_browser.jpg?itok=Lo91H9UH (email or newsletters via inbox and browser) +[2]: https://opensource.com/article/19/11/internet-security-tls-ssl-certificate-authority +[3]: https://www.openssl.org/ +[4]: https://en.wikipedia.org/wiki/Root_certificate From 3a0ea87cb0f427dc80d1393deee304b1b02b91bd Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 28 Apr 2020 00:56:50 +0800 Subject: [PATCH 010/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200427=20DevOps?= =?UTF-8?q?=20vs.=20Agile:=20Do=20they=20have=20anything=20in=20common=3F?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200427 DevOps vs. Agile- Do they have anything in common.md --- ... Agile- Do they have anything in common.md | 85 +++++++++++++++++++ 1 file changed, 85 insertions(+) create mode 100644 sources/tech/20200427 DevOps vs. Agile- Do they have anything in common.md diff --git a/sources/tech/20200427 DevOps vs. Agile- Do they have anything in common.md b/sources/tech/20200427 DevOps vs. Agile- Do they have anything in common.md new file mode 100644 index 0000000000..a8fcf422c6 --- /dev/null +++ b/sources/tech/20200427 DevOps vs. Agile- Do they have anything in common.md @@ -0,0 +1,85 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (DevOps vs. Agile: Do they have anything in common?) +[#]: via: (https://opensource.com/article/20/4/devops-vs-agile-common) +[#]: author: (Taz Brown https://opensource.com/users/heronthecli) + +DevOps vs. Agile: Do they have anything in common? +====== +Agile and DevOps have many differences, yet they both seek to address +complexity, improve quality, and innovate around software design. +![Women in tech boardroom][1] + +The topic of DevOps vs. Agile is almost like debating iPhone vs. Android—everyone has an opinion, and emotions can become heated, especially if people disagree. 
+ +After writing _[DevOps v. Agile: What's the difference?][2]_ and reading the comments on the article, I wanted to add some more thoughts—including how some of my thinking has changed on the topic. + +My perspective comes from where I am now but also where I have been. I used to be a systems administrator and infrastructure engineer, and now I am a senior scrum master with a major utility company in Missouri. (I actually was a scrum master before I was an admin or engineer, but I digress.) + +My team consists of six frontend software engineers and IT programmer analysts, a business analyst, two product owners, and me. Recently, we learned that management wants our team to become a [DevSecOps][3] team, so our core scrum team is working with a DevSecOps team that is helping us make the transition. No one is naive to the fact that this will not be easy, but the DevSecOps team's experience gives us confidence that we can succeed. + +Our team's manager recently hired a senior software engineer who will drive the DevSecOps goal. As a scrum master, I will continue to focus on continuous improvement. The team is fairly young, so they don't have expansive work experience, but they are smart and driven, and there is much room for greatness. In addition, our entire organization is going through an Agile transformation, so most people are new to all things Agile, including the [Agile Manifesto][4] and the [Five Scrum Values][5]. + +![Spray paint on a container][6] + +Kyle Glenn on Unsplash + +### Agile, Scrum, DevOps, and more + +There is a clear relationship between DevOps and Agile. Agile is the methodology, Scrum is the framework, and DevOps falls under the agile [umbrella][7] along with kanban, lean, large-scale Scrum, Extreme Programming, Crystal, and more. For example, our Scrum team is an Agile team that will operate as a DevSecOps team. + +Neither DevOps nor Agile is about the tools. Rather, both are about the [mindset and culture][8]. When it is done right, teams think and act differently and achieve greater results, including faster software delivery, continuous integration (CI), continuous delivery (CD), continuous improvement, working software, faster solutions, more collaboration, and fewer silos. Additional results are seen in quality testing, better automation, and improved systems, processes, and practices. + +#### Common concepts + +Some of the Agile concepts they have in common are associated with the Agile Manifesto; the most familiar of the 12 principles are the first four: + + * Individual and interactions over processes and tools + * Working software over comprehensive documentation + * Customer collaboration over contract negotiations + * Responding to change over following a plan + + + +Some of the DevOps concepts they have in common are the CI/CD pipeline, optimizing software delivery and quality, a culture of innovation, service-level objectives and indicators (SLOs and SLIs), collaboration across teams, and automation. + +#### DevOps and Agile benefits + +DevOps speeds up things between developers and operations. Furthermore, even though DevOps isn't about the tools, the fact that the dev and ops teams use the same tech stack creates a shared language and empathy between the two. Our Scrum team uses Jira to track all bugs, enhancements, and team performance. Common DevOps tools are Jenkins, AWS, SonarQube, GitHub, Splunk, and Ansible. While the tools differ from team to team, the mindset and culture should be common across all. 
+ +DevOps also creates less division between dev and ops and a sense of understanding of what it's like to walk in each other's shoes because now they function as one. + +Agile teams continuously deliver often and fast, adapting incrementally along the way. Working in two-week sprints appears to be the sweet spot for most software- or product-delivery teams. Agile teams may utilize DevOps principles in their work (e.g., implementing a CI/CD pipeline), and dev teams that work with ops are likely to work in the same two-week increments. + +DevOps traditionally leads to continuous deployment, delivery, and integration. Teamwork is integrated; problems and failures are jointly owned by development, operations, and other entities, such as quality assurance (QA), testing, automation, etc. + +### Summing up + +I believe that Agile and DevOps breathe the same air, with many concepts and theories crossing between them. + +While I have no doubts that there will be counter opinions and even some sharply worded disagreements with my opinions, I think we would all agree that Agile and DevOps seek to address complexity, improve quality, and innovate around software design. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/devops-vs-agile-common + +作者:[Taz Brown][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/heronthecli +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/christina-wocintechchat-com-rg1y72ekw6o-unsplash_1.jpg?itok=MoIv8HlK (Women in tech boardroom) +[2]: https://opensource.com/article/20/2/devops-vs-agile +[3]: https://opensource.com/article/19/1/what-devsecops +[4]: https://agilemanifesto.org/ +[5]: https://www.scrumalliance.org/about-scrum/values +[6]: https://opensource.com/sites/default/files/uploads/roomtogrow_kyleglenn.jpg (Spray paint on a container) +[7]: https://opensource.com/article/20/4/kanban-devops +[8]: https://opensource.com/article/19/5/values-devops-mindset From cbfd6769ce8bc91db18dbcc9f86b1b06e8e8b8e0 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 28 Apr 2020 00:57:31 +0800 Subject: [PATCH 011/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200427=20Compar?= =?UTF-8?q?ing=20subscription,=20pay-per-bug,=20and=20consulting=20softwar?= =?UTF-8?q?e=20business=20models?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md --- ...and consulting software business models.md | 81 +++++++++++++++++++ 1 file changed, 81 insertions(+) create mode 100644 sources/tech/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md diff --git a/sources/tech/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md b/sources/tech/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md new file mode 100644 index 0000000000..c7d76c5590 --- /dev/null +++ b/sources/tech/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md @@ -0,0 +1,81 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Comparing subscription, pay-per-bug, and 
consulting software business models) +[#]: via: (https://opensource.com/article/20/4/open-source-business) +[#]: author: (Jos Poortvliet https://opensource.com/users/jospoortvliet) + +Comparing subscription, pay-per-bug, and consulting software business models +====== +Learn how NextCloud achieves business revenue that is one-third +consulting and two-thirds subscription. +![A chair in a field.][1] + +At FOSS Backstage this year, I hosted a discussion on open source models and shared why I think subscriptions are a great way to support open source products. + +### Picking a business model + +The company I co-founded, Nextcloud, only offers its services to customers on a (per user) subscription model. Some customers ask us during the sales conversation why they cannot just get access to our engineers on an hourly basis as they get from third party consultants: pay for the bug fix or support case, and be done? + +When we discussed how to build our new company in 2016 during our very first "[hackweek][2]," our business model was an important conversation. The team discussed different approaches, and we decided to go for a subscription model. Such a model felt more in line with our mission and principles to [create a great product and run a trustworthy company.][3] + +Why? While hourly rates seem like an easy way to get a quick fix for a problem, getting paid for fixing issues creates an incentive to create more buggy software, which of course, was not how we want to run our business. + +Let's look a bit deeper at the choices. + +### Subscription + +As a customer, you want the most reliable software. And when (not if, when) things go wrong, you want them fixed as quickly as possible. Our company offers customers a subscription to our services. Such a subscription includes direct and unlimited access to our engineers the moment you run into a problem. These two points are very important. It means that a customer has the certainty of having the very best resources at their fingertips at the moment they need them, and as much as they need them, in a way that only the original software vendor can deliver. + +For that, they pay a fee that scales with the number of users. + +### Pay-per-bug + +What if, instead, you could pay per issue fixed? In a way, this choice is similar to insurance. Indeed, when you need cold medicine once a year, the cost of a health insurance policy with a monthly premium seems prohibitive. But when a car accident crushes your spine and your medical costs skyrocket, not having to sell your house makes up for that premium and then some. + +There is also the bigger picture: The incentives potentially created by a pay-per-bug model. It is a simple, albeit cynical calculation: If we got paid per bug we fix for customers, leaving more bugs in our product would increase our income. Now we're not saying every company offering product consulting by the hour is extorting their customers, merely that this backward incentive exists, even when most moral businesses would not give in to it. + +Conversely, now that we offer insurance, we lower our support costs by building a better product, just like art insurance providers do what they can to ensure the art kept at their customers' exhibition is secure to keep their costs low. Our support team deeply cares about how our customers use our product: a better setup and good security practices not only make happier customers; it saves them time. 
+ +Or, in other words, it helps maximize the ROI or TEI the customer gains from deploying Nextcloud. + +That is what we'd call a win-win and a business model an open source company can be proud of. + +### Consulting + +Now I discuss this business model regularly with others, and I invariably get told: "You might be right, but most of my customers only want to pay for the bugs we fix, or for features. Consulting is our main source of income! And I have no illusions about changing that." + +Yes, that's an issue. We open source folk honestly have a sales and marketing issue. And yes, you might think that, as a marketing guy, I just see everything as a nail I can hit with my communications hammer. But really, communication is core to the issue—how you communicate your value—which, in turn, requires you to be aware of your value. + +Recently, I had some challenging discussions with users on our forums who were upset that our engineers had not fixed their issues quickly enough. These users were clearly business users, and giving free support on GitHub obviously isn't exactly a great advertisement for the need for a subscription. So some abuse was thrown our way, and I took the hit. When complaining about it all to a fellow entrepreneurial friend of mine, he remarked: "Isn't that good news? They were frustrated and angry because clearly, you have something to offer, they need it, and are annoyed that they are forced to pay you for it. A good place to be in if you want to sell something!" Right he was, of course. + +The lesson I want to share with this anecdote is: You have something to offer. Your current customers will certainly be upset if you tell them that, going forward, they can only get your services under a subscription. But the angrier they are, the more you know they _need_ you, and you should persist! + +But moving to subscription from consulting requires a great deal of discipline, including saying "no" often. We have hard, internal rules, and one of these is that sales can only sell consulting valued at up to 50% of the subscription value, and only for deals greater than $10,000. + +But it has worked. The breakdown of our business revenue is now about one-third consulting and two-thirds subscription. Engineering hours for consulting work are not the limiting factor for expanding our customer base; sales and sales engineering is. We doubled our order intake last year and plan to do the same this year. + +### Bottomline: Sustainability + +For us, a better product and more sustainable business follows from the subscription model. It also benefits software engineers, who prefer to build a better product and spend less time on debugging issues with customers. A pay-by-the-hour model, with all the time tracking and incentive to keep customers dependent, does not work for us. + +What has your experience been? Tell us in the comments. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/open-source-business + +作者:[Jos Poortvliet][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jospoortvliet +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_WorkInPublic_4618517_1110_CS_A.png?itok=RwVrWArk (A chair in a field.) 
+[2]: https://nextcloud.com/blog/invitation-to-our-hackweek-in-stuttgart/ +[3]: https://nextcloud.com/blog/the-nextcloud-mission-and-principles/ From 383bb876b7559bda0658f1ced1a979786fb2f999 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 28 Apr 2020 00:57:59 +0800 Subject: [PATCH 012/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200426=206=20ti?= =?UTF-8?q?ps=20for=20securing=20your=20WordPress=20website?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200426 6 tips for securing your WordPress website.md --- ...ips for securing your WordPress website.md | 175 ++++++++++++++++++ 1 file changed, 175 insertions(+) create mode 100644 sources/tech/20200426 6 tips for securing your WordPress website.md diff --git a/sources/tech/20200426 6 tips for securing your WordPress website.md b/sources/tech/20200426 6 tips for securing your WordPress website.md new file mode 100644 index 0000000000..757ff42d30 --- /dev/null +++ b/sources/tech/20200426 6 tips for securing your WordPress website.md @@ -0,0 +1,175 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (6 tips for securing your WordPress website) +[#]: via: (https://opensource.com/article/20/4/wordpress-security) +[#]: author: (Lucy Carney https://opensource.com/users/lucy-carney) + +6 tips for securing your WordPress website +====== +Even beginners can—and should—take these steps to protect their +WordPress sites against cyberattacks. +![A lock on the side of a building][1] + +Already powering over 30% of the internet, WordPress is the fastest-growing content management system (CMS) in the world—and it's not hard to see why. With tons of customization available through coding and plugins, top-notch SEO, and a supreme reputation for blogging, WordPress has certainly earned its popularity. + +However, with popularity comes other, less appealing attention. WordPress is a common target for intruders, malware, and cyberattacks—in fact, WordPress accounted for around [90% of hacked CMS platforms][2] in 2019. + +Whether you're a first-time WordPress user or an experienced developer, there are important steps you can take to protect your WordPress website. The following six key tips will get you started. + +### 1\. Choose reliable hosting + +Hosting is the unseen foundation of all websites—without it, you can't publish your site online. But hosting does much more than simply host your site. It's also responsible for site speed, performance, and security. + +The first thing to do is to check if a host includes SSL security in its plans. + +SSL is an essential security feature for all websites, whether you're running a small blog or a large online store. You'll need a more [advanced SSL certificate][3] if you're accepting payments, but for most sites, the basic free SSL should be fine. + +Other security features to look out for include: + + * Frequent, automatic offsite backups + * Malware and antivirus scanning and removal + * Distributed denial of service (DDoS) protection + * Real-time network monitoring + * Advanced firewall protection + + + +In addition to these digital security features, it's worth thinking about your hosting provider's _physical_ security measures as well. These include limiting access to data centers with security guards, CCTV, and two-factor or biometric authentication. + +### 2\. 
Use security plugins + +One of the best—and easiest—ways of protecting your website's security is to install a security plugin, such as [Sucuri][4], which is an open source, GPLv2 licensed project. Security plugins are vitally important because they automate security, which means you can focus on running your site rather than committing all your time to fighting off online threats. + +These plugins detect and block malicious attacks and alert you about any issues that require your attention. In short, they constantly work in the background to protect your site, meaning you don't have to stay awake 24/7 to fight off hackers, bugs, and other digital nasties. + +A good security plugin will provide all the essential security features you need for free, but some advanced features require a paid subscription. For example, you'll need to pay if you want to unlock [Sucuri's website firewall][5]. Enabling a web application firewall (WAF) blocks common threats and adds an extra layer of security to your site, so it's a good idea to look for this feature when choosing a security plugin. + +### 3\. Choose trustworthy plugins and themes + +The joy of WordPress is that it is open source, so anyone and everyone can pitch in with themes and plugins that they've developed. This can also pose problems when it comes to picking a high-quality theme or plugin. + +It serves to be cautious when picking a free theme or plugin, as some are poorly designed—or worse, may hide malicious code. + +To avoid this, always source free themes and plugins from reputable sources, such as the WordPress library. Always read reviews and research the developer to see if they've built any other programs. + +Outdated or poorly designed themes and plugins can leave "backdoors" open for attackers or bugs to get into your site, which is why it pays to be careful in your choices. However, you should also be wary of nulled or cracked themes. These are premium themes that have been compromised by hackers and are for sale illegally. You might buy a nulled theme believing that it's all above-board—only to have your site damaged by hidden malicious code. + +To avoid nulled themes, don't get drawn in by discounted prices, and always stick to reputable stores, such as the official [WordPress directory][6]. If you're looking elsewhere, stick to large and trusted stores, such as [Themify][7], a theme and plugin store that has been running since 2010. Themify ensures all its WordPress themes pass the [Google Mobile-Friendly][8] test and are open source under the [GNU General Public License][9]. + +### 4\. Run regular updates + +It's a fundamental WordPress rule: _always keep your site up to date._ However, it's a rule not everyone sticks to—in fact, only [43% of WordPress sites][10] are running the latest version. + +The problem is that when your site becomes outdated, it becomes susceptible to glitches, bugs, intrusions, and crashes because it falls behind on security and performance fixes. Outdated sites can't fix bugs the same way as updated sites can, and attackers can tell which sites are outdated. This means they can search for the most vulnerable sites and attack accordingly. + +This is why you should always run your site on the latest version of WordPress. And in order to keep your security at its strongest, you must update your plugins and themes as well as your core WordPress software. 
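
If you manage your site over SSH, you can also run these updates from the command line. One way to do it, assuming the WP-CLI tool is installed on the server and you run it from the WordPress installation directory, is a short routine like this sketch:

```
# See whether a new WordPress core release is available
wp core check-update

# Apply core, plugin, and theme updates
wp core update
wp plugin update --all
wp theme update --all
```

Pairing this with the automatic backups mentioned earlier means you can roll back quickly if an update misbehaves.
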
+ +If you choose a managed WordPress hosting plan, you might find that your provider will check and run updates for you—be clear whether your host offers software _and_ plugin updates. If not, you can install an open source plugin manager, such as the GPLv2-licensed [Easy Updates Manager plugin][11], as an alternative. + +### 5\. Strengthen your logins + +Aside from creating a secure WordPress website through carefully choosing your theme and installing security plugins, you also need to safeguard against unauthorized access through logins. + +#### Password protection + +The first and simplest way to strengthen your login security is to change your password—especially if you're using an [easily guessed phrase][12] such as "123456" or "qwerty." + +Instead, try to use a long passphrase rather than a password, as they are harder to crack. The best way is to use a series of unrelated words strung together that you find easy to remember. + +Here are some other tips: + + * Never reuse passwords + * Don't include obvious words such as family members' names or your favorite football team + * Never share your login details with anyone + * Include capitals and numbers to add complexity to your passphrase + * Don't write down or store your login details anywhere + * Use a [password manager][13] + + + +#### Change your login URL + +It's a good idea to change your default login web address from the standard format: yourdomain.com/wp-admin. This is because hackers know this is the default URL, so you risk brute-force attacks by not changing it. + +To avoid this, change the URL to something different. Use an open source plugin such as the GPLv2-licensed [WPS Hide Login][14] for safe, quick, and easy customization. + +#### Apply two-factor authentication + +For extra protection against unauthorized logins and brute-force attacks, you should add two-factor authentication. This means that even if someone _does_ get access to your login details, they'll need a code that's sent directly to your phone to gain access to your WordPress site's admin. + +Adding two-factor authentication is pretty easy. Simply install yet another plugin—this time, search the WordPress Plugin Directory for "two-factor authentication," and select the plugin you want. One option is [Two Factor][15], a popular GPLv2 licensed project that has over 10,000 active installations. + +#### Limit login attempts + +WordPress tries to be helpful by letting you guess your login details as many times as you like. However, this is also helpful to hackers trying to gain unauthorized access to your WordPress site to release malicious code. + +To combat brute-force attacks, install a plugin that limits login attempts and set how many guesses you want to allow. + +### 6\. Disable file editing + +This isn't such a beginner-friendly step, so don't attempt it unless you're a confident coder—and always back up your site first! + +That said, disabling file editing _is_ an important measure if you're really serious about protecting your WordPress website. If you don't hide your files, it means anyone can edit your theme and plugin code straight from the admin area—which is dangerous if an intruder gets in. 
+
+To deny unauthorized access to your **wp-config.php** file, open your site's **.htaccess** file and enter:
+
+
+```
+<Files wp-config.php>
+order allow,deny
+deny from all
+</Files>
+```
+
+Or, to remove the theme and plugin editing options from your WordPress admin area completely, edit your **wp-config.php** file by adding:
+
+
+```
+define( 'DISALLOW_FILE_EDIT', true );
+```
+
+Once you've saved and reloaded the file, the plugin and theme editors will disappear from your menus within the WordPress admin area, stopping anyone from editing your theme or plugin code—including you. Should you need to restore access to your theme and plugin code, just delete the code you added to your **wp-config.php** file when you disabled editing.
+
+Whether you block unauthorized access or totally disable file editing, it's important to take action to protect your site's code. Otherwise, it's easy for unwelcome visitors to edit your files and add new code. This means an attacker could use the editor to gather data from your WordPress site or even use your site to launch attacks on others.
+
+For an easier way of hiding your files, you can use a security plugin that will do it for you, such as Sucuri.
+
+### WordPress security recap
+
+WordPress is an excellent open source platform that should be enjoyed by beginners and developers alike without the fear of becoming a victim of an attack. Sadly, these threats aren't going anywhere anytime soon, so it's vital to stay on top of your site's security.
+
+Using the measures outlined above, you can create a stronger, more secure level of protection for your WordPress site and ensure a much more enjoyable experience for yourself.
+
+Staying secure is an ongoing commitment rather than a one-time checklist, so be sure to revisit these steps regularly and stay alert when building and using your CMS.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/wordpress-security + +作者:[Lucy Carney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lucy-carney +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building) +[2]: https://cyberforces.com/en/wordpress-most-hacked-cms +[3]: https://opensource.com/article/19/11/internet-security-tls-ssl-certificate-authority +[4]: https://wordpress.org/plugins/sucuri-scanner/ +[5]: https://sucuri.net/website-firewall/ +[6]: https://wordpress.org/themes/ +[7]: https://themify.me/ +[8]: https://developers.google.com/search/mobile-sites/ +[9]: http://www.gnu.org/licenses/gpl.html +[10]: https://wordpress.org/about/stats/ +[11]: https://wordpress.org/plugins/stops-core-theme-and-plugin-updates/ +[12]: https://www.forbes.com/sites/kateoflahertyuk/2019/04/21/these-are-the-worlds-most-hacked-passwords-is-yours-on-the-list/#4f157c2f289c +[13]: https://opensource.com/article/16/12/password-managers +[14]: https://wordpress.org/plugins/wps-hide-login/ +[15]: https://en-gb.wordpress.org/plugins/two-factor/ From 06ff74321accae13a94eec3af4a27783f98c2815 Mon Sep 17 00:00:00 2001 From: qfzy1233 Date: Tue, 28 Apr 2020 08:37:54 +0800 Subject: [PATCH 013/178] translating --- .../20200424 16 Things to do After Installing Ubuntu 20.04.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) rename {sources => source}/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md (99%) diff --git a/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md b/source/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md similarity index 99% rename from sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md rename to source/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md index cca31a0426..5637651850 100644 --- a/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md +++ b/source/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (qfzy1233) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From f624b93b436b8df0d8ca53185d38c9cd9e44faa5 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 28 Apr 2020 08:43:08 +0800 Subject: [PATCH 014/178] translated --- ...v5- Why there is IPv4, IPv6 but no IPv5.md | 76 ------------------- ...v5- Why there is IPv4, IPv6 but no IPv5.md | 76 +++++++++++++++++++ 2 files changed, 76 insertions(+), 76 deletions(-) delete mode 100644 sources/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md create mode 100644 translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md diff --git a/sources/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md b/sources/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md deleted file mode 100644 index 4934186f76..0000000000 --- a/sources/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md +++ /dev/null @@ -1,76 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (What Happened 
to IPv5? Why there is IPv4, IPv6 but no IPv5?) -[#]: via: (https://itsfoss.com/what-happened-to-ipv5/) -[#]: author: (John Paul https://itsfoss.com/author/john/) - -What Happened to IPv5? Why there is IPv4, IPv6 but no IPv5? -====== - -If you have spent any amount of time in the world of the internet, you should have heard about the IPv4 and IPv6 protocols that our computers use every day. - -One question that you might be asking is: Why there is no IPv5? Why IPv6 came after IPv4 and not IPv5? Was there ever a IPv5 and if yes, whatever happened to IPv5? - -The answer is yes, there was an IPv5…sort of. Let me quickly explain a few things around it. - -### The early history of the internet - -![ARPA Logical Map in 1977 | Image courtesy: Wikipedia][1] - -In the late 1960s, the US Department of Defense’s [Advanced Research Projects Agency][2] (ARPA) started a [project][3] to link computers across the country. The initial goal was to create a networked system of all of the ARPA-funded computers across the country. - -Since this was the first time a network of this scale was put together, they were also creating the technology and hardware as they went. One of the first things they worked was an internet protocol (IP) named [Transmission Control Protocol][4] (TCP). This protocol “reliable, ordered, and error-checked delivery of a stream of octets (bytes) between applications running on hosts communicating via an IP network”. Basically, it made sure data got where it needed to go safely. - -Originally, TCP was designed to be [“a host-level, end-to-end protocol and a packaging and routing protocol”][5]. However, they realized that they needed to split the protocol to make it more manageable. It was decided that IP would handle packaging and routing. - -By this time TCP had gone through three versions, so the new protocol became known as IPv4. - -### The birth of IPv5 - -IPv5 started life under a different name: Internet Stream Protocol (or ST). It was created to experiment with streaming voice and video [“by Apple, NeXT, and Sun Microsystems”][6]. - -This new protocol was capable of “transferring data packets on specific frequencies while maintaining communication”. - -### So what happened to IPv5? - -![][7] - -IPv5 was never accepted as an official internet protocol. This was mainly due to the 32-bit limitation. - -IPV5 used the same addressing system as IPv4. Each address was made up of four sets of numbers between 0 and 255. This limited the number of possible addresses to [4.3 billion][6]. - -In the early 1970s, that might have seemed like more than the world would ever need. However, the explosive growth of the Internet proved that idea wrong. In 2011, the world officially ran out of the IPv4 addresses. - -In the 1990s, a new project was started to work on the next generation of internet protocol (IPng). This led to the 128-bit IPv6. An IPv6 address contains a [“series of eight 4-character hexadecimal numbers”][6] that can contain numbers from 0 to 9 and letters from A to F. Unlike IPv4, IPv6 had trillions of possible addresses, so we should be safe for a while. - -Meanwhile, IPv5 laid the groundwork for the voice-over-IP technology that we use to communicate all over the world today. **So, I guess in some small way, you could say that IPv5 still survives to this day**. - -I hope you liked this anecdote about internet history. You may read some other [trivia article about Linux and tech in general][8]. 
-
-If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][9].
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/what-happened-to-ipv5/
-
-作者:[John Paul][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/john/
-[b]: https://github.com/lujun9972
-[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/Arpa_internet.png?fit=800%2C573&ssl=1
-[2]: https://en.wikipedia.org/wiki/DARPA
-[3]: https://en.wikipedia.org/wiki/ARPANET
-[4]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol
-[5]: https://fcw.com/articles/2006/07/31/what-ever-happened-to-ipv5.aspx
-[6]: https://www.lifewire.com/what-happened-to-ipv5-3971327
-[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/what-happened-to-ipv5.png?ssl=1
-[8]: https://itsfoss.com/category/story/
-[9]: https://reddit.com/r/linuxusersgroup
diff --git a/translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md b/translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md
new file mode 100644
index 0000000000..71bfe2137a
--- /dev/null
+++ b/translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md
@@ -0,0 +1,76 @@
+[#]: collector: (lujun9972)
+[#]: translator: (geekpi)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (What Happened to IPv5? Why there is IPv4, IPv6 but no IPv5?)
+[#]: via: (https://itsfoss.com/what-happened-to-ipv5/)
+[#]: author: (John Paul https://itsfoss.com/author/john/)
+
+IPv5 发生了什么?为什么有 IPv4、IPv6 但没有 IPv5?
+======
+
+如果你花过很多时间在互联网上,那么你应该已经听说过计算机每天使用的 IPv4 和 IPv6 协议。
+
+你可能会问的一个问题是:为什么没有 IPv5?为什么 IPv6 在 IPv4 之后而不是 IPv5 之后出现?是否有 IPv5,如果是,那么 IPv5 发生了什么?
+
+答案是肯定的,曾经有一个 IPv5……算是吧。让我解释一下这里发生的事。
+
+### 互联网的早期历史
+
+![ARPA Logical Map in 1977 | Image courtesy: Wikipedia][1]
+
+在 1960 年代后期,美国国防部的[高级研究计划局][2] (ARPA) 发起了一个[项目][3]来连接全国的计算机。最初的目标是创建一个由全国 ARPA 资助的所有计算机组成的网络系统。
+
+由于这是第一次将如此规模的网络整合在一起,因此他们也在不断发展自己的技术和硬件。他们的第一件工作是名为[传输控制协议][4] (TCP) 的互联网协议 (IP)。该协议可以“在通过 IP 网络通信的主机上运行的应用程序之间,提供可靠、有序、并带有错误检测的八位字节(即字节)流传输”。基本上,它确保数据安全到达。
+
+最初,TCP 被设计为[“主机级别的端到端协议以及打包和路由协议”][5]。但是,他们意识到他们需要拆分协议以使其更易于管理。于是决定由 IP 处理打包和路由。
+
+那时,TCP 已经经历了三个版本,因此新协议被称为 IPv4。
+
+### IPv5 的诞生
+
+IPv5 以不同的名称开始使用:互联网流协议(或 ST)。它是[由 Apple、NeXT 和 Sun Microsystems][6] 创建的,用来对语音和视频的流式传输进行实验。
+
+该新协议能够“在保持通信的同时在特定频率上传输数据包”。
+
+### 那么 IPv5 发生了什么?
+ +![][7] + +IPv5 从未被接受为正式的互联网协议。这主要是由于 32 位限制。 + +IPV5 使用与 IPv4 相同的寻址系统。每个地址由 0 到 255 之间的四组数字组成。这将可能的地址数量限制为 [43 亿][6]。 + +在 1970 年代初,这似乎比全世界所需要的还要多。但是,互联网的爆炸性增长证明了这一想法是错误的。2011年,世界正式耗尽了 IPv4 地址。 + +在 1990 年代,一个新项目开始致力于下一代互联网协议 (IPng)。这导致了 128 位的 IPv6。IPv6 地址包含 [“8 组 4 字符的十六进制数字”][6],它可以包含从 0 到 9 的数字和从 A 到 F 的字母。与 IPv4 不同,IPv6 拥有数万亿个可能的地址,因此我们应该能安全一阵子。 + +同时,IPv5 奠定了 VoIP 的基础,而该技术已被我们用于当今世界范围内的通信。**因此,我想往小了说,你可以说 IPv5 仍然可以保留到了今天**。 + +希望你喜欢有关互联网历史的轶事。你可以阅读其他[关于 Linux 和技术的琐事文章] [8]。 + +如果你觉得这篇文章有趣,请花一点时间在社交媒体、Hacker News 或 [Reddit][9] 上分享它。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/what-happened-to-ipv5/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/Arpa_internet.png?fit=800%2C573&ssl=1 +[2]: https://en.wikipedia.org/wiki/DARPA +[3]: https://en.wikipedia.org/wiki/ARPANET +[4]: https://en.wikipedia.org/wiki/Transmission_Control_Protocol +[5]: https://fcw.com/articles/2006/07/31/what-ever-happened-to-ipv5.aspx +[6]: https://www.lifewire.com/what-happened-to-ipv5-3971327 +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/what-happened-to-ipv5.png?ssl=1 +[8]: https://itsfoss.com/category/story/ +[9]: https://reddit.com/r/linuxusersgroup From ad862bdb1dcffac6e1a97fe6bc5d4939992d7ed8 Mon Sep 17 00:00:00 2001 From: geekpi Date: Tue, 28 Apr 2020 08:50:47 +0800 Subject: [PATCH 015/178] translating --- ...ing Pixel Art With Free and Open Source Editor Pixelorama.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md b/sources/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md index fc3c4b7bab..b4b0949dab 100644 --- a/sources/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md +++ b/sources/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 78f8d77d2f68681f6d686c3c2a150d6dddae0ec9 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 28 Apr 2020 09:43:46 +0800 Subject: [PATCH 016/178] PRF @geekpi --- ...ings You Should Know About Ubuntu 20.04.md | 50 +++++++++---------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/translated/tech/20200422 Things You Should Know About Ubuntu 20.04.md b/translated/tech/20200422 Things You Should Know About Ubuntu 20.04.md index 30a3049bfe..c6679159bf 100644 --- a/translated/tech/20200422 Things You Should Know About Ubuntu 20.04.md +++ b/translated/tech/20200422 Things You Should Know About Ubuntu 20.04.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Things You Should Know About Ubuntu 20.04) @@ -10,13 +10,11 @@ 关于 Ubuntu 20.04 你应该了解的事情 ====== -[Ubuntu 20.04][1] 即将发布,你可能对升级、安装等有一些问题和疑问。 +[Ubuntu 20.04][1] 已经发布,你可能对升级、安装等有一些问题和疑问。 -我在各种社交媒体渠道上主持了一些问答环节,回答像你这样的读者的疑虑。 +我在各种社交媒体渠道上主持了一些问答环节,回答像你这样的读者的疑虑。我将列出这些关于 Ubuntu 20.04 的常见问题,并给出答案。我希望它能帮助你消除你的疑虑。如果你仍有问题,请随时在下面的评论栏提问。 -我将列出这些关于Ubuntu 
20.04的常见问题,并给出答案。我希望它能帮助你消除你的疑虑。如果你仍有问题,请随时在下面的评论栏提问。 - -### Ubuntu 20.04: 已回复的问题 +### Ubuntu 20.04:已回复的问题 ![][2] @@ -28,25 +26,25 @@ Ubuntu 20.04 LTS 于 2020 年 4 月 23 日发布。所有变种,如 Kubuntu、 #### Ubuntu 20.04 的系统要求是什么? -对于默认的 GNOME 版本,应至少具有 4GB 的内存、2 GHz 双核处理器和至少 25GB 的磁盘空间。 +对于默认的 GNOME 版本,应至少具有 4GB 的内存、2GHz 双核处理器和至少 25GB 的磁盘空间。 其他 [Ubuntu 变种][3]可能有不同的系统要求。 #### 我可以在 32 位系统上使用 Ubuntu 20.04 吗? -完全不行。你不能在 32 位系统上使用 Ubuntu 20.04。即使你使用的是 32 位 Ubuntu 18.04,也不能升级到 Ubuntu 20.04。过去几年有 32 位的系统 ISO。 +完全不行。你不能在 32 位系统上使用 Ubuntu 20.04。即使你使用的是 32 位 Ubuntu 18.04,也不能升级到 Ubuntu 20.04。32 位的系统 ISO 是以前用的。 ![Error while upgrading 32-bit Ubuntu 18.04 to Ubuntu 20.04][4] -#### 我可以在Ubuntu 20.04上使用 Wine 吗? +#### 我可以在 Ubuntu 20.04 上使用 Wine 吗? -是的,你仍然可以在 Ubuntu 20.04 上使用 Wine,因为仍然有 32 位库,来用于 Wine 和 [Steam Play][5] 所需的软件包。 +是的,你仍然可以在 Ubuntu 20.04 上使用 Wine,因为仍然用于 Wine 和 [Steam Play][5] 软件包所需的 32 位库。 #### 我需要购买 Ubuntu 20.04 或许可证? -不,Ubuntu 是完全免费使用的。你不必像在 Windows 中那样购买许可证密钥或激活 Ubuntu。 +不,Ubuntu 完全可以免费使用。你不必像在 Windows 中那样购买许可证密钥或激活 Ubuntu。 -Ubuntu 的下载页会请求你捐赠一些资金,如果你想捐赠一些钱来帮助开发这个系统,这完全取决于你。 +Ubuntu 的下载页会请求你捐赠一些资金,如果你想为开发这个强大的操作系统捐钱,由你自己决定。 #### GNOME 版本是什么? @@ -54,21 +52,23 @@ Ubuntu 20.04 有 GNOME 3.36。 #### Ubuntu 20.04 的性能是否优于 Ubuntu 18.04? -是的,在几个方面。Ubuntu 20.04 安装速度更快,甚至加速更快。我在 4:40 的视频中展示了性能比较。 +是的,在几个方面。Ubuntu 20.04 系统速度更快,甚至超快。我在下面这个视频的 4:40 处展示了性能对比。 -在 GNOME 3.36 中,滚动、窗口动画和其他 UI 元素更加流畅,并提供了更流畅的体验。 +- [video](https://img.linux.net.cn/static/video/Top%207%20Best%20Features%20You%27ll%20Love%20in%20Ubuntu%2020.04-lpq8pm_xkSE.mp4) + +在 GNOME 3.36 中,滚动、窗口动画和其他 UI 元素更加流畅,提供了更流畅的体验。 #### Ubuntu 20.04 将支持多长时间? -它是一个长期支持 (LTS) 版本,与任何 LTS 版本一样,它将在五年内得到支持。这意味着 Ubuntu 20.04 将在 2025 年 4 月之前获得安全和维护更新。 +它是一个长期支持(LTS)版本,与任何 LTS 版本一样,它将在五年内得到支持。这意味着 Ubuntu 20.04 将在 2025 年 4 月之前获得安全和维护更新。 #### 升级到 Ubuntu 20.04 时,是否会丢失数据? -你可以从 Ubuntu 19.10 或 Ubuntu 18.04 升级到 Ubuntu 20.04。你无需创建 live USB 并从中安装。你所需要的是一个良好的互联网连接,来下载约1.5GB 的数据。 +你可以从 Ubuntu 19.10 或 Ubuntu 18.04 升级到 Ubuntu 20.04。你无需创建 live USB 并从中安装。你所需要的是一个良好的互联网连接,来下载约 1.5GB 的数据。 -从现有系统升级不会破坏你的文件。你应该会有所有文件,并且大多数现有软件应具有相同的版本或升级后的版本。 +从现有系统升级不会破坏你的文件。你应该会留有所有文件,并且大多数现有软件应具有相同的版本或升级后的版本。 -如果你使用了某些第三方工具或[其他 PPA][6],升级过程将禁用它们。如果 Ubuntu 20.04 适合这些其他存储库,那么可以再次启用它们。 +如果你使用了某些第三方工具或[其他 PPA][6],升级过程将禁用它们。如果 Ubuntu 20.04 可以使用这些其他存储库,那么可以再次启用它们。 升级大约需要一个小时,重启后,你将登录到新版本。 @@ -78,21 +78,21 @@ Ubuntu 20.04 有 GNOME 3.36。 ![][7] -如果你正在使用 Ubuntu 19.10 并有正确的更新设置(如前面部分所述),那么应在 Ubuntu 18.04 后的几天内通知你升级到 Ubuntu 20.04。 +如果你正在使用 Ubuntu 19.10 并有正确的更新设置(如前面部分所述),那么应在发布后的几天内通知你升级到 Ubuntu 20.04。 -对于 Ubuntu 18.04 用户,可能需要几周时间才能正式通知他们 Ubuntu 18.04 可用。可能,你可能会在第一个点版本 Ubuntu 20.04.1 后获得提示。 +对于 Ubuntu 18.04 用户,可能需要几周时间才能正式通知他们 Ubuntu 20.04 可用。可能,你可能会在第一个点版本 Ubuntu 20.04.1 后获得提示。 #### 如果我升级到 Ubuntu 20.04,我可以降级到 19.10 或 18.04 吗? -不行。虽然升级到新版本很容易,但无法选择降级。 如果你想回到 Ubuntu 18.04,你将重新[安装 Ubuntu18.04][8]。 +不行。虽然升级到新版本很容易,但无法选择降级。如果你想回到 Ubuntu 18.04,你需要重新[安装 Ubuntu 18.04][8]。 #### 我使用的是 Ubuntu 18.04 LTS。我应该升级到 Ubuntu 20.04 LTS 吗? 
这取决于你。如果你对 Ubuntu 20.04 中的新功能印象深刻,并希望上手尝试,那么你应该升级。 -如果你想要一个更稳定的系统,我建议等待第一个点版本 Ubuntu 20.04.1,新版本将有 bug 修复。20.04.1 通常在 Ubuntu 20.04 发布后大约两个月到来。 +如果你想要一个更稳定的系统,我建议等待第一个点版本 Ubuntu 20.04.1,新版本会有 bug 修复。20.04.1 通常在 Ubuntu 20.04 发布后大约两个月到来。 -其他情况下,我建议尽早升级到 Ubuntu 20.04。Ubuntu 20.04 具有更新的内核、性能改进,尤其是仓库中有更新版本的软件。 +无论是那种情况,我都建议你或早或晚升级到 Ubuntu 20.04。Ubuntu 20.04 具有更新的内核、性能改进,尤其是仓库中有更新版本的软件。 在外部磁盘上进行备份,并且有良好的互联网连接,升级不应成为问题。 @@ -113,13 +113,13 @@ via: https://itsfoss.com/ubuntu-20-04-faq/ 作者:[Abhishek Prakash][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://itsfoss.com/author/abhishek/ [b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/ubuntu-20-04-release-features/ +[1]: https://linux.cn/article-12142-1.html [2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu_20_04_faq.jpg?ssl=1 [3]: https://itsfoss.com/which-ubuntu-install/ [4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/ubuntu-32-bit.jpg?ssl=1 From 6cb125fc9c54e0bfe75b5484b68ea65eb3e4af8b Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 28 Apr 2020 09:46:57 +0800 Subject: [PATCH 017/178] PUB @geekpi https://linux.cn/article-12158-1.html --- .../20200422 Things You Should Know About Ubuntu 20.04.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200422 Things You Should Know About Ubuntu 20.04.md (98%) diff --git a/translated/tech/20200422 Things You Should Know About Ubuntu 20.04.md b/published/20200422 Things You Should Know About Ubuntu 20.04.md similarity index 98% rename from translated/tech/20200422 Things You Should Know About Ubuntu 20.04.md rename to published/20200422 Things You Should Know About Ubuntu 20.04.md index c6679159bf..a6a56b25f2 100644 --- a/translated/tech/20200422 Things You Should Know About Ubuntu 20.04.md +++ b/published/20200422 Things You Should Know About Ubuntu 20.04.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12158-1.html) [#]: subject: (Things You Should Know About Ubuntu 20.04) [#]: via: (https://itsfoss.com/ubuntu-20-04-faq/) [#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) From 517d740add3258ab448a75da02d182bc53ab4809 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 28 Apr 2020 14:32:02 +0800 Subject: [PATCH 018/178] =?UTF-8?q?Revert=20"=E7=94=B3=E9=A2=86=E5=8E=9F?= =?UTF-8?q?=E6=96=87"?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20200424 16 Things to do After Installing Ubuntu 20.04.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) rename {source => sources}/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md (99%) diff --git a/source/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md b/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md similarity index 99% rename from source/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md rename to sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md index 5637651850..cca31a0426 100644 --- a/source/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md +++ b/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: 
(qfzy1233) +[#]: translator: ( ) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 34e3debb2bc3b4231736bc31b9cb2fa1196cb83f Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 28 Apr 2020 14:37:57 +0800 Subject: [PATCH 019/178] Rename sources/tech/20200427 DevOps vs. Agile- Do they have anything in common.md to sources/talk/20200427 DevOps vs. Agile- Do they have anything in common.md --- .../20200427 DevOps vs. Agile- Do they have anything in common.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => talk}/20200427 DevOps vs. Agile- Do they have anything in common.md (100%) diff --git a/sources/tech/20200427 DevOps vs. Agile- Do they have anything in common.md b/sources/talk/20200427 DevOps vs. Agile- Do they have anything in common.md similarity index 100% rename from sources/tech/20200427 DevOps vs. Agile- Do they have anything in common.md rename to sources/talk/20200427 DevOps vs. Agile- Do they have anything in common.md From 4c5f404a98520faada11aed9869662a07163d4b6 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Tue, 28 Apr 2020 14:43:44 +0800 Subject: [PATCH 020/178] Rename sources/tech/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md to sources/talk/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md --- ...ption, pay-per-bug, and consulting software business models.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => talk}/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md (100%) diff --git a/sources/tech/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md b/sources/talk/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md similarity index 100% rename from sources/tech/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md rename to sources/talk/20200427 Comparing subscription, pay-per-bug, and consulting software business models.md From 1f19b90198533641d6525b26574c628c2f5e2f85 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 28 Apr 2020 15:17:31 +0800 Subject: [PATCH 021/178] APL --- .../20200421 Using Python to visualize COVID-19 projections.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200421 Using Python to visualize COVID-19 projections.md b/sources/tech/20200421 Using Python to visualize COVID-19 projections.md index 96fcb61521..a1ff4658fb 100644 --- a/sources/tech/20200421 Using Python to visualize COVID-19 projections.md +++ b/sources/tech/20200421 Using Python to visualize COVID-19 projections.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From ff377d7295dedd255ef591344db484c003e53208 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 28 Apr 2020 17:15:17 +0800 Subject: [PATCH 022/178] TSL --- ...ython to visualize COVID-19 projections.md | 255 ------------------ ...ython to visualize COVID-19 projections.md | 239 ++++++++++++++++ 2 files changed, 239 insertions(+), 255 deletions(-) delete mode 100644 sources/tech/20200421 Using Python to visualize COVID-19 projections.md create mode 100644 translated/tech/20200421 Using Python to visualize COVID-19 projections.md diff --git a/sources/tech/20200421 Using Python to visualize COVID-19 projections.md b/sources/tech/20200421 Using Python to visualize COVID-19 projections.md deleted file mode 100644 
index a1ff4658fb..0000000000 --- a/sources/tech/20200421 Using Python to visualize COVID-19 projections.md +++ /dev/null @@ -1,255 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Using Python to visualize COVID-19 projections) -[#]: via: (https://opensource.com/article/20/4/python-data-covid-19) -[#]: author: (AnuragGupta https://opensource.com/users/999anuraggupta) - -Using Python to visualize COVID-19 projections -====== -I'll demonstrate how to create two visualizations of the spread of a -virus across the globe, provided open data and using open source -libraries. -![Colorful sound wave graph][1] - -Using [Python][2] and some graphing libraries, you can project the total number of confirmed cases of COVID-19, and also display the total number of deaths for a country (this article uses India as an example) on a given date. Humans sometimes need help interpreting and processing the meaning of data, so this article also demonstrates how to create an animated horizontal bar graph for five countries, showing the variation of cases by date. - -### Projecting confirmed cases and deaths for India - -This is done in three steps. - -#### 1\. Download data - -Scientific data isn't always open, but fortunately, many modern science and healthcare organizations are eager to share information with each other and the public. Data about COVID-19 cases is available online, and it's updated frequently. - -To parse the data, you first must download it:  - -Load the data directly into a Pandas DataFrame. Pandas provides a function, **read_csv()**, which can take a URL and give back a DataFrame object, as shown below: - - -``` -import pycountry -import plotly.express as px -import pandas as pd -URL_DATASET = r'' -df1 = pd.read_csv(URL_DATASET) -print(df1.head(3))  # Get first 3 entries in the dataframe -print(df1.tail(3))  # Get last 3 entries in the dataframe -``` - -The top row of the data set contains column names: - - 1. Date - 2. Country - 3. Confirmed - 4. Recovered - 5. Deaths - - - -The output of the **head** query includes a unique identifier (not listed as a column) plus an entry for each column: - - -``` -0 2020-01-22 Afghanistan 0 0 0 -1 2020-01-22 Albania 0 0 0 -1 2020-01-22 Algeria 0 0 0 -``` - -The output of the **tail** query is similar but contains the tail end of the data set: - - -``` -12597 2020-03-31 West Bank and Gaza 119 18 1 -12598 2020-03-31 Zambia 35 0 0 -12599 2020-03-31 Zimbabwe 8 0 1 -``` - -From the output, you can see that the DataFrame (**df1**) has the following columns: - - 1. Date - 2. Country - 3. Confirmed - 4. Recovered - 5. Dead - - - -Further, you can see that the **Date** column has entries starting from January 22 to March 31. This database is updated daily, so you will have current values. - -#### 2\. Select data for India - -In this step, we will select only those rows in the DataFrame that include India. This is shown in the script below: - - -``` -#### ----- Step 2 (Select data for India)---- -df_india = df1[df1['Country'] == 'India'] -print(df_india.head(3)) -``` - -#### 3\. Plot data - -Here we create a bar chart. We will put the dates on the X-axis and the number of confirmed cases and the number of deaths on the Y-axis. There are a few noteworthy things about this part of the script which are as follows: - - * The line of code: **plt.rcParams["_figure.figsize"_]=20,20** is meant only for Jupyter. So remove it if you are using some other IDE. 
- - * Notice the line of code: **ax1 = plt.gca()**. To ensure that both the plots i.e. for confirmed cases as well as for deaths are plotted on the same graph, we need to give to the second graph the **ax** object of the plot. So we use **gca()** to do this. (By the way, 'gca' stands for 'get current axis'). - - - - -The complete script is given below: - - -``` -#  Author:- Anurag Gupta # email:- [999.anuraggupta@gmail.com][3] -%matplotlib inline -import matplotlib.pyplot as plt -import pandas as pd - -#### ----- Step 1 (Download data)---- -URL_DATASET = r'' -df1 = pd.read_csv(URL_DATASET) -# print(df1.head(3))  # Uncomment to see the dataframe - -#### ----- Step 2 (Select data for India)---- -df_india = df1[df1['Country'] == 'India'] -print(df_india.head(3)) - -#### ----- Step 3 (Plot data)---- -# Increase size of plot -plt.rcParams["figure.figsize"]=20,20  # Remove if not on Jupyter -# Plot column 'Confirmed' -df_india.plot(kind = 'bar', x = 'Date', y = 'Confirmed', color = 'blue') - -ax1 = plt.gca() -df_india.plot(kind = 'bar', x = 'Date', y = 'Deaths', color = 'red', ax = ax1) -plt.show() -``` - -The entire script is [available on GitHub][4]. - -### Creating an animated horizontal bar graph for five countries - -Note for Jupyter: To run this in Jupyter as a dynamic animation rather than as a static png, you need to add a magic command at the beginning of your cell, namely: **%matplotlib notebook**. This will keep the figure alive instead of displaying a static png file and can hence also show animations. If you are on another IDE, remove this line. - -#### 1\. Download the data - -This step is exactly the same as in the previous script, and therefore, it need not be repeated. - -#### 2\. Create a list of all dates - -If you examine the data you downloaded, you notice that it has a column **Date**. Now, this column has a date value for each country. So the same date is occurring a number of times. We need to create a list of dates with only unique values. This will be used on the X-axis of our bar charts. We have a line of code like: **list_dates = df[_‘Date’_].unique()**. The **unique()** method will pick up only the unique values for each date. - -#### 3\. Pick five countries and create an **ax** object - -Take a list of five countries. (You can choose whatever countries you prefer, or even increase or decrease the number of countries). I have also taken a list of five colors for the bars of each country. (You can change this too if you like). One important line of code here is: **fig, ax = plt.subplots(figsize=(15, 8))**. This is needed to create an **ax** object. - -#### 4\. Write the call back function - -If you want to do animation in Matplotlib, you need to create an object of a class called **matplotlib.animation.FuncAnimation**. The signature of this class is available online. The constructor of this class, apart from other parameters, also takes a parameter called **func**, and you have to give this parameter a callback function. So in this step, we will write the callback function, which is repeatedly called in order to render the animation. - -#### 5\. Create **FuncAnimation** object - -This step has partly been explained in the previous step. 
- -Our code to create an object of this class is: - - -``` -my_anim = animation.FuncAnimation(fig = fig, func = plot_bar, -                    frames= list_dates, blit=True, -                    interval=20) -``` - -The three important parameters to be given are: - - * **fig**, which must be given a fig object, which we created earlier. - * **func**, which must be the call back function. - * **frames**, which must contain the variable on which the animation is to be done. Here in our case, it will be the list of dates we created earlier. - - - -#### 6\. Save the animation to an mp4 file - -You can save the animation created into an mp4 file. But for this you need **ffmpeg**. You can download this using pip by **pip install ffmpeg-python**, or using conda (on Jupyter) **install -c conda-forge ffmpeg**. - -And finally, you can run the animation using **plt.show()**. Please note that on many platforms, the **ffmpeg** may not work properly and may require further "tweaking." - - -``` -%matplotlib notebook -#  Author:- Anurag Gupta # email:- [999.anuraggupta@gmail.com][3] -import pandas as pd -import matplotlib.pyplot as plt -import matplotlib.animation as animation -from time import sleep - -#### ---- Step 1:- Download data -URL_DATASET = r'' -df = pd.read_csv(URL_DATASET, usecols = ['Date', 'Country', 'Confirmed']) -# print(df.head(3)) # uncomment this to see output - -#### ---- Step 2:- Create list of all dates -list_dates = df['Date'].unique() -# print(list_dates) # Uncomment to see the dates - -#### --- Step 3:- Pick 5 countries. Also create ax object -fig, ax = plt.subplots(figsize=(15, 8)) -# We will animate for these 5 countries only -list_countries = ['India', 'China', 'US', 'Italy', 'Spain'] -# colors for the 5 horizontal bars -list_colors = ['black', 'red', 'green', 'blue', 'yellow'] - -### --- Step 4:- Write the call back function -# plot_bar() is the call back function used in FuncAnimation class object -def plot_bar(some_date): -    df2 = df[df['Date'].eq(some_date)] -    ax.clear() -    # Only take Confirmed column in descending order -    df3 = df2.sort_values(by = 'Confirmed', ascending = False) -    # Select the top 5 Confirmed countries -    df4 = df3[df3['Country'].isin(list_countries)] -    # print(df4)  # Uncomment to see that dat is only for 5 countries -    sleep(0.2)  # To slow down the animation -    # ax.barh() makes a horizontal bar plot. -    return ax.barh(df4['Country'], df4['Confirmed'], color= list_colors) - -###----Step 5:- Create FuncAnimation object--------- -my_anim = animation.FuncAnimation(fig = fig, func = plot_bar, -                    frames= list_dates, blit=True, -                    interval=20) - -### --- Step 6:- Save the animation to an mp4 -# Place where to save the mp4. Give your file path instead -path_mp4 = r'C:\Python-articles\population_covid2.mp4'   -# my_anim.save(path_mp4, fps=30, extra_args=['-vcodec', 'libx264']) -my_anim.save(filename = path_mp4, writer = 'ffmpeg', -             fps=30, -             extra_args= ['-vcodec', 'libx264', '-pix_fmt', 'yuv420p']) -plt.show() -``` - -The complete script is [available on GitHub][5]. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/4/python-data-covid-19 - -作者:[AnuragGupta][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/999anuraggupta -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/colorful_sound_wave.png?itok=jlUJG0bM (Colorful sound wave graph) -[2]: https://opensource.com/resources/python -[3]: mailto:999.anuraggupta@gmail.com -[4]: https://raw.githubusercontent.com/ag999git/jupyter_notebooks/master/corona_bar_india -[5]: https://raw.githubusercontent.com/ag999git/jupyter_notebooks/master/corona_bar_animated diff --git a/translated/tech/20200421 Using Python to visualize COVID-19 projections.md b/translated/tech/20200421 Using Python to visualize COVID-19 projections.md new file mode 100644 index 0000000000..3768686e8c --- /dev/null +++ b/translated/tech/20200421 Using Python to visualize COVID-19 projections.md @@ -0,0 +1,239 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using Python to visualize COVID-19 projections) +[#]: via: (https://opensource.com/article/20/4/python-data-covid-19) +[#]: author: (AnuragGupta https://opensource.com/users/999anuraggupta) + +使用 Python 来可视化 COVID-19 预测 +====== + +> 我将演示如何使用开源库利用提供的全球病毒传播的开放数据来创建两个可视效果。 + +![Colorful sound wave graph][1] + +使用 [Python][2] 和一些图形库,你可以预测出 COVID-19 确诊病例的总数,也可以显示一个国家(本文以印度为例)在给定日期的死亡总数。人们有时需要帮助解释和处理数据的意义,所以本文还演示了如何为五个国家创建一个动画横条形图,以显示按日期显示病例的变化。 + +### 印度的确诊病例和死亡人数预测 + +这要分三步来完成。 + +#### 1、下载数据 + +科学数据并不总是开放的,但幸运的是,许多现代科学和医疗机构都乐于相互之间及与公众共享信息。关于 COVID-19 病例的数据可以在网上查到,并且经常更新。 + +要解析这些数据,首先必须先下载。 。 + +直接将数据加载到 Pandas `DataFrame` 中。Pandas 提供了一个函数 `read_csv()`,它可以获取一个 URL 并返回一个 `DataFrame` 对象,如下所示。 + + +``` +import pycountry +import plotly.express as px +import pandas as pd +URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv' +df1 = pd.read_csv(URL_DATASET) +print(df1.head(3)) # 获取数据帧中的前 3 项 +print(df1.tail(3)) # 获取数据帧中的后 3 项 +``` + +数据集的顶行包含列名。 + +1. `Date` +2. `Country` +3. `Confirmed` +4. `Recovered` +5. `Deaths` + +`head` 查询的输出包括一个唯一的标识符(不作为列列出)和每个列的条目。 + +``` +0 2020-01-22 Afghanistan 0 0 0 +1 2020-01-22 Albania 0 0 0 +1 2020-01-22 Algeria 0 0 0 +``` + +`tail` 查询的输出类似,但包含数据集的尾端。 + +``` +12597 2020-03-31 West Bank and Gaza 119 18 1 +12598 2020-03-31 Zambia 35 0 0 +12599 2020-03-31 Zimbabwe 8 0 1 +``` + +从输出中,可以看到 DataFrame(`df1`)有以下几个列: + +1. 日期 +2. 国家 +3. 确诊 +4. 康复 +5. 
死亡 + +此外,你可以看到 `Date` 栏中的条目从 1 月 22 日开始到 3 月 31 日。这个数据库每天都会更新,所以你会有当前的值。 + +#### 2、选择印度的数据 + +在这一步中,我们将只选择 DataFrame 中包含印度的那些行。这在下面的脚本中可以看到。 + +``` +#### ----- Step 2 (Select data for India)---- +df_india = df1[df1['Country'] == 'India'] +print(df_india.head(3)) +``` + +#### 3、数据绘图 + +在这里,我们创建一个条形图。我们将把日期放在 X 轴上,把确诊的病例数和死亡人数放在 Y 轴上。这一部分的脚本有以下几个值得注意的地方。 + + * `plt.rcParams["_figure.figure.figsize"_]=20,20` 这一行代码只适用于 Jupyter。所以如果你使用其他 IDE,请删除它。 + * 注意这行代码:`ax1 = plt.gca()`。为了确保两个图,即确诊病例和死亡病例的图都被绘制在同一个图上,我们需要给第二个图的 `ax` 对象。所以我们使用 `gca()` 来完成这个任务。(顺便说一下,`gca` 代表“get current axis”) + +完整的脚本如下所示。 + +``` +# Author:- Anurag Gupta # email:- 999.anuraggupta@gmail.com +%matplotlib inline +import matplotlib.pyplot as plt +import pandas as pd + +#### ----- Step 1 (Download data)---- +URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv' +df1 = pd.read_csv(URL_DATASET) +# print(df1.head(3)) # Uncomment to see the dataframe + +#### ----- Step 2 (Select data for India)---- +df_india = df1[df1['Country'] == 'India'] +print(df_india.head(3)) + +#### ----- Step 3 (Plot data)---- +# Increase size of plot +plt.rcParams["figure.figsize"]=20,20 # Remove if not on Jupyter +# Plot column 'Confirmed' +df_india.plot(kind = 'bar', x = 'Date', y = 'Confirmed', color = 'blue') + +ax1 = plt.gca() +df_india.plot(kind = 'bar', x = 'Date', y = 'Deaths', color = 'red', ax = ax1) +plt.show() +``` + +整个脚本[可在 GitHub 上找到][4]。 + +#### 为五个国家创建一个动画水平条形图 + +关于 Jupyter 的注意事项:要在 Jupyter 中以动态动画的形式运行,而不是静态 png 的形式,你需要在单元格的开头添加一个神奇的命令,即: `%matplotlib notebook`。这将使图形保持动态,而不是显示静态的 png 文件,因此也可以显示动画。如果你在其他 IDE 上,请删除这一行。 + +#### 1、下载数据 + +这一步和前面的脚本完全一样,所以不需要重复。 + +#### 2、创建一个所有日期的列表 + +如果你检查你下载的数据,你会发现它有一列 `Date`。现在,这一列对每个国家都有一个日期值。因此,同一个日期会出现多次。我们需要创建一个只具有唯一值的日期列表。这会用在我们条形图的 X 轴上。我们有一行代码,如 `list_dates = df[_'Date'_].unique()`。`unique()` 方法将只提取每个日期的唯一值。 + +#### 3、挑选五个国家并创建一个 `ax` 对象。 + +做一个五个国家的名单。(你可以选择你喜欢的国家,甚至可以增加或减少国家的数量。)我也做了一个五个颜色的列表,每个国家的条形图的颜色对应一种。(如果你喜欢的话,也可以改一下。)这里有一行重要的代码是:`fig, ax = plt.subplots(figsize=(15, 8))`。这是创建一个 `ax` 对象所需要的。 + +#### 4、编写回调函数 + +如果你想在 Matplotlib 中做动画,你需要创建一个名为 `matplotlib.animation.FuncAnimation` 的类的对象。这个类的签名可以在网上查到。这个类的构造函数,除了其他参数外,还需要一个叫 `func` 的参数,你必须给这个参数一个回调函数。所以在这一步中,我们会写个回调函数,这个回调函数会被反复调用,以渲染动画。 + +#### 5、创建 `FuncAnimation` 对象 + +这一步在上一步中已经部分说明了。 + +我们创建这个类的对象的代码是: + +``` +my_anim = animation.FuncAnimation(fig = fig, func = plot_bar, + frames= list_dates, blit=True, + interval=20) +``` + +要给出的三个重要参数是: + +* `fig`,必须给出一个 fig 对象,也就是我们之前创建的 fig 对象。 +* `func`,必须是回调函数。 +* `frames`,必须包含要做动画的变量。在我们这里,它是我们之前创建的日期列表。 + +#### 6、将动画保存为 mp4 文件 + +你可以将创建的动画保存为 mp4 文件。但是,你需要 `ffmpeg`。你可以用 `pip` 下载:`pip install ffmpeg-python`,或者用 conda(在 Jupyter 上):`install -c conda-forge ffmpeg`。 + +最后,你可以使用 `plt.show()` 运行动画。请注意,在许多平台上,`ffmpeg` 可能无法正常工作,可能需要进一步“调整”。 + +``` +%matplotlib notebook +# Author:- Anurag Gupta # email:- 999.anuraggupta@gmail.com +import pandas as pd +import matplotlib.pyplot as plt +import matplotlib.animation as animation +from time import sleep + +#### ---- Step 1:- Download data +URL_DATASET = r'https://raw.githubusercontent.com/datasets/covid-19/master/data/countries-aggregated.csv' +df = pd.read_csv(URL_DATASET, usecols = ['Date', 'Country', 'Confirmed']) +# print(df.head(3)) # uncomment this to see output + +#### ---- Step 2:- Create list of all dates +list_dates = df['Date'].unique() +# print(list_dates) # Uncomment to see the dates + +#### --- Step 3:- Pick 5 countries. 
Also create ax object +fig, ax = plt.subplots(figsize=(15, 8)) +# We will animate for these 5 countries only +list_countries = ['India', 'China', 'US', 'Italy', 'Spain'] +# colors for the 5 horizontal bars +list_colors = ['black', 'red', 'green', 'blue', 'yellow'] + +### --- Step 4:- Write the call back function +# plot_bar() is the call back function used in FuncAnimation class object +def plot_bar(some_date): + df2 = df[df['Date'].eq(some_date)] + ax.clear() + # Only take Confirmed column in descending order + df3 = df2.sort_values(by = 'Confirmed', ascending = False) + # Select the top 5 Confirmed countries + df4 = df3[df3['Country'].isin(list_countries)] + # print(df4) # Uncomment to see that dat is only for 5 countries + sleep(0.2) # To slow down the animation + # ax.barh() makes a horizontal bar plot. + return ax.barh(df4['Country'], df4['Confirmed'], color= list_colors) + +###----Step 5:- Create FuncAnimation object--------- +my_anim = animation.FuncAnimation(fig = fig, func = plot_bar, + frames= list_dates, blit=True, + interval=20) + +### --- Step 6:- Save the animation to an mp4 +# Place where to save the mp4. Give your file path instead +path_mp4 = r'C:\Python-articles\population_covid2.mp4' +# my_anim.save(path_mp4, fps=30, extra_args=['-vcodec', 'libx264']) +my_anim.save(filename = path_mp4, writer = 'ffmpeg', + fps=30, + extra_args= ['-vcodec', 'libx264', '-pix_fmt', 'yuv420p']) +plt.show() +``` + +完整的脚本[可以在 GitHub 上找到][5]。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/python-data-covid-19 + +作者:[AnuragGupta][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/999anuraggupta +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/colorful_sound_wave.png?itok=jlUJG0bM (Colorful sound wave graph) +[2]: https://opensource.com/resources/python +[3]: mailto:999.anuraggupta@gmail.com +[4]: https://raw.githubusercontent.com/ag999git/jupyter_notebooks/master/corona_bar_india +[5]: https://raw.githubusercontent.com/ag999git/jupyter_notebooks/master/corona_bar_animated From 0186888ddb45ff748be764b1f41eb9d95fe785bd Mon Sep 17 00:00:00 2001 From: Brooke Lau Date: Tue, 28 Apr 2020 21:05:34 +0800 Subject: [PATCH 023/178] TSL --- .../20200425 Inlining optimisations in Go.md | 215 ------------------ .../20200425 Inlining optimisations in Go.md | 211 +++++++++++++++++ 2 files changed, 211 insertions(+), 215 deletions(-) delete mode 100644 sources/tech/20200425 Inlining optimisations in Go.md create mode 100644 translated/tech/20200425 Inlining optimisations in Go.md diff --git a/sources/tech/20200425 Inlining optimisations in Go.md b/sources/tech/20200425 Inlining optimisations in Go.md deleted file mode 100644 index 0e29d0d178..0000000000 --- a/sources/tech/20200425 Inlining optimisations in Go.md +++ /dev/null @@ -1,215 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (lxbwolf) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Inlining optimisations in Go) -[#]: via: (https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go) -[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) - -Inlining optimisations in Go -====== - -This is a post about how the Go compiler implements inlining and how this optimisation affects your Go 
code. - -_n.b._ This article focuses on _gc_, the de facto Go compiler from [golang.org][1]. The concepts discussed apply broadly to other Go compilers like gccgo and llgo but may differ in implementation and efficacy. - -### What is inlining? - -Inlining is the act of combining smaller functions into their respective callers. In the early days of computing this optimisation was typically performed by hand. Nowadays inlining is one of a class of fundamental optimisations performed automatically during the compilation process. - -### Why is inlining important? - -Inlining is important for two reasons. The first is it removes the overhead of the function call itself. The second is it permits the compiler to more effectively apply other optimisation strategies. - -#### Function call overhead - -Calling a function[1][2] in any language carries a cost. There are the overheads of marshalling parameters into registers or onto the stack (depending on the ABI) and reversing the process on return. Invoking a function call involves jumping the program counter from one point in the instruction stream to another which can cause a pipeline stall. Once inside the function there is usually some preamble required to prepare a new stack frame for the function to execute and a similar epilogue needed to retire the frame before returning to the caller. - -In Go a function call carries additional costs to support dynamic stack growth. On entry the amount of stack space available to the goroutine is compared to the amount required for the function. If insufficient stack space is available, the preamble jumps into the runtime logic that grows the stack by copying it to a new, larger, location. Once this is done the runtime jumps back to the start of the original function, the stack check is performed again, which now passes, and the call continues. In this way goroutines can start with a small stack allocation which grows only when needed.[2][3] - -This check is cheap–only a few instructions–and because goroutine stacks grows geometrically the check rarely fails. Thus, the branch prediction unit inside a modern processor can hide the cost of the stack check by assuming it will always be successful. In the case where the processor mis-predicts the stack check and has to discard the work done while it was executing speculatively, the cost of the pipeline stall is relatively small compared to the cost of the work needed for the runtime to grow a goroutines stack. - -While the overhead of the generic and Go specific components of each function call are well optimised by modern processors using speculative execution techniques, those overheads cannot be entirely eliminated, thus each function call carries with it a performance cost over and above the time it takes to perform useful work. As a function call’s overhead is fixed, smaller functions pay a larger cost relative to larger ones because they tend to do less useful work per invocation. - -The solution to eliminating these overheads must therefore be to eliminate the function call itself, which the Go compiler does, under certain conditions, by replacing the call to a function with the contents of the function. This is known as _inlining_ because it brings the body of the function in line with its caller. - -#### Improved optimisation opportunities - -Dr. Cliff Click describes inlining as _the_ optimisation performed by modern compilers as it forms the basis for optimisations like constant proportion and dead code elimination. 
In effect, inlining allows the compiler to _see_ _furthe_r, allowing it to observe, in the context that a particular function is being called, logic that can be further simplified or eliminated entirely. As inlining can be applied recursively optimisation decisions can be made not only in the context of each individual function, but also applied to the chain of functions in a call path. - -### Inlining in action - -The effects of inlining can be demonstrated with this small example - -``` -package main - -import "testing" - -//go:noinline -func max(a, b int) int { - if a > b { - return a - } - return b -} - -var Result int - -func BenchmarkMax(b *testing.B) { - var r int - for i := 0; i < b.N; i++ { - r = max(-1, i) - } - Result = r -} -``` - -Running this benchmark gives the following result:[3][4] - -``` -% go test -bench=.  -BenchmarkMax-4   530687617         2.24 ns/op -``` - -The cost of `max(-1, i)` is around 2.24 nanoseconds on my 2015 MacBook Air. Now let’s remove the `//go:noinline` pragma and see the result: - -``` -% go test -bench=.  -BenchmarkMax-4   1000000000         0.514 ns/op -``` - -From 2.24 ns to 0.51 ns, or according to `benchstat`, a 78% improvement. - -``` -% benchstat {old,new}.txt -name   old time/op  new time/op  delta -Max-4  2.21ns ± 1%  0.49ns ± 6%  -77.96%  (p=0.000 n=18+19) -``` - -Where did these improvements come from? - -First, the removal of the function call and associated preamble[4][5] was a major contributor. Pulling the contents of `max` into its caller reduced the number of instructions executed by the processor and eliminated several branches. - -Now the contents of `max` are visible to the compiler as it optimises `BenchmarkMax` it can make some additional improvements. Consider that once `max` is inlined, this is what the body of `BenchmarkMax` looks like to the compiler: - -``` -func BenchmarkMax(b *testing.B) { - var r int - for i := 0; i < b.N; i++ { - if -1 > i { - r = -1 - } else { - r = i - } - } - Result = r -} -``` - -Running the benchmark again we see our manually inlined version performs as well as the version inlined by the compiler - -``` -% benchstat {old,new}.txt -name   old time/op  new time/op  delta -Max-4  2.21ns ± 1%  0.48ns ± 3%  -78.14%  (p=0.000 n=18+18) -``` - -Now the compiler has access to the result of inlining `max` into `BenchmarkMax` it can apply optimisation passes which were not possible before. For example, the compiler has noted that `i` is initialised to `0` and only incremented so any comparison with `i` can assume `i` will never be negative. Thus, the condition `-1 > i` will never be true.[5][6] - -Having proved that `-1 > i` will never be true, the compiler can simplify the code to - -``` -func BenchmarkMax(b *testing.B) { - var r int - for i := 0; i < b.N; i++ { - if false { - r = -1 - } else { - r = i - } - } - Result = r -} -``` - -and because the branch is now a constant, the compiler can eliminate the unreachable path leaving it with - -``` -func BenchmarkMax(b *testing.B) { - var r int - for i := 0; i < b.N; i++ { - r = i - } - Result = r -} -``` - -Thus, through inlining and the optimisations it unlocks, the compiler has reduced the expression `r = max(-1, i)` to simply `r = i`. - -### The limits of inlining - -In this article I’ve discussed, so called, _leaf_ inlining; the act of inlining a function at the bottom of a call stack into its direct caller. 
Inlining is a recursive process, once a function has been inlined into its caller, the compiler may inline the resulting code into _its_ caller, as so on. For example, this code - -``` -func BenchmarkMaxMaxMax(b *testing.B) { - var r int - for i := 0; i < b.N; i++ { - r = max(max(-1, i), max(0, i)) - } - Result = r -} -``` - -runs as fast as the previous example as the compiler is able to repeatedly apply the optimisations outlined above to reduce the code to the same `r = i` expression. - -In the next article I’ll discuss an alternative inlining strategy when the Go compiler wishes to inline a function in the middle of a call stack. Finally I’ll discuss the limits that the compiler is prepared to go to to inline code, and which Go constructs are currently beyond its capability. - - 1. In Go, a method is just a function with a predefined formal parameter, the receiver. The relative costs of calling a free function vs a invoking a method, assuming that method is not called through an interface, are the same.[][7] - 2. Up until Go 1.14 the stack check preamble was also used by the garbage collector to stop the world by setting all active goroutine’s stacks to zero, forcing them to trap into the runtime the next time they made a function call. This system was [recently replaced][8] with a mechanism which allowed the runtime to pause an goroutine without waiting for it to make a function call.[][9] - 3. I’m using the `//go:noinline` pragma to prevent the compiler from inlining `max`. This is because I want to isolate the effects of inlining on `max` rather than disabling optimisations globally with `-gcflags='-l -N'`. I go into detail about the `//go:` comments in [this presentation][10].[][11] - 4. You can check this for yourself by comparing the output of `go test -bench=. -gcflags=-S` with and without the `//go:noinline` annotation.[][12] - 5. You can check this yourself with the `-gcflags=-d=ssa/prove/debug=on` flag.[][13] - - - -#### Related posts: - - 1. [Five things that make Go fast][14] - 2. [Why is a Goroutine’s stack infinite ?][15] - 3. [How to write benchmarks in Go][16] - 4. [Go’s hidden #pragmas][17] - - - --------------------------------------------------------------------------------- - -via: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go - -作者:[Dave Cheney][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://dave.cheney.net/author/davecheney -[b]: https://github.com/lujun9972 -[1]: https://github.com/golang/go -[2]: tmp.gBQ2tEtMHc#easy-footnote-bottom-1-4053 (In Go, a method is just a function with a predefined formal parameter, the receiver. The relative costs of calling a free function vs a invoking a method, assuming that method is not called through an interface, are the same.) -[3]: tmp.gBQ2tEtMHc#easy-footnote-bottom-2-4053 (Up until Go 1.14 the stack check preamble was also used by the garbage collector to stop the world by setting all active goroutine’s stacks to zero, forcing them to trap into the runtime the next time they made a function call. This system was recently replaced with a mechanism which allowed the runtime to pause an goroutine without waiting for it to make a function call.) -[4]: tmp.gBQ2tEtMHc#easy-footnote-bottom-3-4053 (I’m using the //go:noinline pragma to prevent the compiler from inlining max. 
This is because I want to isolate the effects of inlining on max rather than disabling optimisations globally with -gcflags='-l -N'. I go into detail about the //go: comments in this presentation.) -[5]: tmp.gBQ2tEtMHc#easy-footnote-bottom-4-4053 (You can check this for yourself by comparing the output of go test -bench=. -gcflags=-S with and without the //go:noinline annotation.) -[6]: tmp.gBQ2tEtMHc#easy-footnote-bottom-5-4053 (You can check this yourself with the -gcflags=-d=ssa/prove/debug=on flag.) -[7]: tmp.gBQ2tEtMHc#easy-footnote-1-4053 -[8]: https://github.com/golang/proposal/blob/master/design/24543-non-cooperative-preemption.md -[9]: tmp.gBQ2tEtMHc#easy-footnote-2-4053 -[10]: https://dave.cheney.net/2018/01/08/gos-hidden-pragmas -[11]: tmp.gBQ2tEtMHc#easy-footnote-3-4053 -[12]: tmp.gBQ2tEtMHc#easy-footnote-4-4053 -[13]: tmp.gBQ2tEtMHc#easy-footnote-5-4053 -[14]: https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast (Five things that make Go fast) -[15]: https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite (Why is a Goroutine’s stack infinite ?) -[16]: https://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go (How to write benchmarks in Go) -[17]: https://dave.cheney.net/2018/01/08/gos-hidden-pragmas (Go’s hidden #pragmas) diff --git a/translated/tech/20200425 Inlining optimisations in Go.md b/translated/tech/20200425 Inlining optimisations in Go.md new file mode 100644 index 0000000000..80f86ae47e --- /dev/null +++ b/translated/tech/20200425 Inlining optimisations in Go.md @@ -0,0 +1,211 @@ +[#]: collector: (lujun9972) +[#]: translator: (lxbwolf) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Inlining optimisations in Go) +[#]: via: (https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +Go 中的内联优化 +====== + +本文讨论 Go 编译器是如何实现内联的以及这种优化方法如何影响你的 Go 代码。 + +*请注意:*本文重点讨论 *gc*,实际上是 [golang.org](https://github.com/golang/go) 的 Go 编译器。讨论到的概念可以广泛用于其他 Go 编译器,如 gccgo 和 llgo,但它们在实现方式和功能上可能有所差异。 + +### 内联是什么? + +内联就是把简短的函数在调用它的地方展开。在计算机发展历程的早期,这个优化是由程序员手动实现的。现在,内联已经成为编译过程中自动实现的基本优化过程的其中一步。 + +### 为什么内联很重要? 
+ +有两个原因。第一个是它消除了函数调用本身的虚耗。第二个是它使得编译器能更高效地执行其他的优化策略。 + +#### 函数调用的虚耗 + +在任何语言中,调用一个函数 [1][2] 都会有消耗。把参数编组进寄存器或放入栈中(取决于 ABI),在返回结果时倒序取出时会有虚耗。引入一次函数调用会导致程序计数器从指令流的一点跳到另一点,这可能导致管道阻塞。函数内部通常有前置处理,需要为函数执行准备新的栈帧,还有与前置相似的后续处理,需要在返回给调用方之前释放栈帧空间。 + +在 Go 中函数调用会消耗额外的资源来支持栈的动态增长。在进入函数时,goroutine 可用的栈空间与函数需要的空间大小相等。如果可用空间不同,前置处理就会跳到把数据复制到一块新的、更大的空间的运行时逻辑,而这会导致栈空间变大。当这个复制完成后,运行时跳回到原来的函数入口,再执行栈空间检查,函数调用继续执行。这种方式下,goroutine 开始时可以申请很小的栈空间,在有需要时再申请更大的空间。[2][3] + +这个检查消耗很小 — 只有几个指令 — 而且由于 goroutine 是成几何级数增长的,因此这个检查很少失败。这样,现代处理器的分支预测单元会通过假定检查肯定会成功来隐藏栈空间检查的消耗。当处理器预测错了栈空间检查,必须要抛弃它推测性执行的操作时,与为了增加 goroutine 的栈空间运行时所需的操作消耗的资源相比,管道阻塞的代价更小。 + +虽然现代处理器可以用预测性执行技术优化每次函数调用中的泛型和 Go 特定的元素的虚耗,但那些虚耗不能被完全消除,因此在每次函数调用执行必要的工作过程中都会有性能消耗。一次函数调用本身的虚耗是固定的,与更大的函数相比,调用小函数的代价更大,因为在每次调用过程中它们做的有用的工作更少。 + +消除这些虚耗的方法必须是要消除函数调用本身,Go 的编译器就是这么做的,在某些条件下通过用函数的内容来替换函数调用来实现。这个过程被称为*内联*,因为它在函数调用处把函数体展开了。 + +#### 改进的优化机会 + +Cliff Click 博士把内联描述为现代编译器做的优化措施,像常量传播(译注:此处作者笔误,原文为 constant proportion,修正为 constant propagation)和死码消除一样,都是编译器的基本优化方法。实际上,内联可以让编译器看得更深,使编译器可以观察调用的特定函数的上下文内容,可以看到能继续简化或彻底消除的逻辑。由于可以递归地执行内联,因此不仅可以在每个独立的函数上下文处进行这种优化,也可以在整个函数调用链中进行。 + +### 实践中的内联 + +下面这个例子可以演示内联的影响: + +```go +package main + +import "testing" + +//go:noinline +func max(a, b int) int { + if a > b { + return a + } + return b +} + +var Result int + +func BenchmarkMax(b *testing.B) { + var r int + for i := 0; i < b.N; i++ { + r = max(-1, i) + } + Result = r +} +``` + +运行这个基准,会得到如下结果:[3](https://github.com/LCTT/TranslateProject/blob/master/sources/tech/tmp.gBQ2tEtMHc#easy-footnote-bottom-3-4053) + +```bash +% go test -bench=. +BenchmarkMax-4 530687617 2.24 ns/op +``` + +在我的 2015 MacBook Air 上 `max(-1, i)` 的耗时约为 2.24 纳秒。现在去掉 `//go:noinline` 编译指令,再看下结果: + +```bash +% go test -bench=. +BenchmarkMax-4 1000000000 0.514 ns/op +``` + +从 2.24 纳秒降到了 0.51 纳秒,或者从 `benchstat` 的结果可以看出,有 78% 的提升。 + +```bash +% benchstat {old,new}.txt +name old time/op new time/op delta +Max-4 2.21ns ± 1% 0.49ns ± 6% -77.96% (p=0.000 n=18+19) +``` + +这个提升是从哪儿来的呢? 
+ +首先,移除掉函数调用以及与之关联的前置处理 [4](https://github.com/LCTT/TranslateProject/blob/master/sources/tech/tmp.gBQ2tEtMHc#easy-footnote-bottom-4-4053) 是主要因素。把 `max` 函数的函数体在调用处展开,减少了处理器执行的指令数量并且消除了一些分支。 + +现在由于编译器优化了 `BenchmarkMax`,因此它可以看到 `max` 函数的内容,进而可以做更多的提升。当 `max` 被内联后,`BenchmarkMax` 呈现给编译器的样子,看起来是这样的: + +```go +func BenchmarkMax(b *testing.B) { + var r int + for i := 0; i < b.N; i++ { + if -1 > i { + r = -1 + } else { + r = i + } + } + Result = r +} +``` + +再运行一次基准,我们看一下手动内联的版本和编译器内联的版本的表现: + +```bash +% benchstat {old,new}.txt +name old time/op new time/op delta +Max-4 2.21ns ± 1% 0.48ns ± 3% -78.14% (p=0.000 n=18+18) +``` + +现在编译器能看到在 `BenchmarkMax` 里内联 `max` 的结果,可以执行以前不能执行的优化措施。例如,编译器注意到 `i` 初始值为 `0`,仅做自增操作,因此所有与 `i` 的比较都可以假定 `i` 不是负值。这样条件表达式 `-1 > i` 永远不是 true。[5](https://github.com/LCTT/TranslateProject/blob/master/sources/tech/tmp.gBQ2tEtMHc#easy-footnote-bottom-5-4053) + +证明了 `-1 > i` 永远不为 true 后,编译器可以把代码简化为: + +```go +func BenchmarkMax(b *testing.B) { + var r int + for i := 0; i < b.N; i++ { + if false { + r = -1 + } else { + r = i + } + } + Result = r +} +``` + +并且因为分支里是个常量,编译器可以通过下面的方式移除不会走到的分支: + +```go +func BenchmarkMax(b *testing.B) { + var r int + for i := 0; i < b.N; i++ { + r = i + } + Result = r +} +``` + +这样,通过内联和由内联解锁的优化过程,编译器把表达式 `r = max(-1, i))` 简化为 `r = i`。 + +### 内联的限制 + +本文中我论述的内联称作*叶子*内联;把函数调用栈中最底层的函数在调用它的函数处展开的行为。内联是个递归的过程,当把函数内联到调用它的函数 A 处后,编译器会把内联后的结果代码再内联到 A 的调用方,这样持续内联下去。例如,下面的代码: + +```go +func BenchmarkMaxMaxMax(b *testing.B) { + var r int + for i := 0; i < b.N; i++ { + r = max(max(-1, i), max(0, i)) + } + Result = r +} +``` + +与之前的例子中的代码运行速度一样快,因为编译器可以对上面的代码重复地进行内联,也把代码简化到 `r = i` 表达式。 + +下一篇文章中,我会论述当 Go 编译器想要内联函数调用栈中间的某个函数时选用的另一种内联策略。最后我会论述编译器为了内联代码准备好要达到的极限,这个极限 Go 现在的能力还达不到。 + +1. 在 Go 中,一个方法就是一个有预先定义的形参和接受者的函数。假设这个方法不是通过接口调用的,调用一个无消耗的函数所消耗的代价与引入一个方法是相同的。 +2. 在 Go 1.14 以前,栈检查的前置处理也被 gc 用于 STW,通过把所有活跃的 goroutine 栈空间设为 0,来强制它们切换为下一次函数调用时的运行时状态。这个机制[最近被替换](https://github.com/golang/proposal/blob/master/design/24543-non-cooperative-preemption.md)为一种新机制,新机制下运行时可以不用等 goroutine 进行函数调用就可以暂停 goroutine。 +3. 我用 `//go:noinline` 编译指令来阻止编译器内联 `max`。这是因为我想把内联 `max` 的影响与其他影响隔离开,而不是用 `-gcflags='-l -N'` 选项在全局范围内禁止优化。 +4. 你可以自己通过比较 `go test -bench=. -gcflags=-S`有无 `//go:noinline` 注释时的不同结果来验证一下。 +5. 你可以用 `-gcflags=-d=ssa/prove/debug=on` 选项来自己验证一下。 + +#### 相关文章: + +1. [使 Go 变快的 5 件事](https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast) +2. [为什么 Goroutine 的栈空间会无限增长?](https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite) +3. [Go 中怎么写基准测试](https://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go) +4. [Go 中隐藏的编译指令](https://dave.cheney.net/2018/01/08/gos-hidden-pragmas) + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[lxbwolf](https://github.com/lxbwolf) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://github.com/golang/go +[2]: tmp.gBQ2tEtMHc#easy-footnote-bottom-1-4053 (In Go, a method is just a function with a predefined formal parameter, the receiver. The relative costs of calling a free function vs a invoking a method, assuming that method is not called through an interface, are the same.) 
+[3]: tmp.gBQ2tEtMHc#easy-footnote-bottom-2-4053 (Up until Go 1.14 the stack check preamble was also used by the garbage collector to stop the world by setting all active goroutine’s stacks to zero, forcing them to trap into the runtime the next time they made a function call. This system was recently replaced with a mechanism which allowed the runtime to pause an goroutine without waiting for it to make a function call.) +[4]: tmp.gBQ2tEtMHc#easy-footnote-bottom-3-4053 (I’m using the //go:noinline pragma to prevent the compiler from inlining max. This is because I want to isolate the effects of inlining on max rather than disabling optimisations globally with -gcflags='-l -N'. I go into detail about the //go: comments in this presentation.) +[5]: tmp.gBQ2tEtMHc#easy-footnote-bottom-4-4053 (You can check this for yourself by comparing the output of go test -bench=. -gcflags=-S with and without the //go:noinline annotation.) +[6]: tmp.gBQ2tEtMHc#easy-footnote-bottom-5-4053 (You can check this yourself with the -gcflags=-d=ssa/prove/debug=on flag.) +[7]: tmp.gBQ2tEtMHc#easy-footnote-1-4053 +[8]: https://github.com/golang/proposal/blob/master/design/24543-non-cooperative-preemption.md +[9]: tmp.gBQ2tEtMHc#easy-footnote-2-4053 +[10]: https://dave.cheney.net/2018/01/08/gos-hidden-pragmas +[11]: tmp.gBQ2tEtMHc#easy-footnote-3-4053 +[12]: tmp.gBQ2tEtMHc#easy-footnote-4-4053 +[13]: tmp.gBQ2tEtMHc#easy-footnote-5-4053 +[14]: https://dave.cheney.net/2014/06/07/five-things-that-make-go-fast (Five things that make Go fast) +[15]: https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite (Why is a Goroutine’s stack infinite ?) +[16]: https://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go (How to write benchmarks in Go) +[17]: https://dave.cheney.net/2018/01/08/gos-hidden-pragmas (Go’s hidden #pragmas) From 711b86d8e1bd4020f0e59406d2db0484ac01c056 Mon Sep 17 00:00:00 2001 From: Brooke Lau Date: Tue, 28 Apr 2020 21:21:03 +0800 Subject: [PATCH 024/178] modify --- .../20200425 Inlining optimisations in Go.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/translated/tech/20200425 Inlining optimisations in Go.md b/translated/tech/20200425 Inlining optimisations in Go.md index 80f86ae47e..dc13968c1d 100644 --- a/translated/tech/20200425 Inlining optimisations in Go.md +++ b/translated/tech/20200425 Inlining optimisations in Go.md @@ -66,7 +66,7 @@ func BenchmarkMax(b *testing.B) { } ``` -运行这个基准,会得到如下结果:[3](https://github.com/LCTT/TranslateProject/blob/master/sources/tech/tmp.gBQ2tEtMHc#easy-footnote-bottom-3-4053) +运行这个基准,会得到如下结果:[3][4] ```bash % go test -bench=. @@ -90,7 +90,7 @@ Max-4 2.21ns ± 1% 0.49ns ± 6% -77.96% (p=0.000 n=18+19) 这个提升是从哪儿来的呢? 
-首先,移除掉函数调用以及与之关联的前置处理 [4](https://github.com/LCTT/TranslateProject/blob/master/sources/tech/tmp.gBQ2tEtMHc#easy-footnote-bottom-4-4053) 是主要因素。把 `max` 函数的函数体在调用处展开,减少了处理器执行的指令数量并且消除了一些分支。 +首先,移除掉函数调用以及与之关联的前置处理 [4][5] 是主要因素。把 `max` 函数的函数体在调用处展开,减少了处理器执行的指令数量并且消除了一些分支。 现在由于编译器优化了 `BenchmarkMax`,因此它可以看到 `max` 函数的内容,进而可以做更多的提升。当 `max` 被内联后,`BenchmarkMax` 呈现给编译器的样子,看起来是这样的: @@ -116,7 +116,7 @@ name old time/op new time/op delta Max-4 2.21ns ± 1% 0.48ns ± 3% -78.14% (p=0.000 n=18+18) ``` -现在编译器能看到在 `BenchmarkMax` 里内联 `max` 的结果,可以执行以前不能执行的优化措施。例如,编译器注意到 `i` 初始值为 `0`,仅做自增操作,因此所有与 `i` 的比较都可以假定 `i` 不是负值。这样条件表达式 `-1 > i` 永远不是 true。[5](https://github.com/LCTT/TranslateProject/blob/master/sources/tech/tmp.gBQ2tEtMHc#easy-footnote-bottom-5-4053) +现在编译器能看到在 `BenchmarkMax` 里内联 `max` 的结果,可以执行以前不能执行的优化措施。例如,编译器注意到 `i` 初始值为 `0`,仅做自增操作,因此所有与 `i` 的比较都可以假定 `i` 不是负值。这样条件表达式 `-1 > i` 永远不是 true。[5][6] 证明了 `-1 > i` 永远不为 true 后,编译器可以把代码简化为: @@ -166,11 +166,11 @@ func BenchmarkMaxMaxMax(b *testing.B) { 下一篇文章中,我会论述当 Go 编译器想要内联函数调用栈中间的某个函数时选用的另一种内联策略。最后我会论述编译器为了内联代码准备好要达到的极限,这个极限 Go 现在的能力还达不到。 -1. 在 Go 中,一个方法就是一个有预先定义的形参和接受者的函数。假设这个方法不是通过接口调用的,调用一个无消耗的函数所消耗的代价与引入一个方法是相同的。 -2. 在 Go 1.14 以前,栈检查的前置处理也被 gc 用于 STW,通过把所有活跃的 goroutine 栈空间设为 0,来强制它们切换为下一次函数调用时的运行时状态。这个机制[最近被替换](https://github.com/golang/proposal/blob/master/design/24543-non-cooperative-preemption.md)为一种新机制,新机制下运行时可以不用等 goroutine 进行函数调用就可以暂停 goroutine。 -3. 我用 `//go:noinline` 编译指令来阻止编译器内联 `max`。这是因为我想把内联 `max` 的影响与其他影响隔离开,而不是用 `-gcflags='-l -N'` 选项在全局范围内禁止优化。 -4. 你可以自己通过比较 `go test -bench=. -gcflags=-S`有无 `//go:noinline` 注释时的不同结果来验证一下。 -5. 你可以用 `-gcflags=-d=ssa/prove/debug=on` 选项来自己验证一下。 +1. 在 Go 中,一个方法就是一个有预先定义的形参和接受者的函数。假设这个方法不是通过接口调用的,调用一个无消耗的函数所消耗的代价与引入一个方法是相同的。[][7] +2. 在 Go 1.14 以前,栈检查的前置处理也被 gc 用于 STW,通过把所有活跃的 goroutine 栈空间设为 0,来强制它们切换为下一次函数调用时的运行时状态。这个机制[最近被替换][8]为一种新机制,新机制下运行时可以不用等 goroutine 进行函数调用就可以暂停 goroutine。[][9] +3. 我用 `//go:noinline` 编译指令来阻止编译器内联 `max`。这是因为我想把内联 `max` 的影响与其他影响隔离开,而不是用 `-gcflags='-l -N'` 选项在全局范围内禁止优化。关于 `//go:` 注释在[这篇文章][10]中详细论述。[][11] +4. 你可以自己通过比较 `go test -bench=. -gcflags=-S`有无 `//go:noinline` 注释时的不同结果来验证一下。[][12] +5. 
你可以用 `-gcflags=-d=ssa/prove/debug=on` 选项来自己验证一下。[][13] #### 相关文章: From 18324716b266335f6a4abff2998ae23bdcec4871 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 28 Apr 2020 22:33:30 +0800 Subject: [PATCH 025/178] PRF @geekpi --- ...e and Open Source Audio-Video Converter.md | 37 +++++++++---------- 1 file changed, 17 insertions(+), 20 deletions(-) diff --git a/translated/tech/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md b/translated/tech/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md index 1c8c5e6fe4..f9e8e8cf16 100644 --- a/translated/tech/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md +++ b/translated/tech/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md @@ -1,26 +1,28 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (MystiQ: A Free and Open Source Audio/Video Converter) [#]: via: (https://itsfoss.com/mystiq/) [#]: author: (Ankush Das https://itsfoss.com/author/ankush/) -MystiQ:一个免费开源音频/视频转换器 +MystiQ:一个自由开源的音视频转换器 ====== -_**简述:MystiQ 是可用于 Linux 和 Windows 的新开源视频转换器工具。它的底层使用 FFMPEG,并为你提供基于 Qt 的整洁干净的图形界面。**_ +![](https://img.linux.net.cn/data/attachment/album/202004/28/223258cr9rxzyrj344kh68.jpg) + +> MystiQ 是一款全新的开源视频转换工具,适用于 Linux 和 Windows。它的底层使用 FFMPEG,并为你提供了一个基于 Qt 的整洁干净的图形界面。 ### MystiQ,一个基于 QT 的 FFmpeg GUI 前端 ![][1] -一个音频/视频转换工具可为跨多个平台的每位计算机用户提供方便。 +音频/视频转换工具可为每位跨多个平台的计算机用户提供方便。 -出于同样的原因,我着重介绍 [MystiQ][2] 是个好主意,这是一个相对较新的视频/音频转换器工具,可用于 Linux 和 Windows。截至目前,它还不支持 macOS,但可能会在不久的将来出现。 +出于同样的原因,我想着重介绍 [MystiQ][2] 是个好主意,这是一个相对较新的视频/音频转换器工具,适用于 Linux 和 Windows。截至目前,它还不支持 macOS,但可能会在不久的将来支持。 -MystiQ 是基于 [Qt 5 界面][4]的 [FFmpeg][3] 图形前端。 现在,你可以随时[在 Linux 命令行中安装并使用 ffmpeg][5],但这不是很舒服,是吗? 这就是为什么 [Handbrake][6] 和 MystiQ 之类的工具可以使我们的生活更轻松的原因。 +MystiQ 是基于 [Qt 5 界面][4]的 [FFmpeg][3] 图形前端。现在,你可以随时[在 Linux 命令行中安装并使用 ffmpeg][5],但这不是很舒服,是吗?这就是为什么 [Handbrake][6] 和 MystiQ 之类的工具可以使我们的生活更方便的原因。 由于 MystiQ 基于 FFmpeg,因此你可以将其用于一些基本的视频编辑,例如修剪、旋转等。 @@ -34,36 +36,31 @@ MystiQ 是基于 [Qt 5 界面][4]的 [FFmpeg][3] 图形前端。 现在,你可 * 视频转换 * 音频转换(也可从视频中提取音频) - * 支持格式:MP4、WEBM、MKV、MP3、MOV、OGG、WAV、ASF、FLV、3GP、M4A 等。 + * 支持的格式:MP4、WEBM、MKV、MP3、MOV、OGG、WAV、ASF、FLV、3GP、M4A 等。 * 跨平台(Windows 和 Linux) * 适用于 32 位和 64 位系统的安装包 * 能够调整音频质量(采样率、比特率等)进行转换 - * **基本视频编辑功能**(剪辑视频、插入字幕、旋转视频、缩放视频等) + * 基本的视频编辑功能(剪辑视频、插入字幕、旋转视频、缩放视频等) * 将彩色视频转换为黑白 - * 可轻松转换视频的多个预设以获得最佳质量或获得最佳压缩效果。 - - -#### [在 Linux 中使用 SoundConverter 轻松转换音频文件格式][9] - -如果你想将音频文件格式转换为 wav、mp3、ogg 或任何其他格式,SoundConverter 是你在 Linux 中所需的工具。 + * 有几个预设方案,可轻松转换视频以获得最佳质量或获得最佳压缩效果。 ### 安装 MystiQ 你可能没有在软件中心中找到它,但将它安装在 Linux 发行版上非常容易。 -它提供了 **.AppImage** 文件和 **.deb/.rpm** 文件(32 位和 64 位软件包)。如果你好奇想使用的话,可以阅读[如何使用 AppImage 文件][10]。 +它提供了 .AppImage 文件和 .deb / .rpm 文件(32 位和 64 位软件包)。如果你不清楚如何使用的话,可以阅读[如何使用 AppImage 文件][10]。 如果你想帮助他们测试软件进行改进,你还可以找到他们的 [GitHub 页面][11],并查看源码或任何近期的预发布软件包。 你可以在其官方网站下载适用于 Linux 和 Windows 的安装程序文件。 -[下载 MystiQ][2] +- [下载 MystiQ][2] -**总结** +### 总结 -在本文中,我使用 [Pop!\_OS][12] 20.04 测试了 MytiQ 转换器,并且在转换视频和音频时没遇到问题。而且,对于像我这样的普通用户来说,它的转换速度足够快。 +在本文中,我使用 [Pop!_OS][12] 20.04 测试了 MytiQ 转换器,并且在转换视频和音频时没遇到任何问题。而且,对于像我这样的普通用户来说,它的转换速度足够快。 -请尝试一下,让我知道你对它的想法!另外,如果你在 Linux 上一直使用其他工具转换视频和音频,那它是什么? +欢迎尝试一下,让我知道你对它的想法!另外,如果你在 Linux 上一直使用其他工具转换视频和音频,那它是什么? 
-------------------------------------------------------------------------------- @@ -72,7 +69,7 @@ via: https://itsfoss.com/mystiq/ 作者:[Ankush Das][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From b186bf1620d39d846764aa952c4bf5768961975c Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 28 Apr 2020 22:34:15 +0800 Subject: [PATCH 026/178] PUB @geekpi https://linux.cn/article-12160-1.html --- ...21 MystiQ- A Free and Open Source Audio-Video Converter.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md (98%) diff --git a/translated/tech/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md b/published/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md similarity index 98% rename from translated/tech/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md rename to published/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md index f9e8e8cf16..c4c99dac95 100644 --- a/translated/tech/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md +++ b/published/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12160-1.html) [#]: subject: (MystiQ: A Free and Open Source Audio/Video Converter) [#]: via: (https://itsfoss.com/mystiq/) [#]: author: (Ankush Das https://itsfoss.com/author/ankush/) From 80391aec594fe5c70820faf2e08a42b872344c9d Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 29 Apr 2020 00:55:21 +0800 Subject: [PATCH 027/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200428=20Upgrad?= =?UTF-8?q?ing=20Fedora=2031=20to=20Fedora=2032?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md --- ...200428 Upgrading Fedora 31 to Fedora 32.md | 99 +++++++++++++++++++ 1 file changed, 99 insertions(+) create mode 100644 sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md diff --git a/sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md b/sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md new file mode 100644 index 0000000000..1807d5e6a7 --- /dev/null +++ b/sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md @@ -0,0 +1,99 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Upgrading Fedora 31 to Fedora 32) +[#]: via: (https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/) +[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/) + +Upgrading Fedora 31 to Fedora 32 +====== + +![][1] + +Fedora 32 [is available now][2]. You’ll likely want to upgrade your system to get the latest features available in Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 30 to Fedora 31. + +Before upgrading, visit the [wiki page of common Fedora 32 bugs][3] to see if there’s an issue that might affect your upgrade. Although the Fedora community tries to ensure upgrades work well, there’s no way to guarantee this for every combination of hardware and software that users might have. 
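
If you'd like to confirm which release you are currently running before you begin, a quick check from a terminal works well. This is only a sanity check, and the exact output will vary with your installation:

```
# Print the full release string, e.g. "Fedora release 31 (Thirty One)"
cat /etc/fedora-release

# Print just the release number
rpm -E %fedora
```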
+ +### Upgrading Fedora 31 Workstation to Fedora 32 + +Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the **GNOME Software** app. Or you can choose Software from GNOME Shell. + +Choose the _Updates_ tab in GNOME Software and you should see a screen informing you that Fedora 32 is Now Available. + +If you don’t see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available. + +Choose _Download_ to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later. + +### Using the command line + +If you’ve upgraded from past Fedora releases, you are likely familiar with the _dnf upgrade_ plugin. This method is the recommended and supported way to upgrade from Fedora 31 to Fedora 32. Using this plugin will make your upgrade to Fedora 32 simple and easy. + +#### 1\. Update software and back up your system + +Before you do start the upgrade process, make sure you have the latest software for Fedora 31. This is particularly important if you have modular software installed; the latest versions of dnf and GNOME Software include improvements to the upgrade process for some modular streams. To update your software, use _GNOME Software_ or enter the following command in a terminal. + +``` +sudo dnf upgrade --refresh +``` + +Additionally, make sure you back up your system before proceeding. For help with taking a backup, see [the backup series][4] on the Fedora Magazine. + +#### 2\. Install the DNF plugin + +Next, open a terminal and type the following command to install the plugin: + +``` +sudo dnf install dnf-plugin-system-upgrade +``` + +#### 3\. Start the update with DNF + +Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal: + +``` +sudo dnf system-upgrade download --releasever=32 +``` + +This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the _‐‐allowerasing_ flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade. + +#### 4\. Reboot and upgrade + +Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal: + +``` +sudo dnf system-upgrade reboot +``` + +Your system will restart after this. Many releases ago, the _fedup_ tool would create a new option on the kernel selection / boot screen. With the _dnf-plugin-system-upgrade_ package, your system reboots into the current kernel installed for Fedora 31; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process. + +Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 32 system. + +![][5] + +### Resolving upgrade problems + +On occasion, there may be unexpected issues when you upgrade your system. 
If you experience any issues, please visit the [DNF system upgrade quick docs][6] for more information on troubleshooting. + +If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/ + +作者:[Adam Šamalík][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/asamalik/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/31-32-816x345.png +[2]: https://fedoramagazine.org/announcing-fedora-32/ +[3]: https://fedoraproject.org/wiki/Common_F32_bugs +[4]: https://fedoramagazine.org/taking-smart-backups-duplicity/ +[5]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png +[6]: https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-upgrade/#Resolving_post-upgrade_issues From 3e7abc111a46ee3921a0aca1fc2fa0e22210c8db Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 29 Apr 2020 00:56:11 +0800 Subject: [PATCH 028/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200428=20Fedora?= =?UTF-8?q?=2032=20is=20officially=20here!?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200428 Fedora 32 is officially here.md --- .../20200428 Fedora 32 is officially here.md | 77 +++++++++++++++++++ 1 file changed, 77 insertions(+) create mode 100644 sources/tech/20200428 Fedora 32 is officially here.md diff --git a/sources/tech/20200428 Fedora 32 is officially here.md b/sources/tech/20200428 Fedora 32 is officially here.md new file mode 100644 index 0000000000..67989780ea --- /dev/null +++ b/sources/tech/20200428 Fedora 32 is officially here.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Fedora 32 is officially here!) +[#]: via: (https://fedoramagazine.org/announcing-fedora-32/) +[#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/) + +Fedora 32 is officially here! +====== + +![][1] + +It’s here! We’re proud to announce the release of Fedora 32. Thanks to the hard work of thousands of Fedora community members and contributors, we’re celebrating yet another on-time release. + +If you just want to get to the bits without delay, head over to right now. For details, read on! + +## **All of Fedora’s Flavors** + +Fedora Editions are targeted outputs geared toward specific “showcase” uses. + +Fedora Workstation focuses on the desktop. In particular, it’s geared toward software developers who want a “just works” Linux operating system experience. This release features [GNOME 3.36][2], which has plenty of great improvements as usual. My favorite is the new lock screen! + +Fedora Server brings the latest in cutting-edge open source server software to systems administrators in an easy-to-deploy fashion. For edge computing use cases, [Fedora IoT][3] provides a strong foundation for IoT ecosystems. + +Fedora CoreOS is an emerging Fedora Edition. 
It’s an automatically-updating, minimal operating system for running containerized workloads securely and at scale. It offers several [update streams][4] that can be followed for automatic updates that occur roughly every two weeks. Currently the **next** stream is based on Fedora 32, with the **testing** and **stable** streams to follow. You can find information about released artifacts that follow the **next** stream from [the download page][5] and information about how to use those artifacts in the [Fedora CoreOS Documentation][6]. + +Of course, we produce more than just the editions. [Fedora Spins][7] and [Labs][8] target a variety of audiences and use cases, including the [Fedora Astronomy Lab][9], which brings a complete open source toolchain to both amateur and professional astronomers, and desktop environments like [KDE Plasma][10] and [Xfce][11]. New in Fedora 32 is the [Comp Neuro Lab][12], developed by our Neuroscience Special Interest Group to enable computational neuroscience. + +And, don’t forget our alternate architectures: [ARM AArch64, Power, and S390x][13]. Of particular note, we have improved support for the Rockchip system-on-a-chip devices including the Rock960, RockPro64, and Rock64. + +**General improvements** + +No matter what variant of Fedora you use, you’re getting the latest the open source world has to offer. Following our “[First][14]” foundation, we’ve updated key programming language and system library packages, including GCC 10, Ruby 2.7, and Python 3.8. Of course, with Python 2 past end-of-life, we’ve removed most Python 2 packages from Fedora. A legacy python27 package is provided for developers and users who still need it. In Fedora Workstation, we’ve enabled the EarlyOOM service by default to improve the user experience in low-memory situations. + +We’re excited for you to try out the new release! Go to and download it now. Or if you’re already running a Fedora operating system, follow the easy [upgrade instructions][15]. + +## **In the unlikely event of a problem….** + +If you run into a problem, check out the [Fedora 32 Common Bugs][16] page, and if you have questions, visit our [Ask Fedora][17] user-support platform. + +## **Thank you everyone** + +Thanks to the thousands of people who contributed to the Fedora Project in this release cycle, and especially to those of you who worked extra hard to make this another on-time release during a pandemic. Fedora is a community, and it’s great to see how much we’ve supported each other. I invite you to join us in the [Red Hat Summit Virtual Experience][18] 28-29 April to learn more about Fedora and other communities. 
+ +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/announcing-fedora-32/ + +作者:[Matthew Miller][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/mattdm/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/f32-final-816x345.png +[2]: https://www.gnome.org/news/2020/03/gnome-3-36-released/ +[3]: https://iot.fedoraproject.org/ +[4]: https://docs.fedoraproject.org/en-US/fedora-coreos/update-streams/ +[5]: https://getfedora.org/en/coreos/download?stream=next +[6]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/ +[7]: https://spins.fedoraproject.org/ +[8]: https://labs.fedoraproject.org/ +[9]: https://labs.fedoraproject.org/en/astronomy/ +[10]: https://spins.fedoraproject.org/en/kde/ +[11]: https://spins.fedoraproject.org/en/xfce/ +[12]: https://labs.fedoraproject.org/en/comp-neuro +[13]: https://alt.fedoraproject.org/alt/ +[14]: https://docs.fedoraproject.org/en-US/project/#_first +[15]: https://docs.fedoraproject.org/en-US/quick-docs/upgrading/ +[16]: https://fedoraproject.org/wiki/Common_F32_bugs +[17]: http://ask.fedoraproject.org +[18]: https://www.redhat.com/en/summit From b9a825004c3b4672ddcd66513f2f0fa8e34a921e Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 29 Apr 2020 00:56:44 +0800 Subject: [PATCH 029/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200428=20What?= =?UTF-8?q?=E2=80=99s=20new=20in=20Fedora=2032=20Workstation?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200428 What-s new in Fedora 32 Workstation.md --- ...428 What-s new in Fedora 32 Workstation.md | 89 +++++++++++++++++++ 1 file changed, 89 insertions(+) create mode 100644 sources/tech/20200428 What-s new in Fedora 32 Workstation.md diff --git a/sources/tech/20200428 What-s new in Fedora 32 Workstation.md b/sources/tech/20200428 What-s new in Fedora 32 Workstation.md new file mode 100644 index 0000000000..402a8e63a7 --- /dev/null +++ b/sources/tech/20200428 What-s new in Fedora 32 Workstation.md @@ -0,0 +1,89 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What’s new in Fedora 32 Workstation) +[#]: via: (https://fedoramagazine.org/whats-new-fedora-32-workstation/) +[#]: author: (Ryan Lerch https://fedoramagazine.org/author/ryanlerch/) + +What’s new in Fedora 32 Workstation +====== + +![][1] + +Fedora 32 Workstation is the [latest release][2] of our free, leading-edge operating system. You can download it from [the official website here][3] right now. There are several new and noteworthy changes in Fedora 32 Workstation. Read more details below. + +### GNOME 3.36 + +Fedora 32 Workstation includes the latest release of GNOME Desktop Environment for users of all types. GNOME 3.36 in Fedora 32 Workstation includes many updates and improvements, including: + +#### Redesigned Lock Screen + +The lock screen in Fedora 32 is a totally new experience. The new design removes the “window shade” metaphor used in previous releases, and focuses on ease and speed of use. + +![Unlock screen in Fedora 32][4] + +#### New Extensions Application + +Fedora 32 features the new Extensions application, to easily manage your GNOME Extensions. 
In the past, extensions were installed, configured, and enabled using the Software application and/or the Tweak Tool.

![The new Extensions application in Fedora 32][5]

Note that the Extensions application is not installed by default on Fedora 32. Either use the Software application to search for and install it, or use the following command in the terminal:

```
sudo dnf install gnome-extensions-app
```

#### Reorganized Settings

Eagle-eyed Fedora users will notice that the Settings application has been re-organized. The structure of the settings categories is a lot flatter, resulting in more settings being visible at a glance.

Additionally, the **About** category now has more information about your system, including which windowing system you are running (e.g. Wayland).

![The reorganized settings application in Fedora 32][6]

#### Redesigned Notifications / Calendar popover

The Notifications / Calendar popover — toggled by clicking on the Date and Time at the top of your desktop — has had numerous small style tweaks. Additionally, the popover now has a **Do Not Disturb** switch to quickly disable all notifications. This quick access is useful when presenting your screen, or when you simply don't want your personal notifications appearing.

![The new Notification / Calendar popover in Fedora 32 ][7]

#### Redesigned Clocks Application

The Clocks application is totally redesigned in Fedora 32. It features a design that works better on smaller windows.

![The Clocks application in Fedora 32][8]

GNOME 3.36 also provides many additional features and enhancements. Check out the [GNOME 3.36 Release Notes][9] for further information.

* * *

### Improved Out of Memory handling

Previously, if a system encountered a low-memory situation, it may have experienced heavy swap usage (aka [swap thrashing][10]) – sometimes resulting in the Workstation UI slowing down, or becoming unresponsive for periods of time. Fedora 32 Workstation now ships and enables EarlyOOM by default. EarlyOOM enables users to more quickly recover and regain control over their system in low-memory situations with heavy swap usage.
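
If you are curious whether EarlyOOM is active on your system, you can ask systemd. This is only a quick check, and it assumes the default Fedora service name:

```
# Confirm that the EarlyOOM service is enabled and running
systemctl status earlyoom.service

# If you would rather handle low-memory situations yourself, disable it
sudo systemctl disable --now earlyoom.service
```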
+ +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/whats-new-fedora-32-workstation/ + +作者:[Ryan Lerch][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/ryanlerch/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/fedora32workstation-816x345.jpg +[2]: https://fedoramagazine.org/announcing-fedora-32/ +[3]: https://getfedora.org/workstation +[4]: https://fedoramagazine.org/wp-content/uploads/2020/04/unlock.gif +[5]: https://fedoramagazine.org/wp-content/uploads/2020/04/extensions.png +[6]: https://fedoramagazine.org/wp-content/uploads/2020/04/settings.png +[7]: https://fedoramagazine.org/wp-content/uploads/2020/04/donotdisturb.png +[8]: https://fedoramagazine.org/wp-content/uploads/2020/04/clocks.png +[9]: https://help.gnome.org/misc/release-notes/3.36/ +[10]: https://en.wikipedia.org/wiki/Thrashing_(computer_science) From 480a6e2fe955db047b0a5d460390d1a8af643e3e Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 29 Apr 2020 01:04:33 +0800 Subject: [PATCH 030/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200429=20Manjar?= =?UTF-8?q?o=2020=20Lysia=20Arrives=20with=20ZFS=20and=20Snap=20Support?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md --- ...Lysia Arrives with ZFS and Snap Support.md | 112 ++++++++++++++++++ 1 file changed, 112 insertions(+) create mode 100644 sources/tech/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md diff --git a/sources/tech/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md b/sources/tech/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md new file mode 100644 index 0000000000..377a58f34a --- /dev/null +++ b/sources/tech/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md @@ -0,0 +1,112 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Manjaro 20 Lysia Arrives with ZFS and Snap Support) +[#]: via: (https://itsfoss.com/manjaro-20-release/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Manjaro 20 Lysia Arrives with ZFS and Snap Support +====== + +_**Manjaro Linux has refreshed its ISO with Manjaro 20 “Lysia”. It now supports Snap and Flatpak packages in Pamac. ZFS option is added in Manjaro Architect installer and the latest kernel 5.6 is used as the base.**_ + +It’s raining new distribution releases. Ubuntu 20.04 LTS was released last week. Fedora 32 will be releasing shortly and [Manjaro has released version 20][1] codenamed Lysia. + +### What’s new in Manjaro 20 Lysia? + +Plenty actually. Let me show you some of the major new features in Manjaro 20. + +#### New Matcha theme + +Manjaro 20 has a new default theme called Matcha. It gives the desktop a more polished look. + +![][2] + +#### Snap and Flatpak support in Pamac and terminal + +Snap and Flatpak package support is improved. You can use them in command line if you want. + +You can also enable Snap and Flatpak support in the Pamac GUI package manager. + +![Enable Snap support in Pamac Manjaro][3] + +Once enabled, you can find and install Snap/Flatpak applications in the Pamac software manager. 

![Snap applications in Pamac][4]

#### Pamac offers to install new software based on search (in GNOME)

In the GNOME variant, if you search for something, the Pamac software manager will now offer to install software that matches the query. GNOME Software Center does the same in other distributions that use the GNOME desktop.

#### ZFS support lands in Manjaro Architect

You can now easily use ZFS as root in Manjaro Linux. [ZFS file system][5] support is available in [Manjaro Architect][6].

Do note that I am talking about Manjaro Architect, the terminal-based installer. It's not the same as the regular graphical [Calamares installer][7].

![][8]

#### Linux kernel 5.6

The latest stable [Linux kernel 5.6][9] brings more hardware support for Thunderbolt, Nvidia and USB4. You can also use [WireGuard VPN][10].

![][11]

#### Miscellaneous other features

  * New desktop environment versions: Xfce 4.14, GNOME 3.36 and KDE Plasma 5.18
  * zsh is the new default shell
  * Display-Profiles allows you to store one or more profiles for your preferred display configuration
  * Improved Gnome-Layout-Switcher
  * Latest drivers
  * Improved and polished Manjaro tools

### How to get Manjaro 20 Lysia?

_**If you are already using it, just update your Manjaro Linux system and you should already be using version 20.**_

Manjaro uses a rolling release model, which means you don't have to manually upgrade from one version to another. You don't have to reinstall as soon as a new version is released.

If Manjaro is a rolling release distribution, why does it release a new version every now and then? It's because the developers have to refresh the ISO so that new users downloading Manjaro will not have to install years of accumulated updates. This is why Arch Linux also refreshes its ISO every month.

Manjaro 'ISO refreshes' are codenamed and have a version number because that helps the developers clearly mark each stage of development.

So, the bottom line is that if you are already using it, just [update your Manjaro Linux system][12] using Pamac or the command line (see the example below).

If you want to try Manjaro or if you want to use ZFS, then you can [install Manjaro][13] by downloading the ISO from its website:

[Download Manjaro Linux][14]

Enjoy the new release of Manjaro Linux.
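
For reference, updating an existing install from a terminal looks roughly like this. Treat it as a sketch; Pamac's exact subcommands can differ between versions:

```
# Refresh package databases and apply all pending updates with pacman
sudo pacman -Syu

# Or let Pamac's command-line interface do the same
pamac upgrade
```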
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/manjaro-20-release/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://forum.manjaro.org/t/manjaro-20-0-lysia-released/138633 +[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/manjaro-20-lysia.jpeg?resize=800%2C440&ssl=1 +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/enable-snap-in-pamac-manjaro.jpg?resize=800%2C490&ssl=1 +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/snap-app-pacman.jpg?resize=800%2C489&ssl=1 +[5]: https://itsfoss.com/what-is-zfs/ +[6]: https://itsfoss.com/manjaro-architect-review/ +[7]: https://calamares.io/ +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pacman-prompts-install-apps.jpg?resize=800%2C331&ssl=1 +[9]: https://itsfoss.com/linux-kernel-5-6/ +[10]: https://itsfoss.com/wireguard/ +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/manjaro-20-neofetch-screen.jpg?resize=800%2C495&ssl=1 +[12]: https://itsfoss.com/update-arch-linux/ +[13]: https://itsfoss.com/install-manjaro-linux/ +[14]: https://manjaro.org/download/ From 8716d84eb3961d2b94e88e5b9943a1f5b7915dd6 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 29 Apr 2020 01:05:41 +0800 Subject: [PATCH 031/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200428=20How=20?= =?UTF-8?q?I=20empower=20and=20reach=20millions=20through=20open=20source?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200428 How I empower and reach millions through open source.md --- ... and reach millions through open source.md | 78 +++++++++++++++++++ 1 file changed, 78 insertions(+) create mode 100644 sources/tech/20200428 How I empower and reach millions through open source.md diff --git a/sources/tech/20200428 How I empower and reach millions through open source.md b/sources/tech/20200428 How I empower and reach millions through open source.md new file mode 100644 index 0000000000..a3c23dd073 --- /dev/null +++ b/sources/tech/20200428 How I empower and reach millions through open source.md @@ -0,0 +1,78 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How I empower and reach millions through open source) +[#]: via: (https://opensource.com/article/20/4/interview-Netha-Hussain) +[#]: author: (Jay Barber https://opensource.com/users/jaybarber) + +How I empower and reach millions through open source +====== +Learn how Netha Hussain, winner of the 2020 Women in Open Source +Academic Award, shares knowledge and inspires people. +![Lightbulb][1] + +"I wanted to link to a particular Wikipedia article on my blog, but I found there wasn't one on that topic, so I wrote it myself," says Netha Hussain, 2020 [Women in Open Source Academic Award][2] winner. "That's the beauty of open source; anyone can contribute." + +![Photo by Lvova Anastasiya \(Львова Анастасия, Lvova\), CC BY-SA][3] + +Practicality drove Netha's entry into open source culture, and it has continued to be at the center of her work in the ten years since. + +She received her first computer in high school, but it did not immediately spark her passion. 
She says she mostly used it for games and other diversions, as many teenagers do. It wasn't until she entered medical school and realized that technology could be a powerful tool to help her achieve her goals that Netha truly found her path. Taking stock of her many contributions to [Wikimedia][4], [Mozilla][5], and [TED][6], it's fair to say that once she engaged with open source culture, she never looked back. + +### Finding ways to help + +Growing up in India, Netha was initially drawn to mathematics but soon found herself pulled in other directions. "At the time, I would have expected to continue down that path, mathematics or maybe writing, but the thing that I've always most wanted to do is help people," Netha says. "Medicine seemed to be the most direct path to providing real, tangible assistance to those around me, so I became a doctor." + +That drive to help continues to guide her now as she prepares to defend her doctoral thesis in clinical neuroscience at the University of Gothenburg in Sweden. + +"At a certain point, I decided that rather than limiting myself to what I could do through treating patients, I could also contribute in a research capacity, working to discover new, better ways to help others. I came to all of this via an unexpected route, but I love the idea of exploring and finding my own ways to help. I'm so satisfied and fulfilled by the work I'm doing now. It has been a wonderful journey." + +As she nears the completion of her degree, Netha reflects upon what she's looking forward to next. An infectious smile appears as she remarks, "I'm really excited to have more time to contribute to projects in the open source community." + +Why is she so enamored with open source? It comes back to utility. "In open source practices, I found a philosophy that closely matched my own ideals and a way of doing things that allowed me to help more people. Open source is fueled by collaboration. I've seen the things that can be accomplished by people working together, and it makes me very excited to think where it will take us in the future." + +### Reaching millions, one edit at a time + +Her first article, written to help an international audience understand her blog post, was only the first of many. Netha has now written 300 articles (200 in English and 100 in Malayalam), contributed 13,000 edits for [Wikipedia][7], added 9,000 images to [Wikimedia Commons][8], and provided 120,000 edits to [Wikidata][9]. Her commitment to bringing useful information to others can also be seen in her five years spent volunteering to translate Mozilla projects and TED talks into the Malayalam language. + +Such prolific output was born out of a simple realization. "I had shared so much on my blog but was only reaching a select audience. On Wikipedia and elsewhere, I had access to a potential audience of millions. There's a lot of power in that." + +Many of the articles Netha has written center on issues relevant to women, and that is very much by design. "I find myself writing on topics that are important to women because I feel they are an underserved community, and it is important to me that Wikipedia, as such a vital repository of information, be reflective of all users, all voices. I care deeply about the visibility of women on Wikipedia." + +Netha's commitment to women's issues led her to organize edit-a-thon initiatives and other activities with women's groups. 
She was also able to leverage similar strategies to assist the LGBTQ+ community in India during the campaign to legalize gay marriage. + +"In India, there are a lot of taboos around homosexuality, and I saw an opportunity to utilize my experience to help another segment of the population. Together, we were able to generate a lot of awareness, whether through raising up biographical articles on famous members of the LGBTQ+ community or shining a spotlight on anti-LGBTQ+ laws. I'm very proud of the opportunities I've had to support such efforts." + +### A path to the future + +It's clear that Netha believes strongly in empowering people, especially other women who may wish to explore open source methodologies as she has. Her advice is simple, but powerful. "Believe in yourself, and know that you have the skills and talent to do whatever you'd like to do," she finds the words easily, as if she's been waiting to be asked the question. "Follow your passion, and do what you want. There will be times of uncertainty but always move forward. Keep studying. Keep learning new things. That's how you grow, both in your field and as a person." + +Having achieved so much already, it's no surprise that Netha is enthusiastic about new challenges on the horizon. "I've put in a lot of effort to get here, but as you learn new strategies and new ways of collaborating, the work gets easier. Now, I don't consider it work at all. It's mostly fun to me." + +_Also read Jay Barber's [interview with Megan Byrd-Sanicki][10], who won the 2020 Women in Open Source Community Award._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/interview-Netha-Hussain + +作者:[Jay Barber][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jaybarber +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lightbulb-idea-think-yearbook-lead.png?itok=5ZpCm0Jh (Lightbulb) +[2]: https://www.redhat.com/en/about/women-in-open-source +[3]: https://opensource.com/sites/default/files/uploads/netha_headshot.png (Photo by Lvova Anastasiya (Львова Анастасия, Lvova), CC BY-SA) +[4]: https://www.wikimedia.org/ +[5]: https://www.mozilla.org/en-US/ +[6]: https://www.ted.com/ +[7]: https://www.wikipedia.org/ +[8]: https://commons.wikimedia.org/wiki/Main_Page +[9]: https://www.wikidata.org/wiki/Wikidata:Main_Page +[10]: https://opensource.com/article/20/4/interview-Megan-Byrd-Sanicki From 247ae60e2a53605215cc15dfff0341c16aaf42b5 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 29 Apr 2020 01:06:34 +0800 Subject: [PATCH 032/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200428=20Open?= =?UTF-8?q?=20source=20has=20room=20for=20everyone?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200428 Open source has room for everyone.md --- ...00428 Open source has room for everyone.md | 74 +++++++++++++++++++ 1 file changed, 74 insertions(+) create mode 100644 sources/tech/20200428 Open source has room for everyone.md diff --git a/sources/tech/20200428 Open source has room for everyone.md b/sources/tech/20200428 Open source has room for everyone.md new file mode 100644 index 0000000000..bab55ddfbe --- /dev/null +++ b/sources/tech/20200428 Open source has room for everyone.md @@ -0,0 +1,74 @@ +[#]: 
collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Open source has room for everyone) +[#]: via: (https://opensource.com/article/20/4/interview-Megan-Byrd-Sanicki) +[#]: author: (Jay Barber https://opensource.com/users/jaybarber) + +Open source has room for everyone +====== +Learn how Megan Byrd-Sanicki, 2020 Women in Open Source Community Award +winner, brings people together. +![Dandelion held out over water][1] + +"Growing up, I was a bit of a field marshal," Megan Byrd-Sanicki, 2020 [Women in Open Source Community Award][2] winner, says with a smile. "I was always the one pulling classmates together. 'We're going to play a game. Come on, everyone, I'll teach you the rules.' I'd also have an eye to the sidelines, trying to identify who wasn't being included and how I could draw them in." + +![Photo by Megan Sanicki, Used with permission][3] + +That drive to bring people together and set up a structure for them to excel carries through much of her career and community work. "I look back on who I was in second-grade gym class and have to admit that it's still who I am today." + +Megan has been active in open source for a decade, first as Executive Director of the [Drupal Association][4], and now as the Manager of Research and Operations for Google's Open Source Program Office. "I'm fortunate in my current position because it offers a view into Google's more than 2000 open source projects with different objectives, different governance structures, and different strategies. It's been just a phenomenal learning opportunity." Megan was also recently elected to the [Open Source Initiative][5] Board of Directors, where she strives to strengthen the leadership in open source that the organization offers to projects and businesses around the globe. + +### Lessons from the basement steps + +Far from being set on technology, Megan originally thought she'd go into business. Sitting on the basement steps, listening to her father make sales calls, she knew his entire product line by age 16, but she also internalized other lessons. + +"I learned from him that doing business means solving problems and helping people," Megan says. "And I've kept that front-of-mind throughout my career. In some ways, I'm not surprised by this path; it's a natural extension of who I am, but it's also taken me places I would never have dreamed possible." + +Open source isn't just a career for Megan; she also uses the same strategies in her community involvement. "Right now, I'm working with a great group of engineers, data scientists, and epidemiologists at [Covid Act Now][6]. The team members are volunteering their expertise, collaborating openly to provide data modeling to public officials so that they can make informed decisions as quickly as possible." + +She's also active in [FOSS Responders][7], a group focused on shining a light on open source projects and community members affected by COVID-19-related event cancellations. "In times of turmoil, it can be difficult for projects to find the help they need. We help organizations and individuals who need assistance aggregate and amplify their requests." An important component of the organization is administering the [FOSS Responders Fund][7], a mechanism to capture some of the open source funding requests that may fall through the cracks otherwise. 
+ +### Engaging people in a changing world + +The twin themes that influence Megan's community engagement are a clear commitment to the principles of open source and a drive to bring people together. "When people have dreams, things they're actively trying to accomplish, it creates a shared sense of purpose and a strong 'why.' People engage easily around why. I know I do," Megan says when asked what drives her in these efforts. + +"Whether helping raise funds for Drupal's mission or enabling open source projects to become more sustainable, there's a real human impact. I get really passionate about the butterfly effect that results from helping people meet their goals and realize their dreams and visions." + +As open source becomes a larger and larger part of the technology space, Megan is hopeful for the future. "The exciting thing is that the story isn't done. As a community, we're still figuring things out," she says. "There's so much we need to learn about open source, and it can evolve in so many ways, while the landscape changes around us. We need to have the right conversations and figure out how to evolve together, ensuring there's a place at the table for everyone." + +In her words, it's possible to hear those same lessons learned from listening to her father's business calls—doing business is about solving problems and helping people. "Helping more people understand how to use and contribute to open source to solve problems is really rewarding. Whether it is to drive innovation, accelerate velocity, or achieve business goals, there are lots of ways to gain value from open source." + +### Own your awesome + +When asked what advice she has for other women wanting to engage with the open source community, Megan lights up. "Remember that open source has room for everyone. It can be daunting, but in my experience, people want to help. Ask for help when you need it, but also be clear on where you can contribute, how you can contribute, and what your needs are." + +She also recognizes that among all the voices in open source, a lack of centralized leadership can sometimes be felt, but she cautions against looking at it as a privileged role, reserved for only a few. "Be the leader you need. When there's a void in leadership, each individual can fill that void for themselves. Every contributor to open source is a leader, whether they're leading others, leading the community, or just leading themselves. Don't wait to be given permission and own your awesome." + +The open source journey for Megan has been just that: a trek where her path wasn't always clear. She's never shied away from adventure or run from uncertainty, though. "I look at life as this beautiful tapestry that you're weaving, but day to day, you only get to see the threads in the back. If you could see the full picture, you'd realize that you've contributed to this wonderful work in countless ways just by doing your best every day." 
+ +_Also read Jay Barber's [interview with Netha Hussain][8], who won the 2020 Women in Open Source Academic Award._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/interview-Megan-Byrd-Sanicki + +作者:[Jay Barber][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jaybarber +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dandelion_blue_water_hand.jpg?itok=QggW8Wnw (Dandelion held out over water) +[2]: https://www.redhat.com/en/about/women-in-open-source +[3]: https://opensource.com/sites/default/files/uploads/megan_sanicki_headshot_small_0.png (Photo by Megan Sanicki, Used with permission) +[4]: https://www.drupal.org/association +[5]: https://opensource.org/ +[6]: https://www.covidactnow.org/ +[7]: https://fossresponders.com/ +[8]: https://opensource.com/article/20/4/interview-Netha-Hussain From 9460816ae1aa48d049d5b51fa02bbb940cf7454c Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 29 Apr 2020 01:07:55 +0800 Subject: [PATCH 033/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200428=20Learn?= =?UTF-8?q?=20Bash=20with=20this=20book=20of=20puzzles?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200428 Learn Bash with this book of puzzles.md --- ...28 Learn Bash with this book of puzzles.md | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) create mode 100644 sources/tech/20200428 Learn Bash with this book of puzzles.md diff --git a/sources/tech/20200428 Learn Bash with this book of puzzles.md b/sources/tech/20200428 Learn Bash with this book of puzzles.md new file mode 100644 index 0000000000..08a07b93c5 --- /dev/null +++ b/sources/tech/20200428 Learn Bash with this book of puzzles.md @@ -0,0 +1,60 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Learn Bash with this book of puzzles) +[#]: via: (https://opensource.com/article/20/4/bash-it-out-book) +[#]: author: (Carlos Aguayo https://opensource.com/users/hwmaster1) + +Learn Bash with this book of puzzles +====== +'Bash it out' covers basic, medium, and advanced Bash scripting using 16 +puzzles. +![Puzzle pieces coming together to form a computer screen][1] + +Computers are both my hobby and my profession. I have about 10 of them scattered around my apartment, all running Linux (including my Macs). Since I enjoy upgrading my computers and my computer skills, when I came across [_Bash it out_][2] by Sylvain Leroux, I jumped on the chance to buy it. I use the command line a lot on Debian Linux, and it seemed like a great opportunity to expand my Bash knowledge. I smiled when the author explained in the preface that he uses Debian Linux, which is one of my two favorite distributions. + +Bash lets you automate tasks, so it's a labor-saving, interesting, and useful tool. Before reading the book, I already had a fair amount of experience with Bash on Unix and Linux. I'm not an expert, in part because the scripting language is so extensive and powerful. I first became intrigued with Bash when I saw it on the welcome screen of [EndeavourOS][3], an Arch-based Linux distribution. + +The following screenshots show some options from EndeavourOS. 
Believe it or not, these panels just point to Bash scripts, each of which accomplishes some relatively complex tasks. And because it's all open source, I can modify any of these scripts if I want.
+
+![EndeavourOS after install][4]
+
+![EndeavourOS install apps][5]
+
+### Always something to learn
+
+My impressions of this book are very favorable. It's not long, but it is well-thought-out. The author has very extensive knowledge of Bash and an uncanny ability to explain how to use it. The book covers basic, medium, and advanced Bash scripting using 16 puzzles, which he calls "challenges." This taught me to see Bash scripting as a programming puzzle to solve, which makes it more interesting to play with.
+
+An exciting aspect of Bash is that it's deeply integrated with the Linux system. While part of its power lies in its syntax, it's also powerful because it has access to so much. You can script repetitive tasks, or tasks that are easy but you're just tired of performing manually. Nothing is too great or too small, and _Bash it out_ helps you understand both what you can do, and how to achieve it.
+
+This review would not be complete if I didn't mention David Both's free resource [_A sysadmin's guide to Bash scripting_][6] on Opensource.com. This 17-page PDF guide is different from _Bash it out_, but together they make a winning combination for anyone who wants to learn Bash.
+
+I am not a computer programmer, but _Bash it out_ has increased my desire to get into more advanced levels of Bash scripting—I might inadvertently end up as a computer programmer without planning to.
+
+One reason I love Linux is because of how powerful and versatile the operating system is. However much I know about Linux, there is always something new to learn that makes me appreciate Linux even more.
+
+In a competitive and ever-changing job market, it behooves all of us to continuously update our skills. This book helped me learn Bash in a very hands-on way. It almost felt as if the author was in the same room with me, patiently guiding me in my learning.
+
+The author, Leroux, has an uncanny ability to engage readers. This is a rare gift that I think is even more valuable than his technical expertise. In fact, I am writing this book review to thank the author for anticipating my own learning needs; although we have never met, I have benefited in real ways from his gifts.
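+
+(A small aside that is not taken from _Bash it out_ itself: the kind of everyday chore the book nudges you to script can be as plain as the sketch below. The directory names are made up, so adjust them to whatever you actually want to back up.)
+
+```bash
+#!/usr/bin/env bash
+# Datestamped backup of a notes directory – a typical "tired of doing this by hand" task.
+set -euo pipefail
+src="$HOME/notes"                              # hypothetical directory to back up
+dest="$HOME/backups/notes-$(date +%F).tar.gz"  # hypothetical destination
+mkdir -p "$(dirname "$dest")"
+tar -czf "$dest" -C "$HOME" notes
+echo "Backed up $src to $dest"
+```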
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/bash-it-out-book + +作者:[Carlos Aguayo][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/hwmaster1 +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/puzzle_computer_solve_fix_tool.png?itok=U0pH1uwj (Puzzle pieces coming together to form a computer screen) +[2]: https://www.amazon.com/Bash-Out-Strengthen-challenges-difficulties/dp/1521773262/ +[3]: https://endeavouros.com/ +[4]: https://opensource.com/sites/default/files/uploads/endeavouros-welcome.png (EndeavourOS after install) +[5]: https://opensource.com/sites/default/files/uploads/endeavouros-install-apps.png (EndeavourOS install apps) +[6]: https://opensource.com/downloads/bash-scripting-ebook From 7ca0289cffc81041571e36e25b34ebdf50a897ba Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 29 Apr 2020 01:10:48 +0800 Subject: [PATCH 034/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200427=20New=20?= =?UTF-8?q?zine:=20How=20Containers=20Work!?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200427 New zine- How Containers Work.md --- .../20200427 New zine- How Containers Work.md | 121 ++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 sources/tech/20200427 New zine- How Containers Work.md diff --git a/sources/tech/20200427 New zine- How Containers Work.md b/sources/tech/20200427 New zine- How Containers Work.md new file mode 100644 index 0000000000..fa2198ebbc --- /dev/null +++ b/sources/tech/20200427 New zine- How Containers Work.md @@ -0,0 +1,121 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (New zine: How Containers Work!) +[#]: via: (https://jvns.ca/blog/2020/04/27/new-zine-how-containers-work/) +[#]: author: (Julia Evans https://jvns.ca/) + +New zine: How Containers Work! +====== + +On Friday I published a new zine: “How Containers Work!”. I also launched a fun redesign of [wizardzines.com][1]. + +You can get it for $12 at . If you buy it, you’ll get a PDF that you can either print out or read on your computer. Or you can get a pack of [all 8 zines][2] so far. + +Here’s the cover and table of contents: + +[![][3]][4] + +### why containers? + +I’ve spent a lot of time [figuring][5] [out][6] [how to][7] [run][8] [things][9] [in][10] [containers][11] over the last 3-4 years. And at the beginning I was really confused! I knew a bunch of things about Linux, and containers didn’t seem to fit in with anything I thought I knew (“is it a process? what’s a network namespace? what’s happening?“). The whole thing seemed really weird. + +It turns out that containers ARE actually pretty weird. They’re not just one thing, they’re what you get when you glue together 6 different features that were mostly designed to work together but have a bunch of confusing edge cases. + +As usual, the thing that helped me the most in my container adventures is a good understanding of the **fundamentals** – what exactly is actually happening on my server when I run a container? + +So that’s what this zine is about – cgroups, namespaces, pivot_root, seccomp-bpf, and all the other Linux kernel features that make containers work. 
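+
+(Not from the zine – just a rough sketch of how you can poke at a couple of those pieces from a shell. It assumes the util-linux `unshare` tool and a root shell; exact flags and the cgroup layout vary a little between distros.)
+
+```bash
+# start a shell inside new PID, mount, UTS and network namespaces
+# (--fork is needed so the shell becomes PID 1 of the new PID namespace)
+sudo unshare --pid --fork --mount --uts --net bash
+
+# inside that shell:
+echo $$             # prints 1 – this shell is "init" of its own little world
+hostname demo       # changes the hostname only inside the new UTS namespace
+ls /sys/fs/cgroup   # cgroups are exposed as a filesystem you can read and write
+```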
+ +Once I understood those ideas, it got a **lot** easier to debug when my containers were doing surprising things in production. I learned a couple of interesting and strange things about containers while writing this zine too – I’ll probably write a blog post about one of them later this week. + +### containers aren’t magic + +This picture (page 6 of the zine) shows you how to run a fish container image with only 15 lines of bash. This is heavily inspired by [bocker][12], which “implements” Docker in about 100 lines of bash. + + + +The main things I see missing from that script compared to what Docker actually does when running a container (other than using an actual container image and not just a tarball) are: + + * it doesn’t drop any capabilities – the container is still running as root and has full root privileges (just in a different mount + PID namespace) + * it doesn’t block any system calls with seccomp-bpf + + + +### container command line tools + +The zine also goes over a bunch of command line tools & files that you can use to inspect running containers or play with Linux container features. Here’s a list: + + * `mount -t overlay` (create and view overlay filesystems) + * `unshare` (create namespaces) + * `nsenter` (use an existing namespace) + * `getpcaps` (get a process’s capabilities) + * `capsh` (drop or add capabilities, etc) + * `cgcreate` (create a cgroup) + * `cgexec` (run a command in an existing cgroup) + * `chroot` (change root directory. not actually what containers use but interesting to play with anyway) + * `/sys/fs/cgroups` (for information about cgroups, like `memory.usage_in_bytes`) + * `/proc/PID/ns` (all a process’s namespaces) + * `lsns` (another way to view namespaces) + + + +I also made a short youtube video a while back called [ways to spy on a Docker container][13] that demos some of these command line tools. + +### container runtime agnostic + +I tried to keep this zine pretty container-runtime-agnostic – I mention Docker a couple of times because it’s so widely used, but it’s about the Linux kernel features that make containers work in general, not Docker or LXC or systemd-nspawn or Kubernetes or whatever. If you understand the fundamentals you can figure all those things out! + +### we redesigned wizardzines.com! + +On Friday I also launched a redesign of [wizardzines.com][1]! [Melody Starling][14] (who is amazing) did the design. I think now it’s better organized but the tiny touch that I’m most delighted by is that now the zines jump with joy when you hover over them. + +One cool thing about working with a designer is – they don’t just make things _look_ better, they help _organize_ the information better so the website makes more sense and it’s easier to find things! This is probably obvious to anyone who knows anything about design but I haven’t worked with designers very much (or maybe ever?) so it was really cool to see. + +One tiny example of this: Melody had the idea of adding a tiny FAQ on the landing page for each zine, where I can put the answers to all the questions people always ask! Here’s what the little FAQ box looks like: + +[![][15]][4] + +I probably want to edit those questions & answers over time but it’s SO NICE to have somewhere to put them. + +### what’s next: maybe debugging! or working more on flashcards! + +The two projects I’m thinking about the most right now are + + 1. a zine about debugging, which I started last summer and haven’t gotten around to finishing yet + 2. 
a [flashcards project][16] that I've been adding to slowly over the last couple of months. I think it could become a nice way to explain basic ideas.
+
+
+
+Here's a link to where to [get the zine][4] again :)
+
+--------------------------------------------------------------------------------
+
+via: https://jvns.ca/blog/2020/04/27/new-zine-how-containers-work/
+
+作者:[Julia Evans][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://jvns.ca/
+[b]: https://github.com/lujun9972
+[1]: https://wizardzines.com
+[2]: https://wizardzines.com/zines/all-the-zines/
+[3]: https://jvns.ca/images/containers-cover.jpg
+[4]: https://wizardzines.com/zines/containers
+[5]: https://stripe.com/en-ca/blog/operating-kubernetes
+[6]: https://jvns.ca/blog/2016/09/15/whats-up-with-containers-docker-and-rkt/
+[7]: https://jvns.ca/blog/2016/10/10/what-even-is-a-container/
+[8]: https://jvns.ca/blog/2016/12/22/container-networking/
+[9]: https://jvns.ca/blog/2016/10/26/running-container-without-docker/
+[10]: https://jvns.ca/blog/2017/02/17/mystery-swap/
+[11]: https://jvns.ca/blog/2016/10/02/a-list-of-container-software/
+[12]: https://github.com/p8952/bocker
+[13]: https://www.youtube.com/watch?v=YCVSdnYzH34&t=1s
+[14]: https://melody.dev
+[15]: https://jvns.ca/images/wizardzines-faq.png
+[16]: https://flashcards.wizardzines.com

From 08a4631d0d7b1c10131ba93bb1ab691acfd384c2 Mon Sep 17 00:00:00 2001
From: geekpi
Date: Wed, 29 Apr 2020 08:48:20 +0800
Subject: [PATCH 035/178] translated

---
 ...ox is an All-in-one Messenger for Linux.md | 98 ------------------
 ...ox is an All-in-one Messenger for Linux.md | 98 +++++++++++++++++++
 2 files changed, 98 insertions(+), 98 deletions(-)
 delete mode 100644 sources/tech/20200331 Rambox is an All-in-one Messenger for Linux.md
 create mode 100644 translated/tech/20200331 Rambox is an All-in-one Messenger for Linux.md

diff --git a/sources/tech/20200331 Rambox is an All-in-one Messenger for Linux.md b/sources/tech/20200331 Rambox is an All-in-one Messenger for Linux.md
deleted file mode 100644
index 5e6a8a2d5c..0000000000
--- a/sources/tech/20200331 Rambox is an All-in-one Messenger for Linux.md
+++ /dev/null
@@ -1,98 +0,0 @@
-[#]: collector: (lujun9972)
-[#]: translator: (geekpi)
-[#]: reviewer: ( )
-[#]: publisher: ( )
-[#]: url: ( )
-[#]: subject: (Rambox is an All-in-one Messenger for Linux)
-[#]: via: (https://itsfoss.com/rambox/)
-[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
-
-Rambox is an All-in-one Messenger for Linux
-======
-
-_**Brief: Rambox is an all-in-one messenger that lets you combine multiple services like Discord, Slack, Facebook Messenger and hundreds of more such services in one place.**_
-
-### Rambox: Add multiple messaging Services in a single app
-
-![][1]
-
-Rambox is one of the best ways to manage multiple services for communication through a single app installed. You can use [multiple messaging services][2] like Facebook Messenger, Gmail chats, AOL, Discord, Google Duo, [Viber][3] and a lot more from the same interface.
-
-This way, you don't need to install individual apps or keep them opened in browser all the time. You can use a master password to lock the Rambox application. You can also use do not disturb feature.
-
-Rambox offers an [open source community edition][4] which is free to use. The paid pro version gives you access to 600+ apps while the community addition has 99+ apps.
Pro version has additional features like themes, hibernation, ad-block, spell check and premium support. - -Don’t worry. The open source community edition itself is quite useful and you may not even need those pro features. - -### Features of Rambox - -![][5] - -While you should find most of the essential features in the open-source edition, you might notice some of them limited to the pro version. - -Here, I’ve mentioned all the essential features available: - - * You get about 100 apps/services to choose from in the open-source edition - * Ability to protect the app with a single Master password lock - * Ability to lock each session that you load up - * Do Not Disturb mode - * Ability to sync apps and configuration across multiple devices. - * You can create and add custom apps - * Support for keyboard shortcuts - * Ability to enable/disable apps without needing to delete them - * JS & CSS injection support to tweak the styling of apps - * Ad-block (**pro version**) - * Hibernation support (**pro version**) - * Theme support (**pro version**) - * Mobile view **(pro version)** - * Spell check **(pro version)** - * Work hours – to schedule a time for incoming notifications **(pro version)** - * Proxies support **(pro version)** - - - -In addition to what I’ve listed here, you might find some more features in the Rambox Pro edition. To know more about it, you can refer to the [official list of features][6]. - -It is also worth noting that you cannot have more than 3 active simultaneous device connections. - -### Installing Rambox on Linux - -You can easily get started using Rambox using the **.AppImage** file available on the [official download page][4]. If you’re curious, you can refer our guide on how to [use the AppImage file on Linux][7]. - -In either case, you can also get it from the [Snap store][8]. Also, feel free to check their [GitHub releases section][9] for **.deb / .rpm** or other packages. - -[Download Rambox Community Edition][4] - -### Wrapping Up - -It can be a little overwhelming to have a lot of apps installed using Rambox. So, I’d suggest you monitor the RAM usage when adding more apps and using them for work. - -There is also a similar app called [Franz][10] which is also part open source and part premium like Rambox. - -Even though solutions like Rambox or Franz are quite useful, they aren’t always resource-friendly, specially if you start using tens of services at the same time. So, keep an eye on your system resources (if you notice a performance impact). - -Otherwise, it’s an impressive app that does the work that you’d expect. Have you tried it out? Feel free to let me know your thoughts! 
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/rambox/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/rambox-ce.jpg?ssl=1 -[2]: https://itsfoss.com/best-messaging-apps-linux/ -[3]: https://itsfoss.com/viber-linux-client-beta-install/ -[4]: https://rambox.pro/#ce -[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/rambox-preferences.png?ssl=1 -[6]: https://rambox.pro/#features -[7]: https://itsfoss.com/use-appimage-linux/ -[8]: https://snapcraft.io/rambox -[9]: https://github.com/ramboxapp/community-edition/releases -[10]: https://itsfoss.com/franz-messaging-app/ diff --git a/translated/tech/20200331 Rambox is an All-in-one Messenger for Linux.md b/translated/tech/20200331 Rambox is an All-in-one Messenger for Linux.md new file mode 100644 index 0000000000..28a57f2842 --- /dev/null +++ b/translated/tech/20200331 Rambox is an All-in-one Messenger for Linux.md @@ -0,0 +1,98 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Rambox is an All-in-one Messenger for Linux) +[#]: via: (https://itsfoss.com/rambox/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Rambox 是 Linux 中多合一的消息收发工具 +====== + +_**简介:Rambox 是一个多合一消息收发工具,允许你将多种服务(如 Discord、Slack、Facebook Messenger)和数百个此类服务结合在一起。**_ + +### Rambox:在单个应用中添加多个消息服务 + +![][1] + +Rambox 是通过安装单个应用管理多个通信服务的最佳方式之一。你可以在一个界面使用[多个消息服务][2],如 Facebook Messenger、Gmail chats、AOL、Discord、Google Duo、[Viber][3] 等。 + +这样,你就不需要安装单独的应用或者在浏览器中保持打开。你可以使用主密码锁定 Rambox 应用。你还可以使用"请勿打扰"功能。 + +Rambox 提供可免费使用的[开源社区版][4]。付费专业版允许你访问 600 多个应用,而社区版则包含 99 多个应用。专业版本具有额外的功能,如主题、休眠、ad-block、拼写检查和高级支持。 + +不用担心。开源社区版本身非常有用,你甚至不需要这些专业功能。 + +### Rambox 的功能 + +![][5] + +虽然你应该在开源版中找到大多数基本功能,但你可能会注意到其中一些功能仅限于专业版。 + +此处,我说下所有的基本功能: + + * 在开源版本中,你有大约 100 个应用/服务可供选择 + * 能够使用单个主密码锁保护应用 + * 能够锁定加载的每个会话 + * 请勿打扰模式 + * 能够跨多个设备同步应用和配置 + * 你可以创建和添加自定义应用 + * 支持键盘快捷键 + * 启用/禁用应用而无需删除它们 + * JS 和 CSS 注入支持,以调整应用的样式 + * Ad-block (**专业版**) + * 休眠支持 (**专业版**) + * 主题支持(**专业版**) + * 移动视图 (**专业版**) + * 拼写检查 (**专业版**) + * 工作时间 - 计划传入通知时间 (**专业版**) + * 代理支持 (**专业版**) + + + +除了我在这里列出的内容外,你还可以在 Rambox Pro 版本中找到更多功能。要了解有关它的更多信息,你可以参考[正式功能列表][6]。 + +还值得注意的是,你不能有超过 3 个活跃并发设备连接。 + +### 在 Linux 上安装 Rambox + +你可以在[官方下载页][4]获取 **.AppImage** 文件来运行 Rambox。如果你好奇,你可以参考我们的指南,了解如何[在 Linux 上使用 AppImage 文件][7]。 + +另外,你也可以从 [Snap 商店][8]获取它。此外,请查看其 [GitHub release][9] 部分的 **.deb / .rpm** 或其他包。 + +[Download Rambox Community Edition][4] + +### 总结 + +使用 Rambox 安装大量应用可能会有点让人不知所措。因此,我建议你在添加更多应用并将其用于工作时监视内存使用情况。 + +还有一个类似的应用称为 [Franz][10],它也像 Rambox 部分开源、部分高级版。 + +尽管像 Rambox 或 Franz 这样的解决方案非常有用,但它们并不总是资源友好,特别是如果你同时使用数十个服务。因此,请留意系统资源(如果你注意到对性能的影响)。 + +除此之外,这是一个令人印象深刻的应用。你有试过了么?欢迎随时让我知道你的想法! 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/rambox/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/rambox-ce.jpg?ssl=1 +[2]: https://itsfoss.com/best-messaging-apps-linux/ +[3]: https://itsfoss.com/viber-linux-client-beta-install/ +[4]: https://rambox.pro/#ce +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/rambox-preferences.png?ssl=1 +[6]: https://rambox.pro/#features +[7]: https://itsfoss.com/use-appimage-linux/ +[8]: https://snapcraft.io/rambox +[9]: https://github.com/ramboxapp/community-edition/releases +[10]: https://itsfoss.com/franz-messaging-app/ From bbbc641a6619d6009d59675eeed8d027a40820f4 Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 29 Apr 2020 09:03:15 +0800 Subject: [PATCH 036/178] translating --- ...00428 Using Files and Folders on Desktop Screen in Ubuntu.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md b/sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md index 40105e3aed..32eeaa9c95 100644 --- a/sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md +++ b/sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 17709c477be42b43b7f6430e1629e2a77ab8153b Mon Sep 17 00:00:00 2001 From: qfzy1233 Date: Wed, 29 Apr 2020 09:25:21 +0800 Subject: [PATCH 037/178] qfzy1233 is translating --- .../20200424 16 Things to do After Installing Ubuntu 20.04.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md b/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md index cca31a0426..5637651850 100644 --- a/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md +++ b/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (qfzy1233) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 1f2fb8ee41e52ab2ccc6b7a9406489aa0b48b911 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Apr 2020 09:41:42 +0800 Subject: [PATCH 038/178] PRF @wxy --- ... DNF and YUM, Why is Yum Replaced by DNF.md | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/translated/tech/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md b/translated/tech/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md index f73a3e180b..98f6e0cefe 100644 --- a/translated/tech/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md +++ b/translated/tech/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md @@ -1,20 +1,22 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (The Difference Between DNF and YUM, Why is Yum Replaced by DNF?) 
[#]: via: (https://www.2daygeek.com/comparison-difference-between-dnf-vs-yum/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) -DNF 和 YUM 的区别,为什么 YUM 会被 DNF 取代? +DNF 和 Yum 的区别,为什么 Yum 会被 DNF 取代? ====== 由于 Yum 中许多长期存在的问题仍未得到解决,因此 [Yum 包管理器][1]已被 [DNF 包管理器][2]取代。这些问题包括性能差、内存占用过多、依赖解析速度变慢等。 -DNF 使用 `libsolv` 进行依赖解析,由 SUSE 开发和维护,旨在提高性能。DNF 主要是用 Python 编写的,它有自己的应对依赖解析的方法。 +DNF 使用 `libsolv` 进行依赖解析,由 SUSE 开发和维护,旨在提高性能。 -Yum 是 RPM 的前端工具,它管理依赖关系和资源库,然后使用 RPM 来安装、下载和删除包。它的 API 没有完整的文档,它的扩展系统只允许 Python 插件。 +Yum 主要是用 Python 编写的,它有自己的应对依赖解析的方法。它的 API 没有完整的文档,它的扩展系统只允许 Python 插件。 + +Yum 是 RPM 的前端工具,它管理依赖关系和资源库,然后使用 RPM 来安装、下载和删除包。 为什么他们要建立一个新的工具,而不是修复现有的问题呢? @@ -33,17 +35,17 @@ Ales Kozamblak 解释说,这个修复在技术上是不可行的,而且 Yum 2 | API 有完整的文档 | API 没有完整的文档 3 | 由 C、C++、Python 编写的 | 只用 Python 编写 4 | DNF 目前在 Fedora、RHEL 8、CentOS 8、OEL 8 和 Mageia 6/7 中使用 | YUM 目前在 RHEL 6/7、CentOS 6/7、OEL 6/7 中使用 -5 | DNf 支持各种扩展 | Yum 只支持基于 Python 的扩展 +5 | DNF 支持各种扩展 | Yum 只支持基于 Python 的扩展 6 | API 有良好的文档,因此很容易创建新的功能 | 因为 API 没有正确的文档化,所以创建新功能非常困难 7 | DNF 在同步存储库的元数据时,使用的内存较少 | 在同步存储库的元数据时,YUM 使用了过多的内存 8 | DNF 使用满足性算法来解决依赖关系解析(它是用字典的方法来存储和检索包和依赖信息)| 由于使用公开 API 的原因,Yum 依赖性解析变得迟钝 9 | 从内存使用量和版本库元数据的依赖性解析来看,性能都不错 | 总的来说,在很多因素的影响下,表现不佳 10 | DNF 更新:在 DNF 更新过程中,如果包中包含不相关的依赖,则不会更新 | YUM 将在没有验证的情况下更新软件包 -11 | 如果启用的存储库没有响应,DNF 将跳过它,并继续使用可用的存储库出来事务 | 如果有存储库不可用,YUM 会立即停止 +11 | 如果启用的存储库没有响应,DNF 将跳过它,并继续使用可用的存储库处理事务 | 如果有存储库不可用,YUM 会立即停止 12 | `dnf update` 和 `dnf upgrade` 是等价的 | 在 Yum 中则不同 13 | 安装包的依赖关系不更新 | Yum 为这种行为提供了一个选项 14 | 清理删除的包:当删除一个包时,DNF 会自动删除任何没有被用户明确安装的依赖包 | Yum 不会这样做 -15 | 存储库缓存更新计划:默认情况下,系统启动后 10 分钟后,DNF 每小时检查一次对配置的存储库的更新。这个动作由系统定时器单元 `/usr/lib/systemd/system/system/dnf-makecache.timer` 控制 | Yum 也会这样做 +15 | 存储库缓存更新计划:默认情况下,系统启动后 10 分钟后,DNF 每小时会对配置的存储库检查一次更新。这个动作由系统定时器单元 `dnf-makecache.timer` 控制 | Yum 也会这样做 16 | 内核包不受 DNF 保护。不像 Yum,你可以删除所有的内核包,包括运行中的内核包 | Yum 不允许你删除运行中的内核 17 | libsolv:用于解包和读取资源库。hawkey: 为 libsolv 提供简化的 C 和 Python API 库。librepo: 提供 C 和 Python(类似 libcURL)API 的库,用于下载 Linux 存储库元数据和软件包。libcomps: 是 yum.comps 库的替代品。它是用纯 C 语言编写的库,有 Python 2 和 Python 3 的绑定。| Yum 不使用单独的库来执行这些功能 18 | DNF 包含 29000 行代码 | Yum 包含 56000 行代码 @@ -56,7 +58,7 @@ via: https://www.2daygeek.com/comparison-difference-between-dnf-vs-yum/ 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 7d1f1683e0a6f1af8ade655a83dafa6d7289d8a2 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Apr 2020 09:42:53 +0800 Subject: [PATCH 039/178] PUB @wxy https://linux.cn/article-12161-1.html --- ...ference Between DNF and YUM, Why is Yum Replaced by DNF.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md (98%) diff --git a/translated/tech/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md b/published/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md similarity index 98% rename from translated/tech/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md rename to published/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md index 98f6e0cefe..8521ab4291 100644 --- a/translated/tech/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md +++ b/published/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md @@ -1,8 
+1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12161-1.html) [#]: subject: (The Difference Between DNF and YUM, Why is Yum Replaced by DNF?) [#]: via: (https://www.2daygeek.com/comparison-difference-between-dnf-vs-yum/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) From b7f0e1f5f16340ca1cace9038c9ca98185dee42d Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 29 Apr 2020 10:00:52 +0800 Subject: [PATCH 040/178] Rename sources/tech/20200428 Fedora 32 is officially here.md to sources/news/20200428 Fedora 32 is officially here.md --- sources/{tech => news}/20200428 Fedora 32 is officially here.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => news}/20200428 Fedora 32 is officially here.md (100%) diff --git a/sources/tech/20200428 Fedora 32 is officially here.md b/sources/news/20200428 Fedora 32 is officially here.md similarity index 100% rename from sources/tech/20200428 Fedora 32 is officially here.md rename to sources/news/20200428 Fedora 32 is officially here.md From 1152c82f7e00ea69e60a93e896f7d325a23e063f Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 29 Apr 2020 10:08:24 +0800 Subject: [PATCH 041/178] Rename sources/tech/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md to sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md --- ...20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => news}/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md (100%) diff --git a/sources/tech/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md b/sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md similarity index 100% rename from sources/tech/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md rename to sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md From 8892869d68b62bd805f5a72617da4cb30750d486 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 29 Apr 2020 10:15:44 +0800 Subject: [PATCH 042/178] Rename sources/tech/20200428 How I empower and reach millions through open source.md to sources/talk/20200428 How I empower and reach millions through open source.md --- ...200428 How I empower and reach millions through open source.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => talk}/20200428 How I empower and reach millions through open source.md (100%) diff --git a/sources/tech/20200428 How I empower and reach millions through open source.md b/sources/talk/20200428 How I empower and reach millions through open source.md similarity index 100% rename from sources/tech/20200428 How I empower and reach millions through open source.md rename to sources/talk/20200428 How I empower and reach millions through open source.md From cad41935fea060af0eb836f1b0060f4bedab2423 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Apr 2020 11:24:47 +0800 Subject: [PATCH 043/178] APL --- sources/news/20200428 Fedora 32 is officially here.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/news/20200428 Fedora 32 is officially here.md b/sources/news/20200428 Fedora 32 is officially here.md index 67989780ea..a3066f6ae1 100644 --- a/sources/news/20200428 Fedora 32 is officially here.md +++ b/sources/news/20200428 Fedora 32 is officially here.md @@ -1,5 +1,5 @@ [#]: 
collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 06f5b5082c96092be7b3d64385c52111fad1e4bd Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Apr 2020 11:56:19 +0800 Subject: [PATCH 044/178] TSL&PRF --- .../20200428 Fedora 32 is officially here.md | 77 ------------------- .../20200428 Fedora 32 is officially here.md | 77 +++++++++++++++++++ 2 files changed, 77 insertions(+), 77 deletions(-) delete mode 100644 sources/news/20200428 Fedora 32 is officially here.md create mode 100644 translated/news/20200428 Fedora 32 is officially here.md diff --git a/sources/news/20200428 Fedora 32 is officially here.md b/sources/news/20200428 Fedora 32 is officially here.md deleted file mode 100644 index a3066f6ae1..0000000000 --- a/sources/news/20200428 Fedora 32 is officially here.md +++ /dev/null @@ -1,77 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Fedora 32 is officially here!) -[#]: via: (https://fedoramagazine.org/announcing-fedora-32/) -[#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/) - -Fedora 32 is officially here! -====== - -![][1] - -It’s here! We’re proud to announce the release of Fedora 32. Thanks to the hard work of thousands of Fedora community members and contributors, we’re celebrating yet another on-time release. - -If you just want to get to the bits without delay, head over to right now. For details, read on! - -## **All of Fedora’s Flavors** - -Fedora Editions are targeted outputs geared toward specific “showcase” uses. - -Fedora Workstation focuses on the desktop. In particular, it’s geared toward software developers who want a “just works” Linux operating system experience. This release features [GNOME 3.36][2], which has plenty of great improvements as usual. My favorite is the new lock screen! - -Fedora Server brings the latest in cutting-edge open source server software to systems administrators in an easy-to-deploy fashion. For edge computing use cases, [Fedora IoT][3] provides a strong foundation for IoT ecosystems. - -Fedora CoreOS is an emerging Fedora Edition. It’s an automatically-updating, minimal operating system for running containerized workloads securely and at scale. It offers several [update streams][4] that can be followed for automatic updates that occur roughly every two weeks. Currently the **next** stream is based on Fedora 32, with the **testing** and **stable** streams to follow. You can find information about released artifacts that follow the **next** stream from [the download page][5] and information about how to use those artifacts in the [Fedora CoreOS Documentation][6]. - -Of course, we produce more than just the editions. [Fedora Spins][7] and [Labs][8] target a variety of audiences and use cases, including the [Fedora Astronomy Lab][9], which brings a complete open source toolchain to both amateur and professional astronomers, and desktop environments like [KDE Plasma][10] and [Xfce][11]. New in Fedora 32 is the [Comp Neuro Lab][12], developed by our Neuroscience Special Interest Group to enable computational neuroscience. - -And, don’t forget our alternate architectures: [ARM AArch64, Power, and S390x][13]. Of particular note, we have improved support for the Rockchip system-on-a-chip devices including the Rock960, RockPro64, and Rock64. 
- -**General improvements** - -No matter what variant of Fedora you use, you’re getting the latest the open source world has to offer. Following our “[First][14]” foundation, we’ve updated key programming language and system library packages, including GCC 10, Ruby 2.7, and Python 3.8. Of course, with Python 2 past end-of-life, we’ve removed most Python 2 packages from Fedora. A legacy python27 package is provided for developers and users who still need it. In Fedora Workstation, we’ve enabled the EarlyOOM service by default to improve the user experience in low-memory situations. - -We’re excited for you to try out the new release! Go to and download it now. Or if you’re already running a Fedora operating system, follow the easy [upgrade instructions][15]. - -## **In the unlikely event of a problem….** - -If you run into a problem, check out the [Fedora 32 Common Bugs][16] page, and if you have questions, visit our [Ask Fedora][17] user-support platform. - -## **Thank you everyone** - -Thanks to the thousands of people who contributed to the Fedora Project in this release cycle, and especially to those of you who worked extra hard to make this another on-time release during a pandemic. Fedora is a community, and it’s great to see how much we’ve supported each other. I invite you to join us in the [Red Hat Summit Virtual Experience][18] 28-29 April to learn more about Fedora and other communities. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/announcing-fedora-32/ - -作者:[Matthew Miller][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/mattdm/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/f32-final-816x345.png -[2]: https://www.gnome.org/news/2020/03/gnome-3-36-released/ -[3]: https://iot.fedoraproject.org/ -[4]: https://docs.fedoraproject.org/en-US/fedora-coreos/update-streams/ -[5]: https://getfedora.org/en/coreos/download?stream=next -[6]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/ -[7]: https://spins.fedoraproject.org/ -[8]: https://labs.fedoraproject.org/ -[9]: https://labs.fedoraproject.org/en/astronomy/ -[10]: https://spins.fedoraproject.org/en/kde/ -[11]: https://spins.fedoraproject.org/en/xfce/ -[12]: https://labs.fedoraproject.org/en/comp-neuro -[13]: https://alt.fedoraproject.org/alt/ -[14]: https://docs.fedoraproject.org/en-US/project/#_first -[15]: https://docs.fedoraproject.org/en-US/quick-docs/upgrading/ -[16]: https://fedoraproject.org/wiki/Common_F32_bugs -[17]: http://ask.fedoraproject.org -[18]: https://www.redhat.com/en/summit diff --git a/translated/news/20200428 Fedora 32 is officially here.md b/translated/news/20200428 Fedora 32 is officially here.md new file mode 100644 index 0000000000..5b9bfbf49d --- /dev/null +++ b/translated/news/20200428 Fedora 32 is officially here.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Fedora 32 is officially here!) +[#]: via: (https://fedoramagazine.org/announcing-fedora-32/) +[#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/) + +Fedora 32 正式发布! +====== + +![][1] + +它来了! 
我们很荣幸地宣布 Fedora 32 的发布。感谢成千上万的 Fedora 社区成员和贡献者的辛勤工作,我们又一次准时发布了。 + +如果你只想马上就能拿到它,请马上访问 。更多详情,请继续阅读本文。 + +### Fedora 的全部变种 + +Fedora Editions 是针对特定的“展示”用途输出的。 + +Fedora Workstation 专注于桌面系统。特别是,它面向的是那些希望获得“可以工作的” Linux 操作系统体验的软件开发者。这个版本采用了 [GNOME 3.36][2],一如既往地有很多很棒的改进。我最喜欢的是新的锁屏! + +Fedora Server 以一种易于部署的方式为系统管理员带来了新锐的开源服务器软件。对于边缘计算用例,[Fedora IoT][3] 为 IoT 生态系统提供了坚实的基础。 + +Fedora CoreOS 是一个新兴的 Fedora Edition。它是一个自动更新的、最小化的操作系统,用于安全地大规模运行容器化工作负载。它提供了几个[更新流][4],遵循大约每两周一次的自动更新。目前,next 流是基于 Fedora 32,后续还有 testing 流和 stable 流。你可以从[下载页面][5]中找到关于按 next 流发布的工件的信息,并在 [Fedora CoreOS 文档][6]中找到关于如何使用这些工件的信息。 + +当然,我们制作的不仅仅是 Editions。[Fedora Spins][7] 和[实验室][8]针对的是不同的受众和用例,包括[Fedora 天文学实验室][9],它为业余和专业的天文学家带来了完整的开源工具链,还有像 [KDE Plasma][10] 和 [Xfce][11] 这样的桌面环境。Fedora 32 中新增的 [计算神经科学实验室][12] 是由我们的神经科学特别兴趣小组开发的,它可以实现计算神经科学。 + +还有,别忘了我们的备用架构,[ARM AArch64、Power 和 S390x][13]。特别值得一提的是,我们改进了对 Rockchip 系统级芯片的支持,包括 Rock960、RockPro64 和 Rock64。 + +### 一般性的改进 + +无论你使用 Fedora 的哪个变体,你都能获得最新的开源世界。遵循我们的“[First][14]”理念,我们更新了关键的编程语言和系统库包,包括 GCC 10、Ruby 2.7 和 Python 3.8。当然,随着 Python 2 已经过了报废期,我们已经从 Fedora 中删除了大部分 Python 2 包,但我们为仍然需要它的开发者和用户提供了一个遗留的 python27 包。在 Fedora Workstation 中,我们默认启用了 EarlyOOM 服务,以改善低内存情况下的用户体验。 + +我们非常期待你能尝试一下新版本的使用体验! 现在就去 下载它。或者如果你已经在运行 Fedora 操作系统,请按照简单的[升级说明][15]进行升级。 + +### 万一出现问题…… + +如果你遇到问题,请查看[Fedora 32 常见错误][16]页面,如果有问题,请访问我们的 [Askedora][17] 用户支持平台。 + +### 谢谢大家 + +感谢在这个发布周期中为 Fedora 项目做出贡献的成千上万的人,特别是感谢那些在大流行期间为又一次准时发布而付出额外努力的人。Fedora 是一个社区,很高兴看到我们彼此之间的支持。我邀请大家参加 4 月28-29 日的[红帽峰会虚拟体验][18],了解更多关于 Fedora 和其他社区的信息。 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/announcing-fedora-32/ + +作者:[Matthew Miller][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/mattdm/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/f32-final-816x345.png +[2]: https://www.gnome.org/news/2020/03/gnome-3-36-released/ +[3]: https://iot.fedoraproject.org/ +[4]: https://docs.fedoraproject.org/en-US/fedora-coreos/update-streams/ +[5]: https://getfedora.org/en/coreos/download?stream=next +[6]: https://docs.fedoraproject.org/en-US/fedora-coreos/getting-started/ +[7]: https://spins.fedoraproject.org/ +[8]: https://labs.fedoraproject.org/ +[9]: https://labs.fedoraproject.org/en/astronomy/ +[10]: https://spins.fedoraproject.org/en/kde/ +[11]: https://spins.fedoraproject.org/en/xfce/ +[12]: https://labs.fedoraproject.org/en/comp-neuro +[13]: https://alt.fedoraproject.org/alt/ +[14]: https://docs.fedoraproject.org/en-US/project/#_first +[15]: https://docs.fedoraproject.org/en-US/quick-docs/upgrading/ +[16]: https://fedoraproject.org/wiki/Common_F32_bugs +[17]: http://ask.fedoraproject.org +[18]: https://www.redhat.com/en/summit From 5d5cd0484d61b867d1b3d21cb394ed559b34b025 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Apr 2020 11:59:17 +0800 Subject: [PATCH 045/178] PUB @wxy https://linux.cn/article-12164-1.html --- .../20200428 Fedora 32 is officially here.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/news => published}/20200428 Fedora 32 is officially here.md (98%) diff --git a/translated/news/20200428 Fedora 32 is officially here.md b/published/20200428 Fedora 32 is officially here.md similarity index 98% rename from translated/news/20200428 Fedora 32 is officially here.md rename to 
published/20200428 Fedora 32 is officially here.md index 5b9bfbf49d..acad6862f5 100644 --- a/translated/news/20200428 Fedora 32 is officially here.md +++ b/published/20200428 Fedora 32 is officially here.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12164-1.html) [#]: subject: (Fedora 32 is officially here!) [#]: via: (https://fedoramagazine.org/announcing-fedora-32/) [#]: author: (Matthew Miller https://fedoramagazine.org/author/mattdm/) From 95109ddc9f3af37e04d426ff034d8c1bc5027d10 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Apr 2020 21:49:06 +0800 Subject: [PATCH 046/178] PRF @geekpi --- ...Addresses, and Interface Speed on Linux.md | 72 +++++++++---------- 1 file changed, 34 insertions(+), 38 deletions(-) diff --git a/translated/tech/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md b/translated/tech/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md index e586660e0c..10d95aab72 100644 --- a/translated/tech/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md +++ b/translated/tech/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md @@ -1,34 +1,30 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux) [#]: via: (https://www.2daygeek.com/linux-unix-check-network-interfaces-names-nic-speed-ip-mac-address/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) -如何在 Linux 上检查可用的网络接口、关联的 IP 地址、MAC 地址和接口速度 +如何在 Linux 上检查网卡信息 ====== -默认在设置服务器时,你将配置主网络接口。 +![](https://img.linux.net.cn/data/attachment/album/202004/29/214835m1ms3n00s6qbcycz.jpg) -这是每个人所做的构建工作的一部分。 +默认情况下,在设置服务器时你会配置主网络接口。这是每个人所做的构建工作的一部分。有时出于各种原因,你可能需要配置额外的网络接口。 -有时出于各种原因,你可能需要配置额外的网络接口。 +这可以是通过网络绑定bonding/协作teaming来提供高可用性,也可以是用于应用需求或备份的单独接口。 -这可以是网络绑定/团队合作或高可用性,也可以是用于应用需求或备份的单独接口。 +为此,你需要知道计算机有多少接口以及它们的速度来配置它们。 -为此,你需要知道计算机有多少接口以及它们的配置速度。 - -有许多命令可检查可用的网络接口,但是我们仅使用 IP 命令。 - -稍后,我们将使用所有这些工具编写单独的文章。 +有许多命令可检查可用的网络接口,但是我们仅使用 `ip` 命令。以后,我们会另外写一篇文章来全部介绍这些工具。 在本教程中,我们将向你显示可用网络网卡(NIC)信息,例如接口名称、关联的 IP 地址、MAC 地址和接口速度。 -### 什么是 IP 命令 +### 什么是 ip 命令 -**[IP 命令][1]**类似于 ifconfig, 用于分配静态 IP 地址、路由和默认网关等。 +[ip 命令][1] 类似于 `ifconfig`, 用于分配静态 IP 地址、路由和默认网关等。 ``` # ip a @@ -47,15 +43,15 @@ ### 什么是 ethtool 命令 -ethtool 用于查询或控制网络驱动或硬件设置。 +`ethtool` 用于查询或控制网络驱动或硬件设置。 ``` # ethtool eth0 ``` -### 1)如何在 Linux 上使用 IP 命令检查可用的网络接口 +### 1)如何在 Linux 上使用 ip 命令检查可用的网络接口 -在不带任何参数的情况下运行 IP 命令时,它会提供大量信息,但是,如果仅需要可用的网络接口,请使用以下定制的 IP 命令。 +在不带任何参数的情况下运行 `ip` 命令时,它会提供大量信息,但是,如果仅需要可用的网络接口,请使用以下定制的 `ip` 命令。 ``` # ip a |awk '/state UP/{print $2}' @@ -64,13 +60,13 @@ eth0: eth1: ``` -### 2)如何在 Linux 上使用 IP 命令检查网络接口的 IP 地址 +### 2)如何在 Linux 上使用 ip 命令检查网络接口的 IP 地址 -如果只想查看 IP 地址分配给了哪个接口,请使用以下定制的 IP 命令。 +如果只想查看 IP 地址分配给了哪个接口,请使用以下定制的 `ip` 命令。 ``` # ip -o a show | cut -d ' ' -f 2,7 -or +或 ip a |grep -i inet | awk '{print $7, $2}' lo 127.0.0.1/8 @@ -78,18 +74,18 @@ lo 127.0.0.1/8 192.168.1.102/24 ``` -### 3)如何在 Linux 上使用 IP 命令检查网卡的 MAC 地址 +### 3)如何在 Linux 上使用 ip 命令检查网卡的 MAC 地址 如果只想查看网络接口名称和相应的 MAC 
地址,请使用以下格式。 -检查特定的网络接口的 MAC 地址。 +检查特定的网络接口的 MAC 地址: ``` # ip link show dev eth0 |awk '/link/{print $2}' 00:00:00:55:43:5c ``` -检查所有网络接口的 MAC 地址。 +检查所有网络接口的 MAC 地址,创建该脚本: ``` # vi /opt/scripts/mac-addresses.sh @@ -97,12 +93,12 @@ lo 127.0.0.1/8 #!/bin/sh ip a |awk '/state UP/{print $2}' | sed 's/://' | while read output; do -echo $output: -ethtool -P $output + echo $output: + ethtool -P $output done ``` -运行下面的 shell 脚本获取多个网络接口的 MAC 地址。 +运行该脚本获取多个网络接口的 MAC 地址: ``` # sh /opt/scripts/mac-addresses.sh @@ -115,9 +111,9 @@ Permanent address: 00:00:00:55:43:5d ### 4)如何在 Linux 上使用 ethtool 命令检查网络接口速度 -如果要在 Linux 上检查网络接口速度,请使用 ethtool 命令。 +如果要在 Linux 上检查网络接口速度,请使用 `ethtool` 命令。 -检查特定网络接口的速度。 +检查特定网络接口的速度: ``` # ethtool eth0 |grep "Speed:" @@ -125,7 +121,7 @@ Permanent address: 00:00:00:55:43:5d Speed: 10000Mb/s ``` -检查所有网络接口速度。 +检查所有网络接口速度,创建该脚本: ``` # vi /opt/scripts/port-speed.sh @@ -133,12 +129,12 @@ Speed: 10000Mb/s #!/bin/sh ip a |awk '/state UP/{print $2}' | sed 's/://' | while read output; do -echo $output: -ethtool $output |grep "Speed:" + echo $output: + ethtool $output |grep "Speed:" done ``` -运行以下 shell 脚本获取多个网络接口速度。 +运行该脚本获取多个网络接口速度: ``` # sh /opt/scripts/port-speed.sh @@ -151,7 +147,7 @@ Speed: 10000Mb/s ### 5)验证网卡信息的 Shell 脚本 -通过此 **[shell 脚本][2]**,你可以收集上述所有信息,例如网络接口名称、网络接口的 IP 地址,网络接口的 MAC 地址以及网络接口的速度。 +通过此 shell 脚本你可以收集上述所有信息,例如网络接口名称、网络接口的 IP 地址,网络接口的 MAC 地址以及网络接口的速度。创建该脚本: ``` # vi /opt/scripts/nic-info.sh @@ -161,14 +157,14 @@ hostname echo "-------------" for iname in $(ip a |awk '/state UP/{print $2}') do -echo "$iname" -ip a | grep -A2 $iname | awk '/inet/{print $2}' -ip a | grep -A2 $iname | awk '/link/{print $2}' -ethtool $iname |grep "Speed:" + echo "$iname" + ip a | grep -A2 $iname | awk '/inet/{print $2}' + ip a | grep -A2 $iname | awk '/link/{print $2}' + ethtool $iname |grep "Speed:" done ``` -运行以下 shell 脚本检查网卡信息。 +运行该脚本检查网卡信息: ``` # sh /opt/scripts/nic-info.sh @@ -192,7 +188,7 @@ via: https://www.2daygeek.com/linux-unix-check-network-interfaces-names-nic-spee 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From d4649a859a8783f4009c79b60d5c39ebacac3e21 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Apr 2020 21:49:38 +0800 Subject: [PATCH 047/178] PUB @geekpi https://linux.cn/article-12165-1.html --- ... 
Addresses, MAC Addresses, and Interface Speed on Linux.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md (98%) diff --git a/translated/tech/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md b/published/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md similarity index 98% rename from translated/tech/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md rename to published/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md index 10d95aab72..8464da643e 100644 --- a/translated/tech/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md +++ b/published/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12165-1.html) [#]: subject: (How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux) [#]: via: (https://www.2daygeek.com/linux-unix-check-network-interfaces-names-nic-speed-ip-mac-address/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) From a3a6873ca6a01afc1200612d2c4695acbc13e60a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Apr 2020 23:01:06 +0800 Subject: [PATCH 048/178] APL --- ...200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md b/sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md index 377a58f34a..dfa900d9cf 100644 --- a/sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md +++ b/sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 5ca67f76337225ef128bf79ac114127056740d4b Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Apr 2020 23:29:56 +0800 Subject: [PATCH 049/178] TSL&PRF --- ...Lysia Arrives with ZFS and Snap Support.md | 112 ------------------ ...Lysia Arrives with ZFS and Snap Support.md | 110 +++++++++++++++++ 2 files changed, 110 insertions(+), 112 deletions(-) delete mode 100644 sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md create mode 100644 translated/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md diff --git a/sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md b/sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md deleted file mode 100644 index dfa900d9cf..0000000000 --- a/sources/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md +++ /dev/null @@ -1,112 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: 
(Manjaro 20 Lysia Arrives with ZFS and Snap Support) -[#]: via: (https://itsfoss.com/manjaro-20-release/) -[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) - -Manjaro 20 Lysia Arrives with ZFS and Snap Support -====== - -_**Manjaro Linux has refreshed its ISO with Manjaro 20 “Lysia”. It now supports Snap and Flatpak packages in Pamac. ZFS option is added in Manjaro Architect installer and the latest kernel 5.6 is used as the base.**_ - -It’s raining new distribution releases. Ubuntu 20.04 LTS was released last week. Fedora 32 will be releasing shortly and [Manjaro has released version 20][1] codenamed Lysia. - -### What’s new in Manjaro 20 Lysia? - -Plenty actually. Let me show you some of the major new features in Manjaro 20. - -#### New Matcha theme - -Manjaro 20 has a new default theme called Matcha. It gives the desktop a more polished look. - -![][2] - -#### Snap and Flatpak support in Pamac and terminal - -Snap and Flatpak package support is improved. You can use them in command line if you want. - -You can also enable Snap and Flatpak support in the Pamac GUI package manager. - -![Enable Snap support in Pamac Manjaro][3] - -Once enabled, you can find and install Snap/Flatpak applications in the Pamac software manager. - -![Snap applications in Pamac][4] - -#### Pamac offers to install new software based on search (in GNOME) - -In the GNOME variant, if you search for something, Pamac software manager will now offer to install software that match the query. GNOME Software Center does that in other distributions that use GNOME desktop. - -#### ZFS support lands in Manjaro Architect - -You can now easily use ZFS as root in Manjaro Linux. The [ZFS file system][5] support is available in [Manjaro Architect][6]. - -Do note that I am saying Manjaro Architect, the terminal based installer. It’s not the same as the regular graphical [Calamares installer][7]. - -![][8] - -#### Linux kernel 5.6 - -The latest stable [Linux kernel 5.6][9] brings more hardware support for thunderbolt, Nvidia and USB4. You can also use [WireGuard VPN][10]. - -![][11] - -#### Miscellaneous other features - - * New desktop environment versions: Xfce 4.14, GNOME 3.36 and KDE Plasma 5.18 - * zsh is the new default shell - * Display-Profiles allows you to store one or more profiles for your preferred display configuration - * Improved Gnome-Layout-Switcher - * Latest drivers - * Improved and polished Manjaro tools - - - -### How to get Manjaro 20 Lysia? - -_**If you are already using it, just update your Manjaro Linux system and you should already be using version 20.**_ - -Manjaro uses a rolling release model which means you don’t have to manually upgrade from one version to another. You don’t have to reinstall as soon as there is a new version is released. - -If Manjaro is rolling release distribution, why does it release a new version every now and then? It’s because they have to refresh the ISO so that new users downloading Manjaro will not have to install updates for last few years. This is why Arch Linux also refreshes its ISO every month. - -Manjaro ‘ISO refreshes’ are codenamed and have a version because it helps the developers clearly mark each stage of development. - -So, the bottom line is that if you are already using it, just [update your Manjaro Linux system][12] using Pamac or command line. 
- -If you want to try Manjaro or if you want to use ZFS, then you can [install Manjaro][13] by downloading the ISO from its website: - -[Download Manjaro Linux][14] - -Enjoy the new release of Manjaro Linux. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/manjaro-20-release/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://forum.manjaro.org/t/manjaro-20-0-lysia-released/138633 -[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/manjaro-20-lysia.jpeg?resize=800%2C440&ssl=1 -[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/enable-snap-in-pamac-manjaro.jpg?resize=800%2C490&ssl=1 -[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/snap-app-pacman.jpg?resize=800%2C489&ssl=1 -[5]: https://itsfoss.com/what-is-zfs/ -[6]: https://itsfoss.com/manjaro-architect-review/ -[7]: https://calamares.io/ -[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pacman-prompts-install-apps.jpg?resize=800%2C331&ssl=1 -[9]: https://itsfoss.com/linux-kernel-5-6/ -[10]: https://itsfoss.com/wireguard/ -[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/manjaro-20-neofetch-screen.jpg?resize=800%2C495&ssl=1 -[12]: https://itsfoss.com/update-arch-linux/ -[13]: https://itsfoss.com/install-manjaro-linux/ -[14]: https://manjaro.org/download/ diff --git a/translated/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md b/translated/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md new file mode 100644 index 0000000000..80a2a95e10 --- /dev/null +++ b/translated/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md @@ -0,0 +1,110 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Manjaro 20 Lysia Arrives with ZFS and Snap Support) +[#]: via: (https://itsfoss.com/manjaro-20-release/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Manjaro 20 Lysia 到来,支持 ZFS 和 Snap +====== + +![](https://img.linux.net.cn/data/attachment/album/202004/29/232925j8paomvp11pfu12v.jpg) + +> Manjaro Linux 刷新了其 Manjaro 20 “Lysia” 的 ISO。现在在 Pamac 中支持了 Snap 和 Flatpak 软件包。在 Manjaro Architect 安装程序中增加了 ZFS 选项,并使用最新的内核 5.6 作为基础。 + +最近新的发行版的发布像下雨一样。在上周发布了 [Ubuntu 20.04 LTS](https://linux.cn/article-12142-1.html) ,紧接着 [Fedora 32](https://linux.cn/article-12164-1.html) 也刚刚发布,而现在 [Manjaro 发布了版本 20][1],代号为 Lysia。 + +### Manjaro 20 Lysia 有什么新东西? 
+ +其实有很多。让我给大家介绍一下 Manjaro 20 中的一些主要新功能。 + +#### 新的抹茶主题 + +Manjaro 20 有一个新的默认主题,名为 Matcha(抹茶)。它让桌面看起来更有质感。 + +![][2] + +#### 对 Snap 和 Flatpak 的支持 + +Snap 和 Flatpak 软件包的支持得到了改进。如果你愿意,你可以在命令行中使用它们。 + +你还可以在 Pamac 图形界面包管理器中启用 Snap 和 Flatpak 支持。 + +![Enable Snap support in Pamac Manjaro][3] + +启用后,你可以在 Pamac 软件管理器中找到并安装 Snap/Flatpak 应用程序。 + +![Snap applications in Pamac][4] + +#### Pamac 提供了基于搜索安装新软件的方式(在 GNOME 中) + +在 GNOME 变种中,如果你搜索某个东西,Pamac 软件管理器会提供安装符合查询的软件。在其他使用 GNOME 桌面的发行版中,GNOME 软件中心也会这样做。 + +#### ZFS 支持登陆了 Manjaro Architect + +现在,你可以在 Manjaro Linux 中轻松地使用 ZFS 作为根文件系统。在 [Manjaro Architect][6] 中提供了对 [ZFS 文件系统][5]的支持。 + +请注意,我说的是 Manjaro Architect,即基于终端的安装程序。它和普通的图形化的 [Calamares 安装程序][7]不一样。 + +![][8] + +#### Linux kernel 5.6 + +最新的稳定版 [Linux 内核 5.6][9] 带来了更多的硬件支持,如 thunderbolt、Nvidia 和 USB4。你也可以使用 [WireGuard VPN][10]。 + +![][11] + +#### 其他杂项变化 + + * 新的桌面环境版本:Xfce 4.14、GNOME 3.36 和 KDE Plasma 5.18。 + * 新的默认 shell 是 zsh。 + * Display-Profiles 允许你存储一个或多个配置文件,用于你的首选显示配置。 + * 改进后的 Gnome-Layout-Switcher。 + * 最新的驱动程序。 + * 改进和完善了 Manjaro 工具。 + +### 如何取得 Manjaro 20 Lysia? + +如果你已经在使用 Manjaro,只需更新你的 Manjaro Linux 系统,你就应该已经在使用 Lysia 了。 + +Manjaro 采用了滚动发布模式,这意味着你不必手动从一个版本升级到另一个版本。只要有新的版本发布,不需要重新安装就可以使用了。 + +既然 Manjaro 是滚动发布的,为什么每隔一段时间就会发布一个新版本呢?这是因为他们要刷新 ISO,这样下载 Manjaro 的新用户就不用再安装过去几年的更新了。这就是为什么 Arch Linux 也会每个月刷新一次 ISO 的原因。 + +Manjaro 的“ISO 刷新”是有代号和版本的,因为它可以帮助开发者清楚地标明每个开发阶段的发展方向。 + +所以,如果你已经在使用它,只需使用 Pamac 或命令行[更新你的 Manjaro Linux 系统][12]即可。 + +如果你想尝试 Manjaro 或者想使用 ZFS,那么你可以通过从它的网站上[下载 ISO][14] 来[安装 Manjaro][13]。 + +愿你喜欢新的 Manjaro Linux 发布。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/manjaro-20-release/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://forum.manjaro.org/t/manjaro-20-0-lysia-released/138633 +[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/manjaro-20-lysia.jpeg?resize=800%2C440&ssl=1 +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/enable-snap-in-pamac-manjaro.jpg?resize=800%2C490&ssl=1 +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/snap-app-pacman.jpg?resize=800%2C489&ssl=1 +[5]: https://itsfoss.com/what-is-zfs/ +[6]: https://itsfoss.com/manjaro-architect-review/ +[7]: https://calamares.io/ +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pacman-prompts-install-apps.jpg?resize=800%2C331&ssl=1 +[9]: https://itsfoss.com/linux-kernel-5-6/ +[10]: https://itsfoss.com/wireguard/ +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/manjaro-20-neofetch-screen.jpg?resize=800%2C495&ssl=1 +[12]: https://itsfoss.com/update-arch-linux/ +[13]: https://itsfoss.com/install-manjaro-linux/ +[14]: https://manjaro.org/download/ From 8a93e2a3fcacdb959b7c30d197e6903435047286 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 29 Apr 2020 23:38:43 +0800 Subject: [PATCH 050/178] PUB @wxy https://linux.cn/article-12166-1.html --- ...0429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/news => published}/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md (98%) diff --git a/translated/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md b/published/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md 
similarity index 98% rename from translated/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md rename to published/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md index 80a2a95e10..2ed94dd0a8 100644 --- a/translated/news/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md +++ b/published/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12166-1.html) [#]: subject: (Manjaro 20 Lysia Arrives with ZFS and Snap Support) [#]: via: (https://itsfoss.com/manjaro-20-release/) [#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) From 2927a899e506b40a719cb0b723d12708a8cfe5f7 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Apr 2020 00:55:03 +0800 Subject: [PATCH 051/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200430=20Three?= =?UTF-8?q?=20Methods=20Boot=20CentOS/RHEL=207/8=20Systems=20in=20Single?= =?UTF-8?q?=20User=20Mode?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md --- ...OS-RHEL 7-8 Systems in Single User Mode.md | 159 ++++++++++++++++++ 1 file changed, 159 insertions(+) create mode 100644 sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md diff --git a/sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md b/sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md new file mode 100644 index 0000000000..a3f3768533 --- /dev/null +++ b/sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md @@ -0,0 +1,159 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Three Methods Boot CentOS/RHEL 7/8 Systems in Single User Mode) +[#]: via: (https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +Three Methods Boot CentOS/RHEL 7/8 Systems in Single User Mode +====== + +Single user mode, also referred to as maintenance mode, which allows a single super user to recover/repair system problems. + +Generally, these problems cannot be solved in a multi-user environment. The system can boot but will not function properly or you will not be able to log in. + +It uses `runlevel1.target` or `rescue.target` on **[Red Hat][1]** (RHEL) 7/8 based systems. + +In this mode, the system mount all local file systems, but does not activate network interfaces. + +It only enables certain services and minimal functionality to repair the system. + +This method is mostly useful when you want to run fsck to fix corrupted file systems, or to reset a forgotten root password, or to fix a mount point issue on the system. + +You can boot **[CentOS][2]**/**[RHEL][3]** 7/8 systems in single user mode using the below three methods. 
+ + * **Method-1:** Boot CentOS/RHEL 7/8 systems in single user mode by adding the “rd.break” parameter to the kernel + * **Method-2:** Boot CentOS/RHEL 7/8 systems in single user mode by replacing the “rhgb quiet” word with the “init=/bin/bash or init=/bin/sh” parameter in the kernel + * **Method-3:** Boot CentOS/RHEL 7/8 systems in single user mode by replacing the “ro” word with the “rw init=/sysroot/bin/sh” parameter in the kernel + + + +### Method-1: Boot CentOS/RHEL 7/8 systems in single user mode by adding the “rd.break” parameter to the kernel + +Reboot your system, on the GRUB2 boot screen, press the `"e"` key to edit the selected kernel. You need to select the first line, the first one is the latest kernel whereas you can select the different one if you would like to boot your system with the older kernel. + +![][4] + +Depending on your RHEL/CentOS version, find the word **“linux16”** or **“linux”**, press the “End” button on the keyboard, go to the end of the line, and add the keyword **“rd.break”** as shown below in the screenshot, then press **“Ctrl+x”** or **“F10”** to boot into single-user mode. + +You need to find the word **`linux16`** for RHEL/CentOS 7 systems, while **`linux`** for RHEL/CentOS 8 systems. + +![][4] + +This change mount your root file system into **“read only (RO)”** mode. You can check this by running the command below. Also, the output below clearly shows that you are in **“Emergency Mode”**. + +``` +# mount | grep root +``` + +![][4] + +To make changes to the **“sysroot”** file system you need to remount it with READ and WRITE (RW) mode. + +``` +# mount -o remount,rw /sysroot +``` + +Run the below command to change the environment, commonly known as “jailed directory” or “chroot jail”. + +``` +# chroot /sysroot +``` + +![][4] + +Now, single-user mode is completely ready for use. Once you have fixed your problem to exit single user mode, perform the following steps. + +CentOS/RHEL 7/8 uses SELinux by default, so create the following hidden file, which will automatically perform a relabel of all files on next boot. + +``` +# touch /.autorelabel +``` + +Finally, run the below command to restart the system. Alternatively, type “exit” command twice to restart your system. + +``` +# reboot -f +``` + +### Method-2: Boot CentOS/RHEL 7/8 systems in single user mode by replacing the “rhgb quiet” word with the “init=/bin/bash or init=/bin/sh” parameters in the kernel + +Reboot your system, on the GRUB2 boot screen, press the `"e"` key to edit the selected kernel parameters. + +![][4] + +Find the word **“rhgb quiet”** and replace it with **“init=/bin/bash”** or **“init=/bin/sh”**, then press **“Ctrl+x”** or **“F10”** to boot in single user mode. + +Screenshot for **`init=/bin/bash`**. + +![][4] + +Screenshot for **`init=/bin/sh`**. + +![][4] + +By default, this will mount your “/” partition in read-only (RO) mode, so you will need to remount the “/” file system with READ and WRITE (RW) mode to make changes. + +``` +# mount -o remount,rw / +``` + +![][4] + +You can now perform any task that you want. When you are done, run the following command to enable SELinux relabeling on reboot. + +``` +# touch /.autorelabel +``` + +Finally reboot the system. + +``` +# exec /sbin/init 6 +``` + +### Method-3: Boot CentOS/RHEL 7/8 systems in single user mode by replacing the “ro” word with the “rw init=/sysroot/bin/sh” parameter in the kernel + +To interrupt the automatic boot, reboot your system and press any key on the GRUB2 splash screen. 
+ +This will display the list of kernels available on your system and select the latest kernel and press the **`"e"`** key to edit the selected kernel parameters. + +Find the line that starts with the word **“linux”** or **“linux16”** and replace **“ro”** with **“rw init=/sysroot/bin/sh”**. When finished, press **“Ctrl+x”** or **“F10”** to boot in single user mode. + +Change the environment to “chroot jail” by running the below command. + +``` +# chroot /sysroot +``` + +Make any necessary changes to the system. Once done, run the below command to enable SELinux relabeling on reboot. + +``` +# touch /.autorelabel +``` + +Finally reboot the system. + +``` +# reboot -f +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/red-hat/ +[2]: https://www.2daygeek.com/category/centos/ +[3]: https://www.2daygeek.com/category/rhel/ +[4]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 From fd5f92e2b1d0964bd33b91d67a2d94246185c34c Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Apr 2020 00:58:02 +0800 Subject: [PATCH 052/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200429=20Java?= =?UTF-8?q?=20security,=20mainframes=20having=20a=20moment,=20and=20more?= =?UTF-8?q?=20industry=20trends?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200429 Java security, mainframes having a moment, and more industry trends.md --- ...ving a moment, and more industry trends.md | 71 +++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 sources/tech/20200429 Java security, mainframes having a moment, and more industry trends.md diff --git a/sources/tech/20200429 Java security, mainframes having a moment, and more industry trends.md b/sources/tech/20200429 Java security, mainframes having a moment, and more industry trends.md new file mode 100644 index 0000000000..beeac5df9d --- /dev/null +++ b/sources/tech/20200429 Java security, mainframes having a moment, and more industry trends.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Java security, mainframes having a moment, and more industry trends) +[#]: via: (https://opensource.com/article/20/4/java-mainframes-dev-skills-more-industry-trends) +[#]: author: (Tim Hildred https://opensource.com/users/thildred) + +Java security, mainframes having a moment, and more industry trends +====== +A weekly look at open source community and industry trends. +![Person standing in front of a giant computer screen with numbers, data][1] + +As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update. + +## [How secure is Java compared to other languages?][2] + +> In this article, we'll look at how the most commonly used programming languages rank in terms of security. 
I'll explain some factors that make one language less secure than another, and why identified vulnerabilities have increased so much in the past few years. Finally, I'll suggest a few ways Java developers can reduce vulnerabilities in code.   + +**The impact**: If software is eating the world, then hackers are... I guess the thrush thriving in the gullet? Hyperbole aside, the more stuff made of software, the more incentive clever people have to try and figure out how to do things they probably shouldn't be able to. This applies to Java too. + +## [Mainframes are having a moment][3] + +> In addition to being abundant, mainframe jobs pay well, and so far, appear not to be as affected by the pandemic as other areas of tech employment. Salaries for entry-level enterprise computing jobs [average US $70,100 a year][4] [PDF], according to a 2019 report from tech analyst [Forrester Research][5] commissioned by IBM. As recently as this week, jobs boards such as [Indeed][6] and [Dice.com][7] listed hundreds or in some cases thousands of openings for mainframe positions at all levels. Advertised pay ranges from $30 to $35 an hour for a junior mainframe developer to well over $150,000 a year for a mainframe database administration manager. + +**The impact**: That is much, much better than a poke in the eye. + +## [The developer skills on the rise, and in decline][8] + +> Indeed.com analysed job postings using a list of 500 key technology skill terms to see which ones employers are looking for more these days and which are falling out of favour. Such research has helped identify cutting-edge skills over the past five years, with some previous years’ risers now well establish, thanks to explosive growth. + +**The impact**: The "on the rise" skills outnumber the "in decline" skills. Bad news for browser developers... + +## [The IT Pro Podcast: Building cloud-native apps][9] + +> The cloud is eating enterprise IT, and while on-premise applications are going to be around for a long time to come, the importance of being able to successfully take advantage of cloud technologies should not be understated. However, it’s one thing to simply port an existing application to the cloud, but developing software to be run in cloud environments is a different matter altogether. + +**The impact**: What is technology if not manifested mindset? + +## [Communication is key to culture change][10] + +> The outcome is staggering. Business teams feel invested in the development of the solution, they feel a sense of excitement and ownership. So much so, they go out into the corridors of the organisation to evangelise and promote the solution. Conversely, this improves the status of the developers within the business. It allows them to integrate with other stakeholders, contribute to new processes and help to achieve common goals.  + +**The impact**: As a communications person, I couldn't agree more. Communication is the difference between an organization and a movement. 
+ +_I hope you enjoyed this list and come back next week for more open source community, market, and industry trends._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/java-mainframes-dev-skills-more-industry-trends + +作者:[Tim Hildred][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/thildred +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data) +[2]: https://www.javaworld.com/article/3537561/how-secure-is-java-compared-to-other-languages.html +[3]: https://spectrum.ieee.org/tech-talk/computing/software/mainframes-programming-language-cobol-news-coronavirus +[4]: https://www.ibm.com/downloads/cas/1EPYAP5D +[5]: https://go.forrester.com/ +[6]: https://www.indeed.com/q-Mainframe-jobs.html +[7]: https://www.dice.com/jobs/q-Mainframe-jobs +[8]: https://www.techcentral.ie/10-developer-skills-on-the-rise-and-five-on-the-decline/ +[9]: https://www.itpro.co.uk/cloud/355348/the-it-pro-podcast-building-cloud-native-apps +[10]: https://www.verdict.co.uk/culture-service-digital-enterprise/ From 4f376efa41c389b881e489cc1e28f64c61be54ec Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Apr 2020 00:58:47 +0800 Subject: [PATCH 053/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200429=20Open?= =?UTF-8?q?=20source=20live=20streaming=20with=20Open=20Broadcaster=20Soft?= =?UTF-8?q?ware?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200429 Open source live streaming with Open Broadcaster Software.md --- ...treaming with Open Broadcaster Software.md | 137 ++++++++++++++++++ 1 file changed, 137 insertions(+) create mode 100644 sources/tech/20200429 Open source live streaming with Open Broadcaster Software.md diff --git a/sources/tech/20200429 Open source live streaming with Open Broadcaster Software.md b/sources/tech/20200429 Open source live streaming with Open Broadcaster Software.md new file mode 100644 index 0000000000..748786de77 --- /dev/null +++ b/sources/tech/20200429 Open source live streaming with Open Broadcaster Software.md @@ -0,0 +1,137 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Open source live streaming with Open Broadcaster Software) +[#]: via: (https://opensource.com/article/20/4/open-source-live-stream) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Open source live streaming with Open Broadcaster Software +====== +If you have something to say, a skill to teach, or just something fun to +share, broadcast it to the world with OBS. +![An old-fashioned video camera][1] + +If you have a talent you want to share with the world, whether it's making your favorite sourdough bread or speedrunning through a level of your favorite video game, live streaming is the modern show-and-tell. It's a powerful way to tell the world about your hobby through a medium once reserved for exclusive and expensive TV studios. Not only is the medium available to anyone with a relatively good internet connection, but the most popular software to make it happen is open source. 
+ +[OBS][2] (Open Broadcaster Software) is a cross-platform application that serves as a control center for your live stream. A _stream_, strictly speaking, means _progressive and coherent data_. The data in a stream can be audio, video, graphics, text, or anything else you can represent as digital data. OBS is programmed to accept data as input, combine streams together (technically referred to as _mixing_) into one product, and then broadcast it. + +![OBS flowchart][3] + +A _broadcast_ is data that can be received by some target. If you're live streaming, your primary target is a streaming service that can host your stream, so other people can find it in a web browser or media player. A live stream is a live event, so people have to "tune in" to your stream when it's happening, or else they miss it. However, you can also target your own hard drive so you can record a presentation and then post it on the internet later for people to watch at their leisure. + +### Installing OBS + +To install OBS on Windows or macOS, download an installer package from [OBS's website][2]. + +To install OBS on Linux, either install it with your package manager (such as **dnf**, **zypper**, or **apt**) or [install it as a Flatpak][4]. + +### Join a streaming service + +In order to live stream, you must have a stream broker. That is, you need a central location on the internet for your stream to be delivered, so your viewers can get to what you're broadcasting. There are a few popular streaming services online, like YouTube and Twitch. You can also [set up your own video streaming server][5] using open source software. + +Regardless of which option you choose, before you begin streaming, you must have a destination for your stream. If you do use a streaming service, you must obtain a _streaming key_. A streaming key is a hash value (it usually looks something like **2ae2fad4e33c3a89c21**) that is private and unique to you. You use this key to authenticate yourself through your streaming software. Without it, the streaming service can't know you are who you say you are and won't let you broadcast over your user account. + +* * * + +* * * + +* * * + +**![Streaming key][6]** + + * In Twitch, your **Primary Stream Key** is available in the **Channel** panel of your **Creator Dashboard**. + * On YouTube, you must enable live streaming by verifying your account. Once you've done that, your **Stream Key** is in the **Other Features** menu option of your **Channel Dashboard**. + * If you're using your own server, there's no maze-like GUI to navigate. You just [create your own streaming key][7]. + + + +### Enter your streaming key + +Once you have a streaming key, launch OBS and go to the **File** > **Settings** menu. + +In the **Settings** window, click on the **Stream** category in the left column. Set the **Service** to your stream service (Custom, Twitch, YouTube, etc.), and enter your stream key. Click the **OK** button in the bottom right to save your changes. + +### Create sources + +In OBS, _sources_ represent any input signal you want to stream. By default, sources are listed at the bottom of the OBS window. + +![OBS sources][8] + +This might be a webcam, a microphone, an audio stream (such as the sound of a video game you're playing), a screen capture of your computer (a "screencast"), a slideshow you want to present, an image, and so on. Before you start streaming, you should define all the sources you plan on using for your stream. 
This means you have to do a little pre-production and consider what you anticipate for your show. Any camera you have set up must be defined as a source in OBS. Any extra media you plan on cutting to during your show must be defined as a source. Any sound effects or background music must be defined as a source. + +Not all sources "happen" at once. By adding media to your **Sources** panel in OBS, you're just assembling the raw components for your stream. Once you make devices and data available to OBS, you can create your **Scenes**. + +#### Setting up audio + +Computers have seemingly dozens of ways to route audio. Here's the workflow to follow when setting up sound for your stream: + + 1. Check your cables: verify that your microphone is plugged in. + 2. Go to your computer's sound control panel and set the input to whatever microphone you want OBS to treat as the main microphone. This might be a gaming headset or a boom mic or a desktop podcasting mic or a Bluetooth device or a fancy audio interface with XLR ports. Whatever it is, make sure your computer "hears" your main sound input. + 3. In OBS, create a source for your main microphone and name it something obvious (e.g., boom mic, master sound, or mic). + 4. Do a test. Make sure OBS "hears" your microphone by referring to the audio-level monitors at the bottom of the OBS window. If it's not responding to the input you believe you've set as input, check your cables, check your computer sound control panel, and check OBS. + + + +I've seen more people panic over audio sources than any other issue when streaming, and we've _all_ made the same dumb mistakes (several times each, probably!) when attempting to set a microphone for a live stream or videoconference call. Breathe deep, check your cables, check your inputs and outputs, and [get comfortable with audio][9]. It'll pay off in the end. + +### Create scenes + +A **Scene** in OBS is a screen layout and consists of one or more sources. + +![Scenes in OBS][10] + +For instance, you might create a scene called **Master shot** that shows you sitting at your desk in front of your computer or at the kitchen counter ready to mix ingredients together. The source could be a webcam mounted on a tripod a meter or two in front of you. Because you want to cut to a detail shot, you might create a second scene called **Close-up**, which uses the computer screen and audio as one input source and your microphone as another source, so you can narrate as you demonstrate what you're doing. If you're doing a baking show, you might want to mount a second webcam above the counter, so you can cut to an overhead shot of ingredients being mixed. Here, your source is a different webcam but probably the same microphone (to avoid making changes in the audio). + +A _scene_, in other words, is a lot like a _shot_ in traditional production vernacular, but it can be the combination of many shots. The fun thing about OBS is that you can mix and match a lot of different sources together, so when you're adding a **Scene**, you can resize and position different sources to achieve picture-in-picture, or split-screen, or any other effect you might want. It's common in video game "let's play" streams to have the video game in full-screen, with the player inset in the lower right or left. Or, if you're recording a panel or a multi-player game like D&D you might have several cameras covering several players in a _Brady Bunch_ grid. + +The possibilities are endless. 
During streaming, you can cut from one scene to another as needed. This is intended to be a dynamic system, so you can change scenes depending on what the viewer needs to see at any given moment. + +Generally, you want to have some preset scenes before you start to stream. Even if you have a friend willing to do video mixing as you stream, you always want a safe scene to fall back to, so take time beforehand to set up at least a master shot that shows you doing whatever it is you're doing. If all else fails, at least you'll have your main shot you can safely and reliably cut to. + +### Transitions + +When switching from one scene to another, OBS uses a transition. Once you have more than one scene, you can configure what kind of transition it uses in the **Transitions** panel. Simple transitions are usually best. By default, OBS uses a subtle crossfade, but you can experiment with others as you see fit. + +### Go live + +To start streaming, do your vocal exercises, find your motivation, and press the **Start Streaming** button. + +![Start streaming in OBS][11] + +As long as you've set up your streaming service correctly, you're on the air (or on the wires, anyway). + +If you're the talent (the person in front of the camera), it might be easiest to have someone control OBS during streaming. But if that's not possible, you can control it yourself as long as you've practiced a little in advance. If you're screencasting, it helps to have a two-monitor setup so you can control OBS without it being on screen. + +### Streaming for success + +Many of us take streaming for granted now that the internet exists and can broadcast media created by _anyone_. It's a hugely powerful means of communication, and we're all responsible for making the most of it. + +If you have something positive to say, a skill to teach, words of encouragement, or just something fun that you want to share, and you feel like you want to broadcast to the world, then take the time to learn OBS. You might not get a million viewers, but independent media is a vital part of [free culture][12]. The world can always use empowering and positive open source voices, and yours may be one of the most important of all. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/open-source-live-stream + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_film.png?itok=aElrLLrw (An old-fashioned video camera) +[2]: http://obsproject.com +[3]: https://opensource.com/sites/default/files/obs-flowchart.jpg (OBS flowchart) +[4]: https://flatpak.org/setup +[5]: https://opensource.com/article/19/1/basic-live-video-streaming-server +[6]: https://opensource.com/sites/default/files/twitch-key.jpg (Streaming key) +[7]: https://opensource.com/article/19/1/basic-live-video-streaming-server#obs +[8]: https://opensource.com/sites/default/files/uploads/obs-sources.jpg (OBS sources) +[9]: https://opensource.com/article/17/1/linux-plays-sound +[10]: https://opensource.com/sites/default/files/uploads/obs-scenes.jpg (Scenes in OBS) +[11]: https://opensource.com/sites/default/files/uploads/obs-stream-start.jpg (Start streaming in OBS) +[12]: https://opensource.com/article/18/1/creative-commons-real-world From 59e96957f7a828e876de1d425bf9fc5d86b710ca Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Apr 2020 01:00:25 +0800 Subject: [PATCH 054/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200429=20The=20?= =?UTF-8?q?life-changing=20magic=20of=20git=20rebase=20-i?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200429 The life-changing magic of git rebase -i.md --- ...he life-changing magic of git rebase -i.md | 80 +++++++++++++++++++ 1 file changed, 80 insertions(+) create mode 100644 sources/tech/20200429 The life-changing magic of git rebase -i.md diff --git a/sources/tech/20200429 The life-changing magic of git rebase -i.md b/sources/tech/20200429 The life-changing magic of git rebase -i.md new file mode 100644 index 0000000000..8afb9d7f8e --- /dev/null +++ b/sources/tech/20200429 The life-changing magic of git rebase -i.md @@ -0,0 +1,80 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The life-changing magic of git rebase -i) +[#]: via: (https://opensource.com/article/20/4/git-rebase-i) +[#]: author: (Dave Neary https://opensource.com/users/dneary) + +The life-changing magic of git rebase -i +====== +Make everyone think you write perfect code the first time (and make your +patches easier to review and merge). +![Hands programming][1] + +Software development is messy. So many wrong turns, typos to fix, quick hacks and kludges to correct later, off-by-one errors you find late in the process. With version control, you have a pristine record of every wrong turn and correction made during the process of creating the "perfect" final product—a patch ready to submit upstream. Like the outtakes from movies, they are a little embarrassing and sometimes amusing. + +Wouldn't it be great if you could use version control to save your work regularly at waypoints, and then when you have something you are ready to submit for review, you could hide all of that private drafting work and just submit a single, perfect patch? 
Meet **git rebase -i**, the perfect way to rewrite history and make everyone think that you produce perfect code the first time! + +### What does git rebase do? + +In case you're not familiar with the intricacies of Git, here is a brief overview. Under the covers, Git associates different versions of your project with a unique identifier, which is made up of a hash of the parent node's unique identifier, and the difference between the new version and its parent node. This creates a tree of revisions, and each person who checks out the project gets their own copy. Different people can take the project in different directions, each starting from potentially different branch points. + +![Master branch vs. private branch][2] + +The master branch in the "origin" repo on the left and the private branch on your personal copy on the right. + +There are two ways to integrate your work back with the master branch in the original repository: one is to use **git merge**, and the other is to use **git rebase**. They work in very different ways. + +When you use **git merge**, a new commit is created on the master branch that includes all of the changes from origin plus all of your local changes. If there are any conflicts (for example, if someone else has changed a file you are also working with), these will be marked, and you have an opportunity to resolve the conflicts before committing this merge commit to your local repository. When you push your changes back to the parent repository, all of your local work will appear as a branch for other users of the Git repository. + +But **git rebase** works differently. It rewinds your commits and replays those commits again from the tip of the master branch. This results in two main changes. First, since your commits are now branching off a different parent node, their hashes will be recalculated, and anyone who has cloned your repository may now have a broken copy of the repository. Second, you do not have a merge commit, so any merge conflicts are identified as your changes are being replayed onto the master branch, and you need to fix them before proceeding with the rebase. When you push your changes now, your work does not appear on a branch, and it looks as though you wrote all of your changes off the very latest commit to the master branch. + +![Merge commits preserve history, and rebase rewrites history.][3] + +Merge commits (left) preserve history, while rebase (right) rewrites history. + +However, both of these options come with a downside: everyone can see all your scribbles and edits as you worked through problems locally before you were ready to share your code. This is where the **\--interactive** (or **-i** for short) flag to **git rebase** comes into the picture. + +### Introducing git rebase -i + +The big advantage of **git rebase** is that it rewrites history. But why stop at just pretending you branched off a later point? There is a way to go even further and rewrite how you arrived at your ready-to-propose code: **git rebase -i**, an interactive **git rebase**. + +This feature is the "magic time machine" function in Git. The flag allows you to make sophisticated changes to revision history while doing a rebase. You can hide your mistakes! Merge many small changes into one pristine feature patch! Reorder how things appear in revision history! + +![output of git rebase -i][4] + +When you run **git rebase -i**, you get an editor session listing all of the commits that are being rebased and a number of options for what you can do to them. 
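To make that concrete, running something like `git rebase -i HEAD~4` on the branch pictured above opens a todo file similar to the following sketch. The hashes and commit messages here are invented for illustration, and the comment block Git appends (listing the available commands) is abbreviated:

```
pick a1b2c3d One-liner bug fix
pick e4f5a6b Integrate new header everywhere
pick c7d8e9f New header for docs website
pick 9d0e1f2 D'oh - typo. Fixed

# Rebase 1111111..9d0e1f2 onto 1111111 (4 commands)
#
# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
```
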
The default choice is **pick**. + + * **Pick** maintains the commit in your history. + * **Reword** allows you to change a commit message, perhaps to fix a typo or add additional commentary. + * **Edit** allows you to make changes to the commit while in the process of replaying the branch. + * **Squash** merges multiple commits into one. + * You can reorder commits by moving them around in the file. + + + +When you are finished, simply save the final result, and the rebase will execute. At each stage where you have chosen to modify a commit (either with **reword**, **edit**, **squash**, or when there is a conflict), the rebase stops and allows you to make the appropriate changes before continuing. + +The example above results in "One-liner bug fix" and "Integrate new header everywhere" being merged into one commit, and "New header for docs website" and "D'oh - typo. Fixed" into another. Like magic, the work that went into the other commits is still there on your branch, but the associated commits have disappeared from your history! + +This makes it easy to submit a clean patch to an upstream project using **git send-email** or by creating a pull request against the parent repository with your newly tidied up patchset. This has a number of advantages, including that it makes your code easier to review, easier to accept, and easier to merge. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/git-rebase-i + +作者:[Dave Neary][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dneary +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S (Hands programming) +[2]: https://opensource.com/sites/default/files/uploads/master-private-branches.png (Master branch vs. private branch) +[3]: https://opensource.com/sites/default/files/uploads/merge-commit-vs-rebase.png (Merge commits preserve history, and rebase rewrites history.) 
+[4]: https://opensource.com/sites/default/files/uploads/git-rebase-i.png (output of git rebase -i) From 90c39408a7210b6e6f61f859cafaff0365dff783 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Apr 2020 01:01:29 +0800 Subject: [PATCH 055/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200429=20Drop?= =?UTF-8?q?=20PNG=20and=20JPG=20for=20your=20online=20images:=20Use=20WebP?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md --- ...nd JPG for your online images- Use WebP.md | 144 ++++++++++++++++++ 1 file changed, 144 insertions(+) create mode 100644 sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md diff --git a/sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md b/sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md new file mode 100644 index 0000000000..d664a9fdfc --- /dev/null +++ b/sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md @@ -0,0 +1,144 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Drop PNG and JPG for your online images: Use WebP) +[#]: via: (https://opensource.com/article/20/4/webp-image-compression) +[#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas) + +Drop PNG and JPG for your online images: Use WebP +====== +Get started with this open source image editing tool to save time and +space. +![Painting art on a computer screen][1] + +WebP is an image format developed by Google in 2010 that provides superior lossless and lossy compression for images on the web. Using WebP, web developers can create smaller, richer images that improve site speed. A faster loading website is critical to the user experience and for the website's marketing effectiveness. + +For optimal loading across all devices and users, images on your site should not be larger than 500 KB in file size. + +WebP lossless images are often at least 25% smaller in size compared to PNGs. WebP lossy images are often anywhere from 25-34% smaller than comparable JPEG images at equivalent SSIM (structural similarity) quality index. + +Lossless WebP supports transparency, as well. For cases when lossy RGB compression is acceptable, lossy WebP also supports transparency, typically providing three times smaller file sizes compared to PNG. + +Google reports a 64% reduction in file size for images converted from animated GIFs to lossy WebP, and a 19% reduction when converted to lossless WebP. + +The WebP file format is based on the RIFF (resource interchange file format) document format. The file signature is **52 49 46 46** (RIFF), as you can see with [hexdump][2]: + + +``` +$ hexdump --canonical pixel.webp +00000000  52 49 46 46 26 00 00 00  [...]  |RIFF&...WEBPVP8 | +00000010  1a 00 00 00 30 01 00 9d  [...]  |....0....*......| +00000020  0e 25 a4 00 03 70 00 fe  [...]  |.%...p...`....| +0000002e +``` + +The standalone libwebp library serves as a reference implementation for the WebP specification and is available from Google's [Git repository][3] or as a tarball. + +The WebP format is compatible with 80% of the web browsers in use worldwide. At the time of this writing, it is not compatible with Apple's Safari browser. The workaround for this is to serve up a JPG/PNG alongside a WebP, and there are methods and Wordpress plugins to do that. + +### Why does this matter? 
+ +Part of my job is to design and maintain our organization's website. Since the website is a marketing tool and site speed is a critical aspect of the user experience, I have been working to improve the speed, and reducing image sizes by converting them to WebP has been a good solution. + +To test the speed of one of the pages, I turned to **web.dev**, which is powered by Lighthouse, released under the Apache 2.0 license, and can be found at . + +According to its official description, "Lighthouse is an open source, automated tool for improving the quality of web pages. You can run it against any web page—public or requiring authentication. It has audits for performance, accessibility, progressive web apps, SEO, and more. You can run Lighthouse in Chrome DevTools, from the command line, or as a Node module. You give Lighthouse a URL to audit, it runs a series of audits against the page, and then it generates a report on how well the page did. From there, use the failing audits as indicators on how to improve the page. Each audit has a reference doc explaining why the audit is important, as well as how to fix it." + +### Creating a smaller WebP image + +The page I tested returned three images. In the report it generates, it provides recommendations and targets. I chose the "app-graphic" image, which, it reported, is 650 KB. By converting it to WebP, I should save 589 KB, reducing the image to 61 KB. I converted the image in Photoshop and saved it with the default WebP settings, and it returned a file size of 44.9 KB. Better than expected! As the screenshot from Photoshop shows, the images look identical in visual quality. + +![WebP vs JPG comparison][4] + +On the left: 650 KB (actual size). On the right: 589 KB (target size after conversion). + +Of course, the open source image editor [GIMP][5] also supports WebP as an export format. It offers several options for quality and compression profile: + +![GIMP dialog for exporting webp, as a webp][6] + +A zoomed-in look of another image: + +![WebP vs PNG comparison][7] + +PNG (left) and WebP (right), both converted from a JPG, shows the WebP, although smaller in size, is superior in visual quality. + +### Convert to an image to WebP + +To convert images on Linux from JPG/PNG to WebP, you can also use the command-line: + +Use **cwebp** on the command line to convert PNG or JPG image files to WebP format. You can convert a PNG image file to a WebP image with a quality range of 80 with the command: + + +``` +`cwebp -q 80 image.png -o image.webp` +``` + +Alternatively, you can also use [Image Magick][8], which is probably available in your distribution's software repository. The subcommand for conversion is **convert**, and all that's needed is an input and output file: + + +``` +`convert pixel.png pixel.webp` +``` + +### Convert an image to WebP with an editor + +To convert images to WebP with a photo editor, use [GIMP][9]. From version 2.10 on, it supports WebP natively. + +If you're a Photoshop user, you need a plugin to convert the files, as Photoshop does not include it natively. WebPShop 0.2.1, released under the Apache License 2.0 license, is a Photoshop module for opening and saving WebP images, including animations, and can be found at: . 
+ +To use the plugin, put the file found in the **bin** folder inside your Photoshop plugin directory: + +Windows x64—C:\Program Files\Adobe\Adobe Photoshop\Plug-ins\WebPShop.8bi + +Mac—Applications/Adobe Photoshop/Plug-ins/WebPShop.plugin + +### WebP on Wordpress + +Many websites are built using Wordpress (that's what I use). So, how does Wordpress handle uploading WebP images? At the time of this writing, it doesn't. But, there are, of course, plugins to enable it so you can serve up both WebP alongside PNG/JPG images (for the Apple crowd). + +Or there are these [instructions][10] from [Marius Hosting][11]: + +"How about directly uploading WebP images to Wordpress? This is easy. Just add some text line on your theme functions.php file. Wordpress does not natively support viewing and uploading WebP files, but I will explain to you how you can make it work in a few simple steps. Log in to your Wordpress admin area and go to Appearance/Theme Editor and find functions.php. Copy and paste the code below at the end of the file and save it.  + + +``` +`//** *Enable upload for webp image files.*/ function webp_upload_mimes($existing_mimes) { $existing_mimes['webp'] = 'image/webp'; return $existing_mimes; } add_filter('mime_types', 'webp_upload_mimes');` +``` + +If you want to see the thumbnail image preview when you go to Media/Library, you have to add the code below in the same functions.php file. To find the functions.php file, go to Appearance/Theme Editor and find functions.php, then copy and paste the code below at the end of the file and save it." + + +``` +`//** * Enable preview / thumbnail for webp image files.*/ function webp_is_displayable($result, $path) { if ($result === false) { $displayable_image_types = array( IMAGETYPE_WEBP ); $info = @getimagesize( $path ); if (empty($info)) { $result = false; } elseif (!in_array($info[2], $displayable_image_types)) { $result = false; } else { $result = true; } } return $result; } add_filter('file_is_displayable_image', 'webp_is_displayable', 10, 2);` +``` + +### WebP and the future + +WebP is a robust and optimized format. It looks better, it has better compression ratio, and it has all the features of most other common image formats. There's no need to wait—start using it now. 
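If you decide to adopt WebP across a whole site, a small shell loop around the `cwebp` tool shown earlier can convert a directory of images in one pass. This is a minimal sketch, assuming `cwebp` is installed and that you simply want a `.webp` file written next to each `.png` in the current directory (swap the glob for `*.jpg` to do the same for JPEGs):

```
# Convert every PNG in the current directory to WebP at quality 80.
# Requires the cwebp encoder from libwebp.
for f in *.png; do
    cwebp -q 80 "$f" -o "${f%.png}.webp"
done
```
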
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/webp-image-compression + +作者:[Jeff Macharyas][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jeffmacharyas +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen) +[2]: https://opensource.com/article/19/8/dig-binary-files-hexdump +[3]: https://storage.googleapis.com/downloads.webmproject.org/releases/webp/index.html +[4]: https://opensource.com/sites/default/files/uploads/webp-vs-jpg-app-graphic.png (WebP vs JPG comparison) +[5]: http://gimp.org +[6]: https://opensource.com/sites/default/files/webp-gimp.webp (GIMP dialog for exporting webp, as a webp) +[7]: https://opensource.com/sites/default/files/uploads/xcompare-png-left-webp-right.png (WebP vs PNG comparison) +[8]: https://imagemagick.org +[9]: https://en.wikipedia.org/wiki/GIMP +[10]: https://mariushosting.com/how-to-upload-webp-files-on-wordpress/ +[11]: https://mariushosting.com/ From b8504387469e63d3693bc749226552628ba01766 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 30 Apr 2020 01:02:14 +0800 Subject: [PATCH 056/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200429=20Why=20?= =?UTF-8?q?strace=20doesn't=20work=20in=20Docker?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200429 Why strace doesn-t work in Docker.md --- ...00429 Why strace doesn-t work in Docker.md | 161 ++++++++++++++++++ 1 file changed, 161 insertions(+) create mode 100644 sources/tech/20200429 Why strace doesn-t work in Docker.md diff --git a/sources/tech/20200429 Why strace doesn-t work in Docker.md b/sources/tech/20200429 Why strace doesn-t work in Docker.md new file mode 100644 index 0000000000..42414ad377 --- /dev/null +++ b/sources/tech/20200429 Why strace doesn-t work in Docker.md @@ -0,0 +1,161 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why strace doesn't work in Docker) +[#]: via: (https://jvns.ca/blog/2020/04/29/why-strace-doesnt-work-in-docker/) +[#]: author: (Julia Evans https://jvns.ca/) + +Why strace doesn't work in Docker +====== + +While editing the capabilities page of the [how containers work][1] zine, I found myself trying to explain why `strace` doesn’t work in a Docker container. + +The problem here is – if you run `strace` in a Docker container, this happens: + +``` +$ docker run -it ubuntu:18.04 /bin/bash +$ # ... install strace ... +[email protected]:/# strace ls +strace: ptrace(PTRACE_TRACEME, ...): Operation not permitted +``` + +strace works using the `ptrace` system call, so if `ptrace` isn’t allowed, it’s definitely not gonna work! This is pretty easy to fix – on my machine, this fixes it: + +``` +docker run --cap-add=SYS_PTRACE -it ubuntu:18.04 /bin/bash +``` + +But I wasn’t interested in fixing it, I wanted to know why it happens. So why does strace not work, and why does `--cap-add=SYS_PTRACE` fix it? + +### hypothesis 1: container processes are missing the `CAP_SYS_PTRACE` capability + +I always thought the reason was that Docker container processes by default didn’t have the `CAP_SYS_PTRACE` capability. 
This is consistent with it being fixed by `--cap-add=SYS_PTRACE`, right? + +But this actually doesn’t make sense for 2 reasons. + +**Reason 1**: Experimentally, as a regular user, I can strace on any process run by my user. But if I check if my current process has the `CAP_SYS_PTRACE` capability, I don’t: + +``` +$ getpcaps $$ +Capabilities for `11589': = +``` + +**Reason 2**: `man capabilities` says this about `CAP_SYS_PTRACE`: + +``` +CAP_SYS_PTRACE + * Trace arbitrary processes using ptrace(2); +``` + +So the point of `CAP_SYS_PTRACE` is to let you ptrace **arbitrary** processes owned by any user, the way that root usually can. You shouldn’t need it to just ptrace a regular process owned by your user. + +And I tested this a third way – I ran a Docker container with `docker run --cap-add=SYS_PTRACE -it ubuntu:18.04 /bin/bash`, dropped the `CAP_SYS_PTRACE` capability, and I could still strace processes even though I didn’t have that capability anymore. What? Why? + +### hypothesis 2: something about user namespaces??? + +My next (much less well-founded) hypothesis was something along the lines of “um, maybe the process is in a different user namespace and strace doesn’t work because of… reasons?” This isn’t really coherent but here’s what happened when I looked into it. + +Is the container process in a different user namespace? Well, in the container: + +``` +[email protected]:/# ls /proc/$$/ns/user -l +... /proc/1/ns/user -> 'user:[4026531837]' +``` + +On the host: + +``` +[email protected]:~$ ls /proc/$$/ns/user -l +... /proc/12177/ns/user -> 'user:[4026531837]' +``` + +Because the user namespace ID (`4026531837`) is the same, the root user in the container is the exact same user as the root user on the host. So there’s definitely no reason it shouldn’t be able to strace processes that it created! + +This hypothesis doesn’t make much sense but I hadn’t realized that the root user in a Docker container is the same as the root user on the host, so I thought that was interesting. + +### hypothesis 3: the ptrace system call is being blocked by a seccomp-bpf rule + +I also knew that Docker uses seccomp-bpf to stop container processes from running a lot of system calls. And ptrace is in the [list of system calls blocked by Docker’s default seccomp profile][2]! (actually the list of allowed system calls is a whitelist, so it’s just that ptrace is not in the default whitelist. But it comes out to the same thing.) + +That easily explains why strace wouldn’t work in a Docker container – if the `ptrace` system call is totally blocked, then of course you can’t call it at all and strace would fail. + +Let’s verify this hypothesis – if we disable all seccomp rules, can we strace in a Docker container? + +``` +$ docker run --security-opt seccomp=unconfined -it ubuntu:18.04 /bin/bash +$ strace ls +execve("/bin/ls", ["ls"], 0x7ffc69a65580 /* 8 vars */) = 0 +... it works fine ... +``` + +Yes! It works! Great. Mystery solved, except… + +### why does `--cap-add=SYS_PTRACE` fix the problem? + +What we still haven’t explained is: why does `--cap-add=SYS_PTRACE` would fix the problem? + +The man page for `docker run` explains the `--cap-add` argument this way: + +``` +--cap-add=[] + Add Linux capabilities +``` + +That doesn’t have anything to do with seccomp rules! What’s going on? + +### let’s look at the Docker source code. + +When the documentation doesn’t help, the only thing to do is go look at the source. 
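In practice that just means grabbing a copy of the repository and searching it, along these lines (assuming you have git and ripgrep installed; the `--depth=1` flag only keeps the clone small):

```
# Fetch the moby/moby source and look for code that mentions the capability
git clone --depth=1 https://github.com/moby/moby.git
cd moby
rg CAP_SYS_PTRACE
```
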
+ +The nice thing about Go is, because dependencies are often vendored in a Go repository, you can just grep the repository to figure out where the code that does a thing is. So I cloned `github.com/moby/moby` and grepped for some things, like `rg CAP_SYS_PTRACE`. + +Here’s what I think is going on. In containerd’s seccomp implementation, in [contrib/seccomp/seccomp_default.go][3], there’s a bunch of code that makes sure that if a process has a capability, then it’s also given access (through a seccomp rule) to use the system calls that go with that capability. + +``` +case "CAP_SYS_PTRACE": + s.Syscalls = append(s.Syscalls, specs.LinuxSyscall{ + Names: []string{ + "kcmp", + "process_vm_readv", + "process_vm_writev", + "ptrace", + }, + Action: specs.ActAllow, + Args: []specs.LinuxSeccompArg{}, + }) +``` + +There’s some other code that seems to do something very similar in [profiles/seccomp/seccomp.go][4] in moby and the [default seccomp profile][5], so it’s possible that that’s what’s doing it instead. + +So I think we have our answer! + +### `--cap-add` in Docker does a little more than what it says + +The upshot seems to be that `--cap-add` doesn’t do exactly what it says it does in the man page, it’s more like `--cap-add-and-also-whitelist-some-extra-system-calls-if-required`. Which makes sense! If you have a capability like `CAP_SYS_PTRACE` which is supposed to let you use the `process_vm_readv` system call but that system call is blocked by a seccomp profile, that’s not going to help you much! + +So allowing the `process_vm_readv` and `ptrace` system calls when you give the container `CAP_SYS_PTRACE` seems like a reasonable choice. + +### that’s all! + +This was a fun small thing to investigate, and I think it’s a nice example of how containers are made of lots of moving pieces that work together in not-completely-obvious ways. + +-------------------------------------------------------------------------------- + +via: https://jvns.ca/blog/2020/04/29/why-strace-doesnt-work-in-docker/ + +作者:[Julia Evans][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://jvns.ca/ +[b]: https://github.com/lujun9972 +[1]: https://wizardzines.com/zines/containers +[2]: https://docs.docker.com/engine/security/seccomp/ +[3]: https://github.com/containerd/containerd/blob/4be98fa28b62e8a012491d655a4d6818ef87b080/contrib/seccomp/seccomp_default.go#L527-L537 +[4]: https://github.com/moby/moby/blob/cc0dfb6e7b22ad120c60a9ce770ea15415767cf9/profiles/seccomp/seccomp.go#L126-L132 +[5]: https://github.com/moby/moby/blob/master/profiles/seccomp/default.json#L723-L739 From 7cac16013bcdabb3f24f680debbc641cf4d46ee2 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 30 Apr 2020 08:42:03 +0800 Subject: [PATCH 057/178] translated --- ... Free and Open Source Editor Pixelorama.md | 108 ------------------ ... 
Free and Open Source Editor Pixelorama.md | 107 +++++++++++++++++ 2 files changed, 107 insertions(+), 108 deletions(-) delete mode 100644 sources/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md create mode 100644 translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md diff --git a/sources/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md b/sources/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md deleted file mode 100644 index b4b0949dab..0000000000 --- a/sources/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md +++ /dev/null @@ -1,108 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Create Stunning Pixel Art With Free and Open Source Editor Pixelorama) -[#]: via: (https://itsfoss.com/pixelorama/) -[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) - -Create Stunning Pixel Art With Free and Open Source Editor Pixelorama -====== - -_**Brief: Pixelorama is a cross-platform, free and open source 2D sprite editor. It provides all the necessary tools to create pixel art in a neat user interface.**_ - -### Pixelorama: open source sprite editor - -[Pixelorama][1] is a tool created by young game developers at [Orama Interactive][2]. They have developed a few 2D games and a couple of them use pixel art. - -While Orama is primarily into game development, the developers are also creating utility tools that help them (and others) create those games. - -The free and open source sprite editor, Pixelorama is such a utility tool. It’s built on top of [Godot Engine][3] and is perfect for creating pixel art. - -![Pixelorama screenshot][4] - -You see the pixel art in the screenshot above? It’s been created using Pixelorama. This video shows a timelapse video of creating the above image. - -### Features of Pixelorama - -Here are the main features Pixelorama provides: - - * Multiple tools like penicl, erase, fill bucket color picker etc - * Multiple layer system that allows you to add, remove, move up and down, clone and merge as many layers as you like - * Support for spritesheets - * Import images and edit them inside Pixelorama - * Animation timeline with [Onion Skinning][5] - * Custom brushes - * Save and open your projects in Pixelorama’s custom file format, .pxo - * Horizontal & vertical mirrored drawing - * Tile Mode for pattern creation - * Split screen mode and mini canvas preview - * Zoom with mouse scroll wheel - * Unlimited undo and redo - * Scale, crop, flip, rotate, color invert and desaturate your images - * Keyboard shortcuts - * Available in several languages - * Supports Linux, Windows and macOS - - - -### Installing Pixelorama on Linux - -Pixelorama is available as a Snap application and if you are using Ubuntu, you can find it in the software center itself. - -![Pixelorama is available in Ubuntu Software Center][6] - -Alternatively, if you have [Snap support enabled on your Linux distribution][7], you can install it using this command: - -``` -sudo snap install pixelorama -``` - -If you don’t want to use Snap, no worries. You can download the latest release of Pixelorama from [their GitHub repository][8], [extract the zip file][9] and you’ll see an executable file. Give this file execute permission and double click on it to run the application. 
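If you prefer the terminal over double-clicking, those manual steps look roughly like this (the file names below are placeholders; use whatever the release archive and executable you downloaded are actually called):

```
# placeholder names; substitute the files from the actual release
unzip Pixelorama-linux-64bit.zip
chmod +x ./Pixelorama.x86_64
./Pixelorama.x86_64
```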
- -[Download Pixelorama][10] - -**Conclusion** - -![Pixelorama Welcome Screen][11] - -In the Pixeloaram features, it says that you can import images and edit them. I guess that’s only true for certain kind of files because when I tried to import PNG or JPEG files, the application crashed. - -However, I could easily doodle like a 3 year old and make random pixel art. I am not that into arts but I think this is a [useful tool for digital artists on Linux][12]. - -I liked the idea that despite being game developers, they are creating tools that could help other game developers and artists. That’s the spirit of open source. - -If you like the project and will be using it, consider supporting them by a donation. [It’s FOSS has made a humble donation][13] of $25 to thank their effort. - -[Donate to Pixelorama (personal Paypal account of the lead developer)][14] - -Do you like Pixelorama? Do you use some other open source sprite editor? Feel free to share your views in the comment section. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/pixelorama/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://www.orama-interactive.com/pixelorama -[2]: https://www.orama-interactive.com/ -[3]: https://godotengine.org/ -[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/pixelorama-v6.jpg?ssl=1 -[5]: https://en.wikipedia.org/wiki/Onion_skinning -[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/pixelorama-ubuntu-software-center.jpg?ssl=1 -[7]: https://itsfoss.com/install-snap-linux/ -[8]: https://github.com/Orama-Interactive/Pixelorama -[9]: https://itsfoss.com/unzip-linux/ -[10]: https://github.com/Orama-Interactive/Pixelorama/releases -[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/pixelorama.jpg?ssl=1 -[12]: https://itsfoss.com/best-linux-graphic-design-software/ -[13]: https://itsfoss.com/donations-foss/ -[14]: https://www.paypal.me/erevos diff --git a/translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md b/translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md new file mode 100644 index 0000000000..530517a482 --- /dev/null +++ b/translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md @@ -0,0 +1,107 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create Stunning Pixel Art With Free and Open Source Editor Pixelorama) +[#]: via: (https://itsfoss.com/pixelorama/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +使用免费和开源编辑器 Pixelorama 创建令人惊叹的像素艺术 +====== + +_**简介:Pixelorama 是一个跨平台、免费和开源的 2D Sprite 编辑器。它在干净的用户界面中提供了创建像素艺术所有必要工具。**_ + +### Pixelorama:开源 Sprite 编辑器 + +[Pixelorama][1] 是年轻游戏开发人员在 [Orama 互动][2]创建的工具。他们已经开发了一些 2D 游戏,其中一些使用了像素艺术。 + +虽然 Orama 主要投入游戏开发,但开发人员也在创建实用工具,帮助他们(和其他人)创建这些游戏。 + +免费开源的 Sprite 编辑器,Pixelorama 是这样一个实用工具。它构建在 [Godot 引擎][3]之上,非常适合创作像素艺术。 + +![Pixelorama screenshot][4] + +你在上面的截图中看到像素艺术了吗?它是使用 Pixelorama 创建的。此视频显示创建上面图像的延时视频。 + +### Pixelorama 的功能 + +以下是 Pixelorama 提供的主要功能: + + * 多种工具,如铅笔,擦除,填充桶颜色选择器等 + * 多层系统,你可以根据需要添加、删除、上下移动、克隆和合并尽可能多的层 + * 支持 Spritesheets + * 导入图像并在 Pixelorama 中编辑它们 + * 带有 [Onion 
Skinning][5] 的动画时间线 + * 自定义画笔 + * 以 Pixelorama 的自定义文件格式 .pxo 保存并打开你的项目 + * 水平和垂直镜像绘图 + * 用于图样创建的磁贴模式 + * 拆分屏幕模式和迷你画布预览 + * 使用鼠标滚轮缩放 + * 无限撤消和重做 + * 缩放、裁剪、翻转、旋转、颜色反转和去饱和图像 + * 键盘快捷键 + * 提供多种语言 + * 支持 Linux、Windows 和 macOS + + + +### 在 Linux 上安装 Pixelorama + +Pixelorama 提供 Snap 应用,如果你使用的是 Ubuntu,那么可以在软件中心找到它。 + +![Pixelorama is available in Ubuntu Software Center][6] + +或者,如果你在 [Linux 发行版上启用了 Snap 支持][7],那么可以使用此命令安装它: + +``` +sudo snap install pixelorama +``` + +如果你不想使用 Snap,不用担心。你可以从[他们的 GitHub 仓库]下载最新版本的 Pixelorama,[解压 zip 文件][9],你会看到一个可执行文件。授予此文件执行权限,并双击它运行应用。 + +[下载 Pixelorama][10] + +**总结** + +![Pixelorama Welcome Screen][11] + +在 Pixeloaram 的功能中,它说你可以导入图像并对其进行编辑。我想,这只是对某些类型的文件,因为当我尝试导入 PNG 或 JPEG 文件,程序崩溃了。 + +然而,我可以像一个 3 岁的孩子那样随意涂鸦并制作像素艺术。我并没有深入艺术,但我认为这是一个[在 Linux 上对数字艺术家有用的工具][12]。 + +我喜欢这样的想法:尽管是游戏开发人员,但他们创建的工具,可以帮助其他游戏开发人员和艺术家。这就是开源的精神。 + +如果你喜欢这个项目,并且会使用它,请考虑通过捐赠来支持他们。[FOSS 捐赠了][13] 25 美元,以感谢他们的努力。 + +[向 Pixelorama 捐赠(主要开发者的个人 Paypal 账户)][14] + +你喜欢 Pixelorama 吗?你是否使用其他开源 Sprite 编辑器?请随时在评论栏分享你的观点。 + +-------------------------------------------------------------------------------- + + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://www.orama-interactive.com/pixelorama +[2]: https://www.orama-interactive.com/ +[3]: https://godotengine.org/ +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/pixelorama-v6.jpg?ssl=1 +[5]: https://en.wikipedia.org/wiki/Onion_skinning +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/03/pixelorama-ubuntu-software-center.jpg?ssl=1 +[7]: https://itsfoss.com/install-snap-linux/ +[8]: https://github.com/Orama-Interactive/Pixelorama +[9]: https://itsfoss.com/unzip-linux/ +[10]: https://github.com/Orama-Interactive/Pixelorama/releases +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/03/pixelorama.jpg?ssl=1 +[12]: https://itsfoss.com/best-linux-graphic-design-software/ +[13]: https://itsfoss.com/donations-foss/ +[14]: https://www.paypal.me/erevos From dd505050c66dd0b7765e8efe3147f85a64edfb1a Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Thu, 30 Apr 2020 08:43:21 +0800 Subject: [PATCH 058/178] Rename sources/tech/20200429 Java security, mainframes having a moment, and more industry trends.md to sources/news/20200429 Java security, mainframes having a moment, and more industry trends.md --- ...urity, mainframes having a moment, and more industry trends.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => news}/20200429 Java security, mainframes having a moment, and more industry trends.md (100%) diff --git a/sources/tech/20200429 Java security, mainframes having a moment, and more industry trends.md b/sources/news/20200429 Java security, mainframes having a moment, and more industry trends.md similarity index 100% rename from sources/tech/20200429 Java security, mainframes having a moment, and more industry trends.md rename to sources/news/20200429 Java security, mainframes having a moment, and more industry trends.md From 168f8315435b9851f2c37692d67d52555d2a48d1 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 30 Apr 2020 08:49:27 +0800 Subject: [PATCH 059/178] translating --- sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200428 Upgrading Fedora 31 
to Fedora 32.md b/sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md index 1807d5e6a7..5dd426ccfc 100644 --- a/sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md +++ b/sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From c5fa181b4e130158396d8864458ca0852c8f5fa2 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 Apr 2020 09:36:28 +0800 Subject: [PATCH 060/178] PRF @geekpi --- ...v5- Why there is IPv4, IPv6 but no IPv5.md | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md b/translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md index 71bfe2137a..b0715669ae 100644 --- a/translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md +++ b/translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (What Happened to IPv5? Why there is IPv4, IPv6 but no IPv5?) @@ -20,19 +20,19 @@ IPv5 发生了什么?为什么有 IPv4、IPv6 但没有 IPv5? ![ARPA Logical Map in 1977 | Image courtesy: Wikipedia][1] -在 1960 年代后期,美国国防部的[高级研究计划局][2] (ARPA) 发起了一个[项目][3]来连接全国的计算机。最初的目标是创建一个由全国 ARPA 资助的所有计算机组成的网络系统。 +在 1960 年代后期,美国国防部的[高级研究计划局][2](ARPA)发起了一个[项目][3]来连接全国的计算机。最初的目标是创建一个由全国 ARPA 资助的所有计算机组成的网络系统。 -由于这是第一次将如此规模的网络整合在一起,因此他们也在不断发展自己的技术和硬件。他们的第一件工作是名为[传输控制协议][4] (TCP) 的互联网协议 (IP)。该协议“可靠、有序、并会对通过 IP 网络传输的八进制(字节)流错误检测”。基本上,它确保数据安全到达。 +由于这是第一次将如此规模的网络整合在一起,因此他们也在不断发展自己的技术和硬件。他们首先做的工作之一就是开发名为[传输控制协议][4]Transmission Control Protocol(TCP)的互联网协议Internet Protocol(IP)。该协议“可靠、有序,并会对运行于通过 IP 网络传输的主机上的应用的八进制(字节)流通讯进行错误检测”。简单来说,它可以确保数据安全到达。 -最初,TCP 被设计为[“主机级别的端到端协议以及打包和路由协议”][5]。但是,他们意识到他们需要拆分协议以使其更易于管理。于是决定由 IP 处理打包和路由。 +最初,TCP 被设计为[“主机级别的端到端协议以及封装和路由协议”][5]。但是,他们意识到他们需要拆分协议以使其更易于管理。于是决定由 IP 协议处理封装和路由。 那时,TCP 已经经历了三个版本,因此新协议被称为 IPv4。 ### IPv5 的诞生 -IPv5 以不同的名称开始使用:互联网流协议(或 ST)。它是[由 Apple、NeXT 和 Sun Microsystems][6] 创建用于实验流式传输语音和视频。 +IPv5 开始时有个不同的名字:互联网流协议Internet Stream Protocol(ST)。它是[由 Apple、NeXT 和 Sun Microsystems][6] 为试验流式语音和视频而创建的。 -该新协议能够“在保持通信的同时在特定频率上传输数据包”。 +该新协议能够“在保持通信的同时,以特定频率传输数据包”。 ### 那么 IPv5 发生了什么? 
@@ -40,15 +40,15 @@ IPv5 以不同的名称开始使用:互联网流协议(或 ST)。它是[ IPv5 从未被接受为正式的互联网协议。这主要是由于 32 位限制。 -IPV5 使用与 IPv4 相同的寻址系统。每个地址由 0 到 255 之间的四组数字组成。这将可能的地址数量限制为 [43 亿][6]。 +IPV5 使用与 IPv4 相同的寻址系统。每个地址由 0 到 255 之间的四组数字组成。这将可能的地址数量限制为 [43 亿][6]个。 -在 1970 年代初,这似乎比全世界所需要的还要多。但是,互联网的爆炸性增长证明了这一想法是错误的。2011年,世界正式耗尽了 IPv4 地址。 +在 1970 年代初,这似乎比全世界所需要的还要多。但是,互联网的爆炸性增长证明了这一想法是错误的。2011 年,世界上的IPv4地址正式用完了。 -在 1990 年代,一个新项目开始致力于下一代互联网协议 (IPng)。这导致了 128 位的 IPv6。IPv6 地址包含 [“8 组 4 字符的十六进制数字”][6],它可以包含从 0 到 9 的数字和从 A 到 F 的字母。与 IPv4 不同,IPv6 拥有数万亿个可能的地址,因此我们应该能安全一阵子。 +在 1990 年代,一个新项目开始致力于研究下一代互联网协议(IPng)。这形成了 128 位的 IPv6。IPv6 地址包含 [“8 组 4 字符的十六进制数字”][6],它可以包含从 0 到 9 的数字和从 A 到 F 的字母。与 IPv4 不同,IPv6 拥有数万亿个可能的地址,因此我们应该能安全一阵子。 -同时,IPv5 奠定了 VoIP 的基础,而该技术已被我们用于当今世界范围内的通信。**因此,我想往小了说,你可以说 IPv5 仍然可以保留到了今天**。 +同时,IPv5 奠定了 VoIP 的基础,而该技术已被我们用于当今世界范围内的通信。**因此,我想在某种程度上,你可以说 IPv5 仍然可以保留到了今天**。 -希望你喜欢有关互联网历史的轶事。你可以阅读其他[关于 Linux 和技术的琐事文章] [8]。 +希望你喜欢有关互联网历史的轶事。你可以阅读其他[关于 Linux 和技术的琐事文章][8]。 如果你觉得这篇文章有趣,请花一点时间在社交媒体、Hacker News 或 [Reddit][9] 上分享它。 @@ -59,7 +59,7 @@ via: https://itsfoss.com/what-happened-to-ipv5/ 作者:[John Paul][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 6ba2b00a0204da854ba1b888e2cd02a401185af7 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 Apr 2020 09:37:51 +0800 Subject: [PATCH 061/178] PUB @geekpi https://linux.cn/article-12168-1.html --- ...t Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md (98%) diff --git a/translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md b/published/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md similarity index 98% rename from translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md rename to published/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md index b0715669ae..6dc822b7b6 100644 --- a/translated/tech/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md +++ b/published/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12168-1.html) [#]: subject: (What Happened to IPv5? Why there is IPv4, IPv6 but no IPv5?) [#]: via: (https://itsfoss.com/what-happened-to-ipv5/) [#]: author: (John Paul https://itsfoss.com/author/john/) From 986637082ec52753160022ab4aa39053f0275b3d Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 Apr 2020 10:31:01 +0800 Subject: [PATCH 062/178] PRF --- ...hat Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/published/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md b/published/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md index 6dc822b7b6..3a8ab6b8bd 100644 --- a/published/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md +++ b/published/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md @@ -20,7 +20,7 @@ IPv5 发生了什么?为什么有 IPv4、IPv6 但没有 IPv5? 
![ARPA Logical Map in 1977 | Image courtesy: Wikipedia][1] -在 1960 年代后期,美国国防部的[高级研究计划局][2](ARPA)发起了一个[项目][3]来连接全国的计算机。最初的目标是创建一个由全国 ARPA 资助的所有计算机组成的网络系统。 +在 1960 年代后期,美国国防部的[高级研究计划局][2](DARPA)发起了一个[项目][3]来连接全国的计算机。最初的目标是创建一个由全国 ARPA 资助的所有计算机组成的网络系统。 由于这是第一次将如此规模的网络整合在一起,因此他们也在不断发展自己的技术和硬件。他们首先做的工作之一就是开发名为[传输控制协议][4]Transmission Control Protocol(TCP)的互联网协议Internet Protocol(IP)。该协议“可靠、有序,并会对运行于通过 IP 网络传输的主机上的应用的八进制(字节)流通讯进行错误检测”。简单来说,它可以确保数据安全到达。 From 80ae229a5ab081c126ca5bae69935ce266117225 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=83=91?= Date: Thu, 30 Apr 2020 13:48:03 +0800 Subject: [PATCH 063/178] Translated --- ...re SFTP Server with Chroot in Debian 10.md | 197 ------------------ ...re SFTP Server with Chroot in Debian 10.md | 197 ++++++++++++++++++ 2 files changed, 197 insertions(+), 197 deletions(-) delete mode 100644 sources/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md create mode 100644 translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md diff --git a/sources/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md b/sources/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md deleted file mode 100644 index 9d57192270..0000000000 --- a/sources/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md +++ /dev/null @@ -1,197 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (robsean) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to Configure SFTP Server with Chroot in Debian 10) -[#]: via: (https://www.linuxtechi.com/configure-sftp-chroot-debian10/) -[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) - -How to Configure SFTP Server with Chroot in Debian 10 -====== - -**SFTP** stands for Secure File Transfer Protocol / SSH File Transfer Protocol, it is one of the most common method which is used to transfer files securely over ssh from our local system to remote server and vice-versa. The main advantage of sftp is that we don’t need to install any additional package except ‘**openssh-server**’, in most of the Linux distributions ‘openssh-server’ package is the part of default installation. Other benefit of sftp is that we can allow user to use sftp only not ssh. - -[![Configure-sftp-debian10][1]][2] - -Recently Debian 10, Code name ‘Buster’ has been released, in this article we will demonstrate how to configure sftp with Chroot ‘Jail’ like environment in Debian 10 System. Here Chroot Jail like environment means that user’s cannot go beyond from their respective home directories or users cannot change directories from their home directories.  
Following are the lab details: - - * OS = Debian 10 - * IP Address = 192.168.56.151 - - - -Let’s jump into SFTP Configuration Steps, - -### Step:1) Create a Group for sftp using groupadd command - -Open the terminal, create a group with a name “**sftp_users**” using below groupadd command, - -``` -root@linuxtechi:~# groupadd sftp_users -``` - -### Step:2) Add Users to Group ‘sftp_users’ and set permissions - -In case you want to create new user and want to add that user to ‘sftp_users’ group, then run the following command, - -**Syntax:** #  useradd -m -G sftp_users <user_name> - -Let’s suppose user name is ’Jonathan’ - -``` -root@linuxtechi:~# useradd -m -G sftp_users jonathan -``` - -set the password using following chpasswd command, - -``` -root@linuxtechi:~# echo "jonathan:" | chpasswd -``` - -In case you want to add existing users to ‘sftp_users’ group then run beneath usermod command, let’s suppose already existing user name is ‘chris’ - -``` -root@linuxtechi:~# usermod -G sftp_users chris -``` - -Now set the required permissions on Users, - -``` -root@linuxtechi:~# chown root /home/jonathan /home/chris/ -``` - -Create an upload folder in both the user’s home directory and set the correct ownership, - -``` -root@linuxtechi:~# mkdir /home/jonathan/upload -root@linuxtechi:~# mkdir /home/chris/upload -root@linuxtechi:~# chown jonathan /home/jonathan/upload -root@linuxtechi:~# chown chris /home/chris/upload -``` - -**Note:** User like Jonathan and Chris can upload files and directories to upload folder from their local systems. - -### Step:3) Edit sftp configuration file (/etc/ssh/sshd_config) - -As we have already stated that sftp operations are done over the ssh, so it’s configuration file is “**/etc/ssh/sshd_config**“, Before making any changes I would suggest first take the backup and then edit this file and add the following content, - -``` -root@linuxtechi:~# cp /etc/ssh/sshd_config /etc/ssh/sshd_config-org -root@linuxtechi:~# vim /etc/ssh/sshd_config -……… -#Subsystem sftp /usr/lib/openssh/sftp-server -Subsystem sftp internal-sftp - -Match Group sftp_users - X11Forwarding no - AllowTcpForwarding no - ChrootDirectory %h - ForceCommand internal-sftp -………… -``` - -Save & exit the file. - -To make above changes into the affect, restart ssh service using following systemctl command - -``` -root@linuxtechi:~# systemctl restart sshd -``` - -In above ‘sshd_config’ file we have commented out the line which starts with “Subsystem” and added new entry “Subsystem       sftp    internal-sftp” and new lines like, - -“**Match Group sftp_users”**  –> It means if a user is a part of ‘sftp_users’ group then apply rules which are mentioned below to this entry. - -“**ChrootDierctory %h**” –> It means users can only change directories within their respective home directories, they cannot go beyond their home directories, or in other words we can say users are not permitted to change directories, they will get jai like environment within their directories and can’t access any other user’s and system’s directories. - -“**ForceCommand internal-sftp**” –> It means users are limited to sftp command only. - -### Step:4) Test and Verify sftp - -Login to any other Linux system which is on the same network of your sftp server and then try to ssh sftp server via the users that we have mapped in ‘sftp_users’ group. 
- -``` -[root@linuxtechi ~]# ssh root@linuxtechi -root@linuxtechi's password: -Write failed: Broken pipe -[root@linuxtechi ~]# ssh root@linuxtechi -root@linuxtechi's password: -Write failed: Broken pipe -[root@linuxtechi ~]# -``` - -Above confirms that users are not allowed to SSH , now try sftp using following commands, - -``` -[root@linuxtechi ~]# sftp root@linuxtechi -root@linuxtechi's password: -Connected to 192.168.56.151. -sftp> ls -l -drwxr-xr-x 2 root 1001 4096 Sep 14 07:52 debian10-pkgs --rw-r--r-- 1 root 1001 155 Sep 14 07:52 devops-actions.txt -drwxr-xr-x 2 1001 1002 4096 Sep 14 08:29 upload -``` - -Let’s try to download a file using sftp ‘**get**‘ command - -``` -sftp> get devops-actions.txt -Fetching /devops-actions.txt to devops-actions.txt -/devops-actions.txt 100% 155 0.2KB/s 00:00 -sftp> -sftp> cd /etc -Couldn't stat remote file: No such file or directory -sftp> cd /root -Couldn't stat remote file: No such file or directory -sftp> -``` - -Above output confirms that we are able to download file from our sftp server to local machine and apart from this we have also tested that users cannot change directories. - -Let’s try to upload a file under “**upload**” folder, - -``` -sftp> cd upload/ -sftp> put metricbeat-7.3.1-amd64.deb -Uploading metricbeat-7.3.1-amd64.deb to /upload/metricbeat-7.3.1-amd64.deb -metricbeat-7.3.1-amd64.deb 100% 38MB 38.4MB/s 00:01 -sftp> ls -l --rw-r--r-- 1 1001 1002 40275654 Sep 14 09:18 metricbeat-7.3.1-amd64.deb -sftp> -``` - -This confirms that we have successfully uploaded a file from our local system to sftp server. - -Now test the SFTP server with winscp tool, enter the sftp server ip address along user’s credentials, - -[![Winscp-sftp-debian10][1]][3] - -Click on Login and then try to download and upload files - -[![Download-file-winscp-debian10-sftp][1]][4] - -Now try to upload files in upload folder, - -[![Upload-File-using-winscp-Debian10-sftp][1]][5] - -Above window confirms that uploading is also working fine, that’s all from this article. If these steps help you to configure SFTP server with chroot environment in Debian 10 then please do share your feedback and comments. 
- --------------------------------------------------------------------------------- - -via: https://www.linuxtechi.com/configure-sftp-chroot-debian10/ - -作者:[Pradeep Kumar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linuxtechi.com/author/pradeep/ -[b]: https://github.com/lujun9972 -[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Configure-sftp-debian10.jpg -[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Winscp-sftp-debian10.jpg -[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Download-file-winscp-debian10-sftp.jpg -[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Upload-File-using-winscp-Debian10-sftp.jpg diff --git a/translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md b/translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md new file mode 100644 index 0000000000..a30b6ee817 --- /dev/null +++ b/translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md @@ -0,0 +1,197 @@ +[#]: collector: (lujun9972) +[#]: translator: (robsean) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Configure SFTP Server with Chroot in Debian 10) +[#]: via: (https://www.linuxtechi.com/configure-sftp-chroot-debian10/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +如何在 Debian 10 中使用 Chroot 配置 SFTP 服务 +====== + +**SFTP** 代表安全文件传输协议 / SSH 文件传输协议,它是最常用的一个方法,用于通过ssh将文件从本地系统安全地传输到远程服务器,反之亦然。sftp 的主要优点是,除 ‘**openssh-server**’ 之外,我们不需要安装任何额外的软件包,在大多数的 Linux 发行版中,‘openssh-server’ 软件包是默认安装的一部分。sftp 的另外一个好处是,我们可以允许用户使用 sftp ,而不允许使用 ssh 。 + +[![配置-sftp-debian10][1]][2] + +当前 Debian 10 ,代号‘Buster’,已经发布,在这篇文章中,我们将演示如何在 Debian 10 系统中使用 Chroot ‘Jail’ 类似的环境配置 sftp 。在这里,Chroot Jail 类似环境意味着,用户不能超出各自的 home 目录,或者用户不能从各自的 home 目录更改目录。下面实验的详细情况: + + * OS = Debian 10 + * IP 地址 = 192.168.56.151 + + + +让我们跳转到 SFTP 配置步骤, + +### 步骤:1) 为 sftp 使用 groupadd 命令创建一个组 + +打开终端,使用下面的 groupadd 命令创建一个名为的“**sftp_users**”组, + +``` +root@linuxtechi:~# groupadd sftp_users +``` + +### 步骤:2) 添加用户到组 ‘sftp_users’ 并设置权限 + +假设你想创建新的用户,并且想添加该用户到 ‘sftp_users’ 组中,那么运行下面的命令, + +**语法:** # useradd -m -G sftp_users + +让我们假设用户名是 ’Jonathan’ + +``` +root@linuxtechi:~# useradd -m -G sftp_users jonathan +``` + +使用下面的 chpasswd 命令设置密码, + +``` +root@linuxtechi:~# echo "jonathan:" | chpasswd +``` + +假设你想添加现有的用户到 ‘sftp_users’ 组中,那么运行下面的 usermod 命令,让我们假设已经存在的用户名称是 ‘chris’ + +``` +root@linuxtechi:~# usermod -G sftp_users chris +``` + +现在设置用户所需的权限, + +``` +root@linuxtechi:~# chown root /home/jonathan /home/chris/ +``` + +在各用户的 home 目录中都创建一个上传目录,并设置正确地所有权, + +``` +root@linuxtechi:~# mkdir /home/jonathan/upload +root@linuxtechi:~# mkdir /home/chris/upload +root@linuxtechi:~# chown jonathan /home/jonathan/upload +root@linuxtechi:~# chown chris /home/chris/upload +``` + +**注意:** 像 Jonathan 和 Chris 之类的用户可以从他们的本地系统上传文件和目录。 + +### 步骤:3) 编辑 sftp 配置文件 (/etc/ssh/sshd_config) + +正如我们已经陈述的,sftp 操作是通过 ssh 完成的,所以它的配置文件是 “**/etc/ssh/sshd_config**“, 在做任何更改前,我建议首先备份文件,然后再编辑该文件,接下来添加下面的内容, + +``` +root@linuxtechi:~# cp /etc/ssh/sshd_config /etc/ssh/sshd_config-org +root@linuxtechi:~# vim /etc/ssh/sshd_config +……… +#Subsystem sftp /usr/lib/openssh/sftp-server +Subsystem sftp internal-sftp + +Match Group sftp_users + X11Forwarding no + AllowTcpForwarding no + ChrootDirectory 
%h + ForceCommand internal-sftp +………… +``` + +保存并退出文件。 + +为使上述更改生效,使用下面的 systemctl 命令来重新启动 ssh 服务 + +``` +root@linuxtechi:~# systemctl restart sshd +``` + +在上面的 ‘sshd_config’ 文件中,我们已经注释掉了以 “Subsystem”开头的行,并添加了新的条目 “Subsystem sftp internal-sftp” 和新的行,像, + +“**Match Group sftp_users”** –> 它意味着如果用户是 ‘sftp_users’ 组中的一员,那么将应用下面提到的规则到这个条目。 + +“**ChrootDierctory %h**” –> 它意味着用户只能在他们自己各自的 home 目录中更改目录,而不能超出他们各自的 home 目录。或者换句话说,我们可以说用户是不允许更改目录的。他们将在他们的目录中获得 jai 类似环境,并且不能访问其他用户的目录和系统的目录。 + +“**ForceCommand internal-sftp**” –> 它意味着用户仅被限制到 sftp 命令。 + +### 步骤:4) 测试和验证 sftp + +登录到你的 sftp 服务器的同一个网络上的任何其它的 Linux 系统,然后通过我们在 ‘sftp_users’ 组中映射的用户来尝试 ssh sftp 服务。 + +``` +[root@linuxtechi ~]# ssh root@linuxtechi +root@linuxtechi's password: +Write failed: Broken pipe +[root@linuxtechi ~]# ssh root@linuxtechi +root@linuxtechi's password: +Write failed: Broken pipe +[root@linuxtechi ~]# +``` + +以上操作证实用户不允许 SSH ,现在使用下面的命令尝试 sftp , + +``` +[root@linuxtechi ~]# sftp root@linuxtechi +root@linuxtechi's password: +Connected to 192.168.56.151. +sftp> ls -l +drwxr-xr-x 2 root 1001 4096 Sep 14 07:52 debian10-pkgs +-rw-r--r-- 1 root 1001 155 Sep 14 07:52 devops-actions.txt +drwxr-xr-x 2 1001 1002 4096 Sep 14 08:29 upload +``` + +让我们使用 sftp ‘**get**‘ 命令来尝试下载一个文件 + +``` +sftp> get devops-actions.txt +Fetching /devops-actions.txt to devops-actions.txt +/devops-actions.txt 100% 155 0.2KB/s 00:00 +sftp> +sftp> cd /etc +Couldn't stat remote file: No such file or directory +sftp> cd /root +Couldn't stat remote file: No such file or directory +sftp> +``` + +上面的输出证实我们能从我们的 sftp 服务器下载文件到本地机器,除此之外,我们也必需测试用户不能更改目录。 + +让我们在 **upload**”目录下尝试上传一个文件, + +``` +sftp> cd upload/ +sftp> put metricbeat-7.3.1-amd64.deb +Uploading metricbeat-7.3.1-amd64.deb to /upload/metricbeat-7.3.1-amd64.deb +metricbeat-7.3.1-amd64.deb 100% 38MB 38.4MB/s 00:01 +sftp> ls -l +-rw-r--r-- 1 1001 1002 40275654 Sep 14 09:18 metricbeat-7.3.1-amd64.deb +sftp> +``` + +这证实我们已经成功地从我们的本地系统上传一个文件到 sftp 服务中。 + +现在使用 winscp 工具来测试 SFTP 服务,输入 sftp 服务器 ip 地址和用户的凭证, + +[![Winscp-sftp-debian10][1]][3] + +在 Login 上单击,然后尝试下载和上传文件 + +[![下载-文件-winscp-debian10-sftp][1]][4] + +现在,在 upload 文件夹中尝试上传文件, + +[![使用-winscp-Debian10-sftp-上传-文件][1]][5] + +上面的窗口证实上传是完好地工作的,这就是这篇文章的全部。如果这些步骤能帮助你在 Debian 10 中使用 chroot 环境配置 SFTP 服务器s,那么请分享你的反馈和评论。 + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/configure-sftp-chroot-debian10/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[robsean](https://github.com/robsean) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Configure-sftp-debian10.jpg +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Winscp-sftp-debian10.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Download-file-winscp-debian10-sftp.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/09/Upload-File-using-winscp-Debian10-sftp.jpg From e9e66c2a0d1a310249a687ef170345a0b93a004f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=83=91?= Date: Thu, 30 Apr 2020 13:54:18 +0800 Subject: [PATCH 064/178] Translating --- sources/tech/20200417 How to compress files on Linux 5 ways.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200417 How to compress files on 
Linux 5 ways.md b/sources/tech/20200417 How to compress files on Linux 5 ways.md index 5ce33cd18c..f71e90f9fc 100644 --- a/sources/tech/20200417 How to compress files on Linux 5 ways.md +++ b/sources/tech/20200417 How to compress files on Linux 5 ways.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (robsean) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 2273b6a022ac9d3428ea8939372d0a560e39dd60 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 Apr 2020 21:56:47 +0800 Subject: [PATCH 065/178] PRF @wxy --- ...3 Difference Between YUM and RPM Package Manager.md | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/translated/tech/20200423 Difference Between YUM and RPM Package Manager.md b/translated/tech/20200423 Difference Between YUM and RPM Package Manager.md index c0bf8b7352..b882a842db 100644 --- a/translated/tech/20200423 Difference Between YUM and RPM Package Manager.md +++ b/translated/tech/20200423 Difference Between YUM and RPM Package Manager.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Difference Between YUM and RPM Package Manager) @@ -10,6 +10,8 @@ YUM 和 RPM 包管理器的不同之处 ====== +![](https://img.linux.net.cn/data/attachment/album/202004/30/215525o4e88nen85d8dzd7.jpg) + 软件包管理器在 Linux 系统中扮演着重要的角色。它允许你安装、更新、查看、搜索和删除软件包,以满足你的需求。 每个发行版都有自己的一套包管理器,依据你的 Linux 发行版来分别使用它们。 @@ -18,7 +20,7 @@ RPM 是最古老的传统软件包管理器之一,它是为基于 Red Hat 的 > 如果你想知道 [YUM 和 DNF 包管理器的区别][1]请参考该文章。 -这意味着 yum 可以自动下载并安装所有需要的依赖项,但 rpm 会告诉你安装一个依赖项列表,然后你必须手动安装。 +这意味着 `yum` 可以自动下载并安装所有需要的依赖项,但 `rpm` 会告诉你安装一个依赖项列表,然后你必须手动安装。 当你想用 [rpm 命令][2] 安装一组包时,这实际上是不可能的,而且很费时间。 @@ -76,13 +78,13 @@ via: https://www.2daygeek.com/comparison-difference-between-yum-vs-rpm/ 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://www.2daygeek.com/author/magesh/ [b]: https://github.com/lujun9972 -[1]: https://www.2daygeek.com/comparison-difference-between-dnf-vs-yum/ +[1]: https://linux.cn/article-12161-1.html [2]: https://www.2daygeek.com/linux-rpm-command-examples-manage-packages-fedora-centos-rhel-systems/ [3]: https://www.2daygeek.com/linux-yum-command-examples-manage-packages-rhel-centos-systems/ [4]: https://www.2daygeek.com/list-of-command-line-package-manager-for-linux/ From 4073d0b6af97a6fe973a6f244e1e0abc7efb1ada Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 Apr 2020 21:57:31 +0800 Subject: [PATCH 066/178] PUB @wxy https://linux.cn/article-12170-1.html --- ...20200423 Difference Between YUM and RPM Package Manager.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200423 Difference Between YUM and RPM Package Manager.md (98%) diff --git a/translated/tech/20200423 Difference Between YUM and RPM Package Manager.md b/published/20200423 Difference Between YUM and RPM Package Manager.md similarity index 98% rename from translated/tech/20200423 Difference Between YUM and RPM Package Manager.md rename to published/20200423 Difference Between YUM and RPM Package Manager.md index b882a842db..bf916108e1 100644 --- a/translated/tech/20200423 Difference Between YUM and RPM Package Manager.md +++ b/published/20200423 Difference Between YUM and RPM Package Manager.md @@ -1,8 +1,8 @@ [#]: collector: 
(lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12170-1.html) [#]: subject: (Difference Between YUM and RPM Package Manager) [#]: via: (https://www.2daygeek.com/comparison-difference-between-yum-vs-rpm/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) From cc55978baf5e4de22a8fb5bc546c42be16165a8b Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 Apr 2020 22:36:28 +0800 Subject: [PATCH 067/178] PRF --- ...ox is an All-in-one Messenger for Linux.md | 32 +++++++++---------- 1 file changed, 15 insertions(+), 17 deletions(-) diff --git a/translated/tech/20200331 Rambox is an All-in-one Messenger for Linux.md b/translated/tech/20200331 Rambox is an All-in-one Messenger for Linux.md index 28a57f2842..93a73f4a54 100644 --- a/translated/tech/20200331 Rambox is an All-in-one Messenger for Linux.md +++ b/translated/tech/20200331 Rambox is an All-in-one Messenger for Linux.md @@ -1,16 +1,16 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12171-1.html) [#]: subject: (Rambox is an All-in-one Messenger for Linux) [#]: via: (https://itsfoss.com/rambox/) [#]: author: (Ankush Das https://itsfoss.com/author/ankush/) -Rambox 是 Linux 中多合一的消息收发工具 +Rambox:Linux 中多合一的消息收发工具 ====== -_**简介:Rambox 是一个多合一消息收发工具,允许你将多种服务(如 Discord、Slack、Facebook Messenger)和数百个此类服务结合在一起。**_ +> Rambox 是一个多合一消息收发工具,允许你将多种服务(如 Discord、Slack、Facebook Messenger)和数百个此类服务结合在一起。 ### Rambox:在单个应用中添加多个消息服务 @@ -18,7 +18,7 @@ _**简介:Rambox 是一个多合一消息收发工具,允许你将多种服 Rambox 是通过安装单个应用管理多个通信服务的最佳方式之一。你可以在一个界面使用[多个消息服务][2],如 Facebook Messenger、Gmail chats、AOL、Discord、Google Duo、[Viber][3] 等。 -这样,你就不需要安装单独的应用或者在浏览器中保持打开。你可以使用主密码锁定 Rambox 应用。你还可以使用"请勿打扰"功能。 +这样,你就不需要安装单独的应用或者在浏览器中一直打开着。你可以使用主密码锁定 Rambox 应用。你还可以使用“请勿打扰”功能。 Rambox 提供可免费使用的[开源社区版][4]。付费专业版允许你访问 600 多个应用,而社区版则包含 99 多个应用。专业版本具有额外的功能,如主题、休眠、ad-block、拼写检查和高级支持。 @@ -44,24 +44,22 @@ Rambox 提供可免费使用的[开源社区版][4]。付费专业版允许你 * Ad-block (**专业版**) * 休眠支持 (**专业版**) * 主题支持(**专业版**) - * 移动视图 (**专业版**) + * 移动设备视图 (**专业版**) * 拼写检查 (**专业版**) - * 工作时间 - 计划传入通知时间 (**专业版**) - * 代理支持 (**专业版**) - - + * 工作时间 - 计划传入通知的时间 (**专业版**) + * 支持代理 (**专业版**) 除了我在这里列出的内容外,你还可以在 Rambox Pro 版本中找到更多功能。要了解有关它的更多信息,你可以参考[正式功能列表][6]。 -还值得注意的是,你不能有超过 3 个活跃并发设备连接。 +还值得注意的是,你不能超过 3 个活跃并发设备的连接。 ### 在 Linux 上安装 Rambox -你可以在[官方下载页][4]获取 **.AppImage** 文件来运行 Rambox。如果你好奇,你可以参考我们的指南,了解如何[在 Linux 上使用 AppImage 文件][7]。 +你可以在[官方下载页][4]获取 .AppImage 文件来运行 Rambox。如果你不清楚,你可以参考我们的指南,了解如何[在 Linux 上使用 AppImage 文件][7]。 -另外,你也可以从 [Snap 商店][8]获取它。此外,请查看其 [GitHub release][9] 部分的 **.deb / .rpm** 或其他包。 +另外,你也可以从 [Snap 商店][8]获取它。此外,请查看其 [GitHub release][9] 部分的 .deb / .rpm 或其他包。 -[Download Rambox Community Edition][4] +- [下载 Rambox 社区版][4] ### 总结 @@ -69,7 +67,7 @@ Rambox 提供可免费使用的[开源社区版][4]。付费专业版允许你 还有一个类似的应用称为 [Franz][10],它也像 Rambox 部分开源、部分高级版。 -尽管像 Rambox 或 Franz 这样的解决方案非常有用,但它们并不总是资源友好,特别是如果你同时使用数十个服务。因此,请留意系统资源(如果你注意到对性能的影响)。 +尽管像 Rambox 或 Franz 这样的解决方案非常有用,但它们并不总是节约资源,特别是如果你同时使用数十个服务。因此,请留意系统资源(如果你注意到对性能的影响)。 除此之外,这是一个令人印象深刻的应用。你有试过了么?欢迎随时让我知道你的想法! 
@@ -80,7 +78,7 @@ via: https://itsfoss.com/rambox/ 作者:[Ankush Das][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 1696a7b3369362dfd2db3ef57cdba92cb4bf7e02 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 30 Apr 2020 22:36:53 +0800 Subject: [PATCH 068/178] PUB @geekpi https://linux.cn/article-12171-1.html --- .../20200331 Rambox is an All-in-one Messenger for Linux.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20200331 Rambox is an All-in-one Messenger for Linux.md (100%) diff --git a/translated/tech/20200331 Rambox is an All-in-one Messenger for Linux.md b/published/20200331 Rambox is an All-in-one Messenger for Linux.md similarity index 100% rename from translated/tech/20200331 Rambox is an All-in-one Messenger for Linux.md rename to published/20200331 Rambox is an All-in-one Messenger for Linux.md From 3124f64e0addc3396e83f8c3af28cf699d33e97e Mon Sep 17 00:00:00 2001 From: DarkSun Date: Fri, 1 May 2020 01:04:02 +0800 Subject: [PATCH 069/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200501=20How=20?= =?UTF-8?q?to=20Handle=20Automatic=20Updates=20in=20Ubuntu?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200501 How to Handle Automatic Updates in Ubuntu.md --- ...w to Handle Automatic Updates in Ubuntu.md | 125 ++++++++++++++++++ 1 file changed, 125 insertions(+) create mode 100644 sources/tech/20200501 How to Handle Automatic Updates in Ubuntu.md diff --git a/sources/tech/20200501 How to Handle Automatic Updates in Ubuntu.md b/sources/tech/20200501 How to Handle Automatic Updates in Ubuntu.md new file mode 100644 index 0000000000..b62af6da74 --- /dev/null +++ b/sources/tech/20200501 How to Handle Automatic Updates in Ubuntu.md @@ -0,0 +1,125 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Handle Automatic Updates in Ubuntu) +[#]: via: (https://itsfoss.com/auto-updates-ubuntu/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +How to Handle Automatic Updates in Ubuntu +====== + +_**Brief: This tutorial teaches you how to handle the unattended upgrade i.e. the automatic system updates in Ubuntu Linux.**_ + +Sometimes, when you try to [shutdown your Ubuntu system][1], you may come across this screen that stops you from shutting down: + +**Unattended-upgrade in progress during shutdown, please don’t turn off the computer.** + +![Unattended Upgrade In Progress In Ubuntu][2] + +You might wonder what is this “unattended upgrade” and how come it is running without your knowledge. + +The reason is that [Ubuntu][3] takes your system’s security very seriously. By default, it automatically checks for system updates daily and if it finds any security updates, it downloads those updates and install them on its own. For normal system and application updates, it notifies you via the Software Updater tool. + +Since all this happens in the background, you don’t even realize it until you try to shutdown your system or try to install applications on your own. + +Trying to install a new software when these unattended upgrades are in progress leads to the famous [could not get lock error][4]. + +![][5] + +As you can see, the automatic updates present a couple of minor annoyance. 
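By the way, if you want to see what an unattended upgrade actually installed behind the scenes, you can check its log. On Ubuntu it is normally kept at the location below (the exact path may differ on other distributions):

```
sudo less /var/log/unattended-upgrades/unattended-upgrades.log
```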
You may choose to disable the auto updates but that would mean that you’ll have to check and [update your Ubuntu system][6] manually all the time. + +Do you really need to disable auto updates? + +Please note that this is a security feature. Linux allows you to do practically everything in your system even disabling these security features. +But in my opinion, as a regular user, _**you should not disable the automatic updates**_. It keeps your system safe after all. +For the sake of your system’s security, you may tolerate the minor annoyances that come with the automatic updates. + +Now that you have been warned and you think it is better to take up the additional task of manually updating your system, let’s see how to handle the auto updates. + +As always, there are two ways to do it: GUI and command line. I’ll show you both methods. + +I have used Ubuntu 20.04 here but the steps are valid for Ubuntu 18.04 and any other Ubuntu version. + +### Method 1: Disable automatic updates in Ubuntu graphically + +Go to the menu and look for ‘software & updates’ tool. + +![Software & Updates Settings][7] + +In here, go to Updates tab. Now look for the “Automatically check for updates”. By default it is set to Daily. + +You can change it to Never and your system will never check for updates on its own again. And if it won’t check for updates, it won’t find new updates to install. + +![Disable Auto Updates in Ubuntu Completely][8] + +If you do this, you must manually update your system from time to time. But that’s an additional chore to do and you may not remember it all the time. + +#### Slightly better way to handle auto updates in Ubuntu + +Personally, I would suggest to let it check for updates on its own. If you don’t want it installing the updates automatically, you can change that behavior to get notified about the availability of security updates. + +Keep “Automatically check for updates” to Daily and change “When there are security updates” option to “Display immediately” instead of “Download and install automatically”. + +![Get notified for security updates instead of automatically installing them][9] + +This way, it checks for updates and if there are updates, instead of installing them automatically in the background, the Software Updater tool notifies you that updates are available for your system. Your system already does that for normal system and software updates. + +![Get notified about security updates][10] + +With this setup, you won’t see the “unattended upgrades in progress” when you shutdown your system However, you may still encounter the ‘could not get lock’ error because two separate processes cannot use apt package manager at the same time. + +I believe this is a better solution, don’t you you think? + +As I promised both GUI and command line methods, let me show you how to disable unattended upgrades in the terminal. + +### How to disable automatic updates in Ubuntu using command line + +You’ll find the auto-upgrades settings in the **/etc/apt/apt.conf.d/20auto-upgrades** file. The default text editor in Ubuntu terminal is Nano so you can use this command to edit this configuration file: + +``` +sudo nano /etc/apt/apt.conf.d/20auto-upgrades +``` + +Now, if you don’t want your system to check for updates automatically, you can change the value of APT::Periodic::Update-Package-Lists to 0. 
+ +``` +APT::Periodic::Update-Package-Lists "0"; +APT::Periodic::Unattended-Upgrade "0"; +``` + +If you want it to check for updates but don’t install the unattended-upgrades automatically, you can choose to set it like this: + +``` +APT::Periodic::Update-Package-Lists "1"; +APT::Periodic::Unattended-Upgrade "0"; +``` + +**In the end…** + +The automatic security updates are enabled automatically for a reason and I recommend you keep it like this. A couple of minor annoyances are not really worth risking the security of your system. What do you think? + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/auto-updates-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/schedule-shutdown-ubuntu/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/unattended-upgrade-in-progress-in-ubuntu.png?ssl=1 +[3]: https://ubuntu.com/ +[4]: https://itsfoss.com/could-not-get-lock-error/ +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/12/Could_not_get_lock.jpg?ssl=1 +[6]: https://itsfoss.com/update-ubuntu/ +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/software-updates-settings-ubuntu-20-04.jpg?ssl=1 +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/disable-auto-updates-ubuntu.jpg?ssl=1 +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/handle-auto-updates-ubuntu.jpg?ssl=1 +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/updates-available-ubuntu.png?ssl=1 From e6d0a79cfef742d1486846b160770f08e40c1094 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Fri, 1 May 2020 01:07:37 +0800 Subject: [PATCH 070/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200430=2010=20w?= =?UTF-8?q?ays=20to=20analyze=20binary=20files=20on=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200430 10 ways to analyze binary files on Linux.md --- ...0 ways to analyze binary files on Linux.md | 321 ++++++++++++++++++ 1 file changed, 321 insertions(+) create mode 100644 sources/tech/20200430 10 ways to analyze binary files on Linux.md diff --git a/sources/tech/20200430 10 ways to analyze binary files on Linux.md b/sources/tech/20200430 10 ways to analyze binary files on Linux.md new file mode 100644 index 0000000000..7e05f87298 --- /dev/null +++ b/sources/tech/20200430 10 ways to analyze binary files on Linux.md @@ -0,0 +1,321 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (10 ways to analyze binary files on Linux) +[#]: via: (https://opensource.com/article/20/4/linux-binary-analysis) +[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe) + +10 ways to analyze binary files on Linux +====== +These simple commands and tools can help you sail through the task of +analyzing binary files. +![Tux with binary code background][1] + +"There are 10 types of people in this world: those who understand binary and those who don't." + +We work with binaries daily, yet we understand so little about them. By binaries, I mean the executable files that you run daily, right from your command line tools to full-fledged applications. + +Linux provides a rich set of tools that makes analyzing binaries a breeze! 
Whatever might be your job role, if you are working on Linux, knowing the basics about these tools will help you understand your system better. + +In this article, we will cover some of the most popular of these Linux tools and commands, most of which will be available natively as part of your Linux distribution. If not, you can always use your package manager to install and explore them. Remember: learning to use the right tool at the right occasion requires plenty of patience and practice. + +### file + +What it does: Help to determine the file type. + +This will be your starting point for binary analysis. We work with files daily. Not everything is an executable type; there is a whole wide range of file types out there. Before you start, you need to understand the type of file that is being analyzed. Is it a binary file, a library file, an ASCII text file, a video file, a picture file, a PDF, a data file, etc.? + +The **file** command will help you identify the exact file type that you are dealing with. + + +``` +$ file /bin/ls +/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=94943a89d17e9d373b2794dcb1f7e38c95b66c86, stripped +$ +$ file /etc/passwd +/etc/passwd: ASCII text +$ +``` + +### ldd + +What it does: Print shared object dependencies. + +If you have already used the **file** command above on an executable binary, you can't miss the "dynamically linked" message in the output. What does it mean? + +When software is being developed, we try not to reinvent the wheel. There are a set of common tasks that most software programs require, like printing output or reading from standard in, or opening files, etc. All of these common tasks are abstracted away in a set of common functions that everybody can then use instead of writing their own variants. These common functions are put in a library called **libc** or **glibc**. + +How does one find which libraries the executable is dependent on? That’s where **ldd** command comes into the picture. Running it against a dynamically linked binary shows all its dependent libraries and their paths. + + +``` +$ ldd /bin/ls +        linux-vdso.so.1 =>  (0x00007ffef5ba1000) +        libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fea9f854000) +        libcap.so.2 => /lib64/libcap.so.2 (0x00007fea9f64f000) +        libacl.so.1 => /lib64/libacl.so.1 (0x00007fea9f446000) +        libc.so.6 => /lib64/libc.so.6 (0x00007fea9f079000) +        libpcre.so.1 => /lib64/libpcre.so.1 (0x00007fea9ee17000) +        libdl.so.2 => /lib64/libdl.so.2 (0x00007fea9ec13000) +        /lib64/ld-linux-x86-64.so.2 (0x00007fea9fa7b000) +        libattr.so.1 => /lib64/libattr.so.1 (0x00007fea9ea0e000) +        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fea9e7f2000) +$ +``` + +### ltrace + +What it does: A library call tracer. + +We now know how to find the libraries an executable program is dependent on using the **ldd** command. However, a library can contain hundreds of functions. Out of those hundreds, which are the actual functions being used by our binary? + +The **ltrace** command displays all the functions that are being called at run time from the library. In the below example, you can see the function names being called, along with the arguments being passed to that function. You can also see what was returned by those functions on the far right side of the output. 
+ + +``` +$ ltrace ls +__libc_start_main(0x4028c0, 1, 0x7ffd94023b88, 0x412950 <unfinished ...> +strrchr("ls", '/')                                                                  = nil +setlocale(LC_ALL, "")                                                               = "en_US.UTF-8" +bindtextdomain("coreutils", "/usr/share/locale")                                    = "/usr/share/locale" +textdomain("coreutils")                                                             = "coreutils" +__cxa_atexit(0x40a930, 0, 0, 0x736c6974756572)                                      = 0 +isatty(1)                                                                           = 1 +getenv("QUOTING_STYLE")                                                             = nil +getenv("COLUMNS")                                                                   = nil +ioctl(1, 21523, 0x7ffd94023a50)                                                     = 0 +<< snip >> +fflush(0x7ff7baae61c0)                                                              = 0 +fclose(0x7ff7baae61c0)                                                              = 0 ++++ exited (status 0) +++ +$ +``` + +### Hexdump + +What it does: Display file contents in ASCII, decimal, hexadecimal, or octal. + +Often, it happens that you open a file with an application that doesn’t know what to do with that file. Try opening an executable file or a video file using vim; all you will see is gibberish thrown on the screen. + +Opening unknown files in Hexdump helps you see what exactly the file contains. You can also choose to see the ASCII representation of the data present in the file using some command-line options. This might help give you some clues to what kind of file it is. + + +``` +$ hexdump -C /bin/ls | head +00000000  7f 45 4c 46 02 01 01 00  00 00 00 00 00 00 00 00  |.ELF............| +00000010  02 00 3e 00 01 00 00 00  d4 42 40 00 00 00 00 00  |..>......B@.....| +00000020  40 00 00 00 00 00 00 00  f0 c3 01 00 00 00 00 00  |@...............| +00000030  00 00 00 00 40 00 38 00  09 00 40 00 1f 00 1e 00  |....@.8...@.....| +00000040  06 00 00 00 05 00 00 00  40 00 00 00 00 00 00 00  |........@.......| +00000050  40 00 40 00 00 00 00 00  40 00 40 00 00 00 00 00  |@.@.....@.@.....| +00000060  f8 01 00 00 00 00 00 00  f8 01 00 00 00 00 00 00  |................| +00000070  08 00 00 00 00 00 00 00  03 00 00 00 04 00 00 00  |................| +00000080  38 02 00 00 00 00 00 00  38 02 40 00 00 00 00 00  |8.......8.@.....| +00000090  38 02 40 00 00 00 00 00  1c 00 00 00 00 00 00 00  |8.@.............| +$ +``` + +### strings + +What it does: Print the strings of printable characters in files. + +If Hexdump seems a bit like overkill for your use case and you are simply looking for printable characters within a binary, you can use the **strings** command. + +When software is being developed, a variety of text/ASCII messages are added to it, like printing info messages, debugging info, help messages, errors, and so on. Provided all this information is present in the binary, it will be dumped to screen using **strings**. + + +``` +`$ strings /bin/ls` +``` + +### readelf + +What it does: Display information about ELF files. + +ELF (Executable and Linkable File Format) is the dominant file format for executable or binaries, not just on Linux but a variety of UNIX systems as well. 
If you have utilized tools like file command, which tells you that the file is in ELF format, the next logical step will be to use the **readelf** command and its various options to analyze the file further. + +Having a reference of the actual ELF specification handy when using **readelf** can be very useful. You can find the specification [here][2].  + + +``` +$ readelf -h /bin/ls +ELF Header: +  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 +  Class:                             ELF64 +  Data:                              2's complement, little endian +  Version:                           1 (current) +  OS/ABI:                            UNIX - System V +  ABI Version:                       0 +  Type:                              EXEC (Executable file) +  Machine:                           Advanced Micro Devices X86-64 +  Version:                           0x1 +  Entry point address:               0x4042d4 +  Start of program headers:          64 (bytes into file) +  Start of section headers:          115696 (bytes into file) +  Flags:                             0x0 +  Size of this header:               64 (bytes) +  Size of program headers:           56 (bytes) +  Number of program headers:         9 +  Size of section headers:           64 (bytes) +  Number of section headers:         31 +  Section header string table index: 30 +$ +``` + +### objdump + +What it does: Display information from an object file. + +Binaries are created when you write source code which gets compiled using a tool called, unsurprisingly, a compiler. This compiler generates machine language instructions equivalent to the source code, which can then be executed by the CPU to perform a given task. This machine language code can be interpreted via mnemonics called an assembly language. An assembly language is a set of instructions that help you understand the operations being performed by the program and ultimately being executed on the CPU. + +**objdump** utility reads the binary or executable file and dumps the assembly language instructions on the screen. Knowledge of assembly is critical to understand the output of the **objdump** command. + +Remember: assembly language is architecture-specific. + + +``` +$ objdump -d /bin/ls | head + +/bin/ls:     file format elf64-x86-64 + +Disassembly of section .init: + +0000000000402150 <_init@@Base>: +  402150:       48 83 ec 08             sub    $0x8,%rsp +  402154:       48 8b 05 6d 8e 21 00    mov    0x218e6d(%rip),%rax        # 61afc8 <__gmon_start__> +  40215b:       48 85 c0                test   %rax,%rax +$ +``` + +### strace + +What it does: Trace system calls and signals. + +If you have used **ltrace**, mentioned earlier, think of **strace** being similar. The only difference is that, instead of calling a library, the **strace** utility traces system calls. System calls are how you interface with the kernel to get work done. + +To give an example, if you want to print something to the screen, you will use the **printf** or **puts** function from the standard library **libc**; however, under the hood, ultimately, a system call named **write** will be made to actually print something to the screen. 
+ + +``` +$ strace -f /bin/ls +execve("/bin/ls", ["/bin/ls"], [/* 17 vars */]) = 0 +brk(NULL)                               = 0x686000 +mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f967956a000 +access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory) +open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 +fstat(3, {st_mode=S_IFREG|0644, st_size=40661, ...}) = 0 +mmap(NULL, 40661, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9679560000 +close(3)                                = 0 +<< snip >> +fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 1), ...}) = 0 +mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9679569000 +write(1, "R2  RH\n", 7R2  RH +)                 = 7 +close(1)                                = 0 +munmap(0x7f9679569000, 4096)            = 0 +close(2)                                = 0 +exit_group(0)                           = ? ++++ exited with 0 +++ +$ +``` + +### nm + +What it does: List symbols from object files. + +If you are working with a binary that is not stripped, the **nm** command will provide you with the valuable information that was embedded in the binary during compilation. **nm** can help you identify variables and functions from the binary. You can imagine how useful this would be if you don't have access to the source code of the binary being analyzed. + +To showcase **nm**, we will quickly write a small program and compile it with the **-g** option, and we will also see that the binary is not stripped by using the file command. + + +``` +$ cat hello.c +#include <stdio.h> + +int main() { +    printf("Hello world!"); +    return 0; +} +$ +$ gcc -g hello.c -o hello +$ +$ file hello +hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=3de46c8efb98bce4ad525d3328121568ba3d8a5d, not stripped +$ +$ ./hello +Hello world!$ +$ + +$ nm hello | tail +0000000000600e20 d __JCR_END__ +0000000000600e20 d __JCR_LIST__ +00000000004005b0 T __libc_csu_fini +0000000000400540 T __libc_csu_init +                 U __libc_start_main@@GLIBC_2.2.5 +000000000040051d T main +                 U printf@@GLIBC_2.2.5 +0000000000400490 t register_tm_clones +0000000000400430 T _start +0000000000601030 D __TMC_END__ +$ +``` + +### gdb + +What it does: The GNU debugger. + +Well, not everything in the binary can be statically analyzed. We did execute some commands which ran the binary, like **ltrace** and **strace**; however, software consists of a variety of conditions that could lead to various alternate paths being executed. + +The only way to analyze these paths is at run time by having the ability to stop or pause the program at any given location and being able to analyze information and then move further down. +That is where debuggers come into the picture, and on Linux, **gdb** is the defacto debugger. It helps you load a program, set breakpoints at specific places, analyze memory and CPU register, and do much more. It complements the other tools mentioned above and allows you to do much more runtime analysis. + +One thing to notice is, once you load a program using **gdb**, you will be presented with its own **(gdb)** prompt. All further commands will be run in this **gdb** command prompt until you exit. + +We will use the "hello" program that we compiled earlier and use **gdb** to see how it works. + + +``` +$ gdb -q ./hello +Reading symbols from /home/flash/hello...done. 
+(gdb) break main +Breakpoint 1 at 0x400521: file hello.c, line 4. +(gdb) info break +Num     Type           Disp Enb Address            What +1       breakpoint     keep y   0x0000000000400521 in main at hello.c:4 +(gdb) run +Starting program: /home/flash/./hello + +Breakpoint 1, main () at hello.c:4 +4           printf("Hello world!"); +Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7_6.6.x86_64 +(gdb) bt +#0  main () at hello.c:4 +(gdb) c +Continuing. +Hello world![Inferior 1 (process 29620) exited normally] +(gdb) q +$ +``` + +### Conclusion + +Once you are comfortable with using these native Linux binary analysis tools and understanding the output they provide, you can then move onto more advanced and professional open source binary analysis tools like [radare2][3]. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/linux-binary-analysis + +作者:[Gaurav Kamathe][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/gkamathe +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tux_linux_penguin_code_binary.jpg?itok=TxGxW0KY (Tux with binary code background) +[2]: http://www.skyfree.org/linux/references/ELF_Format.pdf +[3]: https://github.com/radareorg/radare2 From 5be54b8611613430dfccaae341d33b2c19ef4abf Mon Sep 17 00:00:00 2001 From: DarkSun Date: Fri, 1 May 2020 01:08:33 +0800 Subject: [PATCH 071/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200430=20Edit?= =?UTF-8?q?=20music=20recordings=20with=20Audacity=20on=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200430 Edit music recordings with Audacity on Linux.md --- ...music recordings with Audacity on Linux.md | 161 ++++++++++++++++++ 1 file changed, 161 insertions(+) create mode 100644 sources/tech/20200430 Edit music recordings with Audacity on Linux.md diff --git a/sources/tech/20200430 Edit music recordings with Audacity on Linux.md b/sources/tech/20200430 Edit music recordings with Audacity on Linux.md new file mode 100644 index 0000000000..be6f68f062 --- /dev/null +++ b/sources/tech/20200430 Edit music recordings with Audacity on Linux.md @@ -0,0 +1,161 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Edit music recordings with Audacity on Linux) +[#]: via: (https://opensource.com/article/20/4/audacity) +[#]: author: (David Both https://opensource.com/users/dboth) + +Edit music recordings with Audacity on Linux +====== +How COVID-19 caused me to learn Audacity on the fly and learn to love +it. +![Bird singing and music notes][1] + +In this strange and difficult time of a global pandemic, we are all called upon to do things differently, to change our routines, and to learn new things. + +I have worked from home for many years, so that is nothing new to me. Even though I am allegedly retired, I write articles for Opensource.com and [Enable Sysadmin][2] and books. I also manage my own home network, which is larger than you might think, and my church's network and Linux hosts, and I help a few friends with Linux. All of this keeps me busy doing what I like to do, and all of it is usually well within my comfort zone. + +But COVID-19 has changed all of that. 
And, like many other types of organizations, my church had to move quickly to a new service-delivery paradigm. And that is what churches do—deliver a specific kind of service. As the church sysadmin and with some knowledge of audio recording and editing (back in the '70s, I mixed the sound and was the only roadie for a couple of regional folk-rock groups in Toledo, Ohio), I decided to learn the open source audio recording and editing software [Audacity][3] to help meet this challenge. + +This is not a comprehensive how-to article about using Audacity. It is about my experiences getting started with this powerful audio-editing tool, but there should be enough information here to help you get started. + +I have learned just what I need to know in order to accomplish my task: combining several separate audio clips into a single MP3 audio file. If you already know Audacity and do things differently or know things that I don't, that is expected. And if you have any suggestions to help me accomplish my task more easily, please share them in the comments. + +### The old way + +I try not to use the term "normal" now because it is hard to know exactly what that is—if such a state even exists. But our old method of producing recordings for our shut-ins, members who are traveling, and anyone else was to record the sermon portion of our regular, in-person church services and post them on our website. + +To do this, I installed a TASCAM SS-R100 solid-state recorder that stores the sermons as MP3 files on a thumb drive. We uploaded the recordings to a special directory of our website so people could download them. The recordings are uploaded using a Bash [program][4] I wrote for the task. _Automate everything!_ I trained a couple of others to perform these tasks using sudo in case I was not available. + +This all worked very well. Until it didn't. + +### The new way + +As soon as the first restrictions on large gatherings occurred, we made some changes. We could still have small gatherings, so four of us met Sunday mornings and recorded an abbreviated service using our in-house recorder and doing the upload the usual way. This worked, but as the crisis deepened and it became more of a risk to meet with even a few people, we had to make more changes. + +Like a huge number of other organizations, we realized we each needed to perform our parts of creating services in separate locations from our own homes. + +Now, depending upon the structure of the service, I receive several recordings that I need to combine to create the full church service. Our music director records each anthem and interlude using her iPhone and sends me the recordings in the M4A (MPEG-4 audio) format. They each range in length from seconds to five minutes and are up to 3MB in size. Likewise, our rector sends me two to six recordings, also in M4A format, that contains his portion of the service. Sometimes, other musicians in our church send solos or duets recorded with their significant others; these can be in MP3 or M4A formats. + +Then, I pull all of this together into a single recording that can be uploaded to our server for people to download. I use Audacity for this because it was available in my repo, and it was easy to get started. + +### Getting started with Audacity + +I had never used [Audacity][5] before this, so, like many others these days, I needed to learn something new just in time to accomplish what I needed to do. I struggled a bit at first, but it turned out to be fun and very enlightening. 
+ +Audacity was easy to install on my Fedora 31 workstation because, as in many distros, it is available from the Fedora repository. + +The first time I opened Audacity with the program launcher icon, the application's window was empty with no projects nor tracks present. Audacity projects have an AUP extension, so if you have an existing project, you could click on the file in your favorite file manager and launch Audacity that way. + +### Convert M4A to MP3 + +As installed by Fedora, Audacity does not recognize M4A files. Regardless of how you proceed, you need to install the [LAME][6] MP3 encoder and [FFmpeg][7] import/export library, both of which are available from the Fedora repository and, most likely, any other distro's repository. + +There are websites that explain how to configure Audacity to use these tools to import and convert audio files from M4A to other types (such as MP3), but I decided to write a script to do it from the command line. For one reason, using a script is faster than doing a lot of extra clicking in a GUI interface, and for another, the file names need some work, so I already needed a script to rename the files. Many people use non-alphanumeric characters to name files, but I don't like dealing with special keyboard characters from the command line. It's easier to manage files with simple alphanumeric names, so my script removes all non-alphanumeric characters from the file names and then converts the files to MP3 format. + +You may choose a different approach, but I like the scripted solution. It is fast, and I only need to run the script once, no matter how many files need to be renamed and converted to MP3. + +### Create a new project + +You can create a new project whether or not any audio tracks are loaded. I recommend creating the project first, before importing any audio files (aka "clips"). From the Menu bar, select **File > Save Project > Save Project As**. This opens a warning dialog window that says, _"'Save project' is for an Audacity project, not an audio file."_ Click the **OK** button to continue to a standard file-save dialog. + +I found that I needed to do this twice. The first time, the warning dialog did not display any buttons, so I had to close the dialog using the window menu or the x icon in the Title bar. + +Name the project whatever you like, and Audacity automatically adds the AUP extension. You now have an empty project. + +### Add audio files to your project + +The first step is to add your audio files to the project. Using the Menu bar, open **File > Import > Audio** and then use the file dialog to select one or more files to import. For my first test project, I loaded all the files at once without sorting the tracks nor aligning the clips in the desired sequence along the timeline. This time, I started by loading the audio files one at a time in the sequence I wanted them from top to bottom. As each file is imported, it is placed into a new track below any existing tracks. The following image shows the files loaded all at one time in the sequence they appear in the working directory. + +![Tracks loaded in Audacity][8] + +There is a timeline across the top of the window's track area. There is also a scroll bar at the bottom of the window, so you can scroll along the timeline when the tracks extend beyond the width of the Audacity window. There is also a vertical scroll bar if there are more tracks than fit into the window. 
+ +Notice the names in the upper-left corner of the waveform section of each track—they are the file names of each track without the extension. These are not there by default, but I find them helpful. To display these names, use the Menu bar to select **Edit > Preferences** and place a check in the **Show Audio Track Name As Overlay** box. + +### Order your audio clips + +Once you have some files loaded into the Audacity workspace, you can start manipulating them. To order your audio clips, select one and use the **Time-Shift** tool (↔︎) to slide them horizontally along the tracks; continue doing this until all the clips line up end to end in the order you want them. Note that the clip you are moving is book-ended by a pair of vertical alignment lines. When they line up perfectly, the end lines of the two aligned tracks change color to alert you. + +You can hover the mouse pointer over the tool icons in the Audacity toolbars to see a pop-up that displays the name of that tool. This helps beginners understand what each tool does. + +![Audacity toolbox][9] + +Here, the **Selection** tool** **is selected in the Audacity toolbar. The **Time-Shift** tool is second from the left on the bottom row. + +The following image shows what happens when you slide the audio clips into place on the project timeline without sorting the tracks into a particular sequence. This may not be optimal for how you like to work. It is not for me. + +![Audio clips in Audacity][10] + +To remove segments of (or complete) audio clips, select them with the **Selection** tool—you can also select multiple adjacent tracks. Then you can press the **Delete** button on your keyboard to delete the selected segment(s). + +In the image above, you can see a vertical black line in track 1 and a vertical green line crossing all the tracks. These are the audio cursors that show the playback positions of a track or the entire project. Choose the **Selection** tool and click the desired position within a track, then click the **Play** button on the transport controls (in the upper-left of the Audacity window) to begin playback. Playback will continue past the end of the selected track and all the way to the end of the project. If tracks overlap on the timeline, they will play simultaneously. + +To begin playback immediately, click the desired starting point on the timeline. To play part of a track, hold down the Left mouse button to select a short segment of the track, and then click the **Play** button. The other transport buttons—Pause, Stop, and so—on are identified with universal icons and work as you would expect. + +You can also click the **Silence Audio Selection** button—the fifth button from the left on the **Edit** toolbar (shown below)—to completely silence a selected segment while leaving it in place for timing purposes. This is how I silenced a number of background clicks and noises. + +![Audacity edit tools][11] + +It took me a while to figure out how to sort the tracks vertically, and it turns out there are a few different ways to accomplish the task. + +You can use the track menu to reorder arrangement. Each track has its own Control Panel on the left side (shown below). The track drop-down Menu bar at the top of the Control Panel opens a menu that provides several track-sequencing options to move a track up, down, to the top, or to the bottom. 
+ +![Moving tracks in Audacity][12] + +The items to move a track up or down move the track one position at a time, so you have to select it as many times as necessary to get the track in the desired position. + +To drag and drop tracks, you must click on the space occupied by the track details. In this screenshot, that's "Mono, 48000Hz 32 bit float". It can be tricky, because if you click too high, you adjust the panning (the left and right stereo position) and if you click too low, you may collapse or select the track. Target the "Mono" or "Stereo" label (whatever your track happens to be) label, and then click and drag the track up or down to reposition it in your workspace. + +### Apply amplification and noise reduction effects + +Some tracks need the overall volume to be adjusted. I used the **Selection** tool to double-click and select the entire track (but you could also select a portion of a track). On the Menu bar, select **Effect > Amplify** to display a small dialog window. You can use the slider or enter a value to specify the amount of amplification. Negative numbers decrease the volume. If you try to increase the volume, you need to place a check in the **Allow Clipping** box. Then click OK. + +I found that amplification is a bit tricky; it is easy to use too much or too little. Start by using small numbers to see the results. You can always use **Ctrl+Z** to undo your changes if you go too far in either direction. + +Another effect I find useful is noise reduction. One of the tracks was recorded with a noticeable 60Hz hum, which is usually due to poor grounding of the microphone or recorder. Fortunately, there were only several seconds of hum and no other sound at the beginning of the recording. + +Applying the noise reduction effect was a little confusing at first. First, I selected a few samples of the humming sound to tell Audacity what sound needed to be reduced, and then I navigated to **Effect > Noise Reduction**. This opens the **Noise Reduction** dialog. I clicked on the **Get Noise Profile** button in the Step 1 section of the dialog, which uses the selected sample as the basis for a set of filter presets. After it gathers the selected sample, though, the dialog disappeared (this is by design). I re-opened the dialog, used the slider to select the noise reduction level in decibels (I set it to 15dB and left the other sliders alone), and then clicked **OK**. + +This worked well—you can hear the residual hum only if you know it is there. I need to experiment with this some more, but since the result was acceptable, so I did not play with the settings any further. + +The reason the dialog box closes after getting a noise profile is actually for the sake of expediency. If you're processing many tracks or segments of audio, each with a different noise profile, you can open the **Noise Reduction** effect, get the current noise profile, and then select the audio you want to clean. You can then run the Noise Reduction filter using **Ctrl+R**, the keyboard shortcut for running the most recent filter. Instead of getting a new noise profile, however, Audacity uses the one you've just stored, and performs the filter instead. This way, you can get a sample with a few clicks but clean lots of audio with just one keyboard shortcut. + +### And so much more + +I have only worked with a few of the basics and have not even begun to scratch the surface of Audacity. 
I can already see that it has so many more features and tools that will enable me to create even more professional-sounding projects. + +For example, in addition to working with existing audio files, Audacity can make recordings from line inputs, the desktop sound stream, and microphone inputs. It can do special effects like fade in and out and cross-fades. And I have not even tried to figure out what many of the other effects and tools are capable of. + +I have a feeling I will need to learn more in the near future. Hopefully, this story of my very limited experience with Audacity will prompt you to check it out. For much more information, you can find the [Audacity manual][13] online. + +Using Audacity, you can quickly clean up audio file so that any background noise becomes tolerable. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/audacity + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/music-birds-recording-520.png?itok=UoM7brl0 (Bird singing and music notes) +[2]: https://www.redhat.com/sysadmin/ +[3]: https://www.audacityteam.org/ +[4]: https://opensource.com/article/17/12/using-sudo-delegate +[5]: https://opensource.com/education/16/9/audacity-classroom +[6]: https://manual.audacityteam.org/man/installing_and_updating_audacity_on_linux.html#linlame +[7]: https://manual.audacityteam.org/man/installing_and_updating_audacity_on_linux.html#linff +[8]: https://opensource.com/sites/default/files/uploads/audacity1_tracksloaded.png (Tracks loaded in Audacity) +[9]: https://opensource.com/sites/default/files/uploads/audacity2_tools.png (Audacity toolbox) +[10]: https://opensource.com/sites/default/files/uploads/audacity3_audioclips.png (Audio clips in Audacity) +[11]: https://opensource.com/sites/default/files/uploads/audacity4_edittoolbar.png (Audacity edit tools) +[12]: https://opensource.com/sites/default/files/uploads/audacity5_trackmovement.png (Moving tracks in Audacity) +[13]: https://manual.audacityteam.org/# From 79f7456fc5834d2761be91d0ba272b041d7ef035 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Fri, 1 May 2020 01:11:55 +0800 Subject: [PATCH 072/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200430=20Indust?= =?UTF-8?q?rial=20robots=20could=20'eat=20metal'=20to=20power=20themselves?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/talk/20200430 Industrial robots could -eat metal- to power themselves.md --- ...s could -eat metal- to power themselves.md | 67 +++++++++++++++++++ 1 file changed, 67 insertions(+) create mode 100644 sources/talk/20200430 Industrial robots could -eat metal- to power themselves.md diff --git a/sources/talk/20200430 Industrial robots could -eat metal- to power themselves.md b/sources/talk/20200430 Industrial robots could -eat metal- to power themselves.md new file mode 100644 index 0000000000..45995a9660 --- /dev/null +++ b/sources/talk/20200430 Industrial robots could -eat metal- to power themselves.md @@ -0,0 +1,67 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Industrial robots could 'eat metal' to power themselves) +[#]: via: 
(https://www.networkworld.com/article/3540194/industrial-robots-could-eat-metal-to-power-themselves.html) +[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/) + +Industrial robots could 'eat metal' to power themselves +====== +Scavenging energy by foraging for metal could power Internet of Things electronics and robots, suggest researchers at University of Pennsylvania. +Jiraroj Praditcharoenkul / Getty Images + +A fundamental manufacturing shift is on the horizon, some say. It's where robots run all elements of our future factories. The machines will operate using brain-copying artificial intelligence and handle not only manufacturing processes, but also supply-chain logistics, planning, and other roles formerly performed by humans. + +This vision of the future anticipates an industrial workplace where Internet-connected machines will mimic humans, yet do the jobs more precisely, faster and cheaper than humans. + +And the human-copying element may not end there. Researchers at the University of Pennsylvania are suggesting that robots could end up eating like humans, too. + +Robots will "eat metal for energy," according to a [news article][1] published in Medium. The researchers' vision for a "metal-air scavenger" could solve one of the quandaries of future IoT-enabled factories. That quandary is how to power a device that moves without adding mass and weight, as one does by adding bulky batteries. + +The answer, according to the University of Pennsylvania researchers, is to try to electromechanically forage for energy from the metal surfaces that a robot or IoT device traverses, thus converting material garnered, using a chemical reaction, into power. + +"Robots and electronics [would] extract energy from large volumes of energy dense material without having to carry the material on-board," the researchers say in a paper they've published in [ACS Energy Letters][2]. + +It would be like "eating metal, breaking down its chemical bonds for energy like humans do with food." Batteries work by repeatedly breaking and creating chemical bonds. + +The research references the dichotomy between computing and power storage. Computing is well suited to miniaturization, and processers have been progressively reduced in size while performance has increased, but battery storage hasn't. You need a bigger battery for more energy. + +Even if swarming, industrial robots became the size of insects (I've [written about][3] the possibility), there's an issue powering the nano devices – the required size of the power source would defeat the object of the miniaturization. The battery alone could crush the device, and even if it didn't, the machine would need excessive amounts of energy to move, because of the battery mass. This conundrum is one of the reasons there's an emphasis in IoT development to find ways to harvest energy ambiently. + +However, with ambient power – such as is found in [solar or potentially magnetism][4], for example – power density comes into play. That's where the harvesting technology can't pull enough energy out of the environment, or it does it so slowly that it's not as power-effective as traditional batteries. + +Enter the metal-eating robot. The University of Pennsylvania researchers' form of harvesting efficiently replicates a power-dense battery. Metal is more dense than the battery chemistry. + +The group performs their foraging energy production with a hydrogel electrolyte sponge towed by the robot. 
It uses a cathode, dragged over the surface, to extract amperages from the metal fuel source, such as steel or aluminum. + +"Our [metal-air scavenger] has a power density that's ten times better than the best harvesters, to the point that we can compete against batteries," said James Pikul, an assistant professor in the University of Pennsylvania's Department of Mechanical Engineering and Applied Mechanics and one of the paper authors, in the Medium post. "It's using battery chemistry, but doesn't have the associated weight, because it's taking those chemicals from the environment." + +This method is also potentially better than existing lithium-ion battery chemistry, according to Pikul. + +"One day, a robot that needs to recharge its batteries will just need to find some aluminum to 'eat,'" Pikul said. + +The robot, although ultimately likely to be a better worker than the human, is a messy eater. As it oxidizes the metal it passes over, it leaves a "microscopic layer of rust in its wake," according to the article. + +Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3540194/industrial-robots-could-eat-metal-to-power-themselves.html + +作者:[Patrick Nelson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Patrick-Nelson/ +[b]: https://github.com/lujun9972 +[1]: https://medium.com/penn-engineering/penn-engineerings-new-scavenger-technology-allows-robots-to-eat-metal-for-energy-bd12f3b83893 +[2]: https://pubs.acs.org/doi/10.1021/acsenergylett.9b02661 +[3]: https://www.networkworld.com/article/3429200/self-organizing-micro-robots-may-soon-swarm-the-industrial-iot.html +[4]: https://www.networkworld.com/article/3536697/harvesting-ambient-energy-will-power-iot-scientists-say.html +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world From 4018bc654c3bb6e82e7ef5953a04fd966701aa93 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 1 May 2020 08:21:17 +0800 Subject: [PATCH 073/178] APL --- .../tech/20200430 10 ways to analyze binary files on Linux.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200430 10 ways to analyze binary files on Linux.md b/sources/tech/20200430 10 ways to analyze binary files on Linux.md index 7e05f87298..f43ae83efa 100644 --- a/sources/tech/20200430 10 ways to analyze binary files on Linux.md +++ b/sources/tech/20200430 10 ways to analyze binary files on Linux.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 533ffb6e1152b567d4628e642b7f468f125f01c6 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 1 May 2020 09:10:29 +0800 Subject: [PATCH 074/178] TSL --- ...0 ways to analyze binary files on Linux.md | 321 ------------------ ...0 ways to analyze binary files on Linux.md | 314 +++++++++++++++++ 2 files changed, 314 insertions(+), 321 deletions(-) delete mode 100644 sources/tech/20200430 10 ways to analyze binary files on Linux.md create mode 100644 translated/tech/20200430 10 ways to analyze binary files on Linux.md diff --git a/sources/tech/20200430 10 ways to analyze binary files on Linux.md b/sources/tech/20200430 10 
ways to analyze binary files on Linux.md deleted file mode 100644 index f43ae83efa..0000000000 --- a/sources/tech/20200430 10 ways to analyze binary files on Linux.md +++ /dev/null @@ -1,321 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (10 ways to analyze binary files on Linux) -[#]: via: (https://opensource.com/article/20/4/linux-binary-analysis) -[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe) - -10 ways to analyze binary files on Linux -====== -These simple commands and tools can help you sail through the task of -analyzing binary files. -![Tux with binary code background][1] - -"There are 10 types of people in this world: those who understand binary and those who don't." - -We work with binaries daily, yet we understand so little about them. By binaries, I mean the executable files that you run daily, right from your command line tools to full-fledged applications. - -Linux provides a rich set of tools that makes analyzing binaries a breeze! Whatever might be your job role, if you are working on Linux, knowing the basics about these tools will help you understand your system better. - -In this article, we will cover some of the most popular of these Linux tools and commands, most of which will be available natively as part of your Linux distribution. If not, you can always use your package manager to install and explore them. Remember: learning to use the right tool at the right occasion requires plenty of patience and practice. - -### file - -What it does: Help to determine the file type. - -This will be your starting point for binary analysis. We work with files daily. Not everything is an executable type; there is a whole wide range of file types out there. Before you start, you need to understand the type of file that is being analyzed. Is it a binary file, a library file, an ASCII text file, a video file, a picture file, a PDF, a data file, etc.? - -The **file** command will help you identify the exact file type that you are dealing with. - - -``` -$ file /bin/ls -/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=94943a89d17e9d373b2794dcb1f7e38c95b66c86, stripped -$ -$ file /etc/passwd -/etc/passwd: ASCII text -$ -``` - -### ldd - -What it does: Print shared object dependencies. - -If you have already used the **file** command above on an executable binary, you can't miss the "dynamically linked" message in the output. What does it mean? - -When software is being developed, we try not to reinvent the wheel. There are a set of common tasks that most software programs require, like printing output or reading from standard in, or opening files, etc. All of these common tasks are abstracted away in a set of common functions that everybody can then use instead of writing their own variants. These common functions are put in a library called **libc** or **glibc**. - -How does one find which libraries the executable is dependent on? That’s where **ldd** command comes into the picture. Running it against a dynamically linked binary shows all its dependent libraries and their paths. 
- - -``` -$ ldd /bin/ls -        linux-vdso.so.1 =>  (0x00007ffef5ba1000) -        libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fea9f854000) -        libcap.so.2 => /lib64/libcap.so.2 (0x00007fea9f64f000) -        libacl.so.1 => /lib64/libacl.so.1 (0x00007fea9f446000) -        libc.so.6 => /lib64/libc.so.6 (0x00007fea9f079000) -        libpcre.so.1 => /lib64/libpcre.so.1 (0x00007fea9ee17000) -        libdl.so.2 => /lib64/libdl.so.2 (0x00007fea9ec13000) -        /lib64/ld-linux-x86-64.so.2 (0x00007fea9fa7b000) -        libattr.so.1 => /lib64/libattr.so.1 (0x00007fea9ea0e000) -        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fea9e7f2000) -$ -``` - -### ltrace - -What it does: A library call tracer. - -We now know how to find the libraries an executable program is dependent on using the **ldd** command. However, a library can contain hundreds of functions. Out of those hundreds, which are the actual functions being used by our binary? - -The **ltrace** command displays all the functions that are being called at run time from the library. In the below example, you can see the function names being called, along with the arguments being passed to that function. You can also see what was returned by those functions on the far right side of the output. - - -``` -$ ltrace ls -__libc_start_main(0x4028c0, 1, 0x7ffd94023b88, 0x412950 <unfinished ...> -strrchr("ls", '/')                                                                  = nil -setlocale(LC_ALL, "")                                                               = "en_US.UTF-8" -bindtextdomain("coreutils", "/usr/share/locale")                                    = "/usr/share/locale" -textdomain("coreutils")                                                             = "coreutils" -__cxa_atexit(0x40a930, 0, 0, 0x736c6974756572)                                      = 0 -isatty(1)                                                                           = 1 -getenv("QUOTING_STYLE")                                                             = nil -getenv("COLUMNS")                                                                   = nil -ioctl(1, 21523, 0x7ffd94023a50)                                                     = 0 -<< snip >> -fflush(0x7ff7baae61c0)                                                              = 0 -fclose(0x7ff7baae61c0)                                                              = 0 -+++ exited (status 0) +++ -$ -``` - -### Hexdump - -What it does: Display file contents in ASCII, decimal, hexadecimal, or octal. - -Often, it happens that you open a file with an application that doesn’t know what to do with that file. Try opening an executable file or a video file using vim; all you will see is gibberish thrown on the screen. - -Opening unknown files in Hexdump helps you see what exactly the file contains. You can also choose to see the ASCII representation of the data present in the file using some command-line options. This might help give you some clues to what kind of file it is. 
- - -``` -$ hexdump -C /bin/ls | head -00000000  7f 45 4c 46 02 01 01 00  00 00 00 00 00 00 00 00  |.ELF............| -00000010  02 00 3e 00 01 00 00 00  d4 42 40 00 00 00 00 00  |..>......B@.....| -00000020  40 00 00 00 00 00 00 00  f0 c3 01 00 00 00 00 00  |@...............| -00000030  00 00 00 00 40 00 38 00  09 00 40 00 1f 00 1e 00  |....@.8...@.....| -00000040  06 00 00 00 05 00 00 00  40 00 00 00 00 00 00 00  |........@.......| -00000050  40 00 40 00 00 00 00 00  40 00 40 00 00 00 00 00  |@.@.....@.@.....| -00000060  f8 01 00 00 00 00 00 00  f8 01 00 00 00 00 00 00  |................| -00000070  08 00 00 00 00 00 00 00  03 00 00 00 04 00 00 00  |................| -00000080  38 02 00 00 00 00 00 00  38 02 40 00 00 00 00 00  |8.......8.@.....| -00000090  38 02 40 00 00 00 00 00  1c 00 00 00 00 00 00 00  |8.@.............| -$ -``` - -### strings - -What it does: Print the strings of printable characters in files. - -If Hexdump seems a bit like overkill for your use case and you are simply looking for printable characters within a binary, you can use the **strings** command. - -When software is being developed, a variety of text/ASCII messages are added to it, like printing info messages, debugging info, help messages, errors, and so on. Provided all this information is present in the binary, it will be dumped to screen using **strings**. - - -``` -`$ strings /bin/ls` -``` - -### readelf - -What it does: Display information about ELF files. - -ELF (Executable and Linkable File Format) is the dominant file format for executable or binaries, not just on Linux but a variety of UNIX systems as well. If you have utilized tools like file command, which tells you that the file is in ELF format, the next logical step will be to use the **readelf** command and its various options to analyze the file further. - -Having a reference of the actual ELF specification handy when using **readelf** can be very useful. You can find the specification [here][2].  - - -``` -$ readelf -h /bin/ls -ELF Header: -  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 -  Class:                             ELF64 -  Data:                              2's complement, little endian -  Version:                           1 (current) -  OS/ABI:                            UNIX - System V -  ABI Version:                       0 -  Type:                              EXEC (Executable file) -  Machine:                           Advanced Micro Devices X86-64 -  Version:                           0x1 -  Entry point address:               0x4042d4 -  Start of program headers:          64 (bytes into file) -  Start of section headers:          115696 (bytes into file) -  Flags:                             0x0 -  Size of this header:               64 (bytes) -  Size of program headers:           56 (bytes) -  Number of program headers:         9 -  Size of section headers:           64 (bytes) -  Number of section headers:         31 -  Section header string table index: 30 -$ -``` - -### objdump - -What it does: Display information from an object file. - -Binaries are created when you write source code which gets compiled using a tool called, unsurprisingly, a compiler. This compiler generates machine language instructions equivalent to the source code, which can then be executed by the CPU to perform a given task. This machine language code can be interpreted via mnemonics called an assembly language. 
An assembly language is a set of instructions that help you understand the operations being performed by the program and ultimately being executed on the CPU. - -**objdump** utility reads the binary or executable file and dumps the assembly language instructions on the screen. Knowledge of assembly is critical to understand the output of the **objdump** command. - -Remember: assembly language is architecture-specific. - - -``` -$ objdump -d /bin/ls | head - -/bin/ls:     file format elf64-x86-64 - -Disassembly of section .init: - -0000000000402150 <_init@@Base>: -  402150:       48 83 ec 08             sub    $0x8,%rsp -  402154:       48 8b 05 6d 8e 21 00    mov    0x218e6d(%rip),%rax        # 61afc8 <__gmon_start__> -  40215b:       48 85 c0                test   %rax,%rax -$ -``` - -### strace - -What it does: Trace system calls and signals. - -If you have used **ltrace**, mentioned earlier, think of **strace** being similar. The only difference is that, instead of calling a library, the **strace** utility traces system calls. System calls are how you interface with the kernel to get work done. - -To give an example, if you want to print something to the screen, you will use the **printf** or **puts** function from the standard library **libc**; however, under the hood, ultimately, a system call named **write** will be made to actually print something to the screen. - - -``` -$ strace -f /bin/ls -execve("/bin/ls", ["/bin/ls"], [/* 17 vars */]) = 0 -brk(NULL)                               = 0x686000 -mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f967956a000 -access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory) -open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 -fstat(3, {st_mode=S_IFREG|0644, st_size=40661, ...}) = 0 -mmap(NULL, 40661, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9679560000 -close(3)                                = 0 -<< snip >> -fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 1), ...}) = 0 -mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9679569000 -write(1, "R2  RH\n", 7R2  RH -)                 = 7 -close(1)                                = 0 -munmap(0x7f9679569000, 4096)            = 0 -close(2)                                = 0 -exit_group(0)                           = ? -+++ exited with 0 +++ -$ -``` - -### nm - -What it does: List symbols from object files. - -If you are working with a binary that is not stripped, the **nm** command will provide you with the valuable information that was embedded in the binary during compilation. **nm** can help you identify variables and functions from the binary. You can imagine how useful this would be if you don't have access to the source code of the binary being analyzed. - -To showcase **nm**, we will quickly write a small program and compile it with the **-g** option, and we will also see that the binary is not stripped by using the file command. 
- - -``` -$ cat hello.c -#include <stdio.h> - -int main() { -    printf("Hello world!"); -    return 0; -} -$ -$ gcc -g hello.c -o hello -$ -$ file hello -hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=3de46c8efb98bce4ad525d3328121568ba3d8a5d, not stripped -$ -$ ./hello -Hello world!$ -$ - -$ nm hello | tail -0000000000600e20 d __JCR_END__ -0000000000600e20 d __JCR_LIST__ -00000000004005b0 T __libc_csu_fini -0000000000400540 T __libc_csu_init -                 U __libc_start_main@@GLIBC_2.2.5 -000000000040051d T main -                 U printf@@GLIBC_2.2.5 -0000000000400490 t register_tm_clones -0000000000400430 T _start -0000000000601030 D __TMC_END__ -$ -``` - -### gdb - -What it does: The GNU debugger. - -Well, not everything in the binary can be statically analyzed. We did execute some commands which ran the binary, like **ltrace** and **strace**; however, software consists of a variety of conditions that could lead to various alternate paths being executed. - -The only way to analyze these paths is at run time by having the ability to stop or pause the program at any given location and being able to analyze information and then move further down. -That is where debuggers come into the picture, and on Linux, **gdb** is the defacto debugger. It helps you load a program, set breakpoints at specific places, analyze memory and CPU register, and do much more. It complements the other tools mentioned above and allows you to do much more runtime analysis. - -One thing to notice is, once you load a program using **gdb**, you will be presented with its own **(gdb)** prompt. All further commands will be run in this **gdb** command prompt until you exit. - -We will use the "hello" program that we compiled earlier and use **gdb** to see how it works. - - -``` -$ gdb -q ./hello -Reading symbols from /home/flash/hello...done. -(gdb) break main -Breakpoint 1 at 0x400521: file hello.c, line 4. -(gdb) info break -Num     Type           Disp Enb Address            What -1       breakpoint     keep y   0x0000000000400521 in main at hello.c:4 -(gdb) run -Starting program: /home/flash/./hello - -Breakpoint 1, main () at hello.c:4 -4           printf("Hello world!"); -Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7_6.6.x86_64 -(gdb) bt -#0  main () at hello.c:4 -(gdb) c -Continuing. -Hello world![Inferior 1 (process 29620) exited normally] -(gdb) q -$ -``` - -### Conclusion - -Once you are comfortable with using these native Linux binary analysis tools and understanding the output they provide, you can then move onto more advanced and professional open source binary analysis tools like [radare2][3]. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/4/linux-binary-analysis - -作者:[Gaurav Kamathe][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/gkamathe -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tux_linux_penguin_code_binary.jpg?itok=TxGxW0KY (Tux with binary code background) -[2]: http://www.skyfree.org/linux/references/ELF_Format.pdf -[3]: https://github.com/radareorg/radare2 diff --git a/translated/tech/20200430 10 ways to analyze binary files on Linux.md b/translated/tech/20200430 10 ways to analyze binary files on Linux.md new file mode 100644 index 0000000000..80577dfbcf --- /dev/null +++ b/translated/tech/20200430 10 ways to analyze binary files on Linux.md @@ -0,0 +1,314 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (10 ways to analyze binary files on Linux) +[#]: via: (https://opensource.com/article/20/4/linux-binary-analysis) +[#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe) + +在 Linux 上分析二进制文件的 10 种方法 +====== + +> 这些简单的命令和工具可以帮助你轻松完成分析二进制文件的任务。 + +![Tux with binary code background][1] + +“这个世界上有 10 种人:懂二进制的人和不懂二进制的人。” + +我们每天都在与二进制文件打交道,但我们对二进制文件却知之甚少。我所说的二进制,是指你每天运行的可执行文件,从你的命令行工具到成熟的应用程序都是。 + +Linux 提供了一套丰富的工具,让分析二进制文件变得轻而易举。无论你的工作角色是什么,如果你在 Linux 上工作,了解这些工具的基本知识将帮助你更好地理解你的系统。 + +在这篇文章中,我们将介绍其中一些最流行的 Linux 工具和命令,其中大部分都是 Linux 发行版的一部分。如果没有找到,你可以随时使用你的软件包管理器来安装和探索它们。请记住:学习在正确的场合使用正确的工具需要大量的耐心和练习。 + +### file + +它的作用:帮助确定文件类型。 + +这将是你进行二进制分析的出发点。我们每天都在与文件打交道。并非所有的文件都是可执行类型,除此之外还有各种各样的文件类型。在你开始之前,你需要了解要分析的文件类型。它是二进制文件、库文件、ASCII 文本文件、视频文件、图片文件、PDF、数据文件等等。 + +`file` 命令将帮助你确定你所处理的文件类型。 + +``` +$ file /bin/ls +/bin/ls: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=94943a89d17e9d373b2794dcb1f7e38c95b66c86, stripped +$ +$ file /etc/passwd +/etc/passwd: ASCII text +$ +``` + +### ldd + +它的作用:打印共享对象依赖关系。 + +如果你已经在一个可执行的二进制文件上使用了上面的 `file` 命令,你肯定会看到输出中的“动态链接dynamically linked”信息。它是什么意思呢? + +在开发软件的时候,我们尽量不要重造轮子。有一组常见的任务是大多数软件程序需要的,比如打印输出或从标准输入/打开的文件中读取等。所有这些常见的任务都被抽象成一组通用的函数,然后每个人都可以使用,而不是写出自己的变体。这些常用的函数被放在一个叫 `libc` 或 `glibc` 的库中。 + +如何找到可执行程序所依赖的库?这就是 `ldd` 命令的作用了。对动态链接的二进制文件运行该命令会显示出所有依赖库和它们的路径。 + +``` +$ ldd /bin/ls + linux-vdso.so.1 => (0x00007ffef5ba1000) + libselinux.so.1 => /lib64/libselinux.so.1 (0x00007fea9f854000) + libcap.so.2 => /lib64/libcap.so.2 (0x00007fea9f64f000) + libacl.so.1 => /lib64/libacl.so.1 (0x00007fea9f446000) + libc.so.6 => /lib64/libc.so.6 (0x00007fea9f079000) + libpcre.so.1 => /lib64/libpcre.so.1 (0x00007fea9ee17000) + libdl.so.2 => /lib64/libdl.so.2 (0x00007fea9ec13000) + /lib64/ld-linux-x86-64.so.2 (0x00007fea9fa7b000) + libattr.so.1 => /lib64/libattr.so.1 (0x00007fea9ea0e000) + libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fea9e7f2000) +$ +``` + +### ltrace + +它的作用:一个库调用跟踪器。 + +我们现在知道如何使用 `ldd` 命令找到一个可执行程序所依赖的库。然而,一个库可以包含数百个函数。在这几百个函数中,哪些是我们的二进制程序正在使用的实际函数? 
+ +`ltrace` 命令可以显示在运行时从库中调用的所有函数。在下面的例子中,你可以看到被调用的函数名称,以及传递给该函数的参数。你也可以在输出的最右边看到这些函数返回的内容。 + +``` +$ ltrace ls +__libc_start_main(0x4028c0, 1, 0x7ffd94023b88, 0x412950 +strrchr("ls", '/') = nil +setlocale(LC_ALL, "") = "en_US.UTF-8" +bindtextdomain("coreutils", "/usr/share/locale") = "/usr/share/locale" +textdomain("coreutils") = "coreutils" +__cxa_atexit(0x40a930, 0, 0, 0x736c6974756572) = 0 +isatty(1) = 1 +getenv("QUOTING_STYLE") = nil +getenv("COLUMNS") = nil +ioctl(1, 21523, 0x7ffd94023a50) = 0 +<< snip >> +fflush(0x7ff7baae61c0) = 0 +fclose(0x7ff7baae61c0) = 0 ++++ exited (status 0) +++ +$ +``` + +### hexdump + +它的作用:以 ASCII、十进制、十六进制或八进制显示文件内容。 + +通常情况下,当你用一个应用程序打开一个文件,而它不知道如何处理该文件时,就会出现这种情况。尝试用 `vim` 打开一个可执行文件或视频文件,你会看到的只是屏幕上抛出的乱码。 + +在 `hexdump` 中打开未知文件,可以帮助你看到文件的具体内容。你也可以选择使用一些命令行选项来查看用 ASCII 表示的文件数据。这可能会帮助你了解到它是什么类型的文件。 + +``` +$ hexdump -C /bin/ls | head +00000000  7f 45 4c 46 02 01 01 00  00 00 00 00 00 00 00 00  |.ELF............| +00000010  02 00 3e 00 01 00 00 00  d4 42 40 00 00 00 00 00  |..>......B@.....| +00000020  40 00 00 00 00 00 00 00  f0 c3 01 00 00 00 00 00  |@...............| +00000030  00 00 00 00 40 00 38 00  09 00 40 00 1f 00 1e 00  |....@.8...@.....| +00000040  06 00 00 00 05 00 00 00  40 00 00 00 00 00 00 00  |........@.......| +00000050  40 00 40 00 00 00 00 00  40 00 40 00 00 00 00 00  |@.@.....@.@.....| +00000060  f8 01 00 00 00 00 00 00  f8 01 00 00 00 00 00 00  |................| +00000070  08 00 00 00 00 00 00 00  03 00 00 00 04 00 00 00  |................| +00000080  38 02 00 00 00 00 00 00  38 02 40 00 00 00 00 00  |8.......8.@.....| +00000090  38 02 40 00 00 00 00 00  1c 00 00 00 00 00 00 00  |8.@.............| +$ +``` + +### strings + +它的作用:打印文件中的可打印字符的字符串。 + +如果你只是在二进制中寻找可打印的字符,那么 `hexdump` 对于你的使用场景来说似乎有点矫枉过正,你可以使用 `strings` 命令。 + +在开发软件的时候,各种文本/ASCII 信息会被添加到其中,比如打印信息、调试信息、帮助信息、错误等。只要这些信息都存在于二进制文件中,就可以用 `strings` 命令将其转储到屏幕上。 + +``` +$ strings /bin/ls +``` + +### readelf + +它的作用:显示有关 ELF 文件的信息。 + +ELF(可执行和可链接文件格式Executable and Linkable File Format)是可执行文件或二进制文件的主流格式,不仅是 Linux 系统,也是各种 UNIX 系统的主流文件格式。如果你已经使用了像 `file` 命令这样的工具,它告诉你文件是 ELF 格式,那么下一步就是使用 `readelf` 命令和它的各种选项来进一步分析文件。 + +在使用 `readelf` 命令时,有一个实际的 ELF 规范的参考是非常有用的。你可以在[这里][2]找到规范。  + +``` +$ readelf -h /bin/ls +ELF Header: +  Magic:   7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00 +  Class:                             ELF64 +  Data:                              2's complement, little endian +  Version:                           1 (current) +  OS/ABI:                            UNIX - System V +  ABI Version:                       0 +  Type:                              EXEC (Executable file) +  Machine:                           Advanced Micro Devices X86-64 +  Version:                           0x1 +  Entry point address:               0x4042d4 +  Start of program headers:          64 (bytes into file) +  Start of section headers:          115696 (bytes into file) +  Flags:                             0x0 +  Size of this header:               64 (bytes) +  Size of program headers:           56 (bytes) +  Number of program headers:         9 +  Size of section headers:           64 (bytes) +  Number of section headers:         31 +  Section header string table index: 30 +$ +``` + +### objdump + +它的作用:从对象文件中显示信息。 + +二进制文件是通过你编写源码的创建的,这些源码会通过一个叫做编译器的工具进行编译。这个编译器会生成相当于源代码的机器语言指令,然后由 CPU 执行,以执行特定的任务。这些机器语言代码可以通过被称为汇编语言的助记词来解读。汇编语言是一组指令,它可以帮助你理解由程序所进行并最终在 CPU 上执行的操作。 + +`objdump` 实用程序读取二进制或可执行文件,并将汇编语言指令转储到屏幕上。汇编语言知识对于理解 `objdump` 命令的输出是至关重要的。 + +请记住:汇编语言是特定于体系结构的。 + +``` +$ objdump -d /bin/ls | head 
+ +/bin/ls: file format elf64-x86-64 + + +Disassembly of section .init: + +0000000000402150 <_init@@Base>: + 402150: 48 83 ec 08 sub $0x8,%rsp + 402154: 48 8b 05 6d 8e 21 00 mov 0x218e6d(%rip),%rax # 61afc8 <__gmon_start__> + 40215b: 48 85 c0 test %rax,%rax +$ +``` + +### strace + +它的作用:跟踪系统调用和信号。 + +如果你用过前面提到的 `ltrace`,那就把 `strace` 想成是类似的。唯一的区别是,`strace` 工具不是追踪调用的库,而是追踪系统调用。系统调用是你与内核对接来完成工作的。 + +举个例子,如果你想把一些东西打印到屏幕上,你会使用标准库 `libc` 中的 `printf` 或 `puts` 函数;但是,在底层,最终会有一个名为 `write` 的系统调用来实际把东西打印到屏幕上。 + +``` +$ strace -f /bin/ls +execve("/bin/ls", ["/bin/ls"], [/* 17 vars */]) = 0 +brk(NULL) = 0x686000 +mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f967956a000 +access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) +open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 +fstat(3, {st_mode=S_IFREG|0644, st_size=40661, ...}) = 0 +mmap(NULL, 40661, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9679560000 +close(3) = 0 +<< snip >> +fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 1), ...}) = 0 +mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f9679569000 +write(1, "R2 RH\n", 7R2 RH +) = 7 +close(1) = 0 +munmap(0x7f9679569000, 4096) = 0 +close(2) = 0 +exit_group(0) = ? ++++ exited with 0 +++ +$ +``` + +### nm + +它的作用:列出对象文件中的符号。 + +如果你所使用的二进制文件没有被剥离,`nm` 命令将为你提供在编译过程中嵌入到二进制文件中的有价值的信息。`nm` 可以帮助你从二进制文件中识别变量和函数。你可以想象一下,如果你无法访问二进制文件的源代码,这将是多么有用。 + +为了展示 `nm`,我们快速编写了一个小程序,用 `-g` 选项编译,我们会看到这个二进制文件没有被剥离。 + +``` +$ cat hello.c +#include + +int main() { + printf("Hello world!"); + return 0; +} +$ +$ gcc -g hello.c -o hello +$ +$ file hello +hello: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=3de46c8efb98bce4ad525d3328121568ba3d8a5d, not stripped +$ +$ ./hello +Hello world!$ +$ + + +$ nm hello | tail +0000000000600e20 d __JCR_END__ +0000000000600e20 d __JCR_LIST__ +00000000004005b0 T __libc_csu_fini +0000000000400540 T __libc_csu_init + U __libc_start_main@@GLIBC_2.2.5 +000000000040051d T main + U printf@@GLIBC_2.2.5 +0000000000400490 t register_tm_clones +0000000000400430 T _start +0000000000601030 D __TMC_END__ +$ +``` + +### gdb + +它的作用:GNU 调试器。 + +好吧,不是所有的二进制文件中的东西都可以进行静态分析。我们确实执行了一些运行二进制文件(进行分析)的命令,比如 `ltrace` 和 `strace`;然而,软件由各种条件组成,这些条件可能会导致执行不同的替代路径。 + +分析这些路径的唯一方法是在运行时环境,在任何给定的位置停止或暂停程序,并能够分析信息,然后再往下执行。 + +这就是调试器的作用,在 Linux 上,`gdb` 就是调试器的事实标准。它可以帮助你加载程序,在特定的地方设置断点,分析内存和 CPU 的寄存器,还有更多的功能。它是对上面提到的其他工具的补充,可以让你做更多的运行时分析。 + +有一点需要注意的是,一旦你使用 `gdb` 加载一个程序,你会看到它自己的 `(gdb)` 提示符。所有进一步的命令都将在这个 `gdb` 命令提示符中运行,直到你退出。 + +我们将使用我们之前编译的 `hello` 程序,使用 `gdb` 来看看它的工作原理。 + +``` +$ gdb -q ./hello +Reading symbols from /home/flash/hello...done. +(gdb) break main +Breakpoint 1 at 0x400521: file hello.c, line 4. +(gdb) info break +Num Type Disp Enb Address What +1 breakpoint keep y 0x0000000000400521 in main at hello.c:4 +(gdb) run +Starting program: /home/flash/./hello + +Breakpoint 1, main () at hello.c:4 +4 printf("Hello world!"); +Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7_6.6.x86_64 +(gdb) bt +#0 main () at hello.c:4 +(gdb) c +Continuing. 
+Hello world![Inferior 1 (process 29620) exited normally] +(gdb) q +$``` + +### 结语 + +一旦你习惯了使用这些原生的 Linux 二进制分析工具,并理解了它们提供的输出,你就可以转向更高级和专业的开源二进制分析工具,比如 [radare2][3]。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/linux-binary-analysis + +作者:[Gaurav Kamathe][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/gkamathe +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tux_linux_penguin_code_binary.jpg?itok=TxGxW0KY (Tux with binary code background) +[2]: http://www.skyfree.org/linux/references/ELF_Format.pdf +[3]: https://github.com/radareorg/radare2 From 4329ba8022652a95dac36fd077841d42f802d404 Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Fri, 1 May 2020 10:28:45 +0800 Subject: [PATCH 075/178] Update 20200401 The ins and outs of high-performance computing as a service.md --- ...uts of high-performance computing as a service.md | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md b/sources/talk/20200401 The ins and outs of high-performance computing as a service.md index 45d258e08c..0f60833aa4 100644 --- a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md +++ b/sources/talk/20200401 The ins and outs of high-performance computing as a service.md @@ -11,22 +11,24 @@ The ins and outs of high-performance computing as a service 高性能计算即 ====== HPC services can be a way to meet expanding supercomputing needs, but depending on the use case, they’re not necessarily better than on-premises supercomputers. Dell EMC -HPC服务可以满足不断扩展的超级计算需求,但根据使用情况,它们不一定比本地超级计算机更好。 戴尔EMC +高性能计算(HPC)服务可能是一种满足不断增长的超级计算需求的方式,但依赖于使用场景,它们不一定比使用本地超级计算机好。 Electronics on missiles and military helicopters need to survive extreme conditions. Before any of that physical hardware can be deployed, defense contractor McCormick Stevenson Corp. simulates the real-world conditions it will endure, relying on finite element analysis software like Ansys, which requires significant computing power. -导弹和军用直升机上的电子设备需要在极端条件下生存。 在部署任何物理硬件之前,国防承包商麦考密克·史蒂文森公司(McCormick Stevenson Corp.)都依赖于像Ansys这样的有限元素分析软件来模拟它会承受的现实条件,该软件需要强大的计算能力。 +戴尔EMC +导弹和军用直升机上的电子设备需要工作在极端条件下。国防承包商麦考密克·史蒂文森公司(McCormick Stevenson Corp.)在部署任何物理设备之前都会事先模拟它所能承受的真实条件。模拟依赖于像Ansys这样的有限元素分析软件,该软件需要强大的算力。 Then one day a few years ago, it unexpectedly ran up against its computing limits. 几年前的一天,它出乎意料地超出了计算极限。 [10 of the world's fastest supercomputers][1] +[世界上最快的10个超级计算机][1] "We had some jobs that would have overwhelmed the computers that we had in office," says Mike Krawczyk, principal engineer at McCormick Stevenson. "It did not make economic or schedule sense to buy a machine and install software." Instead, the company contracted with Rescale, which could sell them cycles on a supercomputer-class system for a tiny fraction of what they would've spent on new hardware. 
-麦考密克·史蒂文森(McCormick Stevenson)的首席工程师迈克·克劳奇奇(Mike Krawczyk)说:“我们从事的某些工作会使我们在办公室使用的计算机不堪重负。” “购买机器并安装软件在经济上或计划上都不合理。”取而代之的是,该公司与Rescale签约,可以在超级计算机级系统上向他们出售自行车的周期,而这只花费了他们在新硬件上花费的一小部分。 +麦考密克·史蒂文森(McCormick Stevenson)的首席工程师迈克·克劳奇奇(Mike Krawczyk)说:“我们的一些工作会使办公室的计算机不堪重负。” “购买机器并安装软件在经济上或计划上都不划算。” 相反,该公司与Rescale签约,从其购买在超级计算机系统上运行的周期(cycles),而这只花费了他们购买新硬件上所需的一小部分。 McCormick Stevenson had become an early adopter in a market known as supercomputing as a service or high-performance computing (HPC) as a service – two terms that are closely related. HPC is the application of supercomputers to computationally complex problems, while supercomputers are those computers at the cutting edge of processing capacity, according to the National Institute for Computational Sciences. -麦考密克·史蒂文森(McCormick Stevenson)已成为市场上的早期采用者,该市场被称为超级计算即服务或高性能计算(HPC)即服务–这两个紧密相关的术语。根据国家计算科学研究所的说法,HPC是超级计算机在计算复杂问题上的应用,而超级计算机是处理能力最先进的那些计算机。 +麦考密克·史蒂文森(McCormick Stevenson)已成为被称为超级计算即服务或高性能计算(HPC)即服务(两个紧密相关的术语)市场的早期采用者之一。根据国家计算科学研究所(的定义),HPC是超级计算机在计算复杂问题上的应用,而超级计算机是处理能力最先进的那些计算机。 Whatever it's called, these services are upending the traditional supercomputing market and bringing HPC power to customers who could never afford it before. But it's no panacea, and it's definitely not plug-and-play – at least not yet. -无论以何种方式称呼,这些服务都在颠覆传统的超级计算市场,并将HPC功能带给以前买不起的客户。但这不是万能药,而且绝对不是即插即用的,至少现在还没有。 +无论叫它什么,这些服务都在颠覆传统的超级计算市场,并将HPC能力带给以前买不起的客户。但这不是万能的,而且绝对不是即插即用的,至少现在还不是。 ### HPC services in practice HPC服务实践 From 548d4b7a2e4780b5773fe1a50ca43aed5a12cae0 Mon Sep 17 00:00:00 2001 From: Brooke Lau Date: Fri, 1 May 2020 11:13:36 +0800 Subject: [PATCH 076/178] APL 20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode --- ... Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md b/sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md index a3f3768533..5e2fb84c2b 100644 --- a/sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md +++ b/sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (lxbwolf) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 8f276cb52bf4914db548b9a9fdb5b9fac427ffb7 Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Fri, 1 May 2020 18:21:24 +0800 Subject: [PATCH 077/178] Update 20200401 The ins and outs of high-performance computing as a service.md --- ...high-performance computing as a service.md | 32 +++++++++---------- 1 file changed, 16 insertions(+), 16 deletions(-) diff --git a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md b/sources/talk/20200401 The ins and outs of high-performance computing as a service.md index 0f60833aa4..fc36ff7082 100644 --- a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md +++ b/sources/talk/20200401 The ins and outs of high-performance computing as a service.md @@ -33,52 +33,52 @@ Whatever it's called, these services are upending the traditional supercomputing ### HPC services in practice HPC服务实践 From the end user's perspective, HPC as a service resembles the batch-processing model that dates back to the early mainframe era. 
"We create an Ansys batch file and send that up, and after it runs, we pull down the result files and import them locally here," Krawczyk says. -从最终用户的角度来看,HPC即服务类似于可追溯到大型机早期的批处理模型。 “我们创建一个Ansys批处理文件并将其发送出去,然后运行它,我们将结果文件下拉并在此处本地导入,” Krawczyk说。 +从最终用户的角度来看,HPC即服务类似于早期大型机时代的批处理模型。 “我们创建一个Ansys批处理文件并将其发送过去,运行它,然后将结果文件取下来并在本地导入它们,” Krawczyk说。 Behind the scenes, cloud providers are running the supercomputing infrastructure in their own data centers – though that doesn't necessarily imply the sort of cutting-edge hardware you might be visualizing when you hear "supercomputer." As Dave Turek, Vice President of Technical Computing at IBM OpenPOWER, explains it, HPC services at their core are "a collection of servers that are strung together with an interconnect. You have the ability to invoke this virtual computing infrastructure that allows you to bring a lot of different servers to work together in a parallel construct to solve the problem when you present it." -在幕后,云提供商正在其自己的数据中心中运行超级计算基础结构,尽管这不一定意味着您在听到“超级计算机”时可能会看到的最先进的硬件。正如IBM OpenPOWER技术计算副总裁Dave Turek解释说的那样,HPC服务的核心是“由互连串在一起的服务器的集合。您可以调用此虚拟计算基础结构,使您能够当您提出问题时,许多不同的服务器可以并行构造在一起以解决问题。” +在HPC服务背后,云提供商在其自己的数据中心中运行超级计算基础设施,尽管这不一定意味着当您听到“超级计算机”时你就会看到最先进的硬件。正如IBM OpenPOWER计算技术副总裁Dave Turek解释的那样,HPC服务的核心是“相互互连的服务器集合。您可以调用该虚拟计算基础设施,它能够在您提出问题时,使得许多不同的服务器并行工作来解决问题。” [][2] Sounds simple in theory. But making it viable in practice required some chipping away at technical problems, according to Theo Lynn, Professor of Digital Business at Dublin City University. What differentiates ordinary computing from HPC is those interconnects – high-speed, low-latency, and expensive – so those needed to be brought to the world of cloud infrastructure. Storage performance and data transport also needed to be brought up to a level at least in the same ballpark as on-prem HPC before HPC services could be viable. -理论上听起来很简单。都柏林城市大学数字业务教授西奥·林恩(Theo Lynn)表示,但要使其在实践中可行,需要解决一些技术问题。普通计算与HPC的区别在于那些互连-高速,低延迟和昂贵-因此需要将这些互连引入云基础架构领域。在HPC服务可行之前,还至少需要将存储性能和数据传输提升到与本地HPC相同的水平。 +理论上听起来很简单。但都柏林城市大学数字业务教授西奥·林恩(Theo Lynn)表示,要使其在实践中可行,需要解决一些技术问题。普通计算与HPC的区别在于那些互连-高速的,低延时的而且昂贵的-因此需要将这些互连引入云基础设施领域。在HPC服务可行之前,至少需要将存储性能和数据传输也提升到与本地HPC相同的水平。 But Lynn says that some of the innovations that have helped HPC services take off have been more institutional than technological. In particular, "we are now seeing more and more traditional HPC applications adopting cloud-friendly licensing models – a barrier to adoption in the past." -但是林恩说,一些帮助高性能计算服务起飞的创新比技术更具有制度性。特别是,“我们现在看到越来越多的传统HPC应用程序采用云友好型许可模式-过去是采用这种模式的障碍。” +但是林恩说,一些制度创新相比技术更好的帮助了HPC服务的起飞。特别是,“我们现在看到越来越多的传统HPC应用采用云友好的许可模式-过去是采用这种模式的障碍。” And the economics have also shifted the potential customer base, he says. "Cloud service providers have opened up the market more by targeting low-end HPC buyers who couldn’t afford the capex associated with traditional HPC and opening up the market to new users. As the markets open up, the hyperscale economic model becomes more and more feasible, costs start coming down." -他说,经济也改变了潜在的客户群。 “云服务提供商通过针对那些负担不起与传统HPC相关的资本支出的低端HPC买家,并向新用户开放市场,进一步开放了市场。随着市场的开放,超大规模经济模型变得越来越多,更可行,成本开始下降。” +他说,经济也改变了潜在的客户群。 “云服务提供商通过向那些负担不起传统HPC所需的投资成本的低端HPC买家开放,进一步开放了市场。随着市场的开放,超大规模经济模型变得越来越多,更可行,成本开始下降。” -Avoid on-premises CAPEX** 避免内部资本支出** +Avoid on-premises CAPEX** 避免本地资本支出** ** HPC services are attractive to private-sector customers in the same fields where traditional supercomputing has long held sway. 
These include sectors that rely heavily on complex mathematical modeling, including defense contractors like McCormick Stevenson, along with oil and gas companies, financial services firms, and biotech companies. Dublin City University's Lynn adds that loosely coupled workloads are a particularly good use case, which meant that many early adopters used it for 3D image rendering and related applications. -在传统超级计算长期占据主导地位的相同领域,HPC服务对私营部门客户具有吸引力。这些行业包括严重依赖复杂数学模型的行业,包括麦考密克·史蒂文森(McCormick Stevenson)等国防承包商,以及石油和天然气公司,金融服务公司和生物技术公司。都柏林城市大学的Lynn补充说,松散耦合的工作负载是一个特别好的用例,这意味着许多早期采用者将其用于3D图像渲染和相关应用程序。 +HPC服务对有志于传统超级计算长期把持的领域的私营行业客户具有吸引力。这些客户包括严重依赖复杂数学模型的行业,包括麦考密克·史蒂文森(McCormick Stevenson)等国防承包商,以及油气公司,金融服务公司和生物技术公司。都柏林城市大学的Lynn补充说,松耦合的工作负载是一个特别好的用例,这意味着许多早期采用者将其用于3D图像渲染和相关应用。 But when does it make sense to consider HPC services over on-premises HPC? For hhpberlin, a German company that simulates smoke propagation in and fire damage to structural components of buildings, the move came as it outgrew its current resources. -但是,何时在本地HPC上考虑HPC服务才有意义?对于德国的hhpberlin公司,该公司模拟烟雾在建筑物中的传播和火灾对建筑物结构部件的破坏,此举是因为它超出了其现有资源。 +但是,何时考虑HPC服务而不是本地HPC才有意义?对于德国的模拟烟雾在建筑物中的蔓延和火灾对建筑物结构部件的破坏的hhpberlin公司来说,答案是在它超出了其现有资源时。 "For several years, we had run our own small cluster with up to 80 processor cores," says Susanne Kilian, hhpberlin's scientific head of numerical simulation. "With the rise in application complexity, however, this constellation has increasingly proven to be inadequate; the available capacity was not always sufficient to handle projects promptly." -hhpberlin数值模拟的科学负责人Susanne Kilian说:“几年来,我们一直在运行自己的小型集群,该集群具有多达80个处理器内核。” “但是,随着应用程序复杂性的提高,这种架构已经越来越不足够;可用容​​量并不总是足够迅速地处理项目。” +Hpberlin公司数值模拟的科学负责人Susanne Kilian说:“几年来,我们一直在运行自己的小型集群,该集群具有多达80个处理器核。” “但是,随着应用复杂性的提高,这种架构(constellation)已经越来越不足以支撑;可用容量并不总是够快速地处理项目。” But just spending money on a new cluster wasn't an ideal solution, she says: "In view of the size and administrative environment of our company, the necessity of constant maintenance of this cluster (regular software and hardware upgrades) turned out to be impractical. Plus, the number of required simulation projects is subject to significant fluctuations, such that the utilization of the cluster was not really predictable. Typically, phases with very intensive use alternate with phases with little to no use." By moving to an HPC service model, hhpberlin shed that excess capacity and the need to pay up front for upgrades. -她说:“但是,仅仅花钱买一个新的集群并不是一个理想的解决方案:鉴于我们公司的规模和管理环境,持续维护该集群(定期进行软件和硬件升级)的必要性非常明显。另外,所需的模拟项目的数量会出现很大的波动,因此群集的使用情况并不是真正可预测的。通常,使用率很高的阶段与很少使用或不使用的阶段交替出现。”通过转换为HPC服务模式,hhpberlin消除了过剩的容量,无需支付升级费用。 +她说:“但是,仅仅花钱买一个新的集群并不是一个理想的解决方案:鉴于我们公司的规模和管理环境,强制持续维护该集群(定期进行软件和硬件升级)是不现实的。另外,需要模拟的项目数量会出现很大的波动,因此集群的利用率并不是真正可预测的。通常,使用率很高的阶段与很少使用或不使用的阶段交替出现。”通过转换为HPC服务模式,hhpberlin释放了过剩的容量,并无需支付升级费用。 IBM's Turek explains the calculus that different companies go through while assessing their needs. For a biosciences startup with 30 people, "you need computing, but you really can't afford to have 15% of your staff dedicated to it. It's just like you might also say you don't want to have on-staff legal representation, so you'll get that as a service as well." For a bigger company, though, it comes down to weighing the operational expense of an HPC service against the capacity expense of buying an in-house supercomputer or HPC cluster. 
-IBM的Turek解释了不同公司在评估其需求时所经历的计算过程。对于拥有30名员工的生物科学初创公司来说,“您需要计算,但您实在负担不起15%的员工专心致志。这就像您可能还说过,您不想拥有在职法律代表,因此您也可以将其作为服务获得。”但是,对于一家较大的公司而言,归结为权衡HPC服务的运营费用与购买内部超级计算机或HPC集群的容量费用。 +IBM的Turek解释了不同公司在评估其需求时所经历的计算过程。对于拥有30名员工的生物科学初创公司来说,“您需要计算,但您实在负担不起15%的员工专门从事它。这就像您可能也说过,您不想拥有在职法律代表,因此您也可以通过服务获得它。”但是,对于一家较大的公司而言,最终归结为权衡HPC服务的运营费用与购买内部超级计算机或HPC集群的费用。 So far, those are the same sorts of arguments you'd have over adopting any cloud service. But the opex vs. capex dilemma can be weighted towards the former by some of the specifics of the HPC market. Supercomputers aren't commodity hardware like storage or x86 servers; they're very expensive, and technological advances can swiftly render them obsolete. As McCormick Stevenson's Krawczyk puts it, "It's like buying a car: as soon as you drive off the lot it starts to depreciate." And for many companies –especially larger and less nimble ones – the process of buying a supercomputer can get hopelessly bogged down. "You're caught up in planning issues, building issues, construction issues, training issues, and then you have to execute an RFP," says IBM's Turek. "You have to work through the CIO. You have to work with your internal customers to make sure there's continuity of service. It's a very, very complex process and not something that a lot of institutions are really excellent at executing." -到目前为止,这些都是您采用任何云服务时都会遇到的相同类型的争论。但是,可以通过HPC市场的某些细节将运营支出与资本支出的困境加权为前者。超级计算机不是诸如存储或x86服务器之类的商用硬件;它们非常昂贵,技术进步会很快使其过时。正如麦考密克·史蒂文森(McCormick Stevenson)的克拉维奇(Krawczyk)所说,“这就像在买车:开车一走,它就会开始贬值。”对于许多公司,尤其是规模较大,灵活性较差的公司,购买超级计算机的过程可能会陷入无望的泥潭。 IBM的Turek说:“您陷入了计划问题,建筑问题,施工问题,培训问题,然后必须执行RFP。” “您必须通过CIO进行工作。您必须与内部客户合作以确保服务的连续性。这是一个非常非常复杂的过程,并不是很多机构在执行方面都非常出色。” +到目前为止,这些都是您采用任何云服务时都会遇到的类似的争论。但是,可以HPC市场的某些特点将使得衡量运营支出与资本支出时选择前者。超级计算机不是诸如存储或x86服务器之类的商用硬件;它们非常昂贵,技术进步很快会使其过时。正如麦考密克·史蒂文森(McCormick Stevenson)的克拉维奇(Krawczyk)所说,“这就像买车:只要车一开走,它就会开始贬值。”对于许多公司,尤其是规模较大,灵活性较差的公司,购买超级计算机的过程可能会陷入无望的泥潭。 IBM的Turek说:“您陷入了计划问题,建筑问题,施工问题,培训问题,然后必须执行RFP。” “您必须得到CIO的支持。您必须与内部客户合作以确保服务的连续性。这是一个非常非常复杂的过程,并没有很多机构有非常出色的执行力。” -Once you choose to go down the services route for HPC, you'll find you get many of the advantages you expect from cloud services, particularly the ability to pay only for HPC power when you need it, which results in an efficient use of resources. Chirag Dekate, Senior Director and Analyst at Gartner, says bursty workloads, when you have short-term needs for high-performance computing, are a key use case driving adoption of HPC  services. -选择了HPC的服务路线后,您会发现您将从云服务中获得了许多期望,特别是仅在需要时才需要为HPC功能付费的能力,从而可以有效利用资源。 Gartner高级总监兼分析师Chirag Dekate表示,当您对高性能计算有短期需求时,突发性工作负载是推动HPC服务采用的关键用例。 +Once you choose to go down the services route for HPC, you'll find you get many of the advantages you expect from cloud services, particularly the ability to pay only for HPC power when you need it, which results in an efficient use of resources. Chirag Dekate, Senior Director and Analyst at Gartner, says bursty workloads, when you have short-term needs for high-performance computing, are a key use case driving adoption of HPC services. +一旦您选择了HPC服务的路线后,您会发现您会得到您期望从云服务中得到的许多好处,特别是仅在业务需要时才需付费的能力,从而可以带来资源的高效利用。 Gartner高级总监兼分析师Chirag Dekate表示,当您对高性能计算有短期需求时的突发性负载是推动选择HPC服务的关键用例。 "In the manufacturing industry, you tend to have a high peak of HPC activity around the product design stage," he says. "But once the product is designed, HPC resources are less utilized during the rest of the product-development cycle." In contrast, he says, "when you have large, long-running jobs, the economics of the cloud wear down." 
他说:“在制造业中,在产品设计阶段,HPC活动往往会达到很高的峰值。” “但是,一旦产品设计完成,在其余产品开发周期中,HPC资源的利用率就会降低。” 相比之下,他说:“当您拥有大量长期运行的工作时,云的经济就会逐渐减弱。” With clever system design, you can integrate those HPC-services bursts of activity with your own in-house conventional computing. Teresa Tung, managing director in Accenture Labs, gives an example: "Accessing HPC via APIs makes it seamless to mix with traditional computing. A traditional AI pipeline might have its training done on a high-end supercomputer at the stage when the model is being developed, but then the resulting trained model that runs predictions over and over would be deployed on other services in the cloud or even devices at the edge." -通过巧妙的系统设计,您可以将这些HPC服务突发事件与您自己的内部常规计算集成在一起。 埃森哲实验室常务董事董德丽举了一个例子:“通过API访问HPC可以无缝地与传统计算混合。在模型构建阶段,传统的AI管道可能会在高端超级计算机上进行培训。 开发出来的软件,但是最终生成的经过反复训练的模型将部署在云中的其他服务上,甚至部署在边缘设备上。” +通过巧妙的系统设计,您可以将这些HPC服务突发活动与您自己的内部常规计算集成在一起。 埃森哲(Accenture)实验室常务董事Teresa Tung举了一个例子:“通过API访问HPC可以无缝地与传统计算混合。在模型构建阶段,传统的AI流水线可能会在高端超级计算机上进行训练,但是最终经过反复按预期运行的训练好的模型将部署在云中的其他服务上,甚至部署在边缘设备上。” -### It's not for all use cases** +### It's not for all use cases** 它并不适合所有的应用场景 ** From e241cfe3b2eae7841b3adb1014e6cf9820b77c75 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 1 May 2020 19:37:10 +0800 Subject: [PATCH 078/178] PRF @wxy --- ...ython to visualize COVID-19 projections.md | 25 +++++++++---------- 1 file changed, 12 insertions(+), 13 deletions(-) diff --git a/translated/tech/20200421 Using Python to visualize COVID-19 projections.md b/translated/tech/20200421 Using Python to visualize COVID-19 projections.md index 3768686e8c..af82cbd349 100644 --- a/translated/tech/20200421 Using Python to visualize COVID-19 projections.md +++ b/translated/tech/20200421 Using Python to visualize COVID-19 projections.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Using Python to visualize COVID-19 projections) @@ -10,11 +10,11 @@ 使用 Python 来可视化 COVID-19 预测 ====== -> 我将演示如何使用开源库利用提供的全球病毒传播的开放数据来创建两个可视效果。 +> 我将演示如何利用提供的全球病毒传播的开放数据,使用开源库来创建两个可视效果。 -![Colorful sound wave graph][1] +![](https://img.linux.net.cn/data/attachment/album/202005/01/193624a2p2osojyf0yg4go.jpg) -使用 [Python][2] 和一些图形库,你可以预测出 COVID-19 确诊病例的总数,也可以显示一个国家(本文以印度为例)在给定日期的死亡总数。人们有时需要帮助解释和处理数据的意义,所以本文还演示了如何为五个国家创建一个动画横条形图,以显示按日期显示病例的变化。 +使用 [Python][2] 和一些图形库,你可以预测 COVID-19 确诊病例总数,也可以显示一个国家(本文以印度为例)在给定日期的死亡总数。人们有时需要帮助解释和处理数据的意义,所以本文还演示了如何为五个国家创建一个动画横条形图,以显示按日期显示病例的变化。 ### 印度的确诊病例和死亡人数预测 @@ -28,7 +28,6 @@ 直接将数据加载到 Pandas `DataFrame` 中。Pandas 提供了一个函数 `read_csv()`,它可以获取一个 URL 并返回一个 `DataFrame` 对象,如下所示。 - ``` import pycountry import plotly.express as px @@ -87,8 +86,8 @@ print(df_india.head(3)) 在这里,我们创建一个条形图。我们将把日期放在 X 轴上,把确诊的病例数和死亡人数放在 Y 轴上。这一部分的脚本有以下几个值得注意的地方。 - * `plt.rcParams["_figure.figure.figsize"_]=20,20` 这一行代码只适用于 Jupyter。所以如果你使用其他 IDE,请删除它。 - * 注意这行代码:`ax1 = plt.gca()`。为了确保两个图,即确诊病例和死亡病例的图都被绘制在同一个图上,我们需要给第二个图的 `ax` 对象。所以我们使用 `gca()` 来完成这个任务。(顺便说一下,`gca` 代表“get current axis”) + * `plt.rcParams["figure.figsize"]=20,20` 这一行代码只适用于 Jupyter。所以如果你使用其他 IDE,请删除它。 + * 注意这行代码:`ax1 = plt.gca()`。为了确保两个图,即确诊病例和死亡病例的图都被绘制在同一个图上,我们需要给第二个图的 `ax` 对象。所以我们使用 `gca()` 来完成这个任务。(顺便说一下,`gca` 代表 “获取当前坐标轴get current axis”) 完整的脚本如下所示。 @@ -120,9 +119,9 @@ plt.show() 整个脚本[可在 GitHub 上找到][4]。 -#### 为五个国家创建一个动画水平条形图 +### 为五个国家创建一个动画水平条形图 -关于 Jupyter 的注意事项:要在 Jupyter 中以动态动画的形式运行,而不是静态 png 的形式,你需要在单元格的开头添加一个神奇的命令,即: `%matplotlib notebook`。这将使图形保持动态,而不是显示静态的 png 文件,因此也可以显示动画。如果你在其他 IDE 上,请删除这一行。 +关于 Jupyter 的注意事项:要在 Jupyter 
中以动态动画的形式运行,而不是静态 png 的形式,你需要在单元格的开头添加一个神奇的命令,即: `%matplotlib notebook`。这将使图形保持动态,而不是显示为静态的 png 文件,因此也可以显示动画。如果你在其他 IDE 上,请删除这一行。 #### 1、下载数据 @@ -130,11 +129,11 @@ plt.show() #### 2、创建一个所有日期的列表 -如果你检查你下载的数据,你会发现它有一列 `Date`。现在,这一列对每个国家都有一个日期值。因此,同一个日期会出现多次。我们需要创建一个只具有唯一值的日期列表。这会用在我们条形图的 X 轴上。我们有一行代码,如 `list_dates = df[_'Date'_].unique()`。`unique()` 方法将只提取每个日期的唯一值。 +如果你检查你下载的数据,你会发现它有一列 `Date`。现在,这一列对每个国家都有一个日期值。因此,同一个日期会出现多次。我们需要创建一个只具有唯一值的日期列表。这会用在我们条形图的 X 轴上。我们有一行代码,如 `list_dates = df[‘Date’].unique()`。`unique()` 方法将只提取每个日期的唯一值。 #### 3、挑选五个国家并创建一个 `ax` 对象。 -做一个五个国家的名单。(你可以选择你喜欢的国家,甚至可以增加或减少国家的数量。)我也做了一个五个颜色的列表,每个国家的条形图的颜色对应一种。(如果你喜欢的话,也可以改一下。)这里有一行重要的代码是:`fig, ax = plt.subplots(figsize=(15, 8))`。这是创建一个 `ax` 对象所需要的。 +做一个五个国家的名单。(你可以选择你喜欢的国家,也可以增加或减少国家的数量。)我也做了一个五个颜色的列表,每个国家的条形图的颜色对应一种。(如果你喜欢的话,也可以改一下。)这里有一行重要的代码是:`fig, ax = plt.subplots(figsize=(15, 8))`。这是创建一个 `ax` 对象所需要的。 #### 4、编写回调函数 @@ -148,7 +147,7 @@ plt.show() ``` my_anim = animation.FuncAnimation(fig = fig, func = plot_bar, - frames= list_dates, blit=True, + frames = list_dates, blit = True, interval=20) ``` @@ -226,7 +225,7 @@ via: https://opensource.com/article/20/4/python-data-covid-19 作者:[AnuragGupta][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 47b50990ea772269e11c728ad114ba528ce89b0c Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 1 May 2020 19:39:56 +0800 Subject: [PATCH 079/178] PUB @wxy https://linux.cn/article-12172-1.html --- ...20200421 Using Python to visualize COVID-19 projections.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200421 Using Python to visualize COVID-19 projections.md (99%) diff --git a/translated/tech/20200421 Using Python to visualize COVID-19 projections.md b/published/20200421 Using Python to visualize COVID-19 projections.md similarity index 99% rename from translated/tech/20200421 Using Python to visualize COVID-19 projections.md rename to published/20200421 Using Python to visualize COVID-19 projections.md index af82cbd349..aaf5fc72d0 100644 --- a/translated/tech/20200421 Using Python to visualize COVID-19 projections.md +++ b/published/20200421 Using Python to visualize COVID-19 projections.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12172-1.html) [#]: subject: (Using Python to visualize COVID-19 projections) [#]: via: (https://opensource.com/article/20/4/python-data-covid-19) [#]: author: (AnuragGupta https://opensource.com/users/999anuraggupta) From bdaa3f168a7d09f38a496e2e09ccb4c277109d6a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 1 May 2020 23:19:22 +0800 Subject: [PATCH 080/178] PRF @geekpi --- ... 
Free and Open Source Editor Pixelorama.md | 42 +++++++++---------- 1 file changed, 20 insertions(+), 22 deletions(-) diff --git a/translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md b/translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md index 530517a482..93fa31a461 100644 --- a/translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md +++ b/translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md @@ -1,52 +1,50 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Create Stunning Pixel Art With Free and Open Source Editor Pixelorama) [#]: via: (https://itsfoss.com/pixelorama/) [#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) -使用免费和开源编辑器 Pixelorama 创建令人惊叹的像素艺术 +使用 Pixelorama 创建令人惊叹的像素艺术 ====== -_**简介:Pixelorama 是一个跨平台、免费和开源的 2D Sprite 编辑器。它在干净的用户界面中提供了创建像素艺术所有必要工具。**_ +> Pixelorama 是一个跨平台、自由开源的 2D 精灵编辑器。它在一个整洁的用户界面中提供了创建像素艺术所有必要工具。 ### Pixelorama:开源 Sprite 编辑器 -[Pixelorama][1] 是年轻游戏开发人员在 [Orama 互动][2]创建的工具。他们已经开发了一些 2D 游戏,其中一些使用了像素艺术。 +[Pixelorama][1] 是 [Orama 互动][2]公司的年轻游戏开发人员创建的一个工具。他们已经开发了一些 2D 游戏,其中一些使用了像素艺术。 -虽然 Orama 主要投入游戏开发,但开发人员也在创建实用工具,帮助他们(和其他人)创建这些游戏。 +虽然 Orama 主要从事于游戏开发,但开发人员也创建实用工具,帮助他们(和其他人)创建这些游戏。 -免费开源的 Sprite 编辑器,Pixelorama 是这样一个实用工具。它构建在 [Godot 引擎][3]之上,非常适合创作像素艺术。 +自由开源的精灵Sprite编辑器 Pixelorama 就是这样一个实用工具。它构建在 [Godot 引擎][3]之上,非常适合创作像素艺术。 ![Pixelorama screenshot][4] -你在上面的截图中看到像素艺术了吗?它是使用 Pixelorama 创建的。此视频显示创建上面图像的延时视频。 +你看到上面截图中的像素艺术了吗?它是使用 Pixelorama 创建的。这段视频展示了制作上述图片的时间推移视频。 ### Pixelorama 的功能 以下是 Pixelorama 提供的主要功能: - * 多种工具,如铅笔,擦除,填充桶颜色选择器等 - * 多层系统,你可以根据需要添加、删除、上下移动、克隆和合并尽可能多的层 + * 多种工具,如铅笔、橡皮擦、填充桶、取色器等 + * 多层系统,你可以根据需要添加、删除、上下移动、克隆和合并多个层 * 支持 Spritesheets * 导入图像并在 Pixelorama 中编辑它们 * 带有 [Onion Skinning][5] 的动画时间线 * 自定义画笔 * 以 Pixelorama 的自定义文件格式 .pxo 保存并打开你的项目 * 水平和垂直镜像绘图 - * 用于图样创建的磁贴模式 + * 用于创建图样的磁贴模式 * 拆分屏幕模式和迷你画布预览 * 使用鼠标滚轮缩放 - * 无限撤消和重做 + * 无限次撤消和重做 * 缩放、裁剪、翻转、旋转、颜色反转和去饱和图像 * 键盘快捷键 * 提供多种语言 * 支持 Linux、Windows 和 macOS - - ### 在 Linux 上安装 Pixelorama Pixelorama 提供 Snap 应用,如果你使用的是 Ubuntu,那么可以在软件中心找到它。 @@ -59,33 +57,33 @@ Pixelorama 提供 Snap 应用,如果你使用的是 Ubuntu,那么可以在 sudo snap install pixelorama ``` -如果你不想使用 Snap,不用担心。你可以从[他们的 GitHub 仓库]下载最新版本的 Pixelorama,[解压 zip 文件][9],你会看到一个可执行文件。授予此文件执行权限,并双击它运行应用。 +如果你不想使用 Snap,不用担心。你可以从[他们的 GitHub 仓库][8]下载最新版本的 Pixelorama,[解压 zip 文件][9],你会看到一个可执行文件。授予此文件执行权限,并双击它运行应用。 -[下载 Pixelorama][10] +- [下载 Pixelorama][10] -**总结** +### 总结 ![Pixelorama Welcome Screen][11] 在 Pixeloaram 的功能中,它说你可以导入图像并对其进行编辑。我想,这只是对某些类型的文件,因为当我尝试导入 PNG 或 JPEG 文件,程序崩溃了。 -然而,我可以像一个 3 岁的孩子那样随意涂鸦并制作像素艺术。我并没有深入艺术,但我认为这是一个[在 Linux 上对数字艺术家有用的工具][12]。 +然而,我可以像一个 3 岁的孩子那样随意涂鸦并制作像素艺术。我对艺术不是很感兴趣,但我认为这[对 Linux 上的数字艺术家是个有用的工具][12]。 我喜欢这样的想法:尽管是游戏开发人员,但他们创建的工具,可以帮助其他游戏开发人员和艺术家。这就是开源的精神。 -如果你喜欢这个项目,并且会使用它,请考虑通过捐赠来支持他们。[FOSS 捐赠了][13] 25 美元,以感谢他们的努力。 +如果你喜欢这个项目,并且会使用它,请考虑通过捐赠来支持他们。[It’s FOSS 捐赠了][13] 25 美元,以感谢他们的努力。 -[向 Pixelorama 捐赠(主要开发者的个人 Paypal 账户)][14] +- [向 Pixelorama 捐赠(主要开发者的个人 Paypal 账户)][14] -你喜欢 Pixelorama 吗?你是否使用其他开源 Sprite 编辑器?请随时在评论栏分享你的观点。 +你喜欢 Pixelorama 吗?你是否使用其他开源精灵编辑器?请随时在评论栏分享你的观点。 -------------------------------------------------------------------------------- - +via: https://itsfoss.com/pixelorama/ 作者:[Abhishek Prakash][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 8cc7dc8871ef32efb5770a18c488daf23bda8448 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 1 May 2020 23:22:46 +0800 Subject: [PATCH 081/178] PUB @geekpi https://linux.cn/article-12173-1.html --- ...g Pixel Art With Free and Open Source Editor Pixelorama.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md (98%) diff --git a/translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md b/published/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md similarity index 98% rename from translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md rename to published/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md index 93fa31a461..7a267a672e 100644 --- a/translated/tech/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md +++ b/published/20200317 Create Stunning Pixel Art With Free and Open Source Editor Pixelorama.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12173-1.html) [#]: subject: (Create Stunning Pixel Art With Free and Open Source Editor Pixelorama) [#]: via: (https://itsfoss.com/pixelorama/) [#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) From c7d4f93bfee652f09c6696f133f19ea562d97917 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sat, 2 May 2020 00:52:31 +0800 Subject: [PATCH 082/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200501=20Using?= =?UTF-8?q?=20mergerfs=20to=20increase=20your=20virtual=20storage?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200501 Using mergerfs to increase your virtual storage.md --- ...rgerfs to increase your virtual storage.md | 147 ++++++++++++++++++ 1 file changed, 147 insertions(+) create mode 100644 sources/tech/20200501 Using mergerfs to increase your virtual storage.md diff --git a/sources/tech/20200501 Using mergerfs to increase your virtual storage.md b/sources/tech/20200501 Using mergerfs to increase your virtual storage.md new file mode 100644 index 0000000000..06f26abf51 --- /dev/null +++ b/sources/tech/20200501 Using mergerfs to increase your virtual storage.md @@ -0,0 +1,147 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using mergerfs to increase your virtual storage) +[#]: via: (https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/) +[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/) + +Using mergerfs to increase your virtual storage +====== + +![][1] + +What happens if you have multiple disks or partitions that you’d like to use for a media project and you don’t want to lose any of your existing data, but you’d like to have everything located or mounted under one drive. That’s where mergerfs can come to your rescue! + +[mergerfs][2] is a union filesystem geared towards simplifying storage and management of files across numerous commodity storage devices. + +You will need to grab the latest RPM from their github page [here][3]. The releases for Fedora have _**fc**_ and the version number in the name. 
For example here is the version for Fedora 31: + +[mergerfs-2.29.0-1.fc31.x86_64.rpm][4] + +### Installing and configuring mergerfs + +Install the mergerfs package that you’ve downloaded using sudo: + +``` +$ sudo dnf install mergerfs-2.29.0-1.fc31.x86_64.rpm +``` + +You will now be able to mount multiple disks as one drive. This comes in handy if you have a media server and you’d like all of your media files to show up under one location. If you upload new files to your system, you can copy them to your mergerfs directory and mergerfs will automatically copy them to which ever drive has enough free space available. + +Here is an example to make it easier to understand: + +``` +$ df -hT | grep disk +/dev/sdb1 ext4 23M 386K 21M 2% /disk1 +/dev/sdc1 ext4 44M 1.1M 40M 3% /disk2 + +$ ls -l /disk1/Videos/ +total 1 +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv + +$ ls -l /disk2/Videos/ +total 2 +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv +-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv +``` + +In this example there are two disks mounted as _disk1_ and _disk2_. Both drives have a _**Videos**_ directory with existing files. + +Now we’re going to mount those drives using mergerfs to make them appear as one larger drive. + +``` +$ sudo mergerfs -o defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M /disk1:/disk2 /media +``` + +The mergerfs man page is quite extensive and complex so we’ll break down the options that were specified. + + * _defaults_: This will use the default settings unless specified. + * _allow_other_: allows users besides sudo or root to see the filesystem. + * _use_ino_: Causes mergerfs to supply file/directory inodes rather than libfuse. While not a default it is recommended it be enabled so that linked files share the same inode value. + * _category.create=mfs_: Spreads files out across your drives based on available space. + * _moveonenospc=true_: If enabled, if writing fails, a scan will be done looking for the drive with the most free space. + * _minfreespace=1M_: The minimum space value used. + * _disk1_: First hard drive. + * _disk2_: Second hard drive. + * _/media_: The directory folder where the drives are mounted. + + + +Here is what it looks like: + +``` +$ df -hT | grep disk +/dev/sdb1 ext4 23M 386K 21M 2% /disk1 +/dev/sdc1 ext4 44M 1.1M 40M 3% /disk2 + +$ df -hT | grep media +1:2 fuse.mergerfs 66M 1.4M 60M 3% /media +``` + +You can see that the mergerfs mount now shows a total capacity of 66M which is the combined total of the two hard drives. + +Continuing with the example: + +There is a 30Mb video called _Baby’s second Xmas.mkv_. Let’s copy it to the _/media_ folder which is the mergerfs mount. + +``` +$ ls -lh "Baby's second Xmas.mkv" +-rw-rw-r--. 1 curt curt 30M Apr 20 08:45 Baby's second Xmas.mkv +$ cp "Baby's second Xmas.mkv" /media/Videos/ +``` + +Here is the end result: + +``` +$ df -hT | grep disk +/dev/sdb1 ext4 23M 386K 21M 2% /disk1 +/dev/sdc1 ext4 44M 31M 9.8M 76% /disk2 + +$ df -hT | grep media +1:2 fuse.mergerfs 66M 31M 30M 51% /media +``` + +You can see from the disk space utilization that mergerfs automatically copied the file to disk2 because disk1 did not have enough free space. + +Here is a breakdown of all of the files: + +``` +$ ls -l /disk1/Videos/ +total 1 +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv + +$ ls -l /disk2/Videos/ +total 30003 +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv +-rw-rw-r--. 
1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv +-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv + +$ ls -l /media/Videos/ +total 30004 +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv +-rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv +-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv +``` + +When you copy files to your mergerfs mount, it will always copy the files to the hard disk that has enough free space. If none of the drives in the pool have enough free space, then you won’t be able to copy them. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/ + +作者:[Curt Warfield][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/rcurtiswarfield/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/mergerfs-816x346.png +[2]: https://github.com/trapexit/mergerfs +[3]: https://github.com/trapexit/mergerfs/releases +[4]: https://github.com/trapexit/mergerfs/releases/download/2.29.0/mergerfs-2.29.0-1.fc31.x86_64.rpm From ebf9e99900df30f8e0718e4808baca61585d9ae3 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sat, 2 May 2020 00:57:44 +0800 Subject: [PATCH 083/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200430=20Linux?= =?UTF-8?q?=20and=20Kubernetes:=20Serving=20The=20Common=20Goals=20of=20En?= =?UTF-8?q?terprises?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200430 Linux and Kubernetes- Serving The Common Goals of Enterprises.md --- ...Serving The Common Goals of Enterprises.md | 77 +++++++++++++++++++ 1 file changed, 77 insertions(+) create mode 100644 sources/tech/20200430 Linux and Kubernetes- Serving The Common Goals of Enterprises.md diff --git a/sources/tech/20200430 Linux and Kubernetes- Serving The Common Goals of Enterprises.md b/sources/tech/20200430 Linux and Kubernetes- Serving The Common Goals of Enterprises.md new file mode 100644 index 0000000000..c3c0c36b66 --- /dev/null +++ b/sources/tech/20200430 Linux and Kubernetes- Serving The Common Goals of Enterprises.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux and Kubernetes: Serving The Common Goals of Enterprises) +[#]: via: (https://www.linux.com/articles/linux-and-kubernetes-serving-the-common-goals-of-enterprises/) +[#]: author: (Swapnil Bhartiya https://www.linux.com/author/swapnil/) + +Linux and Kubernetes: Serving The Common Goals of Enterprises +====== + +[![][1]][2] + +For [Stefanie Chiras,][3] VP & GM, Red Hat Enterprise Linux (RHEL) Business Unit at [Red Hat][4], aspects such as security and resiliency have always been important for Red Hat. More so, in the current situation when everyone has gone fully remote and it’s much harder to get people in front of the hardware for carrying out updates, patching, etc. + +“As we look at our current situation, never has it been more important to have an operating system that is resilient and secure, and we’re focused on that,” she said. 
+

The recently released version of [Red Hat Enterprise Linux (RHEL) 8.2][5] inadvertently addresses these challenges, as it makes it easier for technology leaders to embrace the latest, production-ready innovations swiftly while offering the security and resilience that their IT teams need.

RHEL’s embrace of a predictable 6-month minor release cycle has also helped customers plan upgrades more efficiently.

“There is value for customers in having predictability of minor releases on a six-month cycle. Without knowing when they were coming was causing disruptions for them. The launch of 8.2 is now the second time we have delivered on our commitment of having minor releases every six months,” said Stefanie Chiras.

In addition to offering security updates, the new version adds insights capabilities and forays into newer areas of innovation.

The upgrade has expanded the earlier capability called ‘Adviser’ dramatically. Additional functionalities such as drift monitoring and CVE coverage allow for a much deeper granularity into how the infrastructure is running.

“It really amplifies the skills that are already present in ops and sysadmin teams, and this provides a Red Hat consultation, if you will, directly into the data center,” claimed Chiras.

As containers are increasingly being leveraged for digital transformation, RHEL 8.2 offers an updated application stream of Red Hat’s container tools. It also has new, containerized versions of Buildah and Skopeo.

[Skopeo][6] is an open-source image copying tool, while Buildah is a tool for building Docker- and Kubernetes-compatible images easily and quickly.

RHEL has also ensured in-place upgrades in the new version. Customers can now directly in-place upgrade from version 7 to version 8.2.

Chiras believes Linux has emerged as the go-to platform for innovations such as Machine Learning, Deep Learning, and Artificial Intelligence.

“Linux has now become the springboard of innovation,” she argued. “AI, machine learning, and deep learning are driving a real change in not just the software but also the hardware. In the context of these emerging technologies, it’s all about making them consumable into an enterprise.”

“We’re very focused on our ecosystem, making sure that we’re working in the right upstream communities with the right ISVs, with the right hardware partners to make all of that magic come together,” Chiras said.

Towards this end, Red Hat has been partnering with multiple architectures for a long time — be it an x86 architecture, ARM, Power, or mainframe with IBM Z. Its partnership with Nvidia pulls in capabilities such as FPGAs and GPUs.

**Synergizing Kubernetes and Linux**

Kubernetes is fast finding favor in enterprises. So how do Linux and Kubernetes serve the common goals of enterprises?

“Kubernetes is a new way to deploy Linux. We’re very focused on providing operational consistency by leveraging our technology in RHEL and then bringing in that incredible capability of Kubernetes within our OpenShift product line,” Chiras said.

The deployment of Linux within a Kubernetes environment is much more complicated than in a traditional deployment. RHEL, therefore, made some key changes. The company created Red Hat Enterprise Linux CoreOS — an optimized version of RHEL for the OpenShift experience.

“It’s deployed as an immutable. It’s tailored, narrow, and gets updated as part of your OpenShift update to provide consistent user experience and comprehensive security. 
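To make the container-tools point above more concrete, here is a minimal, hypothetical Buildah and Skopeo session. It is only a sketch: the base image reference, the package being installed, and the target registry are illustrative assumptions, not details taken from the interview.

```
# Build a small image on top of a Red Hat base image with Buildah
container=$(buildah from registry.access.redhat.com/ubi8/ubi-minimal)
buildah run "$container" -- microdnf install -y httpd
buildah commit "$container" my-ubi-httpd

# Inspect the committed image and copy it to a (hypothetical) remote registry with Skopeo
skopeo inspect containers-storage:localhost/my-ubi-httpd
skopeo copy containers-storage:localhost/my-ubi-httpd docker://quay.io/example/my-ubi-httpd
```

The point of the pairing is that Buildah builds and commits OCI images without a daemon, while Skopeo inspects and moves them between local storage and registries.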
+ +The launch of the Red Hat Universal Base Image (UBI) offers users greater security, reliability, and performance of official Red Hat container images where OCI-compliant Linux containers run. + +“Kubernetes is a new way to deploy Linux. It really is a tight collaboration but what we’re really focused on is the customer experience. We want them to get easy updates with consistency and reliability, resilience and security. We’re pulling all of that together. With such advancements going on, it’s a fascinating space to watch,” added Chiras. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/articles/linux-and-kubernetes-serving-the-common-goals-of-enterprises/ + +作者:[Swapnil Bhartiya][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/author/swapnil/ +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/wp-content/uploads/2019/12/computer-2930704_1280-1068x634.jpg (computer-2930704_1280) +[2]: https://www.linux.com/wp-content/uploads/2019/12/computer-2930704_1280.jpg +[3]: https://www.linkedin.com/in/stefanie-chiras-9022144/ +[4]: https://www.redhat.com/en +[5]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/8.2_release_notes/index +[6]: https://github.com/containers/skopeo From 19fc1b2822309a15529af2bf18f3bde65eeee79d Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sat, 2 May 2020 00:58:34 +0800 Subject: [PATCH 084/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200502=20Pop=20?= =?UTF-8?q?OS=2020.04=20Review:=20Best=20Ubuntu-based=20Distribution=20Jus?= =?UTF-8?q?t=20Got=20Better?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md --- ...untu-based Distribution Just Got Better.md | 232 ++++++++++++++++++ 1 file changed, 232 insertions(+) create mode 100644 sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md diff --git a/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md b/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md new file mode 100644 index 0000000000..760307eeb1 --- /dev/null +++ b/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md @@ -0,0 +1,232 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Pop OS 20.04 Review: Best Ubuntu-based Distribution Just Got Better) +[#]: via: (https://itsfoss.com/pop-os-20-04-review/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Pop OS 20.04 Review: Best Ubuntu-based Distribution Just Got Better +====== + +_**Brief: Pop OS 20.04 is an impressive Linux distribution based on Ubuntu. I review the major new features in this review and share my experience with the latest release.**_ + +Now that Ubuntu 20.04 LTS and its official flavours are here – it’s time to take a look at one of best Ubuntu-based distro i.e Pop!_OS 20.04 by [System76][1]. + +To be honest, Pop!_OS is my favorite Linux distro that I primarily use for everything I do. + +Now that Pop!_OS 20.04 has finally arrived. It’s time to take a look at what it offers and whether you should upgrade or not? + +### What’s New In Pop!_OS 20.04 LTS? 
+ +![][2] + +Visually, Pop!_OS 20.04 LTS isn’t really very different from Pop!_OS 19.10. However, you can find several new features and improvements. + +But, if you were using **Pop!_OS 18.04 LTS**, you have a lot of things to try. + +With [GNOME 3.36][3] onboard along with some newly added features, Pop!_OS 20.04 is an exciting release. + +Overall, to give you an overview here are some key highlights: + + * Automatic Window Tiling + * New Application Switcher and Launcher + * Flatpack support added in Pop!_Shop + * GNOME 3.36 + * Linux Kernel 5.4 + * Improved hybrid graphics support + + + +While this sounds fun, let us take a look at a detailed look on what has changed and how’s the experience of Pop!_OS 20.04 so far. + +### User Experience Improvements in Pop OS 20.04 + +Undoubtedly, a lot of Linux distros offer a pleasant user experience out of the box. Likewise, [Ubuntu 20.04 LTS has had top-notch improvements and features][4] as well. + +And, when it comes to Pop!_OS by System 76, they always try to go a mile further. And, the majority of new features aim to improve the user experience by providing useful functionalities. + +Here, I’m going to take a look at some of the improvements that include [GNOME 3.36][3] and Pop!_OS-specific features. + +#### Support For System Tray Icons + +Finally! This may not be a big change – but Pop!_OS did not have the support for system tray icons (or applet icons). + +![][5] + +With 20.04 LTS release, it’s here by default. No need of any extension. + +There may not be a whole lot of programs depending on system tray icons – but it is still something important to have. + +In my case, I wasn’t able to use [ActivityWatch][6] on Pop!_OS 19.10 – but now I can. + +#### Automatic Window Tiling + +![][7] + +**Automatic Window Tiling** is something I always wanted to try – but never invested any time to set it up using a [tiling window manager][8] like [i3][9], not even with [Regolith Desktop][10]. + +With Pop!_OS 20.04, you don’t need to do that anyway. The automatic window tiling feature comes baked in without needing you to set it up. + +It also features an option to **Show Active Hint** i.e it will highlight the active window to avoid confusion. And, you can also adjust the gap between the windows. + +![][11] + +You can see it in action in their official video: + +[Subscribe to our YouTube channel for more Linux videos][12] + +And, I must say that it is one of the biggest additions on Pop!_OS 20.04 that could potentially help you multi-task more efficiently. + +Even though the feature comes in handy everytime you use it. To make the most out of it, a display screen bigger than 21-inches (at least) should be the best way to go! And, for this reason – I’m really tempted to upgrade my monitor as well! + +#### New Extensions App + +![][13] + +Pop!_OS comes baked in with some unique GNOME extensions. But, you don’t need GNOME Tweaks the manage the extension anymore. + +The newly added **Extensions** app lets you configure and manage the extensions on Pop!_OS 20.04. + +#### Improved Notification Center + +![][14] + +With the new GNOME 3.36 release, the notification center includes a revamped look. Here, I have the dark mode enabled. + +#### New Application Switcher & Launcher + +![][15] + +You can still **ALT+TAB** or **Super key + TAB** to go through the running applications. + +But, that’s time-consuming when you have a lot of things going on. 
So, on Pop!_OS 20.04, you get an application switcher and launcher which you can activate using **Super key + /** + +Once you get used to the keyboard shortcut, it will be very convenient thing to have. + +In addition to this, you may find numerous other subtle improvements visually with the icons/windows on Pop!_OS 20.04. + +#### New Login Screen + +Well, with GNOME 3.36, it’s an obvious change. But, it does look good! + +![][16] + +### Flatpak Support on Pop!_Shop + +Normally, Pop!_Shop is already something useful with a huge repository along with [Pop!_OS’s own repositories.][17] + +Now, with Pop!_OS 20.04, you can choose to install either Flatpak (via Flathub) or the Debian package of any available software on Pop!_Shop. Of course, only if a Flatpak package exists for the particular software. + +You might want to check [how to use Flatpak on Linux][18] if you don’t have Pop!_OS 20.04. + +![][19] + +Personally, I’m not a fan of Flatpak but some applications like GIMP requires you to install the Flatpak package to get the latest version. So, it is definitely a good thing to have the support for Flatpak on Pop!_Shop baked right into it. + +### Keyboard Shortcut Changes + +This can be annoying if you’re comfortable with the existing keyboard shortcuts on Pop!_OS 19.10 or older. + +In either case, there are a few important keyboard shortcut changes to potentially improve your experience, here they are: + + * Lock Screen: **Super + L** _changed to_ **Super + Escape** + * Move Workspace: **Super + Up/Down Arrow** _changed to_ **Super + CTRL + Up/Down Arrow** + * Close Window: **Super + W** _changed_ to **Super + Q** + * Toggle Maximize: **Super + Up Arrow** _changed to_ **Super + M** + + + +### Linux Kernel 5.4 + +Similar to most of the other latest Linux distros, Pop!_OS 20.04 comes loaded with [Linux Kernel 5.4][20]. + +So, obviously, you can expect the [exFAT support][21] and an improved AMD graphics compatibility along with all the other features that come with it. + +### Performance Improvements + +Even though Pop!_OS doesn’t pitch itself as a lightweight Linux distro, it is still a resource-efficient distro. And, with GNOME 3.36 onboard, it should be fast enough. + +Considering that I’ve been using Pop!_OS as my primary distro for about a year, I’ve never had any performance issues. And, this is how the resource usage will probably look like (depending on your system configuration) after you install Pop!_OS 20.04. + +![][22] + +To give you an idea, my desktop configuration involves an i5-7400 processor, 16 GB RAM (2400 MHz), NVIDIA GTX 1050ti graphics card, and an SSD. + +I’m not really a fan of system benchmarks because it does not really give you the idea of how a specific application or a game would perform unless you try it. + +You can try the [Phoronix Test Suite][23] to analyze how your system performs. But, Pop!_OS 20.04 LTSshould be a snappy experience! + +### Package Updates & Other Improvements + +While every Ubuntu-based distro benefits from the [improvements in Ubuntu 20.04 LTS][4], there are some Pop OS specific bug fixes and improvements as well. + +In addition to it, some major apps/packages like **Firefox 75.0** have been updated to their latest version. + +As of now, there should be no critical bugs present and at least none for me. + +You can check out their [development progress on GitHub][24] to check the details of issues they’ve already fixed during the beta testing and the issues they will be fixing right after the release. 
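Tying back to the Flatpak support described earlier: since Pop!_Shop pulls its Flatpak packages from Flathub, installing a Flatpak build of an application from the command line is typically a one-liner as well. The application ID below is only an illustrative example, and it assumes the Flathub remote is already configured on the system.

```
# Install and run the Flathub build of GIMP (example application ID)
flatpak install -y flathub org.gimp.GIMP
flatpak run org.gimp.GIMP
```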
+ +### Download & Support Pop!_OS 20.04 + +![][25] + +With this release, System76 has finally added a subscription model (optional) to support Pop!_OS development. + +You can download **Pop!_OS 20.04** for free – but if you want to support them I’d suggest you go for the subscription with just **$1/month**. + +[Pop!_OS 20.04][26] + +### My Thoughts on Pop OS 20.04 + +I must mention that I was rooting for a fresh new wallpaper with the latest 20.04 release. But, that’s not a big deal. + +With the window tiling feature, flatpak support, and numerous other improvements, my experience with Pop!_OS 20.04 has been top-notch so far. Also, it’s great to see that they are highlighting their focus on creative professionals with out-of-the-box support for some popular software. + +![][27] + +All the good things about Ubuntu 20.04 and some extra toppings on it by System76, I’m impressed! + +_**Have you tried the Pop!_OS 20.04 yet? Let me know your thoughts in the comments below.**_ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/pop-os-20-04-review/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://system76.com +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/pop_os_20_04_review.jpg?ssl=1 +[3]: https://itsfoss.com/gnome-3-36-release/ +[4]: https://itsfoss.com/ubuntu-20-04-release-features/ +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/system-tray-icons-pop-os.jpg?ssl=1 +[6]: https://activitywatch.net/ +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-automatic-screen-tiling.png?ssl=1 +[8]: https://en.wikipedia.org/wiki/Tiling_window_manager +[9]: https://i3wm.org/ +[10]: https://itsfoss.com/regolith-linux-desktop/ +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/tile-feature-options-popos.jpg?ssl=1 +[12]: https://www.youtube.com/c/itsfoss?sub_confirmation=1 +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-extensions.jpg?ssl=1 +[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/notification-center-pop-os.jpg?ssl=1 +[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-application-launcher.jpg?ssl=1 +[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-20-04-lock-screen.jpg?ssl=1 +[17]: https://launchpad.net/~system76/+archive/ubuntu/pop +[18]: https://itsfoss.com/flatpak-guide/ +[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-flatpak-deb.jpg?ssl=1 +[20]: https://itsfoss.com/linux-kernel-5-4/ +[21]: https://itsfoss.com/mount-exfat/ +[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/pop-os-20-04-performance.jpg?ssl=1 +[23]: https://www.phoronix-test-suite.com/ +[24]: https://github.com/orgs/pop-os/projects/13 +[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/support-pop-os.jpg?ssl=1 +[26]: https://pop.system76.com/ +[27]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/pop-os-stem-focus.jpg?ssl=1 From 432ad8d1c6a54c315091b14fbb7d5e88afc5272d Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sat, 2 May 2020 01:00:57 +0800 Subject: [PATCH 085/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200501=20Transp?= =?UTF-8?q?arent,=20open=20source=20alternative=20to=20Google=20Analytics?= MIME-Version: 1.0 Content-Type: 
text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200501 Transparent, open source alternative to Google Analytics.md --- ... source alternative to Google Analytics.md | 123 ++++++++++++++++++ 1 file changed, 123 insertions(+) create mode 100644 sources/tech/20200501 Transparent, open source alternative to Google Analytics.md diff --git a/sources/tech/20200501 Transparent, open source alternative to Google Analytics.md b/sources/tech/20200501 Transparent, open source alternative to Google Analytics.md new file mode 100644 index 0000000000..d4ce222a39 --- /dev/null +++ b/sources/tech/20200501 Transparent, open source alternative to Google Analytics.md @@ -0,0 +1,123 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Transparent, open source alternative to Google Analytics) +[#]: via: (https://opensource.com/article/20/5/plausible-analytics) +[#]: author: (Marko Saric https://opensource.com/users/markosaric) + +Transparent, open source alternative to Google Analytics +====== +Plausible Analytics is a leaner, more transparent option, with the +essential data you need but without all the privacy baggage. +![Digital creative of a browser on the internet][1] + +Google Analytics is the most popular website analytics tool. Millions of developers and creators turn to it to collect and analyze their website statistics. + +More than 53% of all sites on the web track their visitors using Google Analytics. [84%][2] of sites that do use a known analytics script use Google Analytics. + +Google Analytics has, for years, been one of the first tools I installed on a newly launched site. It is a powerful and useful analytics tool. Installing Google Analytics was a habit I didn't think much about until the introduction of the [GDPR][3] (General Data Protection Regulation) and other privacy regulations. + +Using Google Analytics these days comes with several pitfalls, including the need for a privacy policy, the need for cookie banners, and the need for a GDPR consent prompt. All these may negatively impact the site loading time and visitor experience. + +This has made me try to [de-Google-ify websites][4] that I work on, and it's made me start working on independent solutions that are open source and more privacy-friendly. This is where Plausible Analytics enters the story. + +[Plausible Analytics][5] is an open source and lightweight alternative to Google Analytics. It doesn't use cookies and it doesn't collect any personal data, so you don't need to show any cookie banners or get GDPR or CCPA consent. Let's take a closer look. + +### Main differences between Google Analytics and Plausible + +Plausible Analytics is not designed to be a clone of Google Analytics. It is meant as a simple-to-use replacement and a privacy-friendly alternative. Here are the main differences between the two web analytics tools: + +#### Open source vs. closed source + +Google Analytics may be powerful and useful, but it is closed source. It is a proprietary tool run by one of the largest companies in the world, a company that is a key player in the ad-tech industry. There's simply no way of knowing what's going on behind the scenes. You have to put your trust in Google. + +Plausible is a fully open source tool. You can read our code [on GitHub][6]. We're "open" in other ways, too, such as our [public roadmap][7], which is based around the feedback and features submitted by the members of our community. 
+ +#### Privacy of your website visitors + +Google Analytics places [several cookies][8] on the devices of your visitors, and it tracks and collects a lot of data. This means that there are several requirements if you want to use Google Analytics and be compliant with the different regulations: + + * You need to have a privacy policy about analytics + * You need to show a cookie banner + * You need to obtain a GDPR/CCPA consent + + + +Plausible is made to be fully compliant with the privacy regulations. No cookies are used, and no personal data is collected. This means that you don't need to display the cookie banner, you don't need a privacy policy, and you don't need to ask for the GDPR/CCPA consent when using Plausible. + +#### Page weight and loading time + +The recommended way of installing Google Analytics is to use the Google Tag Manager. Google Tag Manager script weights 28 KB, and it downloads another JavaScript file called the Google Analytics tag, which adds an additional 17.7 KB to your page size. That's 45.7 KB of page weight combined. + +Plausible script weights only 1.4 KB. That's 33 times smaller than the Google Analytics Global Site Tag. Every KB matters when you want to keep your site fast to load. + +#### Accuracy of visitor stats + +Google Analytics is being blocked by an increasing number of web users. It's blocked by those who use open source browsers such as [Firefox][9] and [Brave][10]. It's also blocked by those who use open source browser add-ons such as the [uBlock Origin][11]. It's not uncommon to see 40% or more of the audience on a tech site blocking Google Analytics. + +Plausible is a new player on this market and it's privacy-friendly by default, so it doesn't see the same level of blockage. + +#### Simple vs. complex web analytics + +[Google Analytics is overkill][12] for many website owners. It's a complex tool that takes time to understand and requires training. Google Analytics presents hundreds of different reports and metrics for you to get insights from. Many users end up creating custom dashboards while ignoring all the rest. + +Plausible cuts through all the noise that Google Analytics creates. It presents everything you need to know on one single page—all the most valuable metrics at a glance. You can get an overview of the most actionable insights about your website in one minute. + +### A guided tour of Plausible Analytics + +Plausible Analytics is not a full-blown replacement and a feature-by-feature reproduction of Google Analytics. It's not designed for all the different use-cases of Google Analytics. + +It's built with simplicity and speed in mind. There is no navigational menu. There are no additional sub-menus. There is no need to create custom reports. You get one simple and useful web analytics dashboard out of the box. + +Rather than tracking every metric imaginable, many of them that you will never find a use for, Plausible focuses on the essential website stats only. It is easy to use and understand with no training or prior experience: + +![Plausible analytics in action][13] + + * Choose the time range that you want to analyze. The visitor numbers are automatically presented on an hourly, daily, or monthly graph. The default time frame is set at the last 30 days. + * See the number of unique visitors, total page views, and the bounce rate. These metrics include a percentage comparison to the previous time period, so you understand if the trends are going up or down. 
+ * See all the referral sources of traffic and all the most visited pages on your site. Bounce rates of the individual referrals and pages are included too. + * See the list of countries your traffic is coming from. You can also see the device, browser, and operating system your visitors are using. + * Track events and goals to identify the number of converted visitors, the conversion rate, and the referral sites that send the best quality traffic. + + + +Take a look at the [live demo][14] where you can follow the traffic to the Plausible website. + +### Give Plausible Analytics a chance + +With Plausible Analytics, you get all the important web analytics at a glance so you can focus on creating a better site without needing to annoy your visitors with all the different banners and prompts. + +You can try Plausible Analytics on your site alongside Google Analytics. [Register today][15] to try it out, and see what you like and what you don't. Share your feedback with the community. This helps us learn and improve. We'd love to hear from you. + +Take a look at five great open source alternatives to Google Docs. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/plausible-analytics + +作者:[Marko Saric][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/markosaric +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet) +[2]: https://w3techs.com/technologies/details/ta-googleanalytics +[3]: https://gdpr-info.eu/ +[4]: https://markosaric.com/degoogleify/ +[5]: https://plausible.io/ +[6]: https://github.com/plausible-insights/plausible +[7]: https://feedback.plausible.io/roadmap +[8]: https://developers.google.com/analytics/devguides/collection/analyticsjs/cookie-usage +[9]: https://www.mozilla.org/en-US/firefox/new/ +[10]: https://brave.com/ +[11]: https://github.com/gorhill/uBlock +[12]: https://plausible.io/vs-google-analytics +[13]: https://opensource.com/sites/default/files/plausible-analytics.png (Plausible analytics in action) +[14]: https://plausible.io/plausible.io +[15]: https://plausible.io/register From b994526987b2789220543819caf9c654ce35110a Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sat, 2 May 2020 01:04:15 +0800 Subject: [PATCH 086/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200501=20Red=20?= =?UTF-8?q?Hat=20Summit=202020=20virtual=20experience?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/talk/20200501 Red Hat Summit 2020 virtual experience.md --- ... 
Red Hat Summit 2020 virtual experience.md | 64 +++++++++++++++++++ 1 file changed, 64 insertions(+) create mode 100644 sources/talk/20200501 Red Hat Summit 2020 virtual experience.md diff --git a/sources/talk/20200501 Red Hat Summit 2020 virtual experience.md b/sources/talk/20200501 Red Hat Summit 2020 virtual experience.md new file mode 100644 index 0000000000..334c52c1f9 --- /dev/null +++ b/sources/talk/20200501 Red Hat Summit 2020 virtual experience.md @@ -0,0 +1,64 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Red Hat Summit 2020 virtual experience) +[#]: via: (https://www.networkworld.com/article/3541289/red-hat-summit-2020-virtual-experience.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +Red Hat Summit 2020 virtual experience +====== + +[Virginiambe][1] [(CC BY-SA 3.0)][2] + +In the last couple days, Red Hat was able to demonstrate that an online technical conference can succeed. The Summit, normally held in Boston or San Francisco, was held online thanks to the Covid-19 pandemic still gripping the world. + +The fact that 80,000 people attended the online event warrants a huge applause. By comparison, last year’s in-person conference broke the record with only 8,900 attendees. + +[[Get regularly scheduled insights by signing up for Network World newsletters.]][3] + +### **Being “there”** + +The experience of attending the conference was in many ways what you would expect when attending a large conference in person. There were keynotes, general sessions and breakout sessions. There were many opportunities to ask questions. And it was often difficult but necessary to choose between parallel sessions. I attended both days and was very impressed. + +I also enjoyed some nostalgia about how we’ve all arrived at the places we are today with respect to Linux. It was clear that many attendees were overwhelmed by the progress that has been made just since last year. Linux, and [RHEL][4] in particular, is becoming more innovative, more clever in the ways that it can detect and respond to problems and yet in some important ways easier to manage because of the way the tools have evolved. + +Announcements at the conference included Red Hat OpenShift 4.4, OpenShift virtualization and Red Hat Advanced Container Management for Kubernetes. + +What was novel about attending a technical conference online was that we didn’t have to leave our home or office and that we could review sessions that we missed by selecting them later from the session layout pages. In fact, the sessions are still online and may well be for the coming year. If you didn’t participate in Red Hat Summit 2020, you can still sign up and you can still watch the sessions at your convenience. Just go to the [summit site][5]. And, did I mention, that it's free? + +### Catching up + +Once you’re signed up, you can click on the Watch and Learn at the top of the page and choose General Sessions or Sessions and Labs. The presentations will now all be labeled On Demand though they once displayed upcoming time slots. The individuals presenting information are excellent and the material is exciting. Even if you’re not working with Red Hat Enterprise Linux, you will learn a lot about Linux in general and how open source has evolved over the decades and is still evolving in important and critical ways. 
+ +Topics covered at the conference include OpenShift, open hybrid cloud, future technologies, robotics and automation, advances on the edge and the power of open source. Red Hat Summit also includes joint sessions with both Red Hat and technology collaborators such as Ford, Verizon, Intel, Microsoft and Credit Suisse. + +### What’s next? + +Watching the conference online at a time when I can't leave my home was informative, but also encouraging and comforting. Linux has been an important part of my life for decades. It felt good to be connected to the larger community and to sense the currents of progress through my desktop system. + +While there’s no way to know at this point whether future Red Hat Summits or other Linux conferences will be held or made available online, the fact that Red Hat Summit 2020 was available online when so many of us are still huddled up at home wondering when our world will reopen was a testament not just to great technology but to the deep-seated conviction that it is critical that we work together and that open source can make that happen in ways that nothing else can. + +Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3541289/red-hat-summit-2020-virtual-experience.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://commons.wikimedia.org/wiki/File:Red_hat_with_bow2.JPG +[2]: https://creativecommons.org/licenses/by-sa/3.0/legalcode +[3]: https://www.networkworld.com/newsletters/signup.html +[4]: https://www.networkworld.com/article/3540189/red-hat-enterprise-linux-82-hits-the-stage.html +[5]: https://www.redhat.com/summit +[6]: https://www.facebook.com/NetworkWorld/ +[7]: https://www.linkedin.com/company/network-world From 374267ef01a8658d23a722ab96b6a0a88c4f90d8 Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Sat, 2 May 2020 10:08:12 +0800 Subject: [PATCH 087/178] Almost done --- ...high-performance computing as a service.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md b/sources/talk/20200401 The ins and outs of high-performance computing as a service.md index fc36ff7082..cb7e33856a 100644 --- a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md +++ b/sources/talk/20200401 The ins and outs of high-performance computing as a service.md @@ -83,42 +83,42 @@ With clever system design, you can integrate those HPC-services bursts of activi ** Use of HPC services lends itself to batch-processing and loosely-coupled use cases. That ties into a common HPC downside: data transfer issues. High-performance computing by its very nature often involves huge data sets, and sending all that information over the internet to a cloud service provider is no simple thing. "We have clients I talk to in the biotech industry who spend $10 million a month on just the data charges," says IBM's Turek. 
-HPC服务的使用适合批处理和松散耦合的用例。这与HPC的常见缺点有关:数据传输问题。从本质上讲,高性能计算通常涉及庞大的数据集,而将所有这些信息通过Internet发送到云服务提供商并不是一件容易的事。 IBM的Turek说:“我们与生物技术行业的客户交流,他们每月仅在数据费用上就花费1000万美元。” +HPC服务适合批处理和松耦合的场景。这与HPC的普遍缺点有关:数据传输问题。高性能计算本身通常涉及庞大的数据集,而将所有这些信息通过Internet发送到云服务提供商并不容易。IBM的Turek说:“我们与生物技术行业的客户交流,他们每月仅在数据费用上就花费1000万美元。” And money isn't the only potential problem. Building a workflow that makes use of your data can challenge you to work around the long times required for data transfer. "When we had our own HPC cluster, local access to the simulation results already produced – and thus an interactive interim evaluation — was of course possible at any time," says hhpberlin's Kilian. "We're currently working on being able to access and evaluate the data produced in the cloud even more efficiently and interactively at any desired time of the simulation without the need to download large amounts of simulation data." -金钱并不是唯一的潜在问题。建立一个利用您的数据的工作流程可能会挑战您在数据传输所需的长时间内工作。 hhpberlin的Kilian说:“当我们拥有自己的HPC集群时,当然可以随时本地访问已经产生的仿真结果,从而进行交互式的临时评估。” “我们目前正在努力,能够在任何所需的模拟时间更高效,交互地访问和评估云中生成的数据,而无需下载大量的模拟数据。” +钱并不是唯一的潜在问题。已制定的需要使用数据的工作流可能会使您在数据传输所需的时间内无法工作。hhpberlin的Kilian说:“当我们拥有自己的HPC集群时,当然可以随时访问已经产生的仿真结果,从而进行交互式的临时评估。” “我们目前正努力达到在仿真的任意时刻都可以更高效地,交互地访问和评估云中生成的数据,而无需下载大量的模拟数据。” Mike Krawczyk cites another stumbling block: compliance issues. Any service a defense contractor uses needs to be complaint with the International Traffic in Arms Regulations (ITAR), and McCormick Stevenson went with Rescale in part because it was the only vendor they found that checked that box. While more do today, any company looking to use cloud services should be aware of the legal and data-protection issues involved in living on someone else's infrastructure, and the sensitive nature of many of HPC's use cases makes this doubly true for HPC as a service. -Mike Krawczyk提到了另一个绊脚石:合规性问题。国防承包商使用的任何服务都需要向《国际武器交易条例》(ITAR)进行投诉,麦考密克·史蒂文森(McCormick Stevenson)之所以选择Rescale,部分原因是这是他们发现该复选框的唯一供应商。如今,尽管有更多的公司这样做,但任何希望使用云服务的公司都应该意识到生活在其他人的基础架构上所涉及的法律和数据保护问题,而且许多HPC用例的敏感性质使得这对于HPC即服务而言是双重事实。 。 +Mike Krawczyk提到了另一个绊脚石:合规性问题。国防承包商使用的任何服务都需要遵从(原文是complaint, 应该是笔误)《国际武器交易条例》(ITAR),麦考密克·史蒂文森(McCormick Stevenson)之所以选择Rescale,部分原因是因为这是他们发现的唯一符合的供应商。如今,尽管有更多的公司(使用云服务),但任何希望使用云服务的公司都应该意识到使用其他人的基础设施时所涉及的法律和数据保护问题,而且许多HPC场景的敏感性使得更HPC即服务的这个问题更加突出。 In addition, the IT governance that HPC services require goes beyond regulatory needs. For instance, you'll need to keep track of whether your software licenses permit cloud use ­– especially with specialized software packages written to run on an on-premises HPC cluster. And in general, you need to keep track of how you use HPC services, which can be a tempting resource, especially if you've transitioned from in-house systems where staff was used to having idle HPC capabilities available. For instance, Ron Gilpin, senior director and Azure Platform Services global lead at Avanade, suggests dialing back how many processing cores you use for tasks that aren't time sensitive. "If a job only needs to be completed in an hour instead of ten minutes," he says, "that might use 165 processors instead of 1,000, a savings of thousands of dollars." 
-此外,HPC服务所需的IT治理超出了监管需求。例如,您需要跟踪您的软件许可证是否允许云使用­ –尤其是针对专门编写在本地HPC群集上运行的软件包。通常,您需要跟踪HPC服务的使用方式,这可能是一个诱人的资源,尤其是当您从习惯于工作人员的内部系统过渡到具有可用的空闲HPC功能时。例如,Avanade全球平台高级主管兼Azure平台服务全球负责人Ron Gilpin建议,回拨您用于非时间敏感任务的处理核心数量。他说:“如果一项工作只需要一个小时而不是十分钟就可以完成,那么它可以使用165个处理器而不是1,000个,从而节省了数千美元。” +此外,HPC服务所需的IT治理超出了目前的监管范围。例如,您需要跟踪您的软件许可证是否允许云使用­ –尤其是专门为本地HPC群集上运行而编写的软件包。通常,您需要跟踪HPC服务的使用方式,它可能是一个诱人的资源,尤其是当您从员工习惯的内部系统过渡到有可用的空闲的HPC能力时。例如,Avanade全球平台高级主管兼Azure平台服务全球负责人Ron Gilpin建议,回调您用于时间不敏感任务的处理核心数量。他说:“如果一项工作只需要用一小时来完成而不需要在十分钟内就完成,那么它可以使用165个处理器而不是1,000个,从而节省了数千美元。” -### A premium on HPC skills** +### A premium on HPC skills** 独特的HPC技能 ** One of the biggest barriers to HPC adoption has always been the unique in-house skills it requires, and HPC services don't magically make that barrier vanish. "Many CIOs have migrated a lot of their workloads into the cloud and they have seen cost savings and increased agility and efficiency, and believe that they can achieve similar results in HPC ecosystems," says Gartner's Dekate. "And a common misperception is that they can somehow optimize human resource cost by essentially moving away from system admins and hiring new cloud experts who can solve their HPC workloads." -一直以来,采用HPC的最大障碍之一就是其所需的独特内部技能,而HPC服务并不能使这种障碍消失。 Gartner的Dekate表示:“许多CIO将许多工作负载迁移到了云中,他们看到了节省成本,提高敏捷性和效率的信念,并且相信他们可以在HPC生态系统中实现类似的结果。” “一个普遍的误解是,他们可以从本质上远离系统管理员,并聘用可以解决其HPC工作负载的新云专家,从而以某种方式优化人力资源成本。” +一直以来,采用HPC的最大障碍之一就是其所需的独特的内部技能,而HPC服务并不能使这种障碍消失。Gartner的Dekate表示:“许多CIO将许多工作负载迁移到了云上,他们看到了成本的节约,敏捷性和效率的提升,因此相信在HPC生态中也可以达成类似的效果。” “一个普遍的误解是,他们可以通过彻底地免去系统管理员,并聘用能解决其HPC工作负载的新的云专家,从而以某种方式优化人力成本。” "But HPC is not one of the main enterprise environments," he says. "You're dealing with high-end compute nodes interconnected with high-bandwidth, low-latency networking stacks, along with incredibly complicated application and middleware stacks. Even the filesystem layers in many cases are unique to HPC environments. Not having the right skills can be destabilizing." -他说:“但是HPC并不是主要的企业环境之一。” “您正在处理与高带宽,低延迟网络堆栈以及难以置信的复杂应用程序和中间件堆栈互连的高端计算节点。在许多情况下,甚至文件系统层也是HPC环境所独有的。没有适当的技能 可能会破坏稳定。” +“但是HPC并不是主流的企业环境之一。” 他说。“您正在处理通过高带宽,低延迟的网络互联的高端计算节点,以及相当复杂的应用和中间件技术栈。许多情况下,甚至连文件系统层也是HPC环境所独有的。没有对应的技能可能会破坏稳定性。” But supercomputing skills are in shortening supply, something Dekate refers to as the workforce "greying," in the wake of a generation of developers going to splashy startups rather than academia or the more staid firms where HPC is in use. As a result, vendors of HPC services are doing what they can to bridge the gap. IBM's Turek says that many HPC vets will always want to roll their own exquisitely fine-tuned code and will need specialized debuggers and other tools to help them do that for the cloud. But even HPC newbies can make calls to code libraries built by vendors to exploit supercomputing's parallel processing. And third-party software providers sell turnkey software packages that abstract away much of HPC's complication. -但是超级计算技术却在缩短供应,Dekate将其称为劳动力“灰色”,这是因为一代开发人员将目光投向了新兴的初创公司,而不是学术界或使用HPC的更老套的公司。结果,HPC服务的供应商正在尽其所能弥合差距。 IBM的Turek表示,许多HPC兽医将始终希望推出他们自己精心调整过的代码,并且将需要专门的调试器和其他工具来帮助他们在云中实现这一目标。但是,即使是HPC新手也可以调用供应商构建的代码库,以利用超级计算的并行处理。第三方软件提供商出售的交钥匙软件包可以消除许多HPC复杂性。 +但是超级计算技能的供给却在减少,Dekate将其称为劳动力“灰化”,这是因为一代开发人员将目光投向了新兴的初创公司,而不是学术界或使用HPC的更老套的公司。因此,HPC服务供应商正在尽其所能地弥补差距。 IBM的Turek表示,许多HPC老手将总是想运行他们自己精心调整过的代码,将需要专门的调试器和其他工具来帮助他们在云上实现这一目标。但是,即使是HPC新手也可以调用供应商构建的代码库,以利用超级计算的并行处理能力。第三方软件提供商出售的交钥匙软件包可以减少HPC的许多复杂性。 Accenture's Tung says the sector needs to lean further into this in order to truly prosper. 
"HPCaaS has created dramatically impactful new capability, but what needs to happen is making this easy to apply for the data scientist, the enterprise architect, or the software developer," she says. "This includes easy to use APIs, documentation, and sample code. It includes user support to answer questions. It’s not enough to provide an API; that API needs to be fit-for-purpose. For a data scientist this should likely be in Python and easily change out for the frameworks she is already using. The value comes from enabling these users who ultimately will have their jobs improved through new efficiencies and performance, if only they can access the new capabilities." If vendors can pull that off, HPC services might truly bring supercomputing to the masses. -埃森哲的董先生表示,该行业需要进一步倾斜才能真正繁荣。她说:“ ​​HPCaaS已经创建了具有重大影响力的新功能,但是需要做的是使它易于应用于数据科学家,企业架构师或软件开发人员。” “这包括易于使用的API,文档和示例代码。它包含用户回答问题的支持。仅提供API是不够的; API需要适合特定用途。对于数据科学家而言,这可能应该包含在其中。使用Python并轻松地更改她已经在使用的框架。其价值来自使这些用户能够获得新的效率和性能,只要他们能够使用新功能,他们最终将通过新的效率和性能来改善他们的工作。”如果供应商能够做到这一点,那么HPC服务可能真正将超级计算带入大众。 +埃森哲的Tung表示,该行业需要进一步加大投入才能真正繁荣。她说:“HPCaaS已经创建了具有重大影响力的新功能,但还需要做的是使它易于被数据科学家,企业架构师或软件开发人员使用。这包括易用的API,文档和示例代码。它包括用户支持来解答问题。仅仅提供API是不够的,API需要适合特定的用途。对于数据科学家而言,这可能是以python形式提供,并容易更换她已经在使用的框架。其价值来自使这些用户最综只有在使用新功能时才能够改进效率和性能。” 如果供应商能够做到这一点,那么HPC服务才能真正将超级计算带给大众。 Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind. -加入[Facebook] [3]和[LinkedIn] [4]上的Network World社区,以评论最重要的主题。 +加入[Facebook][3]和[LinkedIn][4]上的Network World社区,探讨最前沿的话题。 -------------------------------------------------------------------------------- via: https://www.networkworld.com/article/3534725/the-ins-and-outs-of-high-performance-computing-as-a-service.html 作者:[Josh Fruhlinger][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[messon007](https://github.com/messon007) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 251b38d0bbfe2711b498dfd2832e7dfd7756f2a7 Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Sat, 2 May 2020 10:15:45 +0800 Subject: [PATCH 088/178] Remove original --- ...high-performance computing as a service.md | 54 +++++-------------- 1 file changed, 12 insertions(+), 42 deletions(-) diff --git a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md b/sources/talk/20200401 The ins and outs of high-performance computing as a service.md index cb7e33856a..df2c6debd3 100644 --- a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md +++ b/sources/talk/20200401 The ins and outs of high-performance computing as a service.md @@ -7,110 +7,80 @@ [#]: via: (https://www.networkworld.com/article/3534725/the-ins-and-outs-of-high-performance-computing-as-a-service.html) [#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/) -The ins and outs of high-performance computing as a service 高性能计算即服务的来龙去脉 +高性能计算即服务的来龙去脉 ====== -HPC services can be a way to meet expanding supercomputing needs, but depending on the use case, they’re not necessarily better than on-premises supercomputers. -Dell EMC 高性能计算(HPC)服务可能是一种满足不断增长的超级计算需求的方式,但依赖于使用场景,它们不一定比使用本地超级计算机好。 -Electronics on missiles and military helicopters need to survive extreme conditions. Before any of that physical hardware can be deployed, defense contractor McCormick Stevenson Corp. 
simulates the real-world conditions it will endure, relying on finite element analysis software like Ansys, which requires significant computing power. + 戴尔EMC 导弹和军用直升机上的电子设备需要工作在极端条件下。国防承包商麦考密克·史蒂文森公司(McCormick Stevenson Corp.)在部署任何物理设备之前都会事先模拟它所能承受的真实条件。模拟依赖于像Ansys这样的有限元素分析软件,该软件需要强大的算力。 -Then one day a few years ago, it unexpectedly ran up against its computing limits. 几年前的一天,它出乎意料地超出了计算极限。 -[10 of the world's fastest supercomputers][1] [世界上最快的10个超级计算机][1] -"We had some jobs that would have overwhelmed the computers that we had in office," says Mike Krawczyk, principal engineer at McCormick Stevenson. "It did not make economic or schedule sense to buy a machine and install software." Instead, the company contracted with Rescale, which could sell them cycles on a supercomputer-class system for a tiny fraction of what they would've spent on new hardware. 麦考密克·史蒂文森(McCormick Stevenson)的首席工程师迈克·克劳奇奇(Mike Krawczyk)说:“我们的一些工作会使办公室的计算机不堪重负。” “购买机器并安装软件在经济上或计划上都不划算。” 相反,该公司与Rescale签约,从其购买在超级计算机系统上运行的周期(cycles),而这只花费了他们购买新硬件上所需的一小部分。 -McCormick Stevenson had become an early adopter in a market known as supercomputing as a service or high-performance computing (HPC) as a service – two terms that are closely related. HPC is the application of supercomputers to computationally complex problems, while supercomputers are those computers at the cutting edge of processing capacity, according to the National Institute for Computational Sciences. 麦考密克·史蒂文森(McCormick Stevenson)已成为被称为超级计算即服务或高性能计算(HPC)即服务(两个紧密相关的术语)市场的早期采用者之一。根据国家计算科学研究所(的定义),HPC是超级计算机在计算复杂问题上的应用,而超级计算机是处理能力最先进的那些计算机。 -Whatever it's called, these services are upending the traditional supercomputing market and bringing HPC power to customers who could never afford it before. But it's no panacea, and it's definitely not plug-and-play – at least not yet. 无论叫它什么,这些服务都在颠覆传统的超级计算市场,并将HPC能力带给以前买不起的客户。但这不是万能的,而且绝对不是即插即用的,至少现在还不是。 -### HPC services in practice HPC服务实践 +### HPC服务实践 -From the end user's perspective, HPC as a service resembles the batch-processing model that dates back to the early mainframe era. "We create an Ansys batch file and send that up, and after it runs, we pull down the result files and import them locally here," Krawczyk says. 从最终用户的角度来看,HPC即服务类似于早期大型机时代的批处理模型。 “我们创建一个Ansys批处理文件并将其发送过去,运行它,然后将结果文件取下来并在本地导入它们,” Krawczyk说。 -Behind the scenes, cloud providers are running the supercomputing infrastructure in their own data centers – though that doesn't necessarily imply the sort of cutting-edge hardware you might be visualizing when you hear "supercomputer." As Dave Turek, Vice President of Technical Computing at IBM OpenPOWER, explains it, HPC services at their core are "a collection of servers that are strung together with an interconnect. You have the ability to invoke this virtual computing infrastructure that allows you to bring a lot of different servers to work together in a parallel construct to solve the problem when you present it." 在HPC服务背后,云提供商在其自己的数据中心中运行超级计算基础设施,尽管这不一定意味着当您听到“超级计算机”时你就会看到最先进的硬件。正如IBM OpenPOWER计算技术副总裁Dave Turek解释的那样,HPC服务的核心是“相互互连的服务器集合。您可以调用该虚拟计算基础设施,它能够在您提出问题时,使得许多不同的服务器并行工作来解决问题。” [][2] -Sounds simple in theory. But making it viable in practice required some chipping away at technical problems, according to Theo Lynn, Professor of Digital Business at Dublin City University. What differentiates ordinary computing from HPC is those interconnects – high-speed, low-latency, and expensive – so those needed to be brought to the world of cloud infrastructure. 
Storage performance and data transport also needed to be brought up to a level at least in the same ballpark as on-prem HPC before HPC services could be viable. -理论上听起来很简单。但都柏林城市大学数字业务教授西奥·林恩(Theo Lynn)表示,要使其在实践中可行,需要解决一些技术问题。普通计算与HPC的区别在于那些互连-高速的,低延时的而且昂贵的-因此需要将这些互连引入云基础设施领域。在HPC服务可行之前,至少需要将存储性能和数据传输也提升到与本地HPC相同的水平。 +理论听起来很简单。但都柏林城市大学数字业务教授西奥·林恩(Theo Lynn)表示,要使其在实践中可行,需要解决一些技术问题。普通计算与HPC的区别在于那些互连-高速的,低延时的而且昂贵的-因此需要将这些互连引入云基础设施领域。在HPC服务可行之前,至少需要将存储性能和数据传输也提升到与本地HPC相同的水平。 -But Lynn says that some of the innovations that have helped HPC services take off have been more institutional than technological. In particular, "we are now seeing more and more traditional HPC applications adopting cloud-friendly licensing models – a barrier to adoption in the past." 但是林恩说,一些制度创新相比技术更好的帮助了HPC服务的起飞。特别是,“我们现在看到越来越多的传统HPC应用采用云友好的许可模式-过去是采用这种模式的障碍。” -And the economics have also shifted the potential customer base, he says. "Cloud service providers have opened up the market more by targeting low-end HPC buyers who couldn’t afford the capex associated with traditional HPC and opening up the market to new users. As the markets open up, the hyperscale economic model becomes more and more feasible, costs start coming down." 他说,经济也改变了潜在的客户群。 “云服务提供商通过向那些负担不起传统HPC所需的投资成本的低端HPC买家开放,进一步开放了市场。随着市场的开放,超大规模经济模型变得越来越多,更可行,成本开始下降。” -Avoid on-premises CAPEX** 避免本地资本支出** +避免本地资本支出** ** -HPC services are attractive to private-sector customers in the same fields where traditional supercomputing has long held sway. These include sectors that rely heavily on complex mathematical modeling, including defense contractors like McCormick Stevenson, along with oil and gas companies, financial services firms, and biotech companies. Dublin City University's Lynn adds that loosely coupled workloads are a particularly good use case, which meant that many early adopters used it for 3D image rendering and related applications. HPC服务对有志于传统超级计算长期把持的领域的私营行业客户具有吸引力。这些客户包括严重依赖复杂数学模型的行业,包括麦考密克·史蒂文森(McCormick Stevenson)等国防承包商,以及油气公司,金融服务公司和生物技术公司。都柏林城市大学的Lynn补充说,松耦合的工作负载是一个特别好的用例,这意味着许多早期采用者将其用于3D图像渲染和相关应用。 -But when does it make sense to consider HPC services over on-premises HPC? For hhpberlin, a German company that simulates smoke propagation in and fire damage to structural components of buildings, the move came as it outgrew its current resources. 但是,何时考虑HPC服务而不是本地HPC才有意义?对于德国的模拟烟雾在建筑物中的蔓延和火灾对建筑物结构部件的破坏的hhpberlin公司来说,答案是在它超出了其现有资源时。 -"For several years, we had run our own small cluster with up to 80 processor cores," says Susanne Kilian, hhpberlin's scientific head of numerical simulation. "With the rise in application complexity, however, this constellation has increasingly proven to be inadequate; the available capacity was not always sufficient to handle projects promptly." Hpberlin公司数值模拟的科学负责人Susanne Kilian说:“几年来,我们一直在运行自己的小型集群,该集群具有多达80个处理器核。” “但是,随着应用复杂性的提高,这种架构(constellation)已经越来越不足以支撑;可用容量并不总是够快速地处理项目。” -But just spending money on a new cluster wasn't an ideal solution, she says: "In view of the size and administrative environment of our company, the necessity of constant maintenance of this cluster (regular software and hardware upgrades) turned out to be impractical. Plus, the number of required simulation projects is subject to significant fluctuations, such that the utilization of the cluster was not really predictable. Typically, phases with very intensive use alternate with phases with little to no use." By moving to an HPC service model, hhpberlin shed that excess capacity and the need to pay up front for upgrades. 
她说:“但是,仅仅花钱买一个新的集群并不是一个理想的解决方案:鉴于我们公司的规模和管理环境,强制持续维护该集群(定期进行软件和硬件升级)是不现实的。另外,需要模拟的项目数量会出现很大的波动,因此集群的利用率并不是真正可预测的。通常,使用率很高的阶段与很少使用或不使用的阶段交替出现。”通过转换为HPC服务模式,hhpberlin释放了过剩的容量,并无需支付升级费用。 -IBM's Turek explains the calculus that different companies go through while assessing their needs. For a biosciences startup with 30 people, "you need computing, but you really can't afford to have 15% of your staff dedicated to it. It's just like you might also say you don't want to have on-staff legal representation, so you'll get that as a service as well." For a bigger company, though, it comes down to weighing the operational expense of an HPC service against the capacity expense of buying an in-house supercomputer or HPC cluster. IBM的Turek解释了不同公司在评估其需求时所经历的计算过程。对于拥有30名员工的生物科学初创公司来说,“您需要计算,但您实在负担不起15%的员工专门从事它。这就像您可能也说过,您不想拥有在职法律代表,因此您也可以通过服务获得它。”但是,对于一家较大的公司而言,最终归结为权衡HPC服务的运营费用与购买内部超级计算机或HPC集群的费用。 -So far, those are the same sorts of arguments you'd have over adopting any cloud service. But the opex vs. capex dilemma can be weighted towards the former by some of the specifics of the HPC market. Supercomputers aren't commodity hardware like storage or x86 servers; they're very expensive, and technological advances can swiftly render them obsolete. As McCormick Stevenson's Krawczyk puts it, "It's like buying a car: as soon as you drive off the lot it starts to depreciate." And for many companies –especially larger and less nimble ones – the process of buying a supercomputer can get hopelessly bogged down. "You're caught up in planning issues, building issues, construction issues, training issues, and then you have to execute an RFP," says IBM's Turek. "You have to work through the CIO. You have to work with your internal customers to make sure there's continuity of service. It's a very, very complex process and not something that a lot of institutions are really excellent at executing." -到目前为止,这些都是您采用任何云服务时都会遇到的类似的争论。但是,可以HPC市场的某些特点将使得衡量运营支出与资本支出时选择前者。超级计算机不是诸如存储或x86服务器之类的商用硬件;它们非常昂贵,技术进步很快会使其过时。正如麦考密克·史蒂文森(McCormick Stevenson)的克拉维奇(Krawczyk)所说,“这就像买车:只要车一开走,它就会开始贬值。”对于许多公司,尤其是规模较大,灵活性较差的公司,购买超级计算机的过程可能会陷入无望的泥潭。 IBM的Turek说:“您陷入了计划问题,建筑问题,施工问题,培训问题,然后必须执行RFP。” “您必须得到CIO的支持。您必须与内部客户合作以确保服务的连续性。这是一个非常非常复杂的过程,并没有很多机构有非常出色的执行力。” +到目前为止,这些都是您采用任何云服务时都会遇到的类似的争论。但是,可以HPC市场的某些特点将使得衡量运营支出与资本支出时选择前者。超级计算机不是诸如存储或x86服务器之类的商用硬件;它们非常昂贵,技术进步很快会使其过时。正如麦考密克·史蒂文森(McCormick Stevenson)的克拉维奇(Krawczyk)所说,“这就像买车:只要车一开走,它就会开始贬值。”对于许多公司,尤其是规模较大,灵活性较差的公司,购买超级计算机的过程可能会陷入无望的泥潭。 IBM的Turek说:“您陷入了计划问题,建筑问题,施工问题,培训问题,然后必须执行RFP。您必须得到CIO的支持。您必须与内部客户合作以确保服务的连续性。这是一个非常非常复杂的过程,并没有很多机构有非常出色的执行力。” -Once you choose to go down the services route for HPC, you'll find you get many of the advantages you expect from cloud services, particularly the ability to pay only for HPC power when you need it, which results in an efficient use of resources. Chirag Dekate, Senior Director and Analyst at Gartner, says bursty workloads, when you have short-term needs for high-performance computing, are a key use case driving adoption of HPC services. 一旦您选择了HPC服务的路线后,您会发现您会得到您期望从云服务中得到的许多好处,特别是仅在业务需要时才需付费的能力,从而可以带来资源的高效利用。 Gartner高级总监兼分析师Chirag Dekate表示,当您对高性能计算有短期需求时的突发性负载是推动选择HPC服务的关键用例。 -"In the manufacturing industry, you tend to have a high peak of HPC activity around the product design stage," he says. "But once the product is designed, HPC resources are less utilized during the rest of the product-development cycle." In contrast, he says, "when you have large, long-running jobs, the economics of the cloud wear down." 
-他说:“在制造业中,在产品设计阶段,HPC活动往往会达到很高的峰值。” “但是,一旦产品设计完成,在其余产品开发周期中,HPC资源的利用率就会降低。” 相比之下,他说:“当您拥有大量长期运行的工作时,云的经济就会逐渐减弱。” +他说:“在制造业中,在产品设计阶段,HPC活动往往会达到很高的峰值。但是,一旦产品设计完成,在其余产品开发周期中,HPC资源的利用率就会降低。” 相比之下,他说:“当您拥有大量长期运行的工作时,云的经济性就会逐渐减弱。” -With clever system design, you can integrate those HPC-services bursts of activity with your own in-house conventional computing. Teresa Tung, managing director in Accenture Labs, gives an example: "Accessing HPC via APIs makes it seamless to mix with traditional computing. A traditional AI pipeline might have its training done on a high-end supercomputer at the stage when the model is being developed, but then the resulting trained model that runs predictions over and over would be deployed on other services in the cloud or even devices at the edge." 通过巧妙的系统设计,您可以将这些HPC服务突发活动与您自己的内部常规计算集成在一起。 埃森哲(Accenture)实验室常务董事Teresa Tung举了一个例子:“通过API访问HPC可以无缝地与传统计算混合。在模型构建阶段,传统的AI流水线可能会在高端超级计算机上进行训练,但是最终经过反复按预期运行的训练好的模型将部署在云中的其他服务上,甚至部署在边缘设备上。” -### It's not for all use cases** 它并不适合所有的应用场景 +### 它并不适合所有的应用场景 ** ** -Use of HPC services lends itself to batch-processing and loosely-coupled use cases. That ties into a common HPC downside: data transfer issues. High-performance computing by its very nature often involves huge data sets, and sending all that information over the internet to a cloud service provider is no simple thing. "We have clients I talk to in the biotech industry who spend $10 million a month on just the data charges," says IBM's Turek. HPC服务适合批处理和松耦合的场景。这与HPC的普遍缺点有关:数据传输问题。高性能计算本身通常涉及庞大的数据集,而将所有这些信息通过Internet发送到云服务提供商并不容易。IBM的Turek说:“我们与生物技术行业的客户交流,他们每月仅在数据费用上就花费1000万美元。” -And money isn't the only potential problem. Building a workflow that makes use of your data can challenge you to work around the long times required for data transfer. "When we had our own HPC cluster, local access to the simulation results already produced – and thus an interactive interim evaluation — was of course possible at any time," says hhpberlin's Kilian. "We're currently working on being able to access and evaluate the data produced in the cloud even more efficiently and interactively at any desired time of the simulation without the need to download large amounts of simulation data." -钱并不是唯一的潜在问题。已制定的需要使用数据的工作流可能会使您在数据传输所需的时间内无法工作。hhpberlin的Kilian说:“当我们拥有自己的HPC集群时,当然可以随时访问已经产生的仿真结果,从而进行交互式的临时评估。” “我们目前正努力达到在仿真的任意时刻都可以更高效地,交互地访问和评估云中生成的数据,而无需下载大量的模拟数据。” +钱并不是唯一的潜在问题。已制定的需要使用数据的工作流可能会使您在数据传输所需的时间内无法工作。hhpberlin的Kilian说:“当我们拥有自己的HPC集群时,当然可以随时访问已经产生的仿真结果,从而进行交互式的临时评估。我们目前正努力达到在仿真的任意时刻都可以更高效地,交互地访问和评估云中生成的数据,而无需下载大量的模拟数据。” -Mike Krawczyk cites another stumbling block: compliance issues. Any service a defense contractor uses needs to be complaint with the International Traffic in Arms Regulations (ITAR), and McCormick Stevenson went with Rescale in part because it was the only vendor they found that checked that box. While more do today, any company looking to use cloud services should be aware of the legal and data-protection issues involved in living on someone else's infrastructure, and the sensitive nature of many of HPC's use cases makes this doubly true for HPC as a service. Mike Krawczyk提到了另一个绊脚石:合规性问题。国防承包商使用的任何服务都需要遵从(原文是complaint, 应该是笔误)《国际武器交易条例》(ITAR),麦考密克·史蒂文森(McCormick Stevenson)之所以选择Rescale,部分原因是因为这是他们发现的唯一符合的供应商。如今,尽管有更多的公司(使用云服务),但任何希望使用云服务的公司都应该意识到使用其他人的基础设施时所涉及的法律和数据保护问题,而且许多HPC场景的敏感性使得更HPC即服务的这个问题更加突出。 -In addition, the IT governance that HPC services require goes beyond regulatory needs. 
For instance, you'll need to keep track of whether your software licenses permit cloud use ­– especially with specialized software packages written to run on an on-premises HPC cluster. And in general, you need to keep track of how you use HPC services, which can be a tempting resource, especially if you've transitioned from in-house systems where staff was used to having idle HPC capabilities available. For instance, Ron Gilpin, senior director and Azure Platform Services global lead at Avanade, suggests dialing back how many processing cores you use for tasks that aren't time sensitive. "If a job only needs to be completed in an hour instead of ten minutes," he says, "that might use 165 processors instead of 1,000, a savings of thousands of dollars." 此外,HPC服务所需的IT治理超出了目前的监管范围。例如,您需要跟踪您的软件许可证是否允许云使用­ –尤其是专门为本地HPC群集上运行而编写的软件包。通常,您需要跟踪HPC服务的使用方式,它可能是一个诱人的资源,尤其是当您从员工习惯的内部系统过渡到有可用的空闲的HPC能力时。例如,Avanade全球平台高级主管兼Azure平台服务全球负责人Ron Gilpin建议,回调您用于时间不敏感任务的处理核心数量。他说:“如果一项工作只需要用一小时来完成而不需要在十分钟内就完成,那么它可以使用165个处理器而不是1,000个,从而节省了数千美元。” -### A premium on HPC skills** 独特的HPC技能 +### 独特的HPC技能** ** -One of the biggest barriers to HPC adoption has always been the unique in-house skills it requires, and HPC services don't magically make that barrier vanish. "Many CIOs have migrated a lot of their workloads into the cloud and they have seen cost savings and increased agility and efficiency, and believe that they can achieve similar results in HPC ecosystems," says Gartner's Dekate. "And a common misperception is that they can somehow optimize human resource cost by essentially moving away from system admins and hiring new cloud experts who can solve their HPC workloads." -一直以来,采用HPC的最大障碍之一就是其所需的独特的内部技能,而HPC服务并不能使这种障碍消失。Gartner的Dekate表示:“许多CIO将许多工作负载迁移到了云上,他们看到了成本的节约,敏捷性和效率的提升,因此相信在HPC生态中也可以达成类似的效果。” “一个普遍的误解是,他们可以通过彻底地免去系统管理员,并聘用能解决其HPC工作负载的新的云专家,从而以某种方式优化人力成本。” +一直以来,采用HPC的最大障碍之一就是其所需的独特的内部技能,而HPC服务并不能使这种障碍消失。Gartner的Dekate表示:“许多CIO将许多工作负载迁移到了云上,他们看到了成本的节约,敏捷性和效率的提升,因此相信在HPC生态中也可以达成类似的效果。一个普遍的误解是,他们可以通过彻底地免去系统管理员,并聘用能解决其HPC工作负载的新的云专家,从而以某种方式优化人力成本。” -"But HPC is not one of the main enterprise environments," he says. "You're dealing with high-end compute nodes interconnected with high-bandwidth, low-latency networking stacks, along with incredibly complicated application and middleware stacks. Even the filesystem layers in many cases are unique to HPC environments. Not having the right skills can be destabilizing." -“但是HPC并不是主流的企业环境之一。” 他说。“您正在处理通过高带宽,低延迟的网络互联的高端计算节点,以及相当复杂的应用和中间件技术栈。许多情况下,甚至连文件系统层也是HPC环境所独有的。没有对应的技能可能会破坏稳定性。” +“但是HPC并不是一个主流的企业环境。” 他说。“您正在处理通过高带宽,低延迟的网络互联的高端计算节点,以及相当复杂的应用和中间件技术栈。许多情况下,甚至连文件系统层也是HPC环境所独有的。没有对应的技能可能会破坏稳定性。” -But supercomputing skills are in shortening supply, something Dekate refers to as the workforce "greying," in the wake of a generation of developers going to splashy startups rather than academia or the more staid firms where HPC is in use. As a result, vendors of HPC services are doing what they can to bridge the gap. IBM's Turek says that many HPC vets will always want to roll their own exquisitely fine-tuned code and will need specialized debuggers and other tools to help them do that for the cloud. But even HPC newbies can make calls to code libraries built by vendors to exploit supercomputing's parallel processing. And third-party software providers sell turnkey software packages that abstract away much of HPC's complication. 
但是超级计算技能的供给却在减少,Dekate将其称为劳动力“灰化”,这是因为一代开发人员将目光投向了新兴的初创公司,而不是学术界或使用HPC的更老套的公司。因此,HPC服务供应商正在尽其所能地弥补差距。 IBM的Turek表示,许多HPC老手将总是想运行他们自己精心调整过的代码,将需要专门的调试器和其他工具来帮助他们在云上实现这一目标。但是,即使是HPC新手也可以调用供应商构建的代码库,以利用超级计算的并行处理能力。第三方软件提供商出售的交钥匙软件包可以减少HPC的许多复杂性。 -Accenture's Tung says the sector needs to lean further into this in order to truly prosper. "HPCaaS has created dramatically impactful new capability, but what needs to happen is making this easy to apply for the data scientist, the enterprise architect, or the software developer," she says. "This includes easy to use APIs, documentation, and sample code. It includes user support to answer questions. It’s not enough to provide an API; that API needs to be fit-for-purpose. For a data scientist this should likely be in Python and easily change out for the frameworks she is already using. The value comes from enabling these users who ultimately will have their jobs improved through new efficiencies and performance, if only they can access the new capabilities." If vendors can pull that off, HPC services might truly bring supercomputing to the masses. 埃森哲的Tung表示,该行业需要进一步加大投入才能真正繁荣。她说:“HPCaaS已经创建了具有重大影响力的新功能,但还需要做的是使它易于被数据科学家,企业架构师或软件开发人员使用。这包括易用的API,文档和示例代码。它包括用户支持来解答问题。仅仅提供API是不够的,API需要适合特定的用途。对于数据科学家而言,这可能是以python形式提供,并容易更换她已经在使用的框架。其价值来自使这些用户最综只有在使用新功能时才能够改进效率和性能。” 如果供应商能够做到这一点,那么HPC服务才能真正将超级计算带给大众。 -Join the Network World communities on [Facebook][3] and [LinkedIn][4] to comment on topics that are top of mind. 加入[Facebook][3]和[LinkedIn][4]上的Network World社区,探讨最前沿的话题。 -------------------------------------------------------------------------------- From f087cd01bb522381699f6168319653c51b3983f3 Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Sat, 2 May 2020 10:17:29 +0800 Subject: [PATCH 089/178] Done --- ...e ins and outs of high-performance computing as a service.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) rename {sources => translated}/talk/20200401 The ins and outs of high-performance computing as a service.md (97%) diff --git a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md b/translated/talk/20200401 The ins and outs of high-performance computing as a service.md similarity index 97% rename from sources/talk/20200401 The ins and outs of high-performance computing as a service.md rename to translated/talk/20200401 The ins and outs of high-performance computing as a service.md index df2c6debd3..948d985765 100644 --- a/sources/talk/20200401 The ins and outs of high-performance computing as a service.md +++ b/translated/talk/20200401 The ins and outs of high-performance computing as a service.md @@ -17,7 +17,7 @@ 几年前的一天,它出乎意料地超出了计算极限。 [世界上最快的10个超级计算机][1] -麦考密克·史蒂文森(McCormick Stevenson)的首席工程师迈克·克劳奇奇(Mike Krawczyk)说:“我们的一些工作会使办公室的计算机不堪重负。” “购买机器并安装软件在经济上或计划上都不划算。” 相反,该公司与Rescale签约,从其购买在超级计算机系统上运行的周期(cycles),而这只花费了他们购买新硬件上所需的一小部分。 +麦考密克·史蒂文森(McCormick Stevenson)的首席工程师迈克·克劳奇奇(Mike Krawczyk)说:“我们的一些工作会使办公室的计算机不堪重负。购买机器并安装软件在经济上或计划上都不划算。” 相反,该公司与Rescale签约,从其购买在超级计算机系统上运行的周期(cycles),而这只花费了他们购买新硬件上所需的一小部分。 麦考密克·史蒂文森(McCormick Stevenson)已成为被称为超级计算即服务或高性能计算(HPC)即服务(两个紧密相关的术语)市场的早期采用者之一。根据国家计算科学研究所(的定义),HPC是超级计算机在计算复杂问题上的应用,而超级计算机是处理能力最先进的那些计算机。 From 1eed92f4ad672398553c5375dba27f35e63d39d5 Mon Sep 17 00:00:00 2001 From: Brooke Lau Date: Sat, 2 May 2020 11:48:40 +0800 Subject: [PATCH 090/178] TSL 20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode --- ...OS-RHEL 7-8 Systems in Single User Mode.md | 159 ------------------ ...OS-RHEL 7-8 Systems in Single User Mode.md | 159 
++++++++++++++++++ 2 files changed, 159 insertions(+), 159 deletions(-) delete mode 100644 sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md create mode 100644 translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md diff --git a/sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md b/sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md deleted file mode 100644 index 5e2fb84c2b..0000000000 --- a/sources/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md +++ /dev/null @@ -1,159 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (lxbwolf) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Three Methods Boot CentOS/RHEL 7/8 Systems in Single User Mode) -[#]: via: (https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/) -[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) - -Three Methods Boot CentOS/RHEL 7/8 Systems in Single User Mode -====== - -Single user mode, also referred to as maintenance mode, which allows a single super user to recover/repair system problems. - -Generally, these problems cannot be solved in a multi-user environment. The system can boot but will not function properly or you will not be able to log in. - -It uses `runlevel1.target` or `rescue.target` on **[Red Hat][1]** (RHEL) 7/8 based systems. - -In this mode, the system mount all local file systems, but does not activate network interfaces. - -It only enables certain services and minimal functionality to repair the system. - -This method is mostly useful when you want to run fsck to fix corrupted file systems, or to reset a forgotten root password, or to fix a mount point issue on the system. - -You can boot **[CentOS][2]**/**[RHEL][3]** 7/8 systems in single user mode using the below three methods. - - * **Method-1:** Boot CentOS/RHEL 7/8 systems in single user mode by adding the “rd.break” parameter to the kernel - * **Method-2:** Boot CentOS/RHEL 7/8 systems in single user mode by replacing the “rhgb quiet” word with the “init=/bin/bash or init=/bin/sh” parameter in the kernel - * **Method-3:** Boot CentOS/RHEL 7/8 systems in single user mode by replacing the “ro” word with the “rw init=/sysroot/bin/sh” parameter in the kernel - - - -### Method-1: Boot CentOS/RHEL 7/8 systems in single user mode by adding the “rd.break” parameter to the kernel - -Reboot your system, on the GRUB2 boot screen, press the `"e"` key to edit the selected kernel. You need to select the first line, the first one is the latest kernel whereas you can select the different one if you would like to boot your system with the older kernel. - -![][4] - -Depending on your RHEL/CentOS version, find the word **“linux16”** or **“linux”**, press the “End” button on the keyboard, go to the end of the line, and add the keyword **“rd.break”** as shown below in the screenshot, then press **“Ctrl+x”** or **“F10”** to boot into single-user mode. - -You need to find the word **`linux16`** for RHEL/CentOS 7 systems, while **`linux`** for RHEL/CentOS 8 systems. - -![][4] - -This change mount your root file system into **“read only (RO)”** mode. You can check this by running the command below. Also, the output below clearly shows that you are in **“Emergency Mode”**. - -``` -# mount | grep root -``` - -![][4] - -To make changes to the **“sysroot”** file system you need to remount it with READ and WRITE (RW) mode. 
- -``` -# mount -o remount,rw /sysroot -``` - -Run the below command to change the environment, commonly known as “jailed directory” or “chroot jail”. - -``` -# chroot /sysroot -``` - -![][4] - -Now, single-user mode is completely ready for use. Once you have fixed your problem to exit single user mode, perform the following steps. - -CentOS/RHEL 7/8 uses SELinux by default, so create the following hidden file, which will automatically perform a relabel of all files on next boot. - -``` -# touch /.autorelabel -``` - -Finally, run the below command to restart the system. Alternatively, type “exit” command twice to restart your system. - -``` -# reboot -f -``` - -### Method-2: Boot CentOS/RHEL 7/8 systems in single user mode by replacing the “rhgb quiet” word with the “init=/bin/bash or init=/bin/sh” parameters in the kernel - -Reboot your system, on the GRUB2 boot screen, press the `"e"` key to edit the selected kernel parameters. - -![][4] - -Find the word **“rhgb quiet”** and replace it with **“init=/bin/bash”** or **“init=/bin/sh”**, then press **“Ctrl+x”** or **“F10”** to boot in single user mode. - -Screenshot for **`init=/bin/bash`**. - -![][4] - -Screenshot for **`init=/bin/sh`**. - -![][4] - -By default, this will mount your “/” partition in read-only (RO) mode, so you will need to remount the “/” file system with READ and WRITE (RW) mode to make changes. - -``` -# mount -o remount,rw / -``` - -![][4] - -You can now perform any task that you want. When you are done, run the following command to enable SELinux relabeling on reboot. - -``` -# touch /.autorelabel -``` - -Finally reboot the system. - -``` -# exec /sbin/init 6 -``` - -### Method-3: Boot CentOS/RHEL 7/8 systems in single user mode by replacing the “ro” word with the “rw init=/sysroot/bin/sh” parameter in the kernel - -To interrupt the automatic boot, reboot your system and press any key on the GRUB2 splash screen. - -This will display the list of kernels available on your system and select the latest kernel and press the **`"e"`** key to edit the selected kernel parameters. - -Find the line that starts with the word **“linux”** or **“linux16”** and replace **“ro”** with **“rw init=/sysroot/bin/sh”**. When finished, press **“Ctrl+x”** or **“F10”** to boot in single user mode. - -Change the environment to “chroot jail” by running the below command. - -``` -# chroot /sysroot -``` - -Make any necessary changes to the system. Once done, run the below command to enable SELinux relabeling on reboot. - -``` -# touch /.autorelabel -``` - -Finally reboot the system. 
- -``` -# reboot -f -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/ - -作者:[Magesh Maruthamuthu][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.2daygeek.com/author/magesh/ -[b]: https://github.com/lujun9972 -[1]: https://www.2daygeek.com/category/red-hat/ -[2]: https://www.2daygeek.com/category/centos/ -[3]: https://www.2daygeek.com/category/rhel/ -[4]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 diff --git a/translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md b/translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md new file mode 100644 index 0000000000..c5eaa1dc84 --- /dev/null +++ b/translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md @@ -0,0 +1,159 @@ +[#]: collector: (lujun9972) +[#]: translator: (lxbwolf) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Three Methods Boot CentOS/RHEL 7/8 Systems in Single User Mode) +[#]: via: (https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +在单用户模式下启动 CentOS/RHEL 7/8 的三种方法 +====== + +单用户模式,也被称为维护模式,超级用户可以在此模式下恢复/修复系统问题。 + +通常情况下,这些问题在多用户环境中修复不了。系统可以启动但功能不能正常运行或者你登录不了系统。 + +在基于 **[Red Hat][1]** (RHEL) 7/8 的系统中,使用 `runlevel1.target` 或 `rescue.target` 来实现。 + +在此模式下,系统会挂载所有的本地文件系统,但不开启网络接口。 + +系统仅启动特定的几个服务和修复系统必要的尽可能少的功能。 + +当你想运行文件系统一致性检查来修复损坏的文件系统,或忘记 root 密码后重置密码,或修复系统上的一个挂载点问题时,这个方法会很有用。 + +你可以用下面三种方法以单用户模式启动 **[CentOS][2]**/**[RHEL][3]** 7/8 系统。 + + * **方法 1:** 通过向内核添加 “rd.break” 参数来以单用户模式启动 CentOS/RHEL 7/8 系统 + * **方法 2:** 通过用 “init=/bin/bash“ 或 ”init=/bin/sh” 替换内核中的 “rhgb quiet” 语句来以单用户模式启动 CentOS/RHEL 7/8 系统 + * **方法 3:** 通过用 “rw init=/sysroot/bin/sh” 参数替换内核中的 “ro” 语句以单用户模式启动 CentOS/RHEL 7/8 系统 + + + +### 方法 1: 通过向内核添加 “rd.break” 参数来以单用户模式启动 CentOS/RHEL 7/8 系统 + +重启你的系统,在 GRUB2 启动界面,按下 `e` 键来编辑选中的内核。你需要选中第一行,第一个是最新的内核,然而如果你想用旧的内核启动系统你也可以选择其他的行。 + +![][4] + +根据你的 RHEL/CentOS 版本,找到 **“linux16”** 或 **“linux”** 语句,按下键盘上的 ”End“ 按钮,跳到行末,像下面截图中展示的那样添加关键词 **“rd.break”**,按下 **“Ctrl+x”** 或 **“F10”** 来进入单用户模式。 + +如果你的系统是 RHEL/CentOS 7,你需要找 **`linux16`**,如果你的系统是 RHEL/CentOS 8,那么你需要找 **`linux`**。 + +![][4] + +这个修改会让你的 root 文件系统以 **“只读 (RO)”** 模式挂载。你可以用下面的命令来验证下。下面的输出也明确地告诉你当前是在 **“紧急模式”**。 + +``` +# mount | grep root +``` + +![][4] + +为了修改 **“sysroot”** 文件系统,你需要用 RW 模式重新挂载它。 + +``` +# mount -o remount,rw /sysroot +``` + +运行下面的命令修改环境,这就是大家熟知的 “jailed directory” 或 “chroot jail”。 + +``` +# chroot /sysroot +``` + +![][4] + +现在,单用户模式的前期准备已经完成了。当你修复了你的问题要退出单用户模式时,执行下面的步骤。 + +CentOS/RHEL 7/8 默认使用 SELinux,因此创建下面的隐藏文件,这个文件会在下一次启动时重新确认所有文件。 + +``` +# touch /.autorelabel +``` + +最后,用下面的命令重启系统。你也可以输入两次 “exit” 命令来重启你的系统。 + +``` +# reboot -f +``` + +### 方法 2: 通过用 “init=/bin/bash“ 或 ”init=/bin/sh” 替换内核中的 “rhgb quiet” 语句来以单用户模式启动 CentOS/RHEL 7/8 系统 + +重启你的系统,在 GRUB2 启动界面,按下 `e` 键来编辑选中的内核。 + +![][4] + +找到语句 **“rhgb quiet”**,用 **“init=/bin/bash”** 或 **“init=/bin/sh”** 替换它,然后按下 **“Ctrl+x”** 或 **“F10”** 来进入单用户模式。 + +**`init=/bin/bash`** 的截图。 + +![][4] + +**`init=/bin/sh`** 的截图。 + +![][4] + +默认情况下,上面的操作会以只读(RO)模式挂载你的 “/” 分区,因此你需要以读写(RW)模式重新挂载 “/” 文件系统,这样才能修改它。 + +``` +# mount -o remount,rw / +``` + +![][4] + 
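+下面用一个最简单的示意举例说明接下来可以做的事情。这里假设你进入单用户模式是为了重置忘记的 root 密码(即文章开头提到的常见场景之一),这只是一个示意,请根据你自己的修复任务调整:
+
+```
+# 说明:以下只是一个假设的示例任务(重置忘记的 root 密码),请按需替换
+findmnt -no OPTIONS /   # 确认 “/” 已重新挂载为读写,输出中应包含 rw
+passwd root             # 为 root 设置新密码
+```
+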
+现在你可以执行你的任务了。当结束时,执行下面的命令来开启重启时的 SELinux 重新确认。 + +``` +# touch /.autorelabel +``` + +最后,重启系统。 + +``` +# exec /sbin/init 6 +``` + +### 方法 3: 通过用 “rw init=/sysroot/bin/sh” 参数替换内核中的 “ro” 语句以单用户模式启动 CentOS/RHEL 7/8 系统 + +为了中断自动启动的过程,重启你的系统并在 GRUB2 启动界面按下任意键。 + +现在会展示你系统上所有可用的内核,选择最新的内核,按下 `e` 键来编辑选中的内核参数。 + +找到以 **“linux”** 或 **“linux16”** 开头的语句,用 **“rw init=/sysroot/bin/sh”** 替换 **“ro”**。替换完后按下 **“Ctrl+x”** 或 **“F10”** 来进入单用户模式。 + +运行下面的命令把环境切换为 “chroot jail”。 + +``` +# chroot /sysroot +``` + +如果需要,做出必要的修改。修改完后,执行下面的命令来开启重启时的 SELinux 重新确认。 + +``` +# touch /.autorelabel +``` + +最后,重启系统。 + +``` +# reboot -f +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[lxbwolf](https://github.com/lxbwolf) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/red-hat/ +[2]: https://www.2daygeek.com/category/centos/ +[3]: https://www.2daygeek.com/category/rhel/ +[4]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 From 4595af0d9587cfd3a9b133598a0dbb66ce1c54d1 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 2 May 2020 12:49:09 +0800 Subject: [PATCH 091/178] APL --- ...04 Review- Best Ubuntu-based Distribution Just Got Better.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md b/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md index 760307eeb1..acba85fe58 100644 --- a/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md +++ b/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 841f19a2d63a81b2bc303fb896b6cdeeab5bc8d3 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 2 May 2020 20:14:02 +0800 Subject: [PATCH 092/178] TSL&PRF --- ...untu-based Distribution Just Got Better.md | 168 +++++++++--------- 1 file changed, 82 insertions(+), 86 deletions(-) diff --git a/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md b/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md index acba85fe58..1760e313d1 100644 --- a/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md +++ b/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md @@ -7,188 +7,184 @@ [#]: via: (https://itsfoss.com/pop-os-20-04-review/) [#]: author: (Ankush Das https://itsfoss.com/author/ankush/) -Pop OS 20.04 Review: Best Ubuntu-based Distribution Just Got Better +Pop!_OS 20.04 点评:最好的基于 Ubuntu 的发行版越来越好了 ====== -_**Brief: Pop OS 20.04 is an impressive Linux distribution based on Ubuntu. I review the major new features in this review and share my experience with the latest release.**_ +> Pop!_OS 20.04 是一款令人印象深刻的基于 Ubuntu 的 Linux 发行版。我在这篇评论中回顾了其主要的新功能,并分享了我对最新版本的体验。 -Now that Ubuntu 20.04 LTS and its official flavours are here – it’s time to take a look at one of best Ubuntu-based distro i.e Pop!_OS 20.04 by [System76][1]. 
+现在,Ubuntu 20.04 LTS 及其官方变体版本已经发布了 - 是时候看看 [System76][1] 的 Pop!_OS 20.04 了,这是基于 Ubuntu 的最好的发行版之一。 -To be honest, Pop!_OS is my favorite Linux distro that I primarily use for everything I do. +老实说,Pop!_OS 是我最喜欢的 Linux 发行版,主要用于我做的所有事情。 -Now that Pop!_OS 20.04 has finally arrived. It’s time to take a look at what it offers and whether you should upgrade or not? +现在,Pop!_OS 20.04 终于来了。是时候来看看它提供了哪些功能,以及你是否应该升级? -### What’s New In Pop!_OS 20.04 LTS? +### Pop!_OS 20.04 LTS 中有什么新东西? ![][2] -Visually, Pop!_OS 20.04 LTS isn’t really very different from Pop!_OS 19.10. However, you can find several new features and improvements. +从视觉上看,Pop!\_OS 20.04 LTS 与 Pop!\_OS 19.10 并没有太大的区别。然而,你可以发现几个新功能和改进。 -But, if you were using **Pop!_OS 18.04 LTS**, you have a lot of things to try. +但是,如果你之前使用的是 Pop!_OS 18.04 LTS,则可以发现有很多东西可以尝试。 -With [GNOME 3.36][3] onboard along with some newly added features, Pop!_OS 20.04 is an exciting release. +随着 [GNOME 3.36][3] 的到来,及其带来的一些新功能,Pop!_OS 20.04 成为了一个令人激动的版本。 -Overall, to give you an overview here are some key highlights: +总的来说,以下是一些主要的亮点。 - * Automatic Window Tiling - * New Application Switcher and Launcher - * Flatpack support added in Pop!_Shop + * 自动窗口平铺 + * 新的应用程序切换器和启动器 + * 在 Pop!_Shop 中增加了对 Flatpack 的支持。 * GNOME 3.36 - * Linux Kernel 5.4 - * Improved hybrid graphics support + * Linux 内核 5.4 + * 改进的混合图形支持 +虽然听起来很有趣,但我们还是来了解一下详细的变化,以及到目前为止 Pop!_OS 20.04 的体验如何。 +#### Pop!_OS 20.04 中的用户体验提升 -While this sounds fun, let us take a look at a detailed look on what has changed and how’s the experience of Pop!_OS 20.04 so far. +毫无疑问,很多 Linux 发行版都提供了开箱即用的用户体验。同样的,[Ubuntu 20.04 LTS 也有一流的改进和功能][4]。 -### User Experience Improvements in Pop OS 20.04 +而对于 System76 的 Pop!_OS,他们总是试图更进一步。并且,大多数新功能旨在通过提供有用的功能来改善用户体验。 -Undoubtedly, a lot of Linux distros offer a pleasant user experience out of the box. Likewise, [Ubuntu 20.04 LTS has had top-notch improvements and features][4] as well. +在这里,我将介绍一些改进,其中包括 [GNOME 3.36][3] 和 Pop!_OS 特有的一些功能。 -And, when it comes to Pop!_OS by System 76, they always try to go a mile further. And, the majority of new features aim to improve the user experience by providing useful functionalities. +#### 支持系统托盘图标 -Here, I’m going to take a look at some of the improvements that include [GNOME 3.36][3] and Pop!_OS-specific features. - -#### Support For System Tray Icons - -Finally! This may not be a big change – but Pop!_OS did not have the support for system tray icons (or applet icons). +总算是有了!这可能不是什么大的改变 —— 但 Pop!_OS 以前没有支持系统托盘图标(或小程序图标)。 ![][5] -With 20.04 LTS release, it’s here by default. No need of any extension. +随着 20.04 LTS 的发布,默认情况就有了系统托盘,不需要任何扩展。 -There may not be a whole lot of programs depending on system tray icons – but it is still something important to have. +依靠系统托盘图标的程序可能并不多 —— 但它仍然是重要的东西。 -In my case, I wasn’t able to use [ActivityWatch][6] on Pop!_OS 19.10 – but now I can. +就我而言,我以前无法在 Pop!_OS 19.10 上使用 [ActivityWatch][6] —— 但现在可以了。 -#### Automatic Window Tiling +#### 自动窗口平铺 ![][7] -**Automatic Window Tiling** is something I always wanted to try – but never invested any time to set it up using a [tiling window manager][8] like [i3][9], not even with [Regolith Desktop][10]. +自动窗口平铺是我一直想尝试的东西 —— 但从来没花时间使用过 [i3][9] 这样的[平铺窗口管理器][8]来设置它,更别说是 [Regolith 桌面][10]了。 -With Pop!_OS 20.04, you don’t need to do that anyway. The automatic window tiling feature comes baked in without needing you to set it up. 
+在 Pop!_OS 20.04 中,你就不需要这样做了。自动窗口平铺功能已经内置,无需设置。 -It also features an option to **Show Active Hint** i.e it will highlight the active window to avoid confusion. And, you can also adjust the gap between the windows. +它还提供了“显示活动提示”的选项,也就是说,它将高亮显示活动窗口以避免混淆。而且,你还可以调整窗口之间的间隙。 ![][11] -You can see it in action in their official video: +你可以在他们的官方视频中看到它是如何工作的: -[Subscribe to our YouTube channel for more Linux videos][12] +- [System76 Pop!_OS 20.04 - Auto Tiling](https://youtu.be/-fltwBKsMY0) -And, I must say that it is one of the biggest additions on Pop!_OS 20.04 that could potentially help you multi-task more efficiently. +而且,我得说,这是 Pop!_OS 20.04 上最大的新增功能之一,有可能帮助你更有效地进行多任务处理。 -Even though the feature comes in handy everytime you use it. To make the most out of it, a display screen bigger than 21-inches (at least) should be the best way to go! And, for this reason – I’m really tempted to upgrade my monitor as well! +即使每次使用该功能都很方便,但为了最大程度地利用它,最好是使用一个大于 21 英寸的显示屏(至少)! 而且,因为这个原因 —— 我真的很想把我的显示器也升级一下! -#### New Extensions App +#### 新的扩展应用 ![][13] -Pop!_OS comes baked in with some unique GNOME extensions. But, you don’t need GNOME Tweaks the manage the extension anymore. +Pop!_OS 内置了一些独特的 GNOME 扩展。但是,你不需要用 GNOME Tweaks 来管理扩展。 -The newly added **Extensions** app lets you configure and manage the extensions on Pop!_OS 20.04. +新增加的 “Extensions” 应用可以让你在 Pop!_OS 20.04 上配置和管理扩展程序。 -#### Improved Notification Center +#### 改进的通知中心 ![][14] -With the new GNOME 3.36 release, the notification center includes a revamped look. Here, I have the dark mode enabled. +在新的 GNOME 3.36 中,通知中心的外观经过了改进。这里,我启用了黑暗模式。 -#### New Application Switcher & Launcher +#### 新的应用程序切换器 & 启动器 ![][15] -You can still **ALT+TAB** or **Super key + TAB** to go through the running applications. +你仍然可以用 `ALT+TAB` 或 `Super+TAB` 来浏览正在运行的应用程序。 -But, that’s time-consuming when you have a lot of things going on. So, on Pop!_OS 20.04, you get an application switcher and launcher which you can activate using **Super key + /** +但是,当你有很多事情要做的时候,这很耗时。所以,在 Pop!_OS 20.04上,你可以使用 `Super+ /` 激活应用程序切换器和启动器。 -Once you get used to the keyboard shortcut, it will be very convenient thing to have. +一旦你习惯了这个快捷键,它将是非常方便的东西。 -In addition to this, you may find numerous other subtle improvements visually with the icons/windows on Pop!_OS 20.04. +除此以外,你可能会发现 Pop!_OS 20.04 上的图标/窗口在视觉上有许多其它细微的改进。 -#### New Login Screen +#### 新的登录界面 -Well, with GNOME 3.36, it’s an obvious change. But, it does look good! +嗯,这是 GNOME 3.36 带来的一个明显的变化。但是,它看起来确实很不错! ![][16] -### Flatpak Support on Pop!_Shop +#### Pop!_Shop 支持 Flatpak -Normally, Pop!_Shop is already something useful with a huge repository along with [Pop!_OS’s own repositories.][17] +通常,Pop!_Shop 已经是一个非常有用的东西了,包括它自有的在内,它带有一个巨大的软件仓库。 -Now, with Pop!_OS 20.04, you can choose to install either Flatpak (via Flathub) or the Debian package of any available software on Pop!_Shop. Of course, only if a Flatpak package exists for the particular software. +现在,在 Pop!\_OS 20.04 中,你可以用 Pop!_Shop 安装任何可用软件的 Debian 包或 Flatpak(通过 Flathub) —— 当然,前提是某个软件有 Flatpak 软件包。 -You might want to check [how to use Flatpak on Linux][18] if you don’t have Pop!_OS 20.04. +如果你没有使用 Pop!_OS 20.04,你可能要看看[如何在 Linux 上使用 Flatpak][18]。 ![][19] -Personally, I’m not a fan of Flatpak but some applications like GIMP requires you to install the Flatpak package to get the latest version. So, it is definitely a good thing to have the support for Flatpak on Pop!_Shop baked right into it. 
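For readers who have not used Flatpak outside of Pop!_Shop, the command-line workflow behind what the paragraph above describes looks roughly like the following sketch; the Flathub remote URL and the GIMP application ID are illustrative examples only, not anything the article itself specifies:

```
# One-time setup: register the Flathub remote, then install and run an app from it.
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gimp.GIMP
flatpak run org.gimp.GIMP
```

Choosing the Flatpak source for a package in Pop!_Shop corresponds to the install step shown here.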
+就我个人而言,我并不是 Flatpak 的粉丝,但有些应用如 GIMP 需要你安装 Flatpak 包才能获得最新版本。所以,在 Pop!_Shop 上直接支持了 Flatpak 绝对是一件好事。 -### Keyboard Shortcut Changes +#### 键盘快捷键更改 -This can be annoying if you’re comfortable with the existing keyboard shortcuts on Pop!_OS 19.10 or older. +如果你习惯了 Pop!_OS 19.10 或更早的版本上现有的键盘快捷键,这可能会让你很烦。 -In either case, there are a few important keyboard shortcut changes to potentially improve your experience, here they are: +不管是哪种情况,有几个重要的键盘快捷键变化可能会改善你的体验,如下: - * Lock Screen: **Super + L** _changed to_ **Super + Escape** - * Move Workspace: **Super + Up/Down Arrow** _changed to_ **Super + CTRL + Up/Down Arrow** - * Close Window: **Super + W** _changed_ to **Super + Q** - * Toggle Maximize: **Super + Up Arrow** _changed to_ **Super + M** + * 锁定屏幕:`Super + L` 改为 `Super + Escape`。 + * 移动工作区:`Super + 上/下箭头键` 改为 `Super + CTRL + 上/下箭头键`。 + * 关闭窗口:`Super + W` 变更为 `Super + Q`。 + * 切换最大化:`Super +向上箭头` 改为 `Super + M`。 +#### Linux 内核 5.4 +与其他大多数最新的 Linux 发行版相似,Pop!_OS 20.04 搭载了 [Linux 内核 5.4][20]。 -### Linux Kernel 5.4 +所以,很明显,你可以期望获得对 [exFAT 支持][21]、改进的 AMD 图形兼容性以及它附带所有其他功能。 -Similar to most of the other latest Linux distros, Pop!_OS 20.04 comes loaded with [Linux Kernel 5.4][20]. +#### 性能提升 -So, obviously, you can expect the [exFAT support][21] and an improved AMD graphics compatibility along with all the other features that come with it. +尽管 Pop!_OS 并不称自己是轻量级的 Linux 发行版,但它仍然是一个资源节约型的发行版。而且,有了 GNOME 3.36 的支持,它的速度应该足够快了。 -### Performance Improvements - -Even though Pop!_OS doesn’t pitch itself as a lightweight Linux distro, it is still a resource-efficient distro. And, with GNOME 3.36 onboard, it should be fast enough. - -Considering that I’ve been using Pop!_OS as my primary distro for about a year, I’ve never had any performance issues. And, this is how the resource usage will probably look like (depending on your system configuration) after you install Pop!_OS 20.04. +考虑到我已经将 Pop!\_OS 作为主要发行版使用已经一年多了,我从来没有遇到过性能问题。这就是你安装了 Pop!_OS 20.04 之后的资源使用情况(取决于你的系统配置)。 ![][22] -To give you an idea, my desktop configuration involves an i5-7400 processor, 16 GB RAM (2400 MHz), NVIDIA GTX 1050ti graphics card, and an SSD. +给你一个作为参考,我的台式机配置包括 i5-7400 处理器、16GB 内存(2400MHz)、NVIDIA GTX 1050ti 显卡和 SSD。 -I’m not really a fan of system benchmarks because it does not really give you the idea of how a specific application or a game would perform unless you try it. +我不是一个系统基准测试的忠实拥护者,因为除非你去尝试,否则它并不能让你知道特定的应用或游戏的性能。 -You can try the [Phoronix Test Suite][23] to analyze how your system performs. But, Pop!_OS 20.04 LTSshould be a snappy experience! +你可以试试 [Phoronix 测试套件][23]来分析你的系统表现。但是,Pop!_OS 20.04 LTS 应该是一个很爽快的体验! -### Package Updates & Other Improvements +#### 软件包更新 & 其他改进 -While every Ubuntu-based distro benefits from the [improvements in Ubuntu 20.04 LTS][4], there are some Pop OS specific bug fixes and improvements as well. +尽管每个基于Ubuntu的发行版都受益于Ubuntu 20.04 LTS的改进,但也有一些 Pop!_OS 特有的错误修复和改进。 -In addition to it, some major apps/packages like **Firefox 75.0** have been updated to their latest version. +除此之外,一些主要的应用程序/包(如 Firefox 75.0)也已经更新到了最新版本。 -As of now, there should be no critical bugs present and at least none for me. +到现在为止,应该没有任何严重的错误,至少对我来说没有。 -You can check out their [development progress on GitHub][24] to check the details of issues they’ve already fixed during the beta testing and the issues they will be fixing right after the release. 
+你可以在 [GitHub 上查看他们的开发进度][24],以了解他们在测试期间已经修复的问题和发布后即将修复的问题。 -### Download & Support Pop!_OS 20.04 +### 下载 & 支持 Pop!_OS 20.04 ![][25] -With this release, System76 has finally added a subscription model (optional) to support Pop!_OS development. +在这个版本中,System76 终于增加了一个可选的订阅模式来支持 Pop!_OS 的开发。 -You can download **Pop!_OS 20.04** for free – but if you want to support them I’d suggest you go for the subscription with just **$1/month**. +你可以免费下载 Pop!_OS 20.04 —— 但如果你想支持他们,我建议你只需要 \$1/月就可以订阅。 -[Pop!_OS 20.04][26] +- [Pop!_OS 20.04][26] -### My Thoughts on Pop OS 20.04 +### 我对 Pop OS 20.04 的看法 -I must mention that I was rooting for a fresh new wallpaper with the latest 20.04 release. But, that’s not a big deal. +我必须提到的是,我正在为最新的 20.04 版本提供全新的墙纸。但是,这没什么大不了的。 -With the window tiling feature, flatpak support, and numerous other improvements, my experience with Pop!_OS 20.04 has been top-notch so far. Also, it’s great to see that they are highlighting their focus on creative professionals with out-of-the-box support for some popular software. +有了窗口平铺功能、支持 flatpak,以及众多其他改进,到目前为止,我对 Pop!_OS 20.04 的体验是一流的。另外,很高兴看到他们在一些流行软件的开箱即用支持上突出了他们对创意专业人士的关注。 ![][27] -All the good things about Ubuntu 20.04 and some extra toppings on it by System76, I’m impressed! +Ubuntu 20.04 的所有优点,再加上 System76 的一些额外的加料,让我印象深刻! -_**Have you tried the Pop!_OS 20.04 yet? Let me know your thoughts in the comments below.**_ +你试过 Pop!_OS 20.04 吗?请在下面的评论中告诉我你的想法。 -------------------------------------------------------------------------------- @@ -196,8 +192,8 @@ via: https://itsfoss.com/pop-os-20-04-review/ 作者:[Ankush Das][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From bc9b9b48abfb8de73106ba25975db4b800ce1fbf Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 2 May 2020 20:16:44 +0800 Subject: [PATCH 093/178] PRF --- ...eview- Best Ubuntu-based Distribution Just Got Better.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) rename {sources => translated}/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md (99%) diff --git a/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md b/translated/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md similarity index 99% rename from sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md rename to translated/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md index 1760e313d1..731c14fa0b 100644 --- a/sources/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md +++ b/translated/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12175-1.html) [#]: subject: (Pop OS 20.04 Review: Best Ubuntu-based Distribution Just Got Better) [#]: via: (https://itsfoss.com/pop-os-20-04-review/) [#]: author: (Ankush Das https://itsfoss.com/author/ankush/) From c8edffef9b6330dfe962050897db8b12d699ce1a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 2 May 2020 20:18:30 +0800 Subject: [PATCH 094/178] PUB @wxy https://linux.cn/article-12175-1.html --- ...0.04 
Review- Best Ubuntu-based Distribution Just Got Better.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {translated/tech => published}/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md (100%) diff --git a/translated/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md b/published/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md similarity index 100% rename from translated/tech/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md rename to published/20200502 Pop OS 20.04 Review- Best Ubuntu-based Distribution Just Got Better.md From 2178dc85001508a499ad94941a34e5c53617ed26 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 2 May 2020 20:53:34 +0800 Subject: [PATCH 095/178] =?UTF-8?q?=E5=BD=92=E6=A1=A3=20202004?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../20190107 Different Ways To Update Linux Kernel For Ubuntu.md | 0 published/{ => 202004}/20190116 Best Audio Editors For Linux.md | 0 ... Installing Kali Linux on VirtualBox- Quickest - Safest Way.md | 0 .../{ => 202004}/20190429 10 moments that shaped Linux history.md | 0 .../20190523 Run your blog on GitHub Pages with Python.md | 0 published/{ => 202004}/20190605 What is GraphQL.md | 0 published/{ => 202004}/20190612 How to write a loop in Bash.md | 0 published/{ => 202004}/20190612 Why use GraphQL.md | 0 published/{ => 202004}/20190712 What is Silverblue.md | 0 .../20190814 9 open source cloud native projects to consider.md | 0 .../{ => 202004}/20190822 How the Linux desktop has grown.md | 0 .../20191014 How to make a Halloween lantern with Inkscape.md | 0 ...191209 Use the Fluxbox Linux desktop as your window manager.md | 0 .../20191216 Relive Linux history with the ROX desktop.md | 0 ... Difference Between DNF and YUM, Why is Yum Replaced by DNF.md | 0 .../20191224 Why your Python code should be flat and sparse.md | 0 published/{ => 202004}/20200211 Navigating man pages in Linux.md | 0 ...Manage complex Git workspaces with Great Teeming Workspaces.md | 0 .../{ => 202004}/20200228 Getting started with Linux firewalls.md | 0 .../{ => 202004}/20200309 Fish - A Friendly Interactive Shell.md | 0 .../20200311 Directing Kubernetes traffic with Traefik.md | 0 .../20200311 What you need to know about variables in Emacs.md | 0 published/{ => 202004}/20200312 Make SSL certs easy with k3s.md | 0 .../20200313 What is the internet backbone and how it works.md | 0 ...20200320 Build a private social network with a Raspberry Pi.md | 0 .../20200320 Control the firewall at the command line.md | 0 ...ow to Check Password Expiration Date for All Users on Linux.md | 0 .../{ => 202004}/20200323 Don-t love diff- Use Meld instead.md | 0 published/{ => 202004}/20200325 Linux firewall basics with ufw.md | 0 .../20200326 3 open source tools for sticking to a budget.md | 0 ...d a private chat server with a Raspberry Pi and Rocket.Chat.md | 0 ...tall Microsoft TrueType Fonts on Ubuntu-based Distributions.md | 0 ...Oracle Announces Java 14- How to Install it on Ubuntu Linux.md | 0 ...The Keyring Concept in Ubuntu- What is It and How to Use it.md | 0 ...n Your Regular TV into a Smart TV With KDE Plasma Bigscreen.md | 0 ...20200330 Using data from spreadsheets in Fedora with Python.md | 0 .../{ => 202004}/20200331 5 ways to level up your Vim skills.md | 0 ...200401 How to Find Which Graphics Card do You Have in Linux.md | 0 ... 
Association Launches an Open Source Collaboration Platform.md | 0 ...0402 How to Upgrade to Ubuntu 20.04 Beta from 18.04 - 19.10.md | 0 .../20200403 Scheduling tasks on Linux using the at command.md | 0 .../{ => 202004}/20200403 Take back your dotfiles with Chezmoi.md | 0 ...odhi Linux 5.1 Review- Slightly Different Lightweight Linux.md | 0 ... Repository (AUR)- How to Use AUR on Arch and Manjaro Linux.md | 0 ... 15 years of Git- How to get started or learn something new.md | 0 .../20200407 Bitwarden- A Free - Open Source Password Manager.md | 0 ...ion UbuntuDDE Brings The Beautiful Deepin Desktop to Ubuntu.md | 0 .../20200408 Create web tutorials with Reveal.js and Git.md | 0 ...lates in LibreOffice to Save Time and Increase Productivity.md | 0 ...e-s How to Find Out Which Desktop Environment You are Using.md | 0 ...-m using AI to translate -wash your hands- in 500 languages.md | 0 .../20200410 How to Go Full Dark Mode in Ubuntu 20.04.md | 0 ...13 A handy utility for creating Raspberry Pi SD card images.md | 0 .../20200413 How to Add Multiple Time Zones in Ubuntu.md | 0 published/{ => 202004}/20200413 How to install Python on Linux.md | 0 ... Folders as Administrator in Nautilus File Manager in Linux.md | 0 .../20200416 How to package Python applications for Linux.md | 0 ...20200417 12 Linux Commands to Have Some Fun in the Terminal.md | 0 ...418 Ethernet consortium announces completion of 800GbE spec.md | 0 ...200421 MystiQ- A Free and Open Source Audio-Video Converter.md | 0 ...d IP Addresses, MAC Addresses, and Interface Speed on Linux.md | 0 .../20200422 Things You Should Know About Ubuntu 20.04.md | 0 .../20200424 Ubuntu 20.04 LTS Released. Download Now.md | 0 ...0200424 What you need to know about open source ad blockers.md | 0 ... What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md | 0 published/{ => 202004}/20200428 Fedora 32 is officially here.md | 0 ...20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md | 0 67 files changed, 0 insertions(+), 0 deletions(-) rename published/{ => 202004}/20190107 Different Ways To Update Linux Kernel For Ubuntu.md (100%) rename published/{ => 202004}/20190116 Best Audio Editors For Linux.md (100%) rename published/{ => 202004}/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md (100%) rename published/{ => 202004}/20190429 10 moments that shaped Linux history.md (100%) rename published/{ => 202004}/20190523 Run your blog on GitHub Pages with Python.md (100%) rename published/{ => 202004}/20190605 What is GraphQL.md (100%) rename published/{ => 202004}/20190612 How to write a loop in Bash.md (100%) rename published/{ => 202004}/20190612 Why use GraphQL.md (100%) rename published/{ => 202004}/20190712 What is Silverblue.md (100%) rename published/{ => 202004}/20190814 9 open source cloud native projects to consider.md (100%) rename published/{ => 202004}/20190822 How the Linux desktop has grown.md (100%) rename published/{ => 202004}/20191014 How to make a Halloween lantern with Inkscape.md (100%) rename published/{ => 202004}/20191209 Use the Fluxbox Linux desktop as your window manager.md (100%) rename published/{ => 202004}/20191216 Relive Linux history with the ROX desktop.md (100%) rename published/{ => 202004}/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md (100%) rename published/{ => 202004}/20191224 Why your Python code should be flat and sparse.md (100%) rename published/{ => 202004}/20200211 Navigating man pages in Linux.md (100%) rename published/{ => 202004}/20200213 
Manage complex Git workspaces with Great Teeming Workspaces.md (100%) rename published/{ => 202004}/20200228 Getting started with Linux firewalls.md (100%) rename published/{ => 202004}/20200309 Fish - A Friendly Interactive Shell.md (100%) rename published/{ => 202004}/20200311 Directing Kubernetes traffic with Traefik.md (100%) rename published/{ => 202004}/20200311 What you need to know about variables in Emacs.md (100%) rename published/{ => 202004}/20200312 Make SSL certs easy with k3s.md (100%) rename published/{ => 202004}/20200313 What is the internet backbone and how it works.md (100%) rename published/{ => 202004}/20200320 Build a private social network with a Raspberry Pi.md (100%) rename published/{ => 202004}/20200320 Control the firewall at the command line.md (100%) rename published/{ => 202004}/20200320 How to Check Password Expiration Date for All Users on Linux.md (100%) rename published/{ => 202004}/20200323 Don-t love diff- Use Meld instead.md (100%) rename published/{ => 202004}/20200325 Linux firewall basics with ufw.md (100%) rename published/{ => 202004}/20200326 3 open source tools for sticking to a budget.md (100%) rename published/{ => 202004}/20200327 Build a private chat server with a Raspberry Pi and Rocket.Chat.md (100%) rename published/{ => 202004}/20200329 How to install Microsoft TrueType Fonts on Ubuntu-based Distributions.md (100%) rename published/{ => 202004}/20200329 Oracle Announces Java 14- How to Install it on Ubuntu Linux.md (100%) rename published/{ => 202004}/20200329 The Keyring Concept in Ubuntu- What is It and How to Use it.md (100%) rename published/{ => 202004}/20200329 Turn Your Regular TV into a Smart TV With KDE Plasma Bigscreen.md (100%) rename published/{ => 202004}/20200330 Using data from spreadsheets in Fedora with Python.md (100%) rename published/{ => 202004}/20200331 5 ways to level up your Vim skills.md (100%) rename published/{ => 202004}/20200401 How to Find Which Graphics Card do You Have in Linux.md (100%) rename published/{ => 202004}/20200401 IEEE Standards Association Launches an Open Source Collaboration Platform.md (100%) rename published/{ => 202004}/20200402 How to Upgrade to Ubuntu 20.04 Beta from 18.04 - 19.10.md (100%) rename published/{ => 202004}/20200403 Scheduling tasks on Linux using the at command.md (100%) rename published/{ => 202004}/20200403 Take back your dotfiles with Chezmoi.md (100%) rename published/{ => 202004}/20200404 Bodhi Linux 5.1 Review- Slightly Different Lightweight Linux.md (100%) rename published/{ => 202004}/20200404 What is Arch User Repository (AUR)- How to Use AUR on Arch and Manjaro Linux.md (100%) rename published/{ => 202004}/20200407 15 years of Git- How to get started or learn something new.md (100%) rename published/{ => 202004}/20200407 Bitwarden- A Free - Open Source Password Manager.md (100%) rename published/{ => 202004}/20200407 New Linux Distribution UbuntuDDE Brings The Beautiful Deepin Desktop to Ubuntu.md (100%) rename published/{ => 202004}/20200408 Create web tutorials with Reveal.js and Git.md (100%) rename published/{ => 202004}/20200408 How to Create Templates in LibreOffice to Save Time and Increase Productivity.md (100%) rename published/{ => 202004}/20200409 Here-s How to Find Out Which Desktop Environment You are Using.md (100%) rename published/{ => 202004}/20200410 How I-m using AI to translate -wash your hands- in 500 languages.md (100%) rename published/{ => 202004}/20200410 How to Go Full Dark Mode in Ubuntu 20.04.md (100%) rename published/{ => 
202004}/20200413 A handy utility for creating Raspberry Pi SD card images.md (100%) rename published/{ => 202004}/20200413 How to Add Multiple Time Zones in Ubuntu.md (100%) rename published/{ => 202004}/20200413 How to install Python on Linux.md (100%) rename published/{ => 202004}/20200415 How to Open Files and Folders as Administrator in Nautilus File Manager in Linux.md (100%) rename published/{ => 202004}/20200416 How to package Python applications for Linux.md (100%) rename published/{ => 202004}/20200417 12 Linux Commands to Have Some Fun in the Terminal.md (100%) rename published/{ => 202004}/20200418 Ethernet consortium announces completion of 800GbE spec.md (100%) rename published/{ => 202004}/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md (100%) rename published/{ => 202004}/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md (100%) rename published/{ => 202004}/20200422 Things You Should Know About Ubuntu 20.04.md (100%) rename published/{ => 202004}/20200424 Ubuntu 20.04 LTS Released. Download Now.md (100%) rename published/{ => 202004}/20200424 What you need to know about open source ad blockers.md (100%) rename published/{ => 202004}/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md (100%) rename published/{ => 202004}/20200428 Fedora 32 is officially here.md (100%) rename published/{ => 202004}/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md (100%) diff --git a/published/20190107 Different Ways To Update Linux Kernel For Ubuntu.md b/published/202004/20190107 Different Ways To Update Linux Kernel For Ubuntu.md similarity index 100% rename from published/20190107 Different Ways To Update Linux Kernel For Ubuntu.md rename to published/202004/20190107 Different Ways To Update Linux Kernel For Ubuntu.md diff --git a/published/20190116 Best Audio Editors For Linux.md b/published/202004/20190116 Best Audio Editors For Linux.md similarity index 100% rename from published/20190116 Best Audio Editors For Linux.md rename to published/202004/20190116 Best Audio Editors For Linux.md diff --git a/published/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md b/published/202004/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md similarity index 100% rename from published/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md rename to published/202004/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md diff --git a/published/20190429 10 moments that shaped Linux history.md b/published/202004/20190429 10 moments that shaped Linux history.md similarity index 100% rename from published/20190429 10 moments that shaped Linux history.md rename to published/202004/20190429 10 moments that shaped Linux history.md diff --git a/published/20190523 Run your blog on GitHub Pages with Python.md b/published/202004/20190523 Run your blog on GitHub Pages with Python.md similarity index 100% rename from published/20190523 Run your blog on GitHub Pages with Python.md rename to published/202004/20190523 Run your blog on GitHub Pages with Python.md diff --git a/published/20190605 What is GraphQL.md b/published/202004/20190605 What is GraphQL.md similarity index 100% rename from published/20190605 What is GraphQL.md rename to published/202004/20190605 What is GraphQL.md diff --git a/published/20190612 How to write a loop in Bash.md b/published/202004/20190612 How to write a loop in Bash.md similarity index 100% 
rename from published/20190612 How to write a loop in Bash.md rename to published/202004/20190612 How to write a loop in Bash.md diff --git a/published/20190612 Why use GraphQL.md b/published/202004/20190612 Why use GraphQL.md similarity index 100% rename from published/20190612 Why use GraphQL.md rename to published/202004/20190612 Why use GraphQL.md diff --git a/published/20190712 What is Silverblue.md b/published/202004/20190712 What is Silverblue.md similarity index 100% rename from published/20190712 What is Silverblue.md rename to published/202004/20190712 What is Silverblue.md diff --git a/published/20190814 9 open source cloud native projects to consider.md b/published/202004/20190814 9 open source cloud native projects to consider.md similarity index 100% rename from published/20190814 9 open source cloud native projects to consider.md rename to published/202004/20190814 9 open source cloud native projects to consider.md diff --git a/published/20190822 How the Linux desktop has grown.md b/published/202004/20190822 How the Linux desktop has grown.md similarity index 100% rename from published/20190822 How the Linux desktop has grown.md rename to published/202004/20190822 How the Linux desktop has grown.md diff --git a/published/20191014 How to make a Halloween lantern with Inkscape.md b/published/202004/20191014 How to make a Halloween lantern with Inkscape.md similarity index 100% rename from published/20191014 How to make a Halloween lantern with Inkscape.md rename to published/202004/20191014 How to make a Halloween lantern with Inkscape.md diff --git a/published/20191209 Use the Fluxbox Linux desktop as your window manager.md b/published/202004/20191209 Use the Fluxbox Linux desktop as your window manager.md similarity index 100% rename from published/20191209 Use the Fluxbox Linux desktop as your window manager.md rename to published/202004/20191209 Use the Fluxbox Linux desktop as your window manager.md diff --git a/published/20191216 Relive Linux history with the ROX desktop.md b/published/202004/20191216 Relive Linux history with the ROX desktop.md similarity index 100% rename from published/20191216 Relive Linux history with the ROX desktop.md rename to published/202004/20191216 Relive Linux history with the ROX desktop.md diff --git a/published/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md b/published/202004/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md similarity index 100% rename from published/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md rename to published/202004/20191220 The Difference Between DNF and YUM, Why is Yum Replaced by DNF.md diff --git a/published/20191224 Why your Python code should be flat and sparse.md b/published/202004/20191224 Why your Python code should be flat and sparse.md similarity index 100% rename from published/20191224 Why your Python code should be flat and sparse.md rename to published/202004/20191224 Why your Python code should be flat and sparse.md diff --git a/published/20200211 Navigating man pages in Linux.md b/published/202004/20200211 Navigating man pages in Linux.md similarity index 100% rename from published/20200211 Navigating man pages in Linux.md rename to published/202004/20200211 Navigating man pages in Linux.md diff --git a/published/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md b/published/202004/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md similarity index 100% rename from 
published/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md rename to published/202004/20200213 Manage complex Git workspaces with Great Teeming Workspaces.md diff --git a/published/20200228 Getting started with Linux firewalls.md b/published/202004/20200228 Getting started with Linux firewalls.md similarity index 100% rename from published/20200228 Getting started with Linux firewalls.md rename to published/202004/20200228 Getting started with Linux firewalls.md diff --git a/published/20200309 Fish - A Friendly Interactive Shell.md b/published/202004/20200309 Fish - A Friendly Interactive Shell.md similarity index 100% rename from published/20200309 Fish - A Friendly Interactive Shell.md rename to published/202004/20200309 Fish - A Friendly Interactive Shell.md diff --git a/published/20200311 Directing Kubernetes traffic with Traefik.md b/published/202004/20200311 Directing Kubernetes traffic with Traefik.md similarity index 100% rename from published/20200311 Directing Kubernetes traffic with Traefik.md rename to published/202004/20200311 Directing Kubernetes traffic with Traefik.md diff --git a/published/20200311 What you need to know about variables in Emacs.md b/published/202004/20200311 What you need to know about variables in Emacs.md similarity index 100% rename from published/20200311 What you need to know about variables in Emacs.md rename to published/202004/20200311 What you need to know about variables in Emacs.md diff --git a/published/20200312 Make SSL certs easy with k3s.md b/published/202004/20200312 Make SSL certs easy with k3s.md similarity index 100% rename from published/20200312 Make SSL certs easy with k3s.md rename to published/202004/20200312 Make SSL certs easy with k3s.md diff --git a/published/20200313 What is the internet backbone and how it works.md b/published/202004/20200313 What is the internet backbone and how it works.md similarity index 100% rename from published/20200313 What is the internet backbone and how it works.md rename to published/202004/20200313 What is the internet backbone and how it works.md diff --git a/published/20200320 Build a private social network with a Raspberry Pi.md b/published/202004/20200320 Build a private social network with a Raspberry Pi.md similarity index 100% rename from published/20200320 Build a private social network with a Raspberry Pi.md rename to published/202004/20200320 Build a private social network with a Raspberry Pi.md diff --git a/published/20200320 Control the firewall at the command line.md b/published/202004/20200320 Control the firewall at the command line.md similarity index 100% rename from published/20200320 Control the firewall at the command line.md rename to published/202004/20200320 Control the firewall at the command line.md diff --git a/published/20200320 How to Check Password Expiration Date for All Users on Linux.md b/published/202004/20200320 How to Check Password Expiration Date for All Users on Linux.md similarity index 100% rename from published/20200320 How to Check Password Expiration Date for All Users on Linux.md rename to published/202004/20200320 How to Check Password Expiration Date for All Users on Linux.md diff --git a/published/20200323 Don-t love diff- Use Meld instead.md b/published/202004/20200323 Don-t love diff- Use Meld instead.md similarity index 100% rename from published/20200323 Don-t love diff- Use Meld instead.md rename to published/202004/20200323 Don-t love diff- Use Meld instead.md diff --git a/published/20200325 Linux firewall basics with 
ufw.md b/published/202004/20200325 Linux firewall basics with ufw.md similarity index 100% rename from published/20200325 Linux firewall basics with ufw.md rename to published/202004/20200325 Linux firewall basics with ufw.md diff --git a/published/20200326 3 open source tools for sticking to a budget.md b/published/202004/20200326 3 open source tools for sticking to a budget.md similarity index 100% rename from published/20200326 3 open source tools for sticking to a budget.md rename to published/202004/20200326 3 open source tools for sticking to a budget.md diff --git a/published/20200327 Build a private chat server with a Raspberry Pi and Rocket.Chat.md b/published/202004/20200327 Build a private chat server with a Raspberry Pi and Rocket.Chat.md similarity index 100% rename from published/20200327 Build a private chat server with a Raspberry Pi and Rocket.Chat.md rename to published/202004/20200327 Build a private chat server with a Raspberry Pi and Rocket.Chat.md diff --git a/published/20200329 How to install Microsoft TrueType Fonts on Ubuntu-based Distributions.md b/published/202004/20200329 How to install Microsoft TrueType Fonts on Ubuntu-based Distributions.md similarity index 100% rename from published/20200329 How to install Microsoft TrueType Fonts on Ubuntu-based Distributions.md rename to published/202004/20200329 How to install Microsoft TrueType Fonts on Ubuntu-based Distributions.md diff --git a/published/20200329 Oracle Announces Java 14- How to Install it on Ubuntu Linux.md b/published/202004/20200329 Oracle Announces Java 14- How to Install it on Ubuntu Linux.md similarity index 100% rename from published/20200329 Oracle Announces Java 14- How to Install it on Ubuntu Linux.md rename to published/202004/20200329 Oracle Announces Java 14- How to Install it on Ubuntu Linux.md diff --git a/published/20200329 The Keyring Concept in Ubuntu- What is It and How to Use it.md b/published/202004/20200329 The Keyring Concept in Ubuntu- What is It and How to Use it.md similarity index 100% rename from published/20200329 The Keyring Concept in Ubuntu- What is It and How to Use it.md rename to published/202004/20200329 The Keyring Concept in Ubuntu- What is It and How to Use it.md diff --git a/published/20200329 Turn Your Regular TV into a Smart TV With KDE Plasma Bigscreen.md b/published/202004/20200329 Turn Your Regular TV into a Smart TV With KDE Plasma Bigscreen.md similarity index 100% rename from published/20200329 Turn Your Regular TV into a Smart TV With KDE Plasma Bigscreen.md rename to published/202004/20200329 Turn Your Regular TV into a Smart TV With KDE Plasma Bigscreen.md diff --git a/published/20200330 Using data from spreadsheets in Fedora with Python.md b/published/202004/20200330 Using data from spreadsheets in Fedora with Python.md similarity index 100% rename from published/20200330 Using data from spreadsheets in Fedora with Python.md rename to published/202004/20200330 Using data from spreadsheets in Fedora with Python.md diff --git a/published/20200331 5 ways to level up your Vim skills.md b/published/202004/20200331 5 ways to level up your Vim skills.md similarity index 100% rename from published/20200331 5 ways to level up your Vim skills.md rename to published/202004/20200331 5 ways to level up your Vim skills.md diff --git a/published/20200401 How to Find Which Graphics Card do You Have in Linux.md b/published/202004/20200401 How to Find Which Graphics Card do You Have in Linux.md similarity index 100% rename from published/20200401 How to Find Which 
Graphics Card do You Have in Linux.md rename to published/202004/20200401 How to Find Which Graphics Card do You Have in Linux.md diff --git a/published/20200401 IEEE Standards Association Launches an Open Source Collaboration Platform.md b/published/202004/20200401 IEEE Standards Association Launches an Open Source Collaboration Platform.md similarity index 100% rename from published/20200401 IEEE Standards Association Launches an Open Source Collaboration Platform.md rename to published/202004/20200401 IEEE Standards Association Launches an Open Source Collaboration Platform.md diff --git a/published/20200402 How to Upgrade to Ubuntu 20.04 Beta from 18.04 - 19.10.md b/published/202004/20200402 How to Upgrade to Ubuntu 20.04 Beta from 18.04 - 19.10.md similarity index 100% rename from published/20200402 How to Upgrade to Ubuntu 20.04 Beta from 18.04 - 19.10.md rename to published/202004/20200402 How to Upgrade to Ubuntu 20.04 Beta from 18.04 - 19.10.md diff --git a/published/20200403 Scheduling tasks on Linux using the at command.md b/published/202004/20200403 Scheduling tasks on Linux using the at command.md similarity index 100% rename from published/20200403 Scheduling tasks on Linux using the at command.md rename to published/202004/20200403 Scheduling tasks on Linux using the at command.md diff --git a/published/20200403 Take back your dotfiles with Chezmoi.md b/published/202004/20200403 Take back your dotfiles with Chezmoi.md similarity index 100% rename from published/20200403 Take back your dotfiles with Chezmoi.md rename to published/202004/20200403 Take back your dotfiles with Chezmoi.md diff --git a/published/20200404 Bodhi Linux 5.1 Review- Slightly Different Lightweight Linux.md b/published/202004/20200404 Bodhi Linux 5.1 Review- Slightly Different Lightweight Linux.md similarity index 100% rename from published/20200404 Bodhi Linux 5.1 Review- Slightly Different Lightweight Linux.md rename to published/202004/20200404 Bodhi Linux 5.1 Review- Slightly Different Lightweight Linux.md diff --git a/published/20200404 What is Arch User Repository (AUR)- How to Use AUR on Arch and Manjaro Linux.md b/published/202004/20200404 What is Arch User Repository (AUR)- How to Use AUR on Arch and Manjaro Linux.md similarity index 100% rename from published/20200404 What is Arch User Repository (AUR)- How to Use AUR on Arch and Manjaro Linux.md rename to published/202004/20200404 What is Arch User Repository (AUR)- How to Use AUR on Arch and Manjaro Linux.md diff --git a/published/20200407 15 years of Git- How to get started or learn something new.md b/published/202004/20200407 15 years of Git- How to get started or learn something new.md similarity index 100% rename from published/20200407 15 years of Git- How to get started or learn something new.md rename to published/202004/20200407 15 years of Git- How to get started or learn something new.md diff --git a/published/20200407 Bitwarden- A Free - Open Source Password Manager.md b/published/202004/20200407 Bitwarden- A Free - Open Source Password Manager.md similarity index 100% rename from published/20200407 Bitwarden- A Free - Open Source Password Manager.md rename to published/202004/20200407 Bitwarden- A Free - Open Source Password Manager.md diff --git a/published/20200407 New Linux Distribution UbuntuDDE Brings The Beautiful Deepin Desktop to Ubuntu.md b/published/202004/20200407 New Linux Distribution UbuntuDDE Brings The Beautiful Deepin Desktop to Ubuntu.md similarity index 100% rename from published/20200407 New Linux Distribution 
UbuntuDDE Brings The Beautiful Deepin Desktop to Ubuntu.md rename to published/202004/20200407 New Linux Distribution UbuntuDDE Brings The Beautiful Deepin Desktop to Ubuntu.md diff --git a/published/20200408 Create web tutorials with Reveal.js and Git.md b/published/202004/20200408 Create web tutorials with Reveal.js and Git.md similarity index 100% rename from published/20200408 Create web tutorials with Reveal.js and Git.md rename to published/202004/20200408 Create web tutorials with Reveal.js and Git.md diff --git a/published/20200408 How to Create Templates in LibreOffice to Save Time and Increase Productivity.md b/published/202004/20200408 How to Create Templates in LibreOffice to Save Time and Increase Productivity.md similarity index 100% rename from published/20200408 How to Create Templates in LibreOffice to Save Time and Increase Productivity.md rename to published/202004/20200408 How to Create Templates in LibreOffice to Save Time and Increase Productivity.md diff --git a/published/20200409 Here-s How to Find Out Which Desktop Environment You are Using.md b/published/202004/20200409 Here-s How to Find Out Which Desktop Environment You are Using.md similarity index 100% rename from published/20200409 Here-s How to Find Out Which Desktop Environment You are Using.md rename to published/202004/20200409 Here-s How to Find Out Which Desktop Environment You are Using.md diff --git a/published/20200410 How I-m using AI to translate -wash your hands- in 500 languages.md b/published/202004/20200410 How I-m using AI to translate -wash your hands- in 500 languages.md similarity index 100% rename from published/20200410 How I-m using AI to translate -wash your hands- in 500 languages.md rename to published/202004/20200410 How I-m using AI to translate -wash your hands- in 500 languages.md diff --git a/published/20200410 How to Go Full Dark Mode in Ubuntu 20.04.md b/published/202004/20200410 How to Go Full Dark Mode in Ubuntu 20.04.md similarity index 100% rename from published/20200410 How to Go Full Dark Mode in Ubuntu 20.04.md rename to published/202004/20200410 How to Go Full Dark Mode in Ubuntu 20.04.md diff --git a/published/20200413 A handy utility for creating Raspberry Pi SD card images.md b/published/202004/20200413 A handy utility for creating Raspberry Pi SD card images.md similarity index 100% rename from published/20200413 A handy utility for creating Raspberry Pi SD card images.md rename to published/202004/20200413 A handy utility for creating Raspberry Pi SD card images.md diff --git a/published/20200413 How to Add Multiple Time Zones in Ubuntu.md b/published/202004/20200413 How to Add Multiple Time Zones in Ubuntu.md similarity index 100% rename from published/20200413 How to Add Multiple Time Zones in Ubuntu.md rename to published/202004/20200413 How to Add Multiple Time Zones in Ubuntu.md diff --git a/published/20200413 How to install Python on Linux.md b/published/202004/20200413 How to install Python on Linux.md similarity index 100% rename from published/20200413 How to install Python on Linux.md rename to published/202004/20200413 How to install Python on Linux.md diff --git a/published/20200415 How to Open Files and Folders as Administrator in Nautilus File Manager in Linux.md b/published/202004/20200415 How to Open Files and Folders as Administrator in Nautilus File Manager in Linux.md similarity index 100% rename from published/20200415 How to Open Files and Folders as Administrator in Nautilus File Manager in Linux.md rename to published/202004/20200415 How to 
Open Files and Folders as Administrator in Nautilus File Manager in Linux.md diff --git a/published/20200416 How to package Python applications for Linux.md b/published/202004/20200416 How to package Python applications for Linux.md similarity index 100% rename from published/20200416 How to package Python applications for Linux.md rename to published/202004/20200416 How to package Python applications for Linux.md diff --git a/published/20200417 12 Linux Commands to Have Some Fun in the Terminal.md b/published/202004/20200417 12 Linux Commands to Have Some Fun in the Terminal.md similarity index 100% rename from published/20200417 12 Linux Commands to Have Some Fun in the Terminal.md rename to published/202004/20200417 12 Linux Commands to Have Some Fun in the Terminal.md diff --git a/published/20200418 Ethernet consortium announces completion of 800GbE spec.md b/published/202004/20200418 Ethernet consortium announces completion of 800GbE spec.md similarity index 100% rename from published/20200418 Ethernet consortium announces completion of 800GbE spec.md rename to published/202004/20200418 Ethernet consortium announces completion of 800GbE spec.md diff --git a/published/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md b/published/202004/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md similarity index 100% rename from published/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md rename to published/202004/20200421 MystiQ- A Free and Open Source Audio-Video Converter.md diff --git a/published/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md b/published/202004/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md similarity index 100% rename from published/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md rename to published/202004/20200422 How to Check the Available Network Interfaces, Associated IP Addresses, MAC Addresses, and Interface Speed on Linux.md diff --git a/published/20200422 Things You Should Know About Ubuntu 20.04.md b/published/202004/20200422 Things You Should Know About Ubuntu 20.04.md similarity index 100% rename from published/20200422 Things You Should Know About Ubuntu 20.04.md rename to published/202004/20200422 Things You Should Know About Ubuntu 20.04.md diff --git a/published/20200424 Ubuntu 20.04 LTS Released. Download Now.md b/published/202004/20200424 Ubuntu 20.04 LTS Released. Download Now.md similarity index 100% rename from published/20200424 Ubuntu 20.04 LTS Released. Download Now.md rename to published/202004/20200424 Ubuntu 20.04 LTS Released. 
Download Now.md diff --git a/published/20200424 What you need to know about open source ad blockers.md b/published/202004/20200424 What you need to know about open source ad blockers.md similarity index 100% rename from published/20200424 What you need to know about open source ad blockers.md rename to published/202004/20200424 What you need to know about open source ad blockers.md diff --git a/published/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md b/published/202004/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md similarity index 100% rename from published/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md rename to published/202004/20200425 What Happened to IPv5- Why there is IPv4, IPv6 but no IPv5.md diff --git a/published/20200428 Fedora 32 is officially here.md b/published/202004/20200428 Fedora 32 is officially here.md similarity index 100% rename from published/20200428 Fedora 32 is officially here.md rename to published/202004/20200428 Fedora 32 is officially here.md diff --git a/published/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md b/published/202004/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md similarity index 100% rename from published/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md rename to published/202004/20200429 Manjaro 20 Lysia Arrives with ZFS and Snap Support.md From 992c5660edc8f257524e87dce01515310ed3cb22 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 2 May 2020 22:23:20 +0800 Subject: [PATCH 096/178] PRF MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit @lxbwolf 这篇翻译的不错。 --- .../20200425 Inlining optimisations in Go.md | 48 ++++++++++--------- 1 file changed, 25 insertions(+), 23 deletions(-) diff --git a/translated/tech/20200425 Inlining optimisations in Go.md b/translated/tech/20200425 Inlining optimisations in Go.md index dc13968c1d..0e28421308 100644 --- a/translated/tech/20200425 Inlining optimisations in Go.md +++ b/translated/tech/20200425 Inlining optimisations in Go.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (lxbwolf) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Inlining optimisations in Go) @@ -10,33 +10,35 @@ Go 中的内联优化 ====== -本文讨论 Go 编译器是如何实现内联的以及这种优化方法如何影响你的 Go 代码。 +> 本文讨论 Go 编译器是如何实现内联的,以及这种优化方法如何影响你的 Go 代码。 -*请注意:*本文重点讨论 *gc*,实际上是 [golang.org](https://github.com/golang/go) 的 Go 编译器。讨论到的概念可以广泛用于其他 Go 编译器,如 gccgo 和 llgo,但它们在实现方式和功能上可能有所差异。 +![](https://img.linux.net.cn/data/attachment/album/202005/02/222202e3v3pppkhnndpbpn.jpg) + +*请注意:*本文重点讨论 *gc*,这是来自 [golang.org](https://github.com/golang/go) 的事实标准的 Go 编译器。讨论到的概念可以广泛适用于其它 Go 编译器,如 gccgo 和 llgo,但它们在实现方式和功效上可能有所差异。 ### 内联是什么? -内联就是把简短的函数在调用它的地方展开。在计算机发展历程的早期,这个优化是由程序员手动实现的。现在,内联已经成为编译过程中自动实现的基本优化过程的其中一步。 +内联inlining就是把简短的函数在调用它的地方展开。在计算机发展历程的早期,这个优化是由程序员手动实现的。现在,内联已经成为编译过程中自动实现的基本优化过程的其中一步。 ### 为什么内联很重要? 
-有两个原因。第一个是它消除了函数调用本身的虚耗。第二个是它使得编译器能更高效地执行其他的优化策略。 +有两个原因。第一个是它消除了函数调用本身的开销。第二个是它使得编译器能更高效地执行其他的优化策略。 -#### 函数调用的虚耗 +#### 函数调用的开销 -在任何语言中,调用一个函数 [1][2] 都会有消耗。把参数编组进寄存器或放入栈中(取决于 ABI),在返回结果时倒序取出时会有虚耗。引入一次函数调用会导致程序计数器从指令流的一点跳到另一点,这可能导致管道阻塞。函数内部通常有前置处理,需要为函数执行准备新的栈帧,还有与前置相似的后续处理,需要在返回给调用方之前释放栈帧空间。 +在任何语言中,调用一个函数 [^1] 都会有消耗。把参数编组进寄存器或放入栈中(取决于 ABI),在返回结果时的逆反过程都会有开销。引入一次函数调用会导致程序计数器从指令流的一点跳到另一点,这可能导致管道滞后。函数内部通常有前置处理preamble,需要为函数执行准备新的栈帧,还有与前置相似的后续处理epilogue,需要在返回给调用方之前释放栈帧空间。 -在 Go 中函数调用会消耗额外的资源来支持栈的动态增长。在进入函数时,goroutine 可用的栈空间与函数需要的空间大小相等。如果可用空间不同,前置处理就会跳到把数据复制到一块新的、更大的空间的运行时逻辑,而这会导致栈空间变大。当这个复制完成后,运行时跳回到原来的函数入口,再执行栈空间检查,函数调用继续执行。这种方式下,goroutine 开始时可以申请很小的栈空间,在有需要时再申请更大的空间。[2][3] +在 Go 中函数调用会消耗额外的资源来支持栈的动态增长。在进入函数时,goroutine 可用的栈空间与函数需要的空间大小进行比较。如果可用空间不同,前置处理就会跳到运行时runtime的逻辑中,通过把数据复制到一块新的、更大的空间的来增长栈空间。当这个复制完成后,运行时就会跳回到原来的函数入口,再执行栈空间检查,现在通过了检查,函数调用继续执行。这种方式下,goroutine 开始时可以申请很小的栈空间,在有需要时再申请更大的空间。[^2] -这个检查消耗很小 — 只有几个指令 — 而且由于 goroutine 是成几何级数增长的,因此这个检查很少失败。这样,现代处理器的分支预测单元会通过假定检查肯定会成功来隐藏栈空间检查的消耗。当处理器预测错了栈空间检查,必须要抛弃它推测性执行的操作时,与为了增加 goroutine 的栈空间运行时所需的操作消耗的资源相比,管道阻塞的代价更小。 +这个检查消耗很小,只有几个指令,而且由于 goroutine 的栈是成几何级数增长的,因此这个检查很少失败。这样,现代处理器的分支预测单元可以通过假定检查肯定会成功来隐藏栈空间检查的消耗。当处理器预测错了栈空间检查,不得不放弃它在推测性执行所做的操作时,与为了增加 goroutine 的栈空间运行时所需的操作消耗的资源相比,管道滞后的代价更小。 -虽然现代处理器可以用预测性执行技术优化每次函数调用中的泛型和 Go 特定的元素的虚耗,但那些虚耗不能被完全消除,因此在每次函数调用执行必要的工作过程中都会有性能消耗。一次函数调用本身的虚耗是固定的,与更大的函数相比,调用小函数的代价更大,因为在每次调用过程中它们做的有用的工作更少。 +虽然现代处理器可以用预测性执行技术优化每次函数调用中的泛型和 Go 特定的元素的开销,但那些开销不能被完全消除,因此在每次函数调用执行必要的工作过程中都会有性能消耗。一次函数调用本身的开销是固定的,与更大的函数相比,调用小函数的代价更大,因为在每次调用过程中它们做的有用的工作更少。 -消除这些虚耗的方法必须是要消除函数调用本身,Go 的编译器就是这么做的,在某些条件下通过用函数的内容来替换函数调用来实现。这个过程被称为*内联*,因为它在函数调用处把函数体展开了。 +因此,消除这些开销的方法必须是要消除函数调用本身,Go 的编译器就是这么做的,在某些条件下通过用函数的内容来替换函数调用来实现。这个过程被称为*内联*,因为它在函数调用处把函数体展开了。 #### 改进的优化机会 -Cliff Click 博士把内联描述为现代编译器做的优化措施,像常量传播(译注:此处作者笔误,原文为 constant proportion,修正为 constant propagation)和死码消除一样,都是编译器的基本优化方法。实际上,内联可以让编译器看得更深,使编译器可以观察调用的特定函数的上下文内容,可以看到能继续简化或彻底消除的逻辑。由于可以递归地执行内联,因此不仅可以在每个独立的函数上下文处进行这种优化,也可以在整个函数调用链中进行。 +Cliff Click 博士把内联描述为现代编译器做的优化措施,像常量传播(LCTT 译注:此处作者笔误,原文为 constant proportion,修正为 constant propagation)和死代码消除一样,都是编译器的基本优化方法。实际上,内联可以让编译器看得更深,使编译器可以观察调用的特定函数的上下文内容,可以看到能继续简化或彻底消除的逻辑。由于可以递归地执行内联,因此不仅可以在每个独立的函数上下文处进行这种优化决策,也可以在整个函数调用链中进行。 ### 实践中的内联 @@ -66,14 +68,14 @@ func BenchmarkMax(b *testing.B) { } ``` -运行这个基准,会得到如下结果:[3][4] +运行这个基准,会得到如下结果:[^3] ```bash % go test -bench=. BenchmarkMax-4 530687617 2.24 ns/op ``` -在我的 2015 MacBook Air 上 `max(-1, i)` 的耗时约为 2.24 纳秒。现在去掉 `//go:noinline` 编译指令,再看下结果: +在我的 2015 MacBook Air 上 `max(-1, i)` 的耗时约为 2.24 纳秒。现在去掉 `//go:noinline` 编译指令,再看下结果: ```bash % go test -bench=. @@ -90,7 +92,7 @@ Max-4 2.21ns ± 1% 0.49ns ± 6% -77.96% (p=0.000 n=18+19) 这个提升是从哪儿来的呢? 
-首先,移除掉函数调用以及与之关联的前置处理 [4][5] 是主要因素。把 `max` 函数的函数体在调用处展开,减少了处理器执行的指令数量并且消除了一些分支。 +首先,移除掉函数调用以及与之关联的前置处理 [^4] 是主要因素。把 `max` 函数的函数体在调用处展开,减少了处理器执行的指令数量并且消除了一些分支。 现在由于编译器优化了 `BenchmarkMax`,因此它可以看到 `max` 函数的内容,进而可以做更多的提升。当 `max` 被内联后,`BenchmarkMax` 呈现给编译器的样子,看起来是这样的: @@ -116,7 +118,7 @@ name old time/op new time/op delta Max-4 2.21ns ± 1% 0.48ns ± 3% -78.14% (p=0.000 n=18+18) ``` -现在编译器能看到在 `BenchmarkMax` 里内联 `max` 的结果,可以执行以前不能执行的优化措施。例如,编译器注意到 `i` 初始值为 `0`,仅做自增操作,因此所有与 `i` 的比较都可以假定 `i` 不是负值。这样条件表达式 `-1 > i` 永远不是 true。[5][6] +现在编译器能看到在 `BenchmarkMax` 里内联 `max` 的结果,可以执行以前不能执行的优化措施。例如,编译器注意到 `i` 初始值为 `0`,仅做自增操作,因此所有与 `i` 的比较都可以假定 `i` 不是负值。这样条件表达式 `-1 > i` 永远不是 `true`。[^5] 证明了 `-1 > i` 永远不为 true 后,编译器可以把代码简化为: @@ -150,7 +152,7 @@ func BenchmarkMax(b *testing.B) { ### 内联的限制 -本文中我论述的内联称作*叶子*内联;把函数调用栈中最底层的函数在调用它的函数处展开的行为。内联是个递归的过程,当把函数内联到调用它的函数 A 处后,编译器会把内联后的结果代码再内联到 A 的调用方,这样持续内联下去。例如,下面的代码: +本文中我论述的内联称作叶子内联leaf inlining:把函数调用栈中最底层的函数在调用它的函数处展开的行为。内联是个递归的过程,当把函数内联到调用它的函数 A 处后,编译器会把内联后的结果代码再内联到 A 的调用方,这样持续内联下去。例如,下面的代码: ```go func BenchmarkMaxMaxMax(b *testing.B) { @@ -166,11 +168,11 @@ func BenchmarkMaxMaxMax(b *testing.B) { 下一篇文章中,我会论述当 Go 编译器想要内联函数调用栈中间的某个函数时选用的另一种内联策略。最后我会论述编译器为了内联代码准备好要达到的极限,这个极限 Go 现在的能力还达不到。 -1. 在 Go 中,一个方法就是一个有预先定义的形参和接受者的函数。假设这个方法不是通过接口调用的,调用一个无消耗的函数所消耗的代价与引入一个方法是相同的。[][7] -2. 在 Go 1.14 以前,栈检查的前置处理也被 gc 用于 STW,通过把所有活跃的 goroutine 栈空间设为 0,来强制它们切换为下一次函数调用时的运行时状态。这个机制[最近被替换][8]为一种新机制,新机制下运行时可以不用等 goroutine 进行函数调用就可以暂停 goroutine。[][9] -3. 我用 `//go:noinline` 编译指令来阻止编译器内联 `max`。这是因为我想把内联 `max` 的影响与其他影响隔离开,而不是用 `-gcflags='-l -N'` 选项在全局范围内禁止优化。关于 `//go:` 注释在[这篇文章][10]中详细论述。[][11] -4. 你可以自己通过比较 `go test -bench=. -gcflags=-S`有无 `//go:noinline` 注释时的不同结果来验证一下。[][12] -5. 你可以用 `-gcflags=-d=ssa/prove/debug=on` 选项来自己验证一下。[][13] +[^1]: 在 Go 中,一个方法就是一个有预先定义的形参和接受者的函数。假设这个方法不是通过接口调用的,调用一个无消耗的函数所消耗的代价与引入一个方法是相同的。 +[^2]: 在 Go 1.14 以前,栈检查的前置处理也被垃圾回收器用于 STW,通过把所有活跃的 goroutine 栈空间设为 0,来强制它们切换为下一次函数调用时的运行时状态。这个机制[最近被替换][8]为一种新机制,新机制下运行时可以不用等 goroutine 进行函数调用就可以暂停 goroutine。 +[^3]: 我用 `//go:noinline` 编译指令来阻止编译器内联 `max`。这是因为我想把内联 `max` 的影响与其他影响隔离开,而不是用 `-gcflags='-l -N'` 选项在全局范围内禁止优化。关于 `//go:` 注释在[这篇文章][10]中详细论述。 +[^4]: 你可以自己通过比较 `go test -bench=. 
-gcflags=-S` 有无 `//go:noinline` 注释时的不同结果来验证一下。 +[^5]: 你可以用 `-gcflags=-d=ssa/prove/debug=on` 选项来自己验证一下。 #### 相关文章: @@ -186,7 +188,7 @@ via: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go 作者:[Dave Cheney][a] 选题:[lujun9972][b] 译者:[lxbwolf](https://github.com/lxbwolf) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 748e253520ef62031e1dd71cd2925bededaebf2f Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sat, 2 May 2020 22:24:03 +0800 Subject: [PATCH 097/178] PUB @lxbwolf https://linux.cn/article-12176-1.html --- .../20200425 Inlining optimisations in Go.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200425 Inlining optimisations in Go.md (99%) diff --git a/translated/tech/20200425 Inlining optimisations in Go.md b/published/20200425 Inlining optimisations in Go.md similarity index 99% rename from translated/tech/20200425 Inlining optimisations in Go.md rename to published/20200425 Inlining optimisations in Go.md index 0e28421308..12f1cb67b7 100644 --- a/translated/tech/20200425 Inlining optimisations in Go.md +++ b/published/20200425 Inlining optimisations in Go.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (lxbwolf) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12176-1.html) [#]: subject: (Inlining optimisations in Go) [#]: via: (https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go) [#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) From b87a3977819e510ed01b8289ed34442498420839 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sun, 3 May 2020 00:58:43 +0800 Subject: [PATCH 098/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200502=20Mid-st?= =?UTF-8?q?ack=20inlining=20in=20Go?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200502 Mid-stack inlining in Go.md --- .../tech/20200502 Mid-stack inlining in Go.md | 211 ++++++++++++++++++ 1 file changed, 211 insertions(+) create mode 100644 sources/tech/20200502 Mid-stack inlining in Go.md diff --git a/sources/tech/20200502 Mid-stack inlining in Go.md b/sources/tech/20200502 Mid-stack inlining in Go.md new file mode 100644 index 0000000000..e6172566ab --- /dev/null +++ b/sources/tech/20200502 Mid-stack inlining in Go.md @@ -0,0 +1,211 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Mid-stack inlining in Go) +[#]: via: (https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +Mid-stack inlining in Go +====== + +In the [previous post][1] I discussed how leaf inlining allows the Go compiler to reduce the overhead of function calls and extend optimisation opportunities across function boundaries. In this post I’ll discuss the limits of inlining and leaf vs mid-stack inlining. + +### The limits of inlining + +Inlining a function into its caller removes the call’s overhead and increases the opportunity for the compiler to apply additional optimisations so the question should be asked, if some inlining is good, would more be better, _why not inline as much as possible?_ + +Inlining trades possibly larger program sizes for potentially faster execution time. 
The main reason to limit inlining is that creating many inlined copies of a function can increase compile time and result in larger binaries for marginal gain. Even taking into account the opportunities for further optimisation, aggressive inlining tends to increase the size of, and the time to compile, the resulting binary.

Inlining works best for [small functions][2] that do relatively little work compared to the overhead of calling them. As the size of a function grows, the time saved avoiding the call’s overhead diminishes relative to the work done inside the function. Larger functions tend to be more complex, thus the benefits of optimising their inlined forms vs in situ are reduced.

### Inlining budget

During compilation each function’s inlineability is calculated using what is known as the _inlining budget_[1][3]. The cost calculation can be tricky to internalise but is broadly one unit per node in the AST for simple things like unary and binary operations but can be higher for complex operations like `make`. Consider this example:

```
package main

func small() string {
	s := "hello, " + "world!"
	return s
}

func large() string {
	s := "a"
	s += "b"
	s += "c"
	s += "d"
	s += "e"
	s += "f"
	s += "g"
	s += "h"
	s += "i"
	s += "j"
	s += "k"
	s += "l"
	s += "m"
	s += "n"
	s += "o"
	s += "p"
	s += "q"
	s += "r"
	s += "s"
	s += "t"
	s += "u"
	s += "v"
	s += "w"
	s += "x"
	s += "y"
	s += "z"
	return s
}

func main() {
	small()
	large()
}
```

Compiling this code with `-gcflags=-m=2` allows us to see the cost the compiler assigns to each function.

```
% go build -gcflags=-m=2 inl.go
# command-line-arguments
./inl.go:3:6: can inline small with cost 7 as: func() string { s := "hello, world!"; return s }
./inl.go:8:6: cannot inline large: function too complex: cost 82 exceeds budget 80
./inl.go:38:6: can inline main with cost 68 as: func() { small(); large() }
./inl.go:39:7: inlining call to small func() string { s := "hello, world!"; return s }
```

The compiler determined that `func small()` can be inlined due to its cost of 7. `func large()` was determined to be too expensive. `func main()` has been marked as eligible and assigned a cost of 68; 7 from the body of `small`, 57 from the function call to `small` and the remainder in its own overhead.

The inlining budget can be controlled to some degree with the `-gcflags=-l` flag. Currently the values that apply are:

  * `-gcflags=-l=0` is the default level of inlining.
  * `-gcflags=-l` (or `-gcflags=-l=1`) disables inlining.
  * `-gcflags=-l=2` and `-gcflags=-l=3` are currently unused and have no effect over `-gcflags=-l=0`
  * `-gcflags=-l=4` reduces the cost for inlining non-leaf functions and calls through interfaces.[2][4]


#### Hairy optimisations

Some functions with a relatively low inlining cost may be ineligible because of their complexity. This is known as the function’s hairiness as the semantics of some operations are hard to reason about once inlined, for example `recover`, `break`. Others, like `select` and `go`, involve co-ordination with the runtime so the extra effort of inlining doesn’t pay for itself.

The list of hairy statements also includes things like `for` and `range` which don’t have an inherently large cost, but simply haven’t been optimised yet.

### Mid stack inlining

Historically the Go compiler only performed leaf inlining–only functions which did not call other functions were eligible. 
In the context of the hairiness discussion previously, a function call would disqualify the function from being inlined. + +Enter mid stack inlining which, as its name implies, allows functions in the middle of a call stack to be inlined without requiring everything below them to be eligible. Mid stack inlining was introduced by David Lazar in Go 1.9 and improved in subsequent releases. [This presentation][5] goes into some of the difficulties with retaining the behaviour of stack traces and `runtime.Callers` in code paths that had been heavily inlined. + +We see an example of mid-stack inlining in the previous example. After inlining, `func main()` contains the body of `func small()` and a call to `func large()`, thus it is considered a non-leaf function. Historically this would have prevented it from being further inlined even though its combined cost was less than the inlining budget. + +The primary use case for mid stack inlining is to reduce the overhead of a path through the call stack. Consider this example: + +``` +package main + +import ( + "fmt" + "strconv" +) + +type Rectangle struct {} + +//go:noinline +func (r *Rectangle) Height() int { + h, _ := strconv.ParseInt("7", 10, 0) + return int(h) +} + +func (r *Rectangle) Width() int { + return 6 +} + +func (r *Rectangle) Area() int { return r.Height() * r.Width() } + +func main() { + var r Rectangle + fmt.Println(r.Area()) +} +``` + +In this example `r.Area()` is a simple function which calls two others. `r.Width()` can be inlined while `r.Height()`, simulated here with the `//go:noinline` annotation, cannot. [3][6] + +``` +% go build -gcflags='-m=2' square.go +# command-line-arguments +./square.go:12:6: cannot inline (*Rectangle).Height: marked go:noinline +./square.go:17:6: can inline (*Rectangle).Width with cost 2 as: method(*Rectangle) func() int { return 6 } +./square.go:21:6: can inline (*Rectangle).Area with cost 67 as: method(*Rectangle) func() int { return r.Height() * r.Width() } ./square.go:21:61: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } +./square.go:23:6: cannot inline main: function too complex: cost 150 exceeds budget 80 +./square.go:25:20: inlining call to (*Rectangle).Area method(*Rectangle) func() int { return r.Height() * r.Width() } +./square.go:25:20: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } +``` + +As the multiplication performed by `r.Area()` is cheap compared to the overhead of calling it, inlining `r.Area()`‘s single expression is a net win even if its downstream caller to `r.Height()` remains ineligible. + +#### Fast path inlining + +The most startling example of the power of mid-stack inlining comes from 2019 when [Carlo Alberto Ferraris improved the performance][7] of `sync.Mutex.Lock()` by allowing the fast path of the lock–the uncontended case–to be inlined into its caller. Prior to this change `sync.Mutex.Lock()` was a large function containing many hairy conditions which made it ineligible to be inlined. Even in the case where the lock was available, the caller had to pay the overhead of calling `sync.Mutex.Lock()`. + +Carlo’s change split `sync.Mutex.Lock()` into two functions (a process he dubbed _outlining_). The outer `sync.Mutex.Lock()` method now calls `sync/atomic.CompareAndSwapInt32()` and returns to the caller immediately if the CAS succeeds. 
If not, the function falls through to `sync.Mutex.lockSlow()` which handles the slow path required to register interest on the lock and park the goroutine.[4][8] + +``` +% go build -gcflags='-m=2 -l=0' sync 2>&1 | grep '(*Mutex).Lock' +../go/src/sync/mutex.go:72:6: can inline (*Mutex).Lock with cost 69 as: method(*Mutex) func() { if "sync/atomic".CompareAndSwapInt32(&m.state, 0, mutexLocked) { if race.Enabled {  }; return  }; m.lockSlow() } +``` + +By splitting the function into an easily inalienable outer function, falling through to a complex inner function to handle the slow path Carlo’s combined mid stack inlining and the [compiler’s support for intrinsic operations][9] to reduce the cost of an uncontended lock by 14%. Then he repeated the trick for an additional 9% saving in `sync.RWMutex.Unlock()`. + + 1. The budget the Go compiler applies to each function when considering if it is eligible for inlining changes release to release.[][10] + 2. Keep in mind that the compiler authors warn that “[Additional levels of inlining (beyond -l) may be buggy and are not supported”][11]. Caveat emptor.[][12] + 3. The compiler is powerful enough that it can inline complex functions like `strconv.ParseInt`. As a experiment, try removing the `//go:noinline` annotation and observe the result with `-gcflags=-m=2`.[][13] + 4. The expression `race.Enable` is a constant controlled by the `-race` flag passed to the `go` tool. It is `false` for normal builds which allows the compiler to elide those code paths entirely.[][14] + + + +#### Related posts: + + 1. [Inlining optimisations in Go][15] + 2. [Why is a Goroutine’s stack infinite ?][16] + 3. [Stack traces and the errors package][17] + 4. [What is the zero value, and why is it useful?][18] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go +[2]: https://medium.com/@joshsaintjacque/small-functions-considered-awesome-c95b3fd1812f +[3]: tmp.FyRthF1bbF#easy-footnote-bottom-1-4076 (The budget the Go compiler applies to each function when considering if it is eligible for inlining changes release to release.) +[4]: tmp.FyRthF1bbF#easy-footnote-bottom-2-4076 (Keep in mind that the compiler authors warn that “Additional levels of inlining (beyond -l) may be buggy and are not supported”. Caveat emptor.) +[5]: https://docs.google.com/presentation/d/1Wcblp3jpfeKwA0Y4FOmj63PW52M_qmNqlQkNaLj0P5o/edit#slide=id.p +[6]: tmp.FyRthF1bbF#easy-footnote-bottom-3-4076 (The compiler is powerful enough that it can inline complex functions like strconv.ParseInt. As a experiment, try removing the //go:noinline annotation and observe the result with -gcflags=-m=2.) +[7]: https://go-review.googlesource.com/c/go/+/148959 +[8]: tmp.FyRthF1bbF#easy-footnote-bottom-4-4076 (The expression race.Enable is a constant controlled by the -race flag passed to the go tool. It is false for normal builds which allows the compiler to elide those code paths entirely.) 
+[9]: https://dave.cheney.net/2019/08/20/go-compiler-intrinsics +[10]: tmp.FyRthF1bbF#easy-footnote-1-4076 +[11]: https://github.com/golang/go/blob/be08e10b3bc07f3a4e7b27f44d53d582e15fd6c7/src/cmd/compile/internal/gc/inl.go#L11 +[12]: tmp.FyRthF1bbF#easy-footnote-2-4076 +[13]: tmp.FyRthF1bbF#easy-footnote-3-4076 +[14]: tmp.FyRthF1bbF#easy-footnote-4-4076 +[15]: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go (Inlining optimisations in Go) +[16]: https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite (Why is a Goroutine’s stack infinite ?) +[17]: https://dave.cheney.net/2016/06/12/stack-traces-and-the-errors-package (Stack traces and the errors package) +[18]: https://dave.cheney.net/2013/01/19/what-is-the-zero-value-and-why-is-it-useful (What is the zero value, and why is it useful?) From 7d57f4e9f2ccc8a4f4ccb3f475230eb9c675056b Mon Sep 17 00:00:00 2001 From: DarkSun Date: Sun, 3 May 2020 01:04:28 +0800 Subject: [PATCH 099/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200502=20The=20?= =?UTF-8?q?real=20impact=20of=20canceling=20PyCon=20due=20to=20COVID-19?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200502 The real impact of canceling PyCon due to COVID-19.md --- ...pact of canceling PyCon due to COVID-19.md | 129 ++++++++++++++++++ 1 file changed, 129 insertions(+) create mode 100644 sources/tech/20200502 The real impact of canceling PyCon due to COVID-19.md diff --git a/sources/tech/20200502 The real impact of canceling PyCon due to COVID-19.md b/sources/tech/20200502 The real impact of canceling PyCon due to COVID-19.md new file mode 100644 index 0000000000..aebbbf28c4 --- /dev/null +++ b/sources/tech/20200502 The real impact of canceling PyCon due to COVID-19.md @@ -0,0 +1,129 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The real impact of canceling PyCon due to COVID-19) +[#]: via: (https://opensource.com/article/20/5/pycon-covid-19) +[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg) + +The real impact of canceling PyCon due to COVID-19 +====== +An interview with Ewa Jodlowska on how the Python Software Foundation is +responding to the cancelation of in-person events. +![A dollar sign in a network][1] + +The Python Software Foundation (PSF) had to [cancel its popular PyCon US][2] event in response to COVID-19. I interviewed [Ewa Jodlowska][3], Executive Director of the PSF, to talk about the experience and see what we all can learn, and how we can be supportive of the non-profit that supports one of my favorite programming languages. + +### The impact on PSF employees + +I asked Jodlowska "how have you had to adjust your work in light of COVID-19?" + +In her response, the day-to-day didn't sound like much of a change. PSF staff "have always worked remotely." The organization practices a [fully remote work][4] culture and doesn’t have an office. The small staff of seven employees is well versed in collaborating outside of an office. + +Familiarity aside, the emotional impact of needing to cancel an event they put a year’s worth of planning into hurt. + +> **"We all believe in what we do. Which is particularly why we’re such a great small team. So it really impacted us emotionally and mentally. 
And it continues to."** + +We spoke about how the team is reliving what the days would have looked like if PyCon wasn't interrupted by COVID-19–keynotes would start _now_, the sponsor booths would be in full motion right _now_–and just how emotionally taxing it all was. Throughout the discussion, Jodlowska always came back to recognizing the staff for their resiliency and energy to pivot the event online. + +### The cascading impact of event cancellation + +Jodlowska has been incredibly transparent about the experience. In her March 31st [article on the financial outcome][5], she outlines it clearly: the Python Software Foundation would take a hit from the event cancelation.  + +Jodlowska notes that part of the challenge is that PyCon accounts for too much of the organization’s financial health. About 63% of the 2020 revenue was projected to come from the show. While that number is down from the [2017 estimate of 80%][6], it’s still a concern when in-person events will remain limited to keep attendees safe during the COVID-19 outbreak. + +> **"We don’t want to rely on one event**–**or events in general**–**to operate and provide community support."** + +The PSF board of directors is hard at work to look into the diversification of funding. In the meantime, PyCon remains essential to sustainability running the organization. + +### Community support makes all the difference + +It's at this point that Jodlowska again recognizes the incredible work of the PSF staff. They quickly pivoted the vision of the event, and the community of attendees, sponsors, and speakers were all supportive of the move. + +> **"[We] have been brought to tears many times by the generosity of our sponsors and our individual donors."** + +Jodlowska noted that the generosity of so many resulted in reducing the financial impact on the PSF. An incredible amount of Individual attendees are donating their registration costs to the PSF. They are also showing up across social media sites to participate in their own distributed virtual experience of PyCon.  + +Another important part of the community, the corporate sponsors of the show, are also showing up to support the non-profit. Many sponsors had already canceled physical presence at the show before the event was officially moved online. Some of them were kind enough, as Jodlowska noted, to donate the cost of sponsorship to the PSF. In a huge turn of events, the list of sponsors **grew** as the online event came together. + +> **[M]any sponsors have opted into participating in PyCon 2020 online. Because of this we have decreased the amount needed from our reserve by 77%! The PSF will now only need $141,713 from its financial reserve to get through 2020.** + +For more on the data side, see Jodlowska’s article _[Thank you to donors & sponsors][7]_. + +Support in all its forms led to the conference feeling like it is well on its way. Some sponsors are even moving to a virtual booth experience. + +> Since our sponsors can’t be with you in person, we’ve created a place to provide their content online - . [#PyCon2020][8] Gold Sponsor Weekly Python Exercise shared this video to introduce you to their offerings: . +> +> — PyCon US (@pycon) [April 18, 2020][9] + +Maybe most impressively, many speakers and tutorial instructors made the effort of recording their sessions. That’s helped PyCon to [gradually unfold online][10] with incredible educational content. 
The audience is still able to interact as well: YouTube comments are open for moderation so speakers can interact with their audience. + +Lastly, there remains an army of volunteers who shifted their in-person plans online and continue to help in any way possible. + +### Some of the surprising positives from this difficult change + +While it is without a doubt a challenging time for the organization, Jodlowska noted a number of positives that are unfolding due to this move to virtual. + +To start, the staff of the PSF “have never been closer,” as they bond over the experience and spend more time getting to know each other through weekly video calls and baking competitions. + +Jodlowska was inspired to get involved in another open source effort, [FOSS responders][11], who are helping organizations respond to the cancelation of events due to COVID-19. (If you’ve been affected as well, they are there to help.) + +The generosity mentioned above is a silver lining to the experience and encouraging to the hardworking team that uplifts the popular Python programming language. + +There is also a broader impact on participation in PyCon. While the final numbers are not in yet, an international audience has access to all of PyCon as it unfolds, which gives the entire world a chance to be part of an excellent event [I got to attend][12] last year. On the development side, Jodlowska mentioned that the [core-dev team][13] that maintains Python, who would normally meet in person, shifted to a virtual meeting. As a result of that shift, some participants got to attend that otherwise would not have had the opportunity to join in person. + +### How you can help the Python Software Foundation + +I reached out to Jodlowska because I am impressed with and supportive of their mission to support the Python community. If you want to support them as well, you have options: + + * Become a [free or supporting member][14] of the PSF to get involved in our future. + * [Sign up for the PSF’s free newsletter][15] to stay up to date. + * [Donate][16] directly to the PSF (and thank you to those that already have). + * Ask your employer to [sponsor the PSF][17]. + * Ask your employer if they match donations to 501(c)(3) non-profits, and ask for your donations to the PSF to be matched. + + + +Last but not least, participate in PyCon over the next few weeks. You can learn from all kinds of smart people on a range of topics like [Matt Harrison][18]’s [Hands-on Python for Programmers][19] that guides attendees through analyzing COVID-19 data to [Katie McLaughlin][20]’s thoughtful talk on [What is deployment, anyway?][21] + +Be sure to [review the full][10] list and engage with the amazing lineup of speakers. + +* * * + +_Are you part of a non-profit looking to connect with your open source community at this time of social distancing? 
Let me know at matt @ opensource.com._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/pycon-covid-19 + +作者:[Matthew Broberg][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mbbroberg +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0 (A dollar sign in a network) +[2]: https://pycon.blogspot.com/2020/03/pycon-us-2020-in-pittsburgh.html +[3]: https://www.python.org/psf/records/staff/ +[4]: https://opensource.com/tags/wfh +[5]: http://pyfound.blogspot.com/2020/03/psfs-projected-2020-financial-outcome.html +[6]: https://www.youtube.com/watch?v=79AIzbjLzdk +[7]: http://pyfound.blogspot.com/2020/04/thank-you-to-donors-sponsors.html +[8]: https://twitter.com/hashtag/PyCon2020?src=hash&ref_src=twsrc%5Etfw +[9]: https://twitter.com/pycon/status/1251563142641000455?ref_src=twsrc%5Etfw +[10]: https://us.pycon.org/2020/online/ +[11]: https://fossresponders.com/ +[12]: https://opensource.com/article/19/5/jupyterlab-python-developers-magic +[13]: https://devguide.python.org/coredev/ +[14]: https://www.python.org/psf/membership/ +[15]: https://www.python.org/psf/newsletter/ +[16]: https://www.python.org/psf/donations/ +[17]: https://www.python.org/psf/sponsorship/ +[18]: https://us.pycon.org/2020/speaker/profile/454/ +[19]: https://youtu.be/fuJcSNUMrW0 +[20]: https://opensource.com/users/glasnt +[21]: https://youtu.be/8vstov3Y7uE From 7b0dc6070f908efb5f530ff0402f989e22e18701 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Sun, 3 May 2020 08:05:15 +0800 Subject: [PATCH 100/178] Rename sources/tech/20200502 The real impact of canceling PyCon due to COVID-19.md to sources/talk/20200502 The real impact of canceling PyCon due to COVID-19.md --- ...20200502 The real impact of canceling PyCon due to COVID-19.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => talk}/20200502 The real impact of canceling PyCon due to COVID-19.md (100%) diff --git a/sources/tech/20200502 The real impact of canceling PyCon due to COVID-19.md b/sources/talk/20200502 The real impact of canceling PyCon due to COVID-19.md similarity index 100% rename from sources/tech/20200502 The real impact of canceling PyCon due to COVID-19.md rename to sources/talk/20200502 The real impact of canceling PyCon due to COVID-19.md From a1263fe89a6e2cfe11ad0b23b16a9a5a50f1818c Mon Sep 17 00:00:00 2001 From: "Xiaobin.Liu" Date: Sun, 3 May 2020 09:41:50 +0800 Subject: [PATCH 101/178] APL --- sources/tech/20200502 Mid-stack inlining in Go.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200502 Mid-stack inlining in Go.md b/sources/tech/20200502 Mid-stack inlining in Go.md index e6172566ab..f34c7070d5 100644 --- a/sources/tech/20200502 Mid-stack inlining in Go.md +++ b/sources/tech/20200502 Mid-stack inlining in Go.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (lxbwolf) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From de32773584c1cb37a129e405535b809a46b87bfb Mon Sep 17 00:00:00 2001 From: "Xiaobin.Liu" Date: Sun, 3 May 2020 14:27:03 +0800 Subject: [PATCH 102/178] TSL --- .../tech/20200502 Mid-stack inlining in Go.md | 211 ------------------ .../tech/20200502 Mid-stack 
inlining in Go.md | 211 ++++++++++++++++++ 2 files changed, 211 insertions(+), 211 deletions(-) delete mode 100644 sources/tech/20200502 Mid-stack inlining in Go.md create mode 100644 translated/tech/20200502 Mid-stack inlining in Go.md diff --git a/sources/tech/20200502 Mid-stack inlining in Go.md b/sources/tech/20200502 Mid-stack inlining in Go.md deleted file mode 100644 index f34c7070d5..0000000000 --- a/sources/tech/20200502 Mid-stack inlining in Go.md +++ /dev/null @@ -1,211 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (lxbwolf) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Mid-stack inlining in Go) -[#]: via: (https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go) -[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) - -Mid-stack inlining in Go -====== - -In the [previous post][1] I discussed how leaf inlining allows the Go compiler to reduce the overhead of function calls and extend optimisation opportunities across function boundaries. In this post I’ll discuss the limits of inlining and leaf vs mid-stack inlining. - -### The limits of inlining - -Inlining a function into its caller removes the call’s overhead and increases the opportunity for the compiler to apply additional optimisations so the question should be asked, if some inlining is good, would more be better, _why not inline as much as possible?_ - -Inlining trades possibly larger program sizes for potentially faster execution time. The main reason to limit inlining is creating many inlined copies of a function can increase compile time and result in larger binaries for marginal gain. Even taking into account the opportunities for further optimisation, aggressive inlining tends to increase the size of, and the time too compile, the resulting binary. - -Inlining works best for [small functions][2] that do relatively little work compared to the overhead of calling them. As the size of a function grows, the time saved avoiding the call’s overhead diminishes relative to the work done inside the function. Larger functions tend to be more complex, thus the benefits of optimising their inlined forms vs in situ are reduced. - -### Inlining budget - -During compilation each function’s inlineabilty is calculated using what is known as the _inlining budget_[1][3]. The cost calculation can tricky too internalise but is broadly one unit per node in the AST for simple things like unary and binary operations but can be higher for complex operations like `make`. Consider this example: - -``` -package main - -func small() string { - s := "hello, " + "world!" - return s -} - -func large() string { - s := "a" - s += "b" - s += "c" - s += "d" - s += "e" - s += "f" - s += "g" - s += "h" - s += "i" - s += "j" - s += "k" - s += "l" - s += "m" - s += "n" - s += "o" - s += "p" - s += "q" - s += "r" - s += "s" - s += "t" - s += "u" - s += "v" - s += "w" - s += "x" - s += "y" - s += "z" - return s -} - -func main() { - small() - large() -} -``` - -Compiling this function with `-gcflags=-m=2` allows us to see the cost the compiler assigns to each function. 
- -``` -% go build -gcflags=-m=2 inl.go -# command-line-arguments -./inl.go:3:6: can inline small with cost 7 as: func() string { s := "hello, world!"; return s } -./inl.go:8:6: cannot inline large: function too complex: cost 82 exceeds budget 80 -./inl.go:38:6: can inline main with cost 68 as: func() { small(); large() } -./inl.go:39:7: inlining call to small func() string { s := "hello, world!"; return s } -``` - -The compiler determined that `func small()` can be inlined due to its cost of 7. `func large()` was determined to be too expensive. `func main()`has been marked as eligable and assigned a cost of 68; 7 from the body of `small`, 57 from the function call to `small` and the remainder in its own overhead. - -The inlining budget can be controlled to some degree with the `-gcflag=-l` flag. Currently the values that apply are: - - * `-gcflags=-l=0` is the default level of inlining. - * `-gcflags=-l` (or `-gcflags=-l=1`) disables inlining. - * `-gcflags=-l=2` and `-gcflags=-l=3` are currently unused and have no effect over `-gcflags=-l=0` - * `-gcflags=-l=4` reduces the cost for inlining non-leaf functions and calls through interfaces.[2][4] - - - -#### Hairy optimisations - -Some functions with a relatively low inlining cost may be ineligible because of their complexity. This is known as the function’s hairiness as the semantics of some operations are hard to reason about once inlined, for example `recover`, `break`. Others, like `select` and `go`, involve co-ordination with the runtime so the extra effort of inlining doesn’t pay for itself. - -The list of hairy statements also includes things like `for `and `range` which don’t have an inherently large cost, but simply haven’t been optimised yet. - -### Mid stack inlining - -Historically the Go compiler only performed leaf inlining–only functions which did not call other functions were eligible. In the context of the hairiness discussion previously, a function call would disqualify the function from being inlined. - -Enter mid stack inlining which, as its name implies, allows functions in the middle of a call stack to be inlined without requiring everything below them to be eligible. Mid stack inlining was introduced by David Lazar in Go 1.9 and improved in subsequent releases. [This presentation][5] goes into some of the difficulties with retaining the behaviour of stack traces and `runtime.Callers` in code paths that had been heavily inlined. - -We see an example of mid-stack inlining in the previous example. After inlining, `func main()` contains the body of `func small()` and a call to `func large()`, thus it is considered a non-leaf function. Historically this would have prevented it from being further inlined even though its combined cost was less than the inlining budget. - -The primary use case for mid stack inlining is to reduce the overhead of a path through the call stack. Consider this example: - -``` -package main - -import ( - "fmt" - "strconv" -) - -type Rectangle struct {} - -//go:noinline -func (r *Rectangle) Height() int { - h, _ := strconv.ParseInt("7", 10, 0) - return int(h) -} - -func (r *Rectangle) Width() int { - return 6 -} - -func (r *Rectangle) Area() int { return r.Height() * r.Width() } - -func main() { - var r Rectangle - fmt.Println(r.Area()) -} -``` - -In this example `r.Area()` is a simple function which calls two others. `r.Width()` can be inlined while `r.Height()`, simulated here with the `//go:noinline` annotation, cannot. 
[3][6] - -``` -% go build -gcflags='-m=2' square.go -# command-line-arguments -./square.go:12:6: cannot inline (*Rectangle).Height: marked go:noinline -./square.go:17:6: can inline (*Rectangle).Width with cost 2 as: method(*Rectangle) func() int { return 6 } -./square.go:21:6: can inline (*Rectangle).Area with cost 67 as: method(*Rectangle) func() int { return r.Height() * r.Width() } ./square.go:21:61: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } -./square.go:23:6: cannot inline main: function too complex: cost 150 exceeds budget 80 -./square.go:25:20: inlining call to (*Rectangle).Area method(*Rectangle) func() int { return r.Height() * r.Width() } -./square.go:25:20: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } -``` - -As the multiplication performed by `r.Area()` is cheap compared to the overhead of calling it, inlining `r.Area()`‘s single expression is a net win even if its downstream caller to `r.Height()` remains ineligible. - -#### Fast path inlining - -The most startling example of the power of mid-stack inlining comes from 2019 when [Carlo Alberto Ferraris improved the performance][7] of `sync.Mutex.Lock()` by allowing the fast path of the lock–the uncontended case–to be inlined into its caller. Prior to this change `sync.Mutex.Lock()` was a large function containing many hairy conditions which made it ineligible to be inlined. Even in the case where the lock was available, the caller had to pay the overhead of calling `sync.Mutex.Lock()`. - -Carlo’s change split `sync.Mutex.Lock()` into two functions (a process he dubbed _outlining_). The outer `sync.Mutex.Lock()` method now calls `sync/atomic.CompareAndSwapInt32()` and returns to the caller immediately if the CAS succeeds. If not, the function falls through to `sync.Mutex.lockSlow()` which handles the slow path required to register interest on the lock and park the goroutine.[4][8] - -``` -% go build -gcflags='-m=2 -l=0' sync 2>&1 | grep '(*Mutex).Lock' -../go/src/sync/mutex.go:72:6: can inline (*Mutex).Lock with cost 69 as: method(*Mutex) func() { if "sync/atomic".CompareAndSwapInt32(&m.state, 0, mutexLocked) { if race.Enabled {  }; return  }; m.lockSlow() } -``` - -By splitting the function into an easily inalienable outer function, falling through to a complex inner function to handle the slow path Carlo’s combined mid stack inlining and the [compiler’s support for intrinsic operations][9] to reduce the cost of an uncontended lock by 14%. Then he repeated the trick for an additional 9% saving in `sync.RWMutex.Unlock()`. - - 1. The budget the Go compiler applies to each function when considering if it is eligible for inlining changes release to release.[][10] - 2. Keep in mind that the compiler authors warn that “[Additional levels of inlining (beyond -l) may be buggy and are not supported”][11]. Caveat emptor.[][12] - 3. The compiler is powerful enough that it can inline complex functions like `strconv.ParseInt`. As a experiment, try removing the `//go:noinline` annotation and observe the result with `-gcflags=-m=2`.[][13] - 4. The expression `race.Enable` is a constant controlled by the `-race` flag passed to the `go` tool. It is `false` for normal builds which allows the compiler to elide those code paths entirely.[][14] - - - -#### Related posts: - - 1. [Inlining optimisations in Go][15] - 2. [Why is a Goroutine’s stack infinite ?][16] - 3. [Stack traces and the errors package][17] - 4. 
[What is the zero value, and why is it useful?][18] - - - --------------------------------------------------------------------------------- - -via: https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go - -作者:[Dave Cheney][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://dave.cheney.net/author/davecheney -[b]: https://github.com/lujun9972 -[1]: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go -[2]: https://medium.com/@joshsaintjacque/small-functions-considered-awesome-c95b3fd1812f -[3]: tmp.FyRthF1bbF#easy-footnote-bottom-1-4076 (The budget the Go compiler applies to each function when considering if it is eligible for inlining changes release to release.) -[4]: tmp.FyRthF1bbF#easy-footnote-bottom-2-4076 (Keep in mind that the compiler authors warn that “Additional levels of inlining (beyond -l) may be buggy and are not supported”. Caveat emptor.) -[5]: https://docs.google.com/presentation/d/1Wcblp3jpfeKwA0Y4FOmj63PW52M_qmNqlQkNaLj0P5o/edit#slide=id.p -[6]: tmp.FyRthF1bbF#easy-footnote-bottom-3-4076 (The compiler is powerful enough that it can inline complex functions like strconv.ParseInt. As a experiment, try removing the //go:noinline annotation and observe the result with -gcflags=-m=2.) -[7]: https://go-review.googlesource.com/c/go/+/148959 -[8]: tmp.FyRthF1bbF#easy-footnote-bottom-4-4076 (The expression race.Enable is a constant controlled by the -race flag passed to the go tool. It is false for normal builds which allows the compiler to elide those code paths entirely.) -[9]: https://dave.cheney.net/2019/08/20/go-compiler-intrinsics -[10]: tmp.FyRthF1bbF#easy-footnote-1-4076 -[11]: https://github.com/golang/go/blob/be08e10b3bc07f3a4e7b27f44d53d582e15fd6c7/src/cmd/compile/internal/gc/inl.go#L11 -[12]: tmp.FyRthF1bbF#easy-footnote-2-4076 -[13]: tmp.FyRthF1bbF#easy-footnote-3-4076 -[14]: tmp.FyRthF1bbF#easy-footnote-4-4076 -[15]: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go (Inlining optimisations in Go) -[16]: https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite (Why is a Goroutine’s stack infinite ?) -[17]: https://dave.cheney.net/2016/06/12/stack-traces-and-the-errors-package (Stack traces and the errors package) -[18]: https://dave.cheney.net/2013/01/19/what-is-the-zero-value-and-why-is-it-useful (What is the zero value, and why is it useful?) 
diff --git a/translated/tech/20200502 Mid-stack inlining in Go.md b/translated/tech/20200502 Mid-stack inlining in Go.md new file mode 100644 index 0000000000..3d5bb1ee15 --- /dev/null +++ b/translated/tech/20200502 Mid-stack inlining in Go.md @@ -0,0 +1,211 @@ +[#]: collector: "lujun9972" +[#]: translator: "lxbwolf" +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " +[#]: subject: "Mid-stack inlining in Go" +[#]: via: "https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go" +[#]: author: "Dave Cheney https://dave.cheney.net/author/davecheney" + +Go 中对栈中函数进行内联 +====== + +[上一篇文章][1]中我论述了叶子内联是怎样让 Go 编译器减少函数调用的开销的,以及延伸出了跨函数边界的优化的机会。本文中,我要论述内联的限制以及叶子与栈中内联的对比。 + +### 内联的限制 + +把函数内联到它的调用处消除了调用的开销,为编译器进行其他的优化提供了更好的机会,那么问题来了,既然内联这么好,内联得越多开销就越少,_为什么不尽可能多地内联呢?_ + +内联用可能的增加程序大小换来了更快的执行时间。限制内联的最主要原因是,创建太多的函数内联的备份会增加编译时间,并且作为边际效应会增加生成的二进制文件的大小。即使把内联带来的进一步的优化机会考虑在内,太激进的内联也可能会增加生成的二进制文件的大小和编译时间。 + +内联收益最大的是[小函数][2],相对于调用它们的开销来说,这些函数做很少的工作。随着函数大小的增长,函数内部做的工作与函数调用的开销相比省下的时间越来越少。函数越大通常越复杂,因此对它们内联后进行优化与不内联相比的收益没有(对小函数进行内联)那么大。 + +### 内联预算 + +在编译过程中,每个函数的内联能力是用_内联预算_计算的。开销的计算过程可以巧妙地内化,像一元和二元等简单操作,在抽象语法数(Abstract Syntax Tree,AST)中通常是每个节点一个单元,更复杂的操作如 `make` 可能单元更多。考虑下面的例子: + +```go +package main + +func small() string { + s := "hello, " + "world!" + return s +} + +func large() string { + s := "a" + s += "b" + s += "c" + s += "d" + s += "e" + s += "f" + s += "g" + s += "h" + s += "i" + s += "j" + s += "k" + s += "l" + s += "m" + s += "n" + s += "o" + s += "p" + s += "q" + s += "r" + s += "s" + s += "t" + s += "u" + s += "v" + s += "w" + s += "x" + s += "y" + s += "z" + return s +} + +func main() { + small() + large() +} +``` + +使用 `-gcflags=-m=2` 参数编译这个函数能让我们看到编译器分配给每个函数的开销: + +```bash +% go build -gcflags=-m=2 inl.go +# command-line-arguments +./inl.go:3:6: can inline small with cost 7 as: func() string { s := "hello, world!"; return s } +./inl.go:8:6: cannot inline large: function too complex: cost 82 exceeds budget 80 +./inl.go:38:6: can inline main with cost 68 as: func() { small(); large() } +./inl.go:39:7: inlining call to small func() string { s := "hello, world!"; return s } +``` + +编译器根据函数 `func small()` 的开销(7)决定可以对它内联,而`func large()` 的开销太大,编译器决定不进行内联。`func main()` 被标记为适合内联的,分配了 68 的开销;其中 `small` 占用 7,调用 `small` 函数占用 57,剩余的(4)是它自己的开销。 + +可以用 `-gcflag=-l` 参数控制内联预算的等级。下面是可使用的值: + + * `-gcflags=-l=0` 默认的内联等级。 + * `-gcflags=-l` (或 `-gcflags=-l=1`) 取消内联。 + * `-gcflags=-l=2` 和 `-gcflags=-l=3` 现在已经不使用了。不影响 `-gcflags=-l=0` + * `-gcflags=-l=4` 减少非叶子函数和通过接口调用的函数的开销。[2][4] + + + +#### 难以理解的优化 + +一些函数虽然内联的开销很小,但由于太复杂它们仍不适合进行内联。这就是函数的不确定性,因为一些操作的语义在内联后很难去推导,如 `recover`,`break`。其他的操作,如 `select` 和 `go` 涉及运行时的协调,因此内联后引入的额外的开销不能抵消内联带来的收益。 + +难理解的语句也包括 `for` 和 `range`,这些语句不一定开销很大,但目前为止还没有对它们进行优化。 + +### 栈中函数优化 + +在过去,Go 编译器只对叶子函数进行内联 — 只有那些不调用其他函数的函数才有资格。在上一段难以理解的的语句的探讨内容中,一次函数调用会让这个函数失去内联的资格。 + +进入栈中进行内联,就像它的名字一样,能内联在函数调用栈中间的函数,不需要先让它下面的所有的函数都被标记为有资格内联的。栈中内联是 David Lazar 在 Go 1.9 中引入的,并在随后的版本中做了改进。[这篇文章][5]深入探究保留栈追踪的表现和被深度内联后的代码路径里的 `runtime.Caller` 们的难点。 + +在前面的例子中我们看到了栈中函数内联。内联后,`func main()` 包含了 `func small()` 的函数体和对 `func large()` 的一次调用,因此它被判定为非叶子函数。在过去,这会阻止它被继续内联,虽然它的联合开销小于内联预算。 + +栈中内联的最主要的应用案例就是减少贯穿函数调用栈的开销。考虑下面的例子: + +```go +package main + +import ( + "fmt" + "strconv" +) + +type Rectangle struct {} + +//go:noinline +func (r *Rectangle) Height() int { + h, _ := strconv.ParseInt("7", 10, 0) + return int(h) +} + +func (r *Rectangle) Width() int { + return 6 +} + +func (r *Rectangle) Area() int { return r.Height() * r.Width() } + +func main() { + var r Rectangle + fmt.Println(r.Area()) +} +``` + +在这个例子中, 
`r.Area()` 是个简单的函数,调用了两个函数。`r.Width()` 可以被内联,`r.Height()` 这里用 `//go:noinline` 指令标注了,不能被内联。[3][6] + +```bash +% go build -gcflags='-m=2' square.go +# command-line-arguments +./square.go:12:6: cannot inline (*Rectangle).Height: marked go:noinline +./square.go:17:6: can inline (*Rectangle).Width with cost 2 as: method(*Rectangle) func() int { return 6 } +./square.go:21:6: can inline (*Rectangle).Area with cost 67 as: method(*Rectangle) func() int { return r.Height() * r.Width() } ./square.go:21:61: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } +./square.go:23:6: cannot inline main: function too complex: cost 150 exceeds budget 80 +./square.go:25:20: inlining call to (*Rectangle).Area method(*Rectangle) func() int { return r.Height() * r.Width() } +./square.go:25:20: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } +``` + +由于 `r.Area()` 中的乘法与调用它的开销相比并不大,因此内联它的表达式是纯收益,即使它的调用的下游 `r.Height()` 仍是没有内联资格的。 + +#### 快速路径内联 + +关于栈中内联的效果最令人吃惊的例子是 2019 年 [Carlo Alberto Ferraris][7] 通过允许把 `sync.Mutex.Lock()` 的快速路径,非竞争的情况,内联到它的调用方来[提升它的性能][7]。在这个修改之前,`sync.Mutex.Lock()` 是个很大的函数,包含很多难以理解的条件,使得它没有资格被内联。即使锁可用时,调用者也要付出调用 `sync.Mutex.Lock()` 的代价。 + +Carlo 把 `sync.Mutex.Lock()` 分成了两个函数(他自己称为*外联*)。外部的 `sync.Mutex.Lock()` 方法现在调用 `sync/atomic.CompareAndSwapInt32()` 且如果 CAS(Compare and Swap)成功了之后立即返回给调用者。如果 CAS 失败,函数会走到 `sync.Mutex.lockSlow()` 慢速路径,需要对锁进行注册,暂停 goroutine。[4][8] + +```bash +% go build -gcflags='-m=2 -l=0' sync 2>&1 | grep '(*Mutex).Lock' +../go/src/sync/mutex.go:72:6: can inline (*Mutex).Lock with cost 69 as: method(*Mutex) func() { if "sync/atomic".CompareAndSwapInt32(&m.state, 0, mutexLocked) { if race.Enabled {  }; return  }; m.lockSlow() } +``` + +通过把函数分割成一个简单的不能再被分割的外部函数,和(如果没走到外部函数就走到的)一个处理慢速路径的复杂的内部函数,Carlo 组合了栈中函数内联和[编译器对基础操作的支持][9],减少了非竞争锁 14% 的开销。之后他在 `sync.RWMutex.Unlock()` 重复这个技巧,节省了另外 9% 的开销。 + + 1. 不同发布版本中,在考虑该函数是否适合内联时,Go 编译器对同一函数的预算是不同的。[][10] + 2. 时刻记着编译器的作者警告过[“更高的内联等级(比 -l 更高)可能导致 bug 或不被支持”][11]。 Caveat emptor。[][12] + 3. 编译器有足够的能力来内联像 `strconv.ParseInt` 的复杂函数。作为一个实验,你可以尝试去掉 `//go:noinline` 注释,使用 `-gcflags=-m=2` 编译后观察。[][13] + 4. `race.Enable` 表达式是通过传递给 `go` 工具的 `-race` 参数控制的一个常量。对于普通编译,它的值是 `false`,此时编译器可以完全省略代码路径。[][14] + + + +#### 相关文章: + + 1. [Go 中的内联优化][15] + 2. [goroutine 的栈为什么会无限增长?][16] + 3. [栈追踪和 errors 包][17] + 4. [零值是什么,为什么它很有用?][18] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go +[2]: https://medium.com/@joshsaintjacque/small-functions-considered-awesome-c95b3fd1812f +[3]: tmp.FyRthF1bbF#easy-footnote-bottom-1-4076 "The budget the Go compiler applies to each function when considering if it is eligible for inlining changes release to release." +[4]: tmp.FyRthF1bbF#easy-footnote-bottom-2-4076 "Keep in mind that the compiler authors warn that “Additional levels of inlining (beyond -l) may be buggy and are not supported”. Caveat emptor." 
+[5]: https://docs.google.com/presentation/d/1Wcblp3jpfeKwA0Y4FOmj63PW52M_qmNqlQkNaLj0P5o/edit#slide=id.p +[6]: tmp.FyRthF1bbF#easy-footnote-bottom-3-4076 "The compiler is powerful enough that it can inline complex functions like strconv.ParseInt. As a experiment, try removing the //go:noinline annotation and observe the result with -gcflags=-m=2." +[7]: https://go-review.googlesource.com/c/go/+/148959 +[8]: tmp.FyRthF1bbF#easy-footnote-bottom-4-4076 "The expression race.Enable is a constant controlled by the -race flag passed to the go tool. It is false for normal builds which allows the compiler to elide those code paths entirely." +[9]: https://dave.cheney.net/2019/08/20/go-compiler-intrinsics +[10]: tmp.FyRthF1bbF#easy-footnote-1-4076 +[11]: https://github.com/golang/go/blob/be08e10b3bc07f3a4e7b27f44d53d582e15fd6c7/src/cmd/compile/internal/gc/inl.go#L11 +[12]: tmp.FyRthF1bbF#easy-footnote-2-4076 +[13]: tmp.FyRthF1bbF#easy-footnote-3-4076 +[14]: tmp.FyRthF1bbF#easy-footnote-4-4076 +[15]: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go "Inlining optimisations in Go" +[16]: https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite "Why is a Goroutine’s stack infinite ?" +[17]: https://dave.cheney.net/2016/06/12/stack-traces-and-the-errors-package "Stack traces and the errors package" +[18]: https://dave.cheney.net/2013/01/19/what-is-the-zero-value-and-why-is-it-useful "What is the zero value, and why is it useful?" From 3af05630e965137ceec70d80d00915149c699e41 Mon Sep 17 00:00:00 2001 From: "Xiaobin.Liu" Date: Sun, 3 May 2020 14:29:59 +0800 Subject: [PATCH 103/178] TSL --- .../tech/20200502 Mid-stack inlining in Go.md | 34 +++++++++---------- 1 file changed, 17 insertions(+), 17 deletions(-) diff --git a/translated/tech/20200502 Mid-stack inlining in Go.md b/translated/tech/20200502 Mid-stack inlining in Go.md index 3d5bb1ee15..9d8edcab43 100644 --- a/translated/tech/20200502 Mid-stack inlining in Go.md +++ b/translated/tech/20200502 Mid-stack inlining in Go.md @@ -1,11 +1,11 @@ -[#]: collector: "lujun9972" -[#]: translator: "lxbwolf" -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " -[#]: subject: "Mid-stack inlining in Go" -[#]: via: "https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go" -[#]: author: "Dave Cheney https://dave.cheney.net/author/davecheney" +[#]: collector: (lujun9972) +[#]: translator: (lxbwolf) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Mid-stack inlining in Go) +[#]: via: (https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) Go 中对栈中函数进行内联 ====== @@ -184,7 +184,7 @@ via: https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go 作者:[Dave Cheney][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[lxbwolf](https://github.com/lxbwolf) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -193,19 +193,19 @@ via: https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go [b]: https://github.com/lujun9972 [1]: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go [2]: https://medium.com/@joshsaintjacque/small-functions-considered-awesome-c95b3fd1812f -[3]: tmp.FyRthF1bbF#easy-footnote-bottom-1-4076 "The budget the Go compiler applies to each function when considering if it is eligible for inlining changes release to release." 
-[4]: tmp.FyRthF1bbF#easy-footnote-bottom-2-4076 "Keep in mind that the compiler authors warn that “Additional levels of inlining (beyond -l) may be buggy and are not supported”. Caveat emptor." +[3]: tmp.FyRthF1bbF#easy-footnote-bottom-1-4076 (The budget the Go compiler applies to each function when considering if it is eligible for inlining changes release to release.) +[4]: tmp.FyRthF1bbF#easy-footnote-bottom-2-4076 (Keep in mind that the compiler authors warn that “Additional levels of inlining (beyond -l) may be buggy and are not supported”. Caveat emptor.) [5]: https://docs.google.com/presentation/d/1Wcblp3jpfeKwA0Y4FOmj63PW52M_qmNqlQkNaLj0P5o/edit#slide=id.p -[6]: tmp.FyRthF1bbF#easy-footnote-bottom-3-4076 "The compiler is powerful enough that it can inline complex functions like strconv.ParseInt. As a experiment, try removing the //go:noinline annotation and observe the result with -gcflags=-m=2." +[6]: tmp.FyRthF1bbF#easy-footnote-bottom-3-4076 (The compiler is powerful enough that it can inline complex functions like strconv.ParseInt. As a experiment, try removing the //go:noinline annotation and observe the result with -gcflags=-m=2.) [7]: https://go-review.googlesource.com/c/go/+/148959 -[8]: tmp.FyRthF1bbF#easy-footnote-bottom-4-4076 "The expression race.Enable is a constant controlled by the -race flag passed to the go tool. It is false for normal builds which allows the compiler to elide those code paths entirely." +[8]: tmp.FyRthF1bbF#easy-footnote-bottom-4-4076 (The expression race.Enable is a constant controlled by the -race flag passed to the go tool. It is false for normal builds which allows the compiler to elide those code paths entirely.) [9]: https://dave.cheney.net/2019/08/20/go-compiler-intrinsics [10]: tmp.FyRthF1bbF#easy-footnote-1-4076 [11]: https://github.com/golang/go/blob/be08e10b3bc07f3a4e7b27f44d53d582e15fd6c7/src/cmd/compile/internal/gc/inl.go#L11 [12]: tmp.FyRthF1bbF#easy-footnote-2-4076 [13]: tmp.FyRthF1bbF#easy-footnote-3-4076 [14]: tmp.FyRthF1bbF#easy-footnote-4-4076 -[15]: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go "Inlining optimisations in Go" -[16]: https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite "Why is a Goroutine’s stack infinite ?" -[17]: https://dave.cheney.net/2016/06/12/stack-traces-and-the-errors-package "Stack traces and the errors package" -[18]: https://dave.cheney.net/2013/01/19/what-is-the-zero-value-and-why-is-it-useful "What is the zero value, and why is it useful?" +[15]: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go (Inlining optimisations in Go) +[16]: https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite (Why is a Goroutine’s stack infinite ?) +[17]: https://dave.cheney.net/2016/06/12/stack-traces-and-the-errors-package (Stack traces and the errors package) +[18]: https://dave.cheney.net/2013/01/19/what-is-the-zero-value-and-why-is-it-useful (What is the zero value, and why is it useful?) 
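The fast-path "outlining" split described in the mid-stack inlining article above can be illustrated with a minimal, self-contained sketch. The `counter` type and its `incSlow` helper below are invented names for illustration only; they are not from the original article, and the real `sync.Mutex` fast path lives in the standard library rather than in user code like this. The exact inlining costs reported by the compiler vary between Go releases.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type counter struct {
	val   int64
	limit int64
}

// Inc is the small, hopefully inlineable fast path: one atomic add and a
// comparison. The uncommon overflow case is pushed out into incSlow so that
// Inc itself stays under the compiler's inlining budget.
func (c *counter) Inc() {
	if atomic.AddInt64(&c.val, 1) < c.limit {
		return
	}
	c.incSlow()
}

// incSlow is the larger slow path; it is marked noinline here only to make
// the fast-path/slow-path split visible in the -gcflags=-m=2 output.
//go:noinline
func (c *counter) incSlow() {
	// Placeholder for expensive work: resetting state, logging, coordination.
	atomic.StoreInt64(&c.val, 0)
}

func main() {
	c := &counter{limit: 10}
	for i := 0; i < 25; i++ {
		c.Inc()
	}
	fmt.Println(atomic.LoadInt64(&c.val))
}
```

Building this sketch with `go build -gcflags=-m=2` should report `Inc` as inlineable while `incSlow` is rejected, which is the same shape of result the article shows for `sync.Mutex.Lock()` and `sync.Mutex.lockSlow()`; the precise cost numbers printed depend on the compiler version.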
From 6fbca396bde6e561ed89989d89194f3b424f3188 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?=E9=83=91?= Date: Sun, 3 May 2020 14:50:41 +0800 Subject: [PATCH 104/178] Translated --- ...7 How to compress files on Linux 5 ways.md | 207 ------------------ ...7 How to compress files on Linux 5 ways.md | 207 ++++++++++++++++++ 2 files changed, 207 insertions(+), 207 deletions(-) delete mode 100644 sources/tech/20200417 How to compress files on Linux 5 ways.md create mode 100644 translated/tech/20200417 How to compress files on Linux 5 ways.md diff --git a/sources/tech/20200417 How to compress files on Linux 5 ways.md b/sources/tech/20200417 How to compress files on Linux 5 ways.md deleted file mode 100644 index f71e90f9fc..0000000000 --- a/sources/tech/20200417 How to compress files on Linux 5 ways.md +++ /dev/null @@ -1,207 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (robsean) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to compress files on Linux 5 ways) -[#]: via: (https://www.networkworld.com/article/3538471/how-to-compress-files-on-linux-5-ways.html) -[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) - -How to compress files on Linux 5 ways -====== -There are a number of tools that you use to compress files on Linux systems, but they don't all behave the same way or yield the same level of compression. In this post, we compare five of them. -Getty Images - -There are quite a few commands on Linux for compressing files. One of the newest and most effective is **xz**, but they all have advantages for both saving disk space and preserving files for later use. In this post, we compare the compression commands and point out the significant differences. - -### tar - -The tar command is not specifically a compression command. It’s generally used to pull a number of files into a single file for easy transport to another system or to back the files up as a related group. It also provides compression as a feature, which makes a lot of sense, and the addition of the **z** compression option is available to make this happen. - -When compression is added to a **tar** command with the **z** option, tar uses **gzip** to do the compressing. - -You can use **tar** to compress a single file as easily as a group though this offers no particular advantage over using **gzip** directly. To use **tar** for this, just identify the file as you would a group of files with a “tar cfz newtarfile filename” command like this: - -``` -$ tar cfz bigfile.tgz bigfile - ^ ^ - | | - +- new file +- file to be compressed - -$ ls -l bigfile* --rw-rw-r-- 1 shs shs 103270400 Apr 16 16:09 bigfile --rw-rw-r-- 1 shs shs 21608325 Apr 16 16:08 bigfile.tgz -``` - -Note the significant reduction in the file size. - -If you prefer, you can use the **tar.gz** extension which might make the character of the file a bit more obvious, but most Linux users will probably recognize **tgz** as meaning the same thing – the combination of **tar** and **gz** to indicate that the file is a compressed tar file. You will be left with both the original file and the compressed file once the compression is complete. - -To collect a number of files together and compress the resultant “tar ball” in one command, use the same basic syntax, but specify the files to be included as a group in place of the single file. 
Here’s an example: - -[][1] - -``` -$ tar cfz bin.tgz bin/* - ^ ^ - | +-- files to include - + new file -``` - -### zip - -The **zip** command creates a compressed file while leaving the original file intact. The syntax is straightforward except that, as with **tar**, you have to remember that your original file should be the last argument on the command line. - -``` -$ zip ./bigfile.zip bigfile -updating: bigfile (deflated 79%) -$ ls -l bigfile bigfile.zip --rw-rw-r-- 1 shs shs 103270400 Apr 16 11:18 bigfile --rw-rw-r-- 1 shs shs 21606889 Apr 16 11:19 bigfile.zip -``` - -### gzip - -The **gzip** command is very simple to use. You just type "gzip" followed by the name of the file you want to compress. Unlike the commands described above, **gzip** will encrypt the files "in place". In other words, the original file will be replaced by the encrypted file. - -``` -$ gzip bigfile -$ ls -l bigfile* --rw-rw-r-- 1 shs shs 21606751 Apr 15 17:57 bigfile.gz -``` - -### bzip2 - -As with the **gzip** command, **bzip2** will compress the file that you select "in place", leaving only the original file. - -``` -$ bzip bigfile -$ ls -l bigfile* --rw-rw-r-- 1 shs shs 18115234 Apr 15 17:57 bigfile.bz2 -``` - -### xz - -A relative newcomer to the compression command team, **xz** is a front runner in terms of how well it compresses files. Like the two previous commands, you only need to supply the file name to the command. Again, the original file is compressed in place. - -``` -$ xz bigfile -$ ls -l bigfile* --rw-rw-r-- 1 shs shs 13427236 Apr 15 17:30 bigfile.xz -``` - -For large files, you are likely to notice that **xz** takes longer to run than other compression commands, but the compression results are very impressive. - -### Comparisons to consider - -Most people have heard it said that "size isn't everything". So, let's compare file size as well as some other issues to be considered when you make plans for how you want to compress your files. - -The stats shown below all relate to compressing the single file – bigfile – used in the example commands shown above. This file is a large and fairly random text file. Compression rates will depend to some extent on the content of the files. - -#### Size reduction - -When compared, the various compression commands shown above yielded the following results. The percentages represent how the compressed files compare with the original file. - -``` --rw-rw-r-- 1 shs shs 103270400 Apr 16 14:01 bigfile ------------------------------------------------------- --rw-rw-r-- 1 shs shs 18115234 Apr 16 13:59 bigfile.bz2 ~17% --rw-rw-r-- 1 shs shs 21606751 Apr 16 14:00 bigfile.gz ~21% --rw-rw-r-- 1 shs shs 21608322 Apr 16 13:59 bigfile.tgz ~21% --rw-rw-r-- 1 shs shs 13427236 Apr 16 14:00 bigfile.xz ~13% --rw-rw-r-- 1 shs shs 21606889 Apr 16 13:59 bigfile.zip ~21% -``` - -The **xz** commands wins, ending up at only 13% the size of the original file, but all of these compression commands reduced the original file size quite significantly. - -#### Whether the original files are replaced - -The **bzip2**, **gzip** and **xz** commands all replace the original files with compressed versions. The **tar** and **zip** commands to not. - -#### Run time - -The **xz** command seems to take more time than the other commands to encrypt the files. For bigfile, the approximate times were: - -``` -command run-time -tar 4.9 seconds -zip 5.2 seconds -bzip2 22.8 seconds -gzip 4.8 seconds -xz 50.4 seconds -``` - -Decompression times are likely to be considerably smaller than compression times. 
- -#### File permissions - -Regardless of what permissions you have set on your original file, permissions for the compressed file will be based on your **umask** setting, except for **bzip2** which retains the original file's permissions. - -#### Compatibility with Windows - -The **zip** command creates a file which can be used (i.e., decompressed) on Windows systems as well as Linux and other Unix systems without having to install other tools which may or may not be available. - -### Decompressing files - -The commands for decompressing files are similar to those used to compress the files. These commands would work for decompressing bigfile after the compression commands shown above were run. - - * tar: **tar xf bigfile.tgz** - * zip: **unzip bigfile.zip** - * gzip: **gunzip bigfile.gz** - * bzip2: **bunzip2 bigfile.gz2** - * xz: **xz -d bigfile.xz** or **unxz bigfile.xz** - - - -### Running your own compression comparisons - -If you'd like to run some tests on your own, grab a large but replaceable file and compress it using each of the commands shown above – preferably using a new subdirectory. You might have to first install **xz** if you want to include it in the tests.This script can make the comparison easier, but will likely take a few minutes to complete. - -``` -#!/bin/bash - -# ask user for filename -echo -n "filename> " -read filename - -# you need this because some commands will replace the original file -cp $filename $filename-2 - -# clean up first (in case previous results are still available) -rm $filename.* - -tar cvfz ./$filename.tgz $filename > /dev/null -zip $filename.zip $filename > /dev/null -bzip2 $filename -# recover original file -cp $filename-2 $filename -gzip $filename -# recover original file -cp $filename-2 $filename -xz $filename - -# show results -ls -l $filename.* - -# replace the original file -mv $filename-2 $filename -``` - -Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind. 
- --------------------------------------------------------------------------------- - -via: https://www.networkworld.com/article/3538471/how-to-compress-files-on-linux-5-ways.html - -作者:[Sandra Henry-Stocker][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ -[b]: https://github.com/lujun9972 -[1]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy) -[2]: https://www.facebook.com/NetworkWorld/ -[3]: https://www.linkedin.com/company/network-world diff --git a/translated/tech/20200417 How to compress files on Linux 5 ways.md b/translated/tech/20200417 How to compress files on Linux 5 ways.md new file mode 100644 index 0000000000..e9ef6af572 --- /dev/null +++ b/translated/tech/20200417 How to compress files on Linux 5 ways.md @@ -0,0 +1,207 @@ +[#]: collector: (lujun9972) +[#]: translator: (robsean) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to compress files on Linux 5 ways) +[#]: via: (https://www.networkworld.com/article/3538471/how-to-compress-files-on-linux-5-ways.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +在 Linux 上压缩文件的 5 种方法 +====== +在 Linux 系统上有很多可以用于压缩文件的工具,但是它们表现的行为或产生相同程度的压缩等级并不相同,在这篇文章中,我们比较其中的五个工具。 +Getty Images + +在 Linux 上有不少用于压缩文件的命令。最新最有效的一个方法是 **xz** ,但是所有的方法都有节省磁盘空间和为后期使用维护备份文件的优点。在这篇文章中,我们将比较压缩命令并指出显著的不同 。 + +### tar + +tar 命令不是专门的压缩命令。它通常用于将多个文件拉入一单个文件中,以便容易地传输到另一个系统,或者备份文件为一个相关的组。它也提供压缩作为一个功能,这是很明智的,附加的 **z** 压缩选项能够实现压缩文件。 + +当压缩过程被附加到一个使用 **z** 选项的 **tar** 命令时,tar 使用 **gzip** 来进行压缩。 + +你可以使用 **tar** 来压缩一个单个文件,就像压缩一个组一样容易,尽管这种操作与直接使用 **gzip** 相比没有特别的优势。为此,要使用 **tar** ,只需要使用一个 “tar cfz newtarfile filename” 命令来像你标识一个组一样标识文件,像这样: + +``` +$ tar cfz bigfile.tgz bigfile + ^ ^ + | | + +- 新的文件 +- 将被压缩的文件 + +$ ls -l bigfile* +-rw-rw-r-- 1 shs shs 103270400 Apr 16 16:09 bigfile +-rw-rw-r-- 1 shs shs 21608325 Apr 16 16:08 bigfile.tgz +``` + +注意,文件的大小显著减少。 + +如果你喜欢,你可以使用 **tar.gz** 扩展名,这可能会使文件的特征更加明显,但是大多数的 Linux 用户将很可能会意识到与 **tgz** 的意思是相同的东西 – **tar** 和 **gz** 的组合来显示文件是一个压缩的 tar 文件。在压缩完成后,将留下原始文件和压缩文件。 + +为收集很多文件在一起并在一个命令中压缩生成的 “tar ball” ,使用相同的语法,但是要指明将要被包含的文件来作为一个组,而不是单个文件。这里有一个示例: + +[][1] + +``` +$ tar cfz bin.tgz bin/* + ^ ^ + | +-- 将被包含的文件 + + 新的文件 +``` + +### zip + +**zip** 命令创建一个压缩文件,与此同时保留原始文件的完整性。语法像使用 **tar** 一样简单,只是你必需记住,你的原始文件名称应该是命令行上的最后一个参数。 + +``` +$ zip ./bigfile.zip bigfile +updating: bigfile (deflated 79%) +$ ls -l bigfile bigfile.zip +-rw-rw-r-- 1 shs shs 103270400 Apr 16 11:18 bigfile +-rw-rw-r-- 1 shs shs 21606889 Apr 16 11:19 bigfile.zip +``` + +### gzip + +**gzip** 命令非常容易使用。你只需要键入 "gzip" ,紧随其后的是你想要压缩的文件名称。不像上述描述的命令,**gzip** 将“就地”加密文件。换句话说,原始文件将被加密文件替换。 + +``` +$ gzip bigfile +$ ls -l bigfile* +-rw-rw-r-- 1 shs shs 21606751 Apr 15 17:57 bigfile.gz +``` + +### bzip2 + +像使用 **gzip** 命令一样,**bzip2** 将在你选的“合适位置”压缩文件,只留下原始文件保持原样离开。 + +``` +$ bzip bigfile +$ ls -l bigfile* +-rw-rw-r-- 1 shs shs 18115234 Apr 15 17:57 bigfile.bz2 +``` + +### xz + +压缩命令组中的一个相对较新的成员,**xz** 就如何更好的压缩文件而言是领跑者。像先前的两个命令一样,你只需要将文件名称补给到命令中。再强调一次,原始文件被就地压缩。 + +``` +$ xz bigfile +$ ls -l bigfile* +-rw-rw-r-- 1 shs shs 13427236 Apr 15 17:30 bigfile.xz +``` + +对于大文件来说,你可能会注意到 **xz** 将比其它的压缩命令花费更多的运行时间,但是压缩的结果却是非常令人赞叹的。 + +### 考虑对比性 + +大多数人都听说过 
"文件大小不是万能的"。所以,让我们比较一下文件大小以及一些当你计划如何压缩文件时的问题。 + +下面显示的统计数据都与压缩单个文件相关,在上面显示的示例中使用 – bigfile – 。这个文件是一个大的且相当随机的文本文件。压缩率在一定程度上取决于文件的内容。 + +#### 大小减缩率 + +在比较期间,上面显示的各种压缩命产生下面的结果。百分比表示压缩文件对比原始文件。 + +``` +-rw-rw-r-- 1 shs shs 103270400 Apr 16 14:01 bigfile +------------------------------------------------------ +-rw-rw-r-- 1 shs shs 18115234 Apr 16 13:59 bigfile.bz2 ~17% +-rw-rw-r-- 1 shs shs 21606751 Apr 16 14:00 bigfile.gz ~21% +-rw-rw-r-- 1 shs shs 21608322 Apr 16 13:59 bigfile.tgz ~21% +-rw-rw-r-- 1 shs shs 13427236 Apr 16 14:00 bigfile.xz ~13% +-rw-rw-r-- 1 shs shs 21606889 Apr 16 13:59 bigfile.zip ~21% +``` + +**xz** 命令获胜,最终只有压缩文件大小的13%,但是这些所有的压缩命令都相当显著地减少原始文件的大小。 + +#### 是否替换原始文件 + +**bzip2**,**gzip** 和 **xz** 命令都将使用压缩文件替换原始文件。**tar** 和 **zip** 命令不替换。 + +#### 运行时间 + +**xz** 命令似乎比其它命令需要花费更多的时间来加密文件。对于 bigfile 来说,近似时间是: + +``` +命令 运行时间 +tar 4.9 秒 +zip 5.2 秒 +bzip2 22.8 秒 +gzip 4.8 秒 +xz 50.4 秒 +``` + +解压缩文件很可能比压缩时间要短得多。 + +#### 文件权限 + +不管你对压缩文件设置什么权限,压缩文件的权限将基于你的 **umask** 设置,除 **bzip2** 维持原始文件的权限外。 + +#### 与 Windows 的兼容性 + +**zip** 命令将创建一个可被使用的文件(例如,解压缩),在 Windows 系统上以及 Linux 和其它 Unix 系统上,无需安装其它可能可用或不可用的工具。 + +### 解压缩文件 + +解压缩文件的命令类似于这些压缩文件的命令。这些命令将在我们运行上述压缩命令后用于解压缩 bigfile 。 + + * tar: **tar xf bigfile.tgz** + * zip: **unzip bigfile.zip** + * gzip: **gunzip bigfile.gz** + * bzip2: **bunzip2 bigfile.gz2** + * xz: **xz -d bigfile.xz** 或 **unxz bigfile.xz** + + + +### 对比你自己运行的压缩 + +如果你想自己运行一些测试,抓取一个大的且可以替换的文件,并使用上面显示的每个命令来压缩它 – 最好使用一个新的子目录。你可能必需先安装 **xz** ,如果你想在测试中包含它的话。这个脚本可能更容易地压缩,但是将可能花费几分钟来完成。 + +``` +#!/bin/bash + +# 询问用户文件名称 +echo -n "filename> " +read filename + +# 你需要这个,因为一些命令将替换原始文件 +cp $filename $filename-2 + +# 先清理(以免先前的结果仍然可用) +rm $filename.* + +tar cvfz ./$filename.tgz $filename > /dev/null +zip $filename.zip $filename > /dev/null +bzip2 $filename +# 恢复原始文件 +cp $filename-2 $filename +gzip $filename +# 恢复原始文件 +cp $filename-2 $filename +xz $filename + +# 显示结果 +ls -l $filename.* + +# 替换原始文件 +mv $filename-2 $filename +``` + +加入 [Facebook][2] 和 [LinkedIn][3] 网络世界社区来评论那些最重要的话题。 + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3538471/how-to-compress-files-on-linux-5-ways.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/blog/itaas-and-the-corporate-storage-technology/?utm_source=IDG&utm_medium=promotions&utm_campaign=HPE22140&utm_content=sidebar (ITAAS and Corporate Storage Strategy) +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world From 8e065c91dc13e4121408bfd8e1dcf28ee04bf11f Mon Sep 17 00:00:00 2001 From: "qfzy1233@163.com" Date: Sun, 3 May 2020 16:32:03 +0800 Subject: [PATCH 105/178] Update 20200424 16 Things to do After Installing Ubuntu 20.04.md --- ...ngs to do After Installing Ubuntu 20.04.md | 196 +++++++++--------- 1 file changed, 98 insertions(+), 98 deletions(-) diff --git a/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md b/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md index 5637651850..1b5d5ef5c3 100644 --- a/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md +++ b/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md @@ -7,218 +7,218 @@ [#]: via: 
(https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/) [#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) -16 Things to do After Installing Ubuntu 20.04 +安装完 Ubuntu 20.04 后要做的 16 件事 ====== -_**Here is a list of tweaks and things to do after installing Ubuntu 20.04, to get a smoother and better desktop Linux experience.**_ +_**以下是安装 Ubuntu 20.04 之后需要做的一些调整和事情,它将使你获得更流畅、更好的桌面 Linux 体验。**_ -[Ubuntu 20.04 LTS brings plenty of new features][1] and visual changes. If you choose to install Ubuntu 20.04, let me show you a few recommended steps that you can follow to get started with it. +[Ubuntu 20.04 LTS (长期支持版)带来了许多新的特性][1] 和观感上的变化。 如果你要安装 Ubuntu 20.04 ,让我向你展示一些推荐步骤便于你的使用。 -### 16 Things to do after installing Ubuntu 20.04 LTS “Focal Fossa” +### 安装完 Ubuntu 20.04 LTS “Focal Fossa”后要做的 16 件事 ![][2] -The steps I am going to mention here are my recommendation. You may ignore a few customization or tweaks if they don’t suit your need and interest. +我在这里要提到的步骤仅是我的建议。如果一些定制或调整不适合你的需要和兴趣,你可以忽略它们。 -Similarly, some steps may seem too simple but essential for someone completely new to Ubuntu. +同样的,有些步骤看起来很简单,但是对于一个 Ubuntu 新手来说是必要的。 -A number of suggestions here are suited for the default Ubuntu 20.04 with GNOME desktop. So please check [which Ubuntu version][3] and [which desktop environment][4] you are using. +这里的一些建议适用于启用 GNOME 作为默认桌面Ubuntu 20.04。所以请检查[Ubuntu版本][3]和[桌面环境][4]。 -Let’s get started with the list of things to do after installing Ubuntu 20.04 LTS codenamed Focal Fossa. +以下列表便是安装了代号为 Focal Fossa 的 Ubuntu 20.04 LTS 之后要做的事。 -#### 1\. Get your system ready by updating and enabling additional repos +#### 1\. 通过更新和启用额外的 repos 来准备您的系统 -The first thing you should do after installing Ubuntu or any other Linux distribution is to update it. Linux works on a local database of available packages. And this cache needs to be synced in order for you to be able to install any software. +安装Ubuntu或任何其他Linux发行版之后,你应该做的第一件事就是更新它。Linux 的可用包数据库工作于本地。这个缓存需要同步以便你能够安装任何软件。 -It is very easy to update Ubuntu. You can run the software updater from the menu (press Windows key and search for software updater): +升级Ubuntu非常简单。你可以运行软件更新从菜单( 按 Win 键并搜索软件更新): -![Software Updater in Ubuntu 20.04][5] +![Ubuntu 20.04 的软件升级中心][5] -You may also use the following command in the terminal to update your system: +你也可以在终端使用以下命令更新你的系统: ``` sudo apt update && sudo apt upgrade ``` -Next, you should make sure that you have [universe and multiverse repositories enabled][6]. You’ll have access to a lot more software with these repositories. I also recommend reading about [Ubuntu repositories][6] to learn the basic concept behind it. +接下来,您应该确保启用了[universe和multiverse存储库][6]。使用这些存储库,你可以访问更多的软件。我还推荐阅读关于[Ubuntu软件库][6]的书籍,以了解它背后的基本概念。 -Search for Software & Updates in the menu: +搜索软件和放大器;更新菜单: -![Software & Updates Settings][7] +![软件及更新设置项][7] -Make sure to check the boxes in front of the repositories: +请务必选中存储库前面的方框: -![Enable additional repositories][8] +![启用额外的存储库][8] -#### 2\. Install media codecs to play MP3, MPEG4 and other media files +#### 2\. 安装媒体解码器来播放MP3、MPEG4和其他格式媒体文件 -If you want to play media files like MP3, MPEG4, AVI etc, you’ll need to install media codecs. Ubuntu doesn’t install it by default because of copyright issues in various countries. +如果你想播放媒体文件,如MP3, MPEG4, AVI等,你需要安装媒体解码器。由于不同国家的版权问题,Ubuntu在默认情况下不会安装它。 -As an individual, you can install these media codecs easily [using the Ubuntu Restricted Extra package][9]. 
This will install media codecs, Adobe Flash player and [Microsoft True Type Fonts in your Ubuntu system][10]. +作为个人,你可以很容易地安装这些媒体编解码器[使用 Ubuntu 额外安装包][9]。这将安装媒体编解码器,Adobe Flash播放器和[微软 True Type 字体在您的Ubuntu系统][10]。 -You can install it by [clicking this link][11] (it will asked to be open in software center) or use this command: +你可以通过[点击这个链接][11](它将要求打开软件中心)来安装它,或者使用以下命令: ``` sudo apt install ubuntu-restricted-extras ``` -If you encounter the EULA or the license screen, remember to use the tab key to select between the options and then hit enter to confirm your choice. +如果遇到EULA或许可证界面,请记住使用tab键在选项之间进行选择,然后按enter键确认你的选择。 -![Press tab to select OK and press enter][12] +![按tab键选择OK并按enter键][12] -#### 3\. Install software from the software center or the web +#### 3\. 从软件中心或网络上安装软件 -Now that you have set up the repositories and updated the package cache, you should start installing software that you need. +现在已经设置了存储库并更新了包缓存,应该开始安装所需的软件了。 -There are several ways of [installing applications in Ubuntu][13]. The easiest and the official way is to use the Software Center. +有几种方法可以在Ubuntu中安装应用程序。最简单和正式的方法是使用软件中心。 -![Ubuntu Software Center][14] +![Ubuntu 软件中心][14] -If you want some recommendation about software, please refer to this extensive [list of Ubuntu applications for different purposes][15]. +如果你想要一些关于软件的建议,请参考这个扩展的[不同用途的Ubuntu应用程序列表][15]。 -Some software vendors provide .deb files to easily install their application. You may get the deb files from their website. For example, to [install Google Chrome on Ubuntu][16], you can get the deb file from its website and double click on it to start the installation. +一些软件供应商提供 .deb 文件来方便地安装他们的应用程序。你可以从他们的网站获得 deb 文件。例如,要[在 Ubuntu 上安装谷歌 Chrome ][16],你可以从它的网站上获得 deb 文件,双击它开始安装。 -#### 4\. Enjoy gaming with Steam Proton and GameMode +#### 4\. 享受 Steam Proton 和 GameModeEnjoy 上的游戏 -[Gaming on Linux][17] has come a long way. You are not restricted to a handful of games included by default. You can [install Steam on Ubuntu][18] and enjoy a good number of games. +[ Linux 上的游戏][17] 已经有了长足的发展。你不受限于自带的少数游戏。你可以[在 Ubuntu 上安装 Steam ][18]并享受许多游戏。 -[Steam’s new P][19][r][19][oton project][19] enables you to play a number of Windows-only games on Linux. In addition to that, Ubuntu 20.04 comes with [Feral Interactive’s GameMode][20] installed by default. +[Steam 新的 Proton 项目][19]可以让你在Linux上玩许多只适用于windows的游戏。除此之外,Ubuntu 20.04还默认安装了[Feral Interactive的游戏][20]。 -The GameMode automatically adjust Linux system performance to give more priority to games than other background processes. +游戏模式会自动调整Linux系统的性能,使游戏具有比其他后台进程更高的优先级。 -This means some games that support the GameMode (like [Rise of Tomb Raiders][21]) should have improved performance on Ubuntu. +这意味着一些支持游戏模式的游戏(如[古墓丽影崛起][21])在 Ubuntu 上的性能应该有所提高。 -#### 5\. Manage auto-updates (for intermediate and experts) +#### 5\. 管理自动更新(适用于进阶和专家) -Recently, Ubuntu has started to automatically download and install security updates that are essential to your system. This is a security feature as a regular user, you should leave it as it is, +最近,Ubuntu 已经开始自动下载和安装对你的系统至关重要的安全更新。这是一个安全功能,作为一个普通用户,你应该让它保持默认, -But if you like to do everything on your own and this auto-update is frequently leading you to [“Unable to lock the administration directory” error][22], maybe you can change the auto updates behavior. +但是,如果你喜欢自己进行配置更新,而这个自动更新经常导致你[“无法锁定管理目录”错误][22],也许你可以改变自动更新行为。 -You can opt for the Show immediately so that it notifies you of security updates as soon as they are available instead of automatically installing. 
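(译注:如果你想在命令行中查看或调整 Ubuntu 默认的 unattended-upgrades 自动更新机制,可以参考下面的示例。这里假设使用的是默认配置;配置文件名在不同版本中可能略有差异,仅供参考。)

```
# 查看当前与自动更新相关的 APT 设置
apt-config dump | grep Periodic

# 以交互方式开启或关闭"自动安装安全更新"
sudo dpkg-reconfigure -plow unattended-upgrades

# 也可以直接查看(或编辑)对应的配置文件
cat /etc/apt/apt.conf.d/20auto-upgrades
```

(如果你更习惯图形界面,也可以按正文所述,在"软件和更新"的设置中调整。)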
+你可以选择更新是提示,以便它通知你的安全更新是否可用,而不是自动安装。 -![Control the auto updates settings][23] +![管理自动更新设置][23] -#### 6\. Control automatic suspend and screenlock for laptops +#### 6\. 控制电脑的自动挂起和屏幕锁定 -If you are using Ubuntu 20.04 on a laptop then you may want to pay attention to a few power and screenlock settings. +如果你在笔记本电脑上使用Ubuntu 20.04,那么你可能需要注意一些电源和屏幕锁定设置。 -If your laptop is on battery mode, Ubuntu will suspend the system after 20 minutes of inactivity. This is done to save battery power. Personally, I don’t like it and thus I disable it. +如果你的笔记本电脑是电池模式,Ubuntu会在20分钟不工作后休眠系统。这样做是为了节省电池电量。就我个人而言,我不喜欢它,因此我禁用了它。 -Similarly, if you leave your system for a few minutes, it automatically locks the screen. I don’t like this behavior as well so I prefer disabling it. +类似地,如果您离开系统几分钟,它会自动锁定屏幕。我也不喜欢这种行为,所以我宁愿禁用它。 -![Power Settings in Ubuntu 20.04][24] +![Ubuntu 20.04的电源设置][24] -#### 7\. Enjoy dark mode +#### 7\. 享受夜间模式 -One of the [most talked about features of Ubuntu 20.04][25] is the dark mode. You can enable the dark mode by going into Settings and selecting it under Appearance section. +一个[谈论最多的 Ubuntu 20.04 特性][25]是夜间模式。你可以通过进入设置并在外观部分中选择它来启用夜间模式。 -![Enable Dark Theme Ubuntu][26] +![开启夜间主题 Ubuntu ][26] -You may have to do some [additional tweaking to get full dark mode in Ubuntu 20.04][27]. +你可能需要做一些额外的调整来启用 Ubuntu 20.04 的深度夜间模式。 -#### 8\. Control desktop icons and launcher +#### 8\. 控制桌面图标和启动程序 -If you want a minimal looking desktop, you can disable the icons on the desktop. You can also disable the launcher from the left side and the appindicators in the top panel. +如果你想要一个最简的桌面,你可以禁用桌面上的图标。您还可以从左侧禁用启动程序,并在顶部面板中禁用软件状态栏。 -All this can be controlled via the new GNOME Extensions that is already available by default. +所有这些都可以通过默认的新 GNOME 扩展来控制。 ![][28] -By the way, you can also change the position of the launcher to the bottom or to the right by going to the Settings->Appearance. +顺便说一下,你也可以通过设置-外观来改变启动栏的位置到底部或者右边。 -#### 9\. Use emojis (smileys) and special characters or disable it from the search +#### 9\. 使用emojis(表情)和特殊字符,或从搜索中禁用它 -Ubuntu provides an easy way to use smiley or the emoticons. There is a dedicated application called Characters installed by default. It basically gives you [Unicode][29] of the emojis. +Ubuntu提供了一个使用 emojis 或表情符号的简单方法。在默认情况下,有一个专用的应用程序叫做 Characters。它可以给你基本表情符号的[Unicode][29]码。 -Not only emojis, you can use it to get the unicode for French, German, Russian and Latin characters. Clicking on the symbol gives you the opportunity to copy the unicode and when you paste this code, your chosen symbol should be typed. +不仅是表情符号,你还可以使用它来获得法语、德语、俄语和拉丁语字符的 unicode 。单击符号你可以复制 unicode ,当你粘贴这段代码时,你所选择的符号便被插入。 -![Emoji Ubuntu][30] +! [Emoji Ubuntu] [30] -You’ll find these special characters and emoticons appearing in the desktop search as well. You can copy them from the search results as well. +你也能在桌面搜索中找到这些特殊的字符和表情符号。也可以从搜索结果中复制它们。 -![Emojis appear in desktop search][31] +![Emojis 出现在桌面搜索中][31] -If you don’t want to see them in search results, you should disable their access to the search feature. The next section discuss how to do that. +如果你不想在搜索结果中看到它们,你应该禁用搜索功能对它们的访问。下一节将讨论如何做到这一点。 -#### 10\. Master the desktop search +#### 10\. 掌握桌面搜索 -The GNOME desktop has a powerful search feature. Most people use it for searching installed applications but it is more than just that. +GNOME桌面拥有强大的搜索功能。大多数人使用它来搜索已安装的应用程序,但它不仅限于此。 -Press the super key (Windows key) and search for something. 
It will show any applications that matches that search term, followed by system settings and matching applications available in the software center. +按超级键(Win键)并搜索一些东西。它将显示与搜索词匹配的任何应用程序,然后是系统设置和软件中心提供的匹配应用程序。 -![Desktop search][32] +![桌面搜索][32] -Not only that, the search can also find text inside files. If you are using the calendar, it can also find your meetings and reminders. You can even do quick calculations in the search and copy its result. +不仅如此,搜索还可以找到文件中的文本。如果你正在使用日历,它也可以找到你的会议和提醒。你甚至可以在搜索中进行快速计算并复制其结果。 -![Quick Calculations Ubuntu Search][33] +![Ubuntu搜索的快速计算][33] -You can control what can be searched and in which order by going into Settings. +你可以通过进入设置来控制搜索的内容和顺序。 ![][34] -#### 11\. Use nightlight feature to reduce eye strain at night +#### 11\. 使用夜灯功能,减少夜间眼睛疲劳 -If you use your computer or smartphone at night, you should use the night light feature to reduce eye strain. I feel that it helps a lot. +如果你在晚上使用电脑或智能手机,你应该使用夜灯功能来减少眼睛疲劳。我觉得这很有帮助。 -The night light feature adds a yellow tint to the screen which is less pinching than the white light. +夜灯的特点是在屏幕上增加了一种黄色的色调,比白光少了一些挤压感。 -You can enable night light in the Settings -> Displays and switching to Night Light tab. You can set the ‘yellowness’ as per your liking. +你可以在设置->显示并切换到夜灯选项卡。你可以根据自己的喜好设置“黄色”。 -![Nightlight feature][35] +![夜灯功能][35] -#### 12\. Got a 2K/4K screen? Use fractional scaling to get bigger icons and fonts +#### 12\.使用 2K/4K 显示器? 使用分辨率缩放得到更大的图标和字体 -If you feel that the icons, fonts, folders everything looks too small on your HiDPI screen, you can take advantage of the fractional scaling. +如果你觉得图标、字体、文件夹在你的高分辨率屏幕上看起来都太小了,你可以利用分辨率缩放。 -Enabling fractional scaling gives you more options to increase the size between 100% to 200%. You can choose the scaling size that suits your preference. +启用分级缩放可以让你有更多的选项来从100%增加到200%。你可以选择适合自己喜好的缩放尺寸。 -![Enable fractional scaling from Settings -> Displays][36] +![启用高分缩放从设置->显示][36] -#### 13\. Explore GNOME Extensions to extend the usability of GNOME desktop +#### 13\. 探索GNOME扩展以扩展GNOME桌面的可用性 -The GNOME desktop has tiny plugins or add-ons called Extensions. You should [learn to use GNOME extensions][37] to extend the usability of your system. +GNOME桌面有称为扩展的小插件或附加组件。你应该[学习使用 GNOM E扩展][37]来扩展系统的可用性。 -As you can see in the image below, the weather extension shows the weather information in the top panel. A tiny but useful thing. You may also take a look at some of [best GNOME extensions][38] here. Don’t install all of them, use only those that are useful to you. +如下图所示,天气扩展顶部面板中显示了天气信息。不起眼但十分有用。您也可以在这里查看一些[最佳 GNOME 扩展][38]。不要全部安装,只使用那些对你有用的。 -![Weather Extension][39] +![天气 扩展][39] -#### 14\. Enable ‘do not disturb’ mode and focus on work +#### 14\.启用“勿扰”模式,专注于工作 -If you want to concentrate on work, disabling desktop notifications would come handy. You can easily enable ‘do not disturb’ mode and mute all notifications. +如果你想专注于工作,禁用桌面通知会很方便。你可以轻松地启用“勿扰”模式,并静音所有通知。 -![Enable ‘Do Not Disturb’ to get rid of desktop notifications][40] +![启用“请勿打扰”清除桌面通知][40] -These notifications will still be in the message tray so that you can read them later but they won’t pop up on the desktop anymore. +这些通知仍然会在消息栏中,以便您以后可以阅读它们,但是它们不会在桌面上弹出。 -#### 15\. Clean your system +#### 15\. 清理你的系统 -This is something you don’t need to do right after installing Ubuntu. But keeping it in mind will help you. +这是你安装Ubuntu后不需要马上做的事情。但是记住它会对你有帮助。 -Over the time, your system will have significant amount of packages that won’t be needed anymore. 
You can remove them all in one go with this command: +随着时间的推移,你的系统将有大量不再需要的包。你可以用这个命令一次性删除它们: ``` sudo apt autoremove ``` -There are other [ways to clean Ubuntu to free disk space][41] but this is the easiest and safest. +还有其他[清理 Ubuntu 以释放磁盘空间的方法][41],但这是最简单和最安全的。 -#### 16\. Tweak and customize the GNOME desktop to your liking +#### 16\. 根据您的喜好调整和定制 GNOME 桌面 -I highly recommend [installing GNOME Tweaks tool][42]. This will give you access to a few additional settings to tweak. +我强烈推荐[安装 GNOME 设置工具][42]。这将让你可以通过额外的设置来进行定制。 -![Gnome Tweaks Tool][43] +![Gnome 设置工具][43] -For example, you can [display battery percentage][44], [fix right click in touchpad issue][45], change shell theme, change mouse pointer speed, display date and week numbers, change application window behavior etc. +比如,你可以[以百分比形式显示电池容量][44],[修正在touchpad中右键问题][45],改变 Shell 主题,改变鼠标指针速度,显示日期和星期数,改变应用程序窗口行为等。 -There is no end to customization and I cannot probably most of them here. This is why I recommend [reading these articles][42] about [customizing GNOME desktop][46]. +定制是没有尽头的,我可能仅使用了它的一小部分功能。这就是为什么我推荐[阅读这些文章]关于[自定义GNOME桌面][46]的[42]。 -You can also [install new themes in Ubuntu][47] though personally, I like the default theme in this release. This is the first time that I have stuck with the default icons and theme in an Ubuntu release. +你也可以[在Ubuntu中安装新主题][47],不过就我个人而言,我喜欢这个版本的默认主题。这是我第一次在Ubuntu发行版中使用默认的图标和主题。 -#### What do you do after installing Ubuntu? +#### 安装 Ubuntu 之后你会做什么? -If you are an Ubuntu beginner, I recommend [going through this collection of Ubuntu tutorials][48] to get started with it. +如果你是Ubuntu的初学者,我建议你[阅读这一系列Ubuntu教程][48]开始学习。 -So these were my recommendations. What are the steps you follow after installing Ubuntu? Share your favorite things and I might update this article with your suggestions. 
+这就是我的建议。安装Ubuntu之后你要做什么?分享你最喜欢的东西,我可能根据你的建议来更新这篇文章。 -------------------------------------------------------------------------------- @@ -226,7 +226,7 @@ via: https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/ 作者:[Abhishek Prakash][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[qfzy1233](https://github.com/qfzy1233) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From c837414cf1a1b6b87613972a6ee083d4c736410d Mon Sep 17 00:00:00 2001 From: qfzy1233 Date: Sun, 3 May 2020 16:33:20 +0800 Subject: [PATCH 106/178] Update and rename sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md to translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md --- .../20200424 16 Things to do After Installing Ubuntu 20.04.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) rename {sources => translated}/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md (99%) diff --git a/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md b/translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md similarity index 99% rename from sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md rename to translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md index 1b5d5ef5c3..a221ef6a34 100644 --- a/sources/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md +++ b/translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md @@ -10,7 +10,7 @@ 安装完 Ubuntu 20.04 后要做的 16 件事 ====== -_**以下是安装 Ubuntu 20.04 之后需要做的一些调整和事情,它将使你获得更流畅、更好的桌面 Linux 体验。**_ +_**以下是安装 Ubuntu 20.04 之后需要做的一些调整和事项,它将使你获得更流畅、更好的桌面 Linux 体验。**_ [Ubuntu 20.04 LTS (长期支持版)带来了许多新的特性][1] 和观感上的变化。 如果你要安装 Ubuntu 20.04 ,让我向你展示一些推荐步骤便于你的使用。 From 81ee0f92303689fbaea59804ae2df4567a44af24 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 3 May 2020 21:15:13 +0800 Subject: [PATCH 107/178] PRF @wxy --- ...0420 4 Git scripts I can-t live without.md | 22 +++++++++---------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/translated/tech/20200420 4 Git scripts I can-t live without.md b/translated/tech/20200420 4 Git scripts I can-t live without.md index 089c81e20f..782cb082ce 100644 --- a/translated/tech/20200420 4 Git scripts I can-t live without.md +++ b/translated/tech/20200420 4 Git scripts I can-t live without.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (4 Git scripts I can't live without) @@ -12,11 +12,11 @@ > Git Extras 版本库包含了 60 多个脚本,它们是 Git 基本功能的补充。以下是如何安装、使用和贡献的方法。 -![Person using a laptop][1] +![](https://img.linux.net.cn/data/attachment/album/202005/03/211446dshwbzoh235b3gre.jpg) -2005 年,[Linus Torvalds][2] 创建了 [Git][3],以取代他之前用于维护 Linux 内核的专有的分布式源码控制管理解决方案。从那时起,Git 已经成为开源和云原生开发团队的主流版本控制解决方案。 +2005 年,[Linus Torvalds][2] 创建了 [Git][3],以取代他之前用于维护 Linux 内核的分布式源码控制管理的专有解决方案。从那时起,Git 已经成为开源和云原生开发团队的主流版本控制解决方案。 -但即使是像 Git 这样功能丰富的应用程序,也没有人们想要或需要的每个功能,所以人们会花大力气去创建这些功能。就 Git 而言,这个人就是 [TJ Holowaychuk][4]。他的 [Git Extras][5] 项目承载了 60 多个“附加功能”,这些功能扩展了 Git 的基本功能。 +但即使是像 Git 这样功能丰富的应用程序,也没有人们想要或需要的每个功能,所以会有人花大力气去创建这些缺少的功能。就 Git 而言,这个人就是 [TJ Holowaychuk][4]。他的 [Git Extras][5] 项目承载了 60 多个“附加功能”,这些功能扩展了 Git 的基本功能。 ### 使用 Git 附加功能 @@ -24,9 +24,9 @@ #### git-ignore -`git ignore` 是一个方便的附加功能,它可以让你手动添加文件类型和注释到 `.git-ignore` 文件中,而不需要打开文本编辑器。它可以操作你的个人用户帐户的全局忽略文件和单独用于你正在工作的版本库的忽略文件。 +`git ignore` 是一个方便的附加功能,它可以让你手动添加文件类型和注释到 
`.git-ignore` 文件中,而不需要打开文本编辑器。它可以操作你的个人用户帐户的全局忽略文件和单独用于你正在工作的版本库中的忽略文件。 -在没有参数的情况下执行 `git ignore` 会先列出全局忽略文件,然后是本地的忽略文件。 +在不提供参数的情况下执行 `git ignore` 会先列出全局忽略文件,然后是本地的忽略文件。 ``` $ git ignore @@ -105,7 +105,7 @@ branch.master.merge=refs/heads/master * `git mr` 检出来自 GitLab 的合并请求。 * `git pr` 检出来自 GitHub 的拉取请求。 -无论是哪种情况,你只需要合并请求号、拉取请求号或完整的 URL,它就会抓取远程引用,检出分支,并调整配置,这样 Git 就知道要替换哪个分支了。 +无论是哪种情况,你只需要合并请求号/拉取请求号或完整的 URL,它就会抓取远程引用,检出分支,并调整配置,这样 Git 就知道要替换哪个分支了。 ``` $ git mr 51 @@ -142,7 +142,7 @@ $ git extras --help $ brew install git-extras ``` -在 Linux 上,每个平台的原生包管理器中都有 Git Extras。有时,你需要启用一个额外的仓库,比如在 CentOS 上的 [EPEL][10],然后运行一条命令。 +在 Linux 上,每个平台原生的包管理器中都包含有 Git Extras。有时,你需要启用额外的仓库,比如在 CentOS 上的 [EPEL][10],然后运行一条命令。 ``` $ sudo yum install git-extras @@ -152,9 +152,9 @@ $ sudo yum install git-extras ### 贡献 -你是否你认为 Git 中有缺少的功能,并且已经构建了一个脚本来处理它?为什么不把它作为 Git Extras 发布版的一部分,与全世界分享呢? +你是否认为 Git 中有缺少的功能,并且已经构建了一个脚本来处理它?为什么不把它作为 Git Extras 发布版的一部分,与全世界分享呢? -要做到这一点,请将该功能贡献到 Git Extras 仓库中。更多具体细节请参见仓库中的 [CONTRIBUTING.md][12] 文件,但基本的操作方法很简单。 +要做到这一点,请将该功能贡献到 Git Extras 仓库中。更多具体细节请参见仓库中的 [CONTRIBUTING.md][12] 文件,但基本的操作方法很简单: 1. 创建一个处理该功能的 Bash 脚本。 2. 创建一个基本的 man 文件,让大家知道如何使用它。 @@ -171,7 +171,7 @@ via: https://opensource.com/article/20/4/git-extras 作者:[Vince Power][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 2790846b4c68ca7550819e9432138e464190ffc7 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 3 May 2020 21:16:14 +0800 Subject: [PATCH 108/178] PUB @wxy https://linux.cn/article-12180-1.html --- .../20200420 4 Git scripts I can-t live without.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200420 4 Git scripts I can-t live without.md (98%) diff --git a/translated/tech/20200420 4 Git scripts I can-t live without.md b/published/20200420 4 Git scripts I can-t live without.md similarity index 98% rename from translated/tech/20200420 4 Git scripts I can-t live without.md rename to published/20200420 4 Git scripts I can-t live without.md index 782cb082ce..43102c69e6 100644 --- a/translated/tech/20200420 4 Git scripts I can-t live without.md +++ b/published/20200420 4 Git scripts I can-t live without.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12180-1.html) [#]: subject: (4 Git scripts I can't live without) [#]: via: (https://opensource.com/article/20/4/git-extras) [#]: author: (Vince Power https://opensource.com/users/vincepower) From 4ad888c0c2ddebd6c0e809238c4f950d3a9d8f5e Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 3 May 2020 23:04:49 +0800 Subject: [PATCH 109/178] PRF @lxbwolf --- ...OS-RHEL 7-8 Systems in Single User Mode.md | 82 ++++++++++--------- 1 file changed, 44 insertions(+), 38 deletions(-) diff --git a/translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md b/translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md index c5eaa1dc84..a54892c575 100644 --- a/translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md +++ b/translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md @@ -1,108 +1,112 @@ [#]: collector: (lujun9972) [#]: translator: (lxbwolf) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) 
[#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Three Methods Boot CentOS/RHEL 7/8 Systems in Single User Mode) [#]: via: (https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) -在单用户模式下启动 CentOS/RHEL 7/8 的三种方法 +以单用户模式启动 CentOS/RHEL 7/8 的三种方法 ====== +![](https://img.linux.net.cn/data/attachment/album/202005/03/230109uw1f9zvv9upbhwv8.jpg) + 单用户模式,也被称为维护模式,超级用户可以在此模式下恢复/修复系统问题。 -通常情况下,这些问题在多用户环境中修复不了。系统可以启动但功能不能正常运行或者你登录不了系统。 +通常情况下,这类问题在多用户环境中修复不了。系统可以启动但功能不能正常运行或者你登录不了系统。 -在基于 **[Red Hat][1]** (RHEL) 7/8 的系统中,使用 `runlevel1.target` 或 `rescue.target` 来实现。 +在基于 [Red Hat][1](RHEL)7/8 的系统中,使用 `runlevel1.target` 或 `rescue.target` 来实现。 在此模式下,系统会挂载所有的本地文件系统,但不开启网络接口。 系统仅启动特定的几个服务和修复系统必要的尽可能少的功能。 -当你想运行文件系统一致性检查来修复损坏的文件系统,或忘记 root 密码后重置密码,或修复系统上的一个挂载点问题时,这个方法会很有用。 +当你想运行文件系统一致性检查来修复损坏的文件系统,或忘记 root 密码后重置密码,或要修复系统上的一个挂载点问题时,这个方法会很有用。 -你可以用下面三种方法以单用户模式启动 **[CentOS][2]**/**[RHEL][3]** 7/8 系统。 +你可以用下面三种方法以单用户模式启动 [CentOS][2]/[RHEL][3] 7/8 系统。 - * **方法 1:** 通过向内核添加 “rd.break” 参数来以单用户模式启动 CentOS/RHEL 7/8 系统 - * **方法 2:** 通过用 “init=/bin/bash“ 或 ”init=/bin/sh” 替换内核中的 “rhgb quiet” 语句来以单用户模式启动 CentOS/RHEL 7/8 系统 - * **方法 3:** 通过用 “rw init=/sysroot/bin/sh” 参数替换内核中的 “ro” 语句以单用户模式启动 CentOS/RHEL 7/8 系统 + * 方法 1:通过向内核添加 `rd.break` 参数来以单用户模式启动 CentOS/RHEL 7/8 系统 + * 方法 2:通过用 `init=/bin/bash` 或 `init=/bin/sh` 替换内核中的 `rhgb quiet` 语句来以单用户模式启动 CentOS/RHEL 7/8 系统 + * 方法 3:通过用 `rw init=/sysroot/bin/sh` 参数替换内核中的 `ro` 语句以单用户模式启动 CentOS/RHEL 7/8 系统 +### 方法 1 - -### 方法 1: 通过向内核添加 “rd.break” 参数来以单用户模式启动 CentOS/RHEL 7/8 系统 +通过向内核添加 `rd.break` 参数来以单用户模式启动 CentOS/RHEL 7/8 系统。 重启你的系统,在 GRUB2 启动界面,按下 `e` 键来编辑选中的内核。你需要选中第一行,第一个是最新的内核,然而如果你想用旧的内核启动系统你也可以选择其他的行。 -![][4] +![](https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-2.png) -根据你的 RHEL/CentOS 版本,找到 **“linux16”** 或 **“linux”** 语句,按下键盘上的 ”End“ 按钮,跳到行末,像下面截图中展示的那样添加关键词 **“rd.break”**,按下 **“Ctrl+x”** 或 **“F10”** 来进入单用户模式。 +根据你的 RHEL/CentOS 版本,找到 `linux16` 或 `linux` 语句,按下键盘上的 `End` 键,跳到行末,像下面截图中展示的那样添加关键词 `rd.break`,按下 `Ctrl+x` 或 `F10` 来进入单用户模式。 -如果你的系统是 RHEL/CentOS 7,你需要找 **`linux16`**,如果你的系统是 RHEL/CentOS 8,那么你需要找 **`linux`**。 +如果你的系统是 RHEL/CentOS 7,你需要找 `linux16`,如果你的系统是 RHEL/CentOS 8,那么你需要找 `linux`。 -![][4] +![](https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-3.png) -这个修改会让你的 root 文件系统以 **“只读 (RO)”** 模式挂载。你可以用下面的命令来验证下。下面的输出也明确地告诉你当前是在 **“紧急模式”**。 +这个修改会让你的 root 文件系统以 “只读(`ro`)” 模式挂载。你可以用下面的命令来验证下。下面的输出也明确地告诉你当前是在 “紧急模式Emergency Mode”。 ``` # mount | grep root ``` -![][4] +![](https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-5.png) -为了修改 **“sysroot”** 文件系统,你需要用 RW 模式重新挂载它。 +为了修改 `sysroot` 文件系统,你需要用读写模式(`rw`)重新挂载它。 ``` # mount -o remount,rw /sysroot ``` -运行下面的命令修改环境,这就是大家熟知的 “jailed directory” 或 “chroot jail”。 +运行下面的命令修改环境,这就是大家熟知的 “监禁目录” 或 “chroot 监狱”。 ``` # chroot /sysroot ``` -![][4] +![](https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-8.png) -现在,单用户模式的前期准备已经完成了。当你修复了你的问题要退出单用户模式时,执行下面的步骤。 +现在,单用户模式已经完全准备好了。当你修复了你的问题要退出单用户模式时,执行下面的步骤。 -CentOS/RHEL 7/8 默认使用 SELinux,因此创建下面的隐藏文件,这个文件会在下一次启动时重新确认所有文件。 +CentOS/RHEL 7/8 默认使用 SELinux,因此创建下面的隐藏文件,这个文件会在下一次启动时重新标记所有文件。 ``` # touch /.autorelabel ``` -最后,用下面的命令重启系统。你也可以输入两次 “exit” 命令来重启你的系统。 +最后,用下面的命令重启系统。你也可以输入两次 `exit` 命令来重启你的系统。 ``` # reboot -f ``` -### 方法 2: 通过用 “init=/bin/bash“ 或 ”init=/bin/sh” 替换内核中的 “rhgb quiet” 语句来以单用户模式启动 CentOS/RHEL 
7/8 系统 +### 方法 2 + +通过用 `init=/bin/bash` 或 `init=/bin/sh` 替换内核中的 `rhgb quiet` 语句来以单用户模式启动 CentOS/RHEL 7/8 系统。 重启你的系统,在 GRUB2 启动界面,按下 `e` 键来编辑选中的内核。 -![][4] +![](https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-2.png) -找到语句 **“rhgb quiet”**,用 **“init=/bin/bash”** 或 **“init=/bin/sh”** 替换它,然后按下 **“Ctrl+x”** 或 **“F10”** 来进入单用户模式。 +找到语句 `rhgb quiet`,用 `init=/bin/bash` 或 `init=/bin/sh` 替换它,然后按下 `Ctrl+x` 或 `F10` 来进入单用户模式。 -**`init=/bin/bash`** 的截图。 +`init=/bin/bash` 的截图。 -![][4] +![](https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-1.png) -**`init=/bin/sh`** 的截图。 +`init=/bin/sh` 的截图。 -![][4] +![](https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-1a.png) -默认情况下,上面的操作会以只读(RO)模式挂载你的 “/” 分区,因此你需要以读写(RW)模式重新挂载 “/” 文件系统,这样才能修改它。 +默认情况下,上面的操作会以只读(`ro`)模式挂载你的 `/` 分区,因此你需要以读写(`rw`)模式重新挂载 `/` 文件系统,这样才能修改它。 ``` # mount -o remount,rw / ``` -![][4] +![](https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-4.png) -现在你可以执行你的任务了。当结束时,执行下面的命令来开启重启时的 SELinux 重新确认。 +现在你可以执行你的任务了。当结束时,执行下面的命令来开启重启时的 SELinux 重新标记。 ``` # touch /.autorelabel @@ -114,21 +118,23 @@ CentOS/RHEL 7/8 默认使用 SELinux,因此创建下面的隐藏文件,这 # exec /sbin/init 6 ``` -### 方法 3: 通过用 “rw init=/sysroot/bin/sh” 参数替换内核中的 “ro” 语句以单用户模式启动 CentOS/RHEL 7/8 系统 +### 方法 3 + +通过用 `rw init=/sysroot/bin/sh` 参数替换内核中的 `ro` 单词,以单用户模式启动 CentOS/RHEL 7/8 系统。 为了中断自动启动的过程,重启你的系统并在 GRUB2 启动界面按下任意键。 现在会展示你系统上所有可用的内核,选择最新的内核,按下 `e` 键来编辑选中的内核参数。 -找到以 **“linux”** 或 **“linux16”** 开头的语句,用 **“rw init=/sysroot/bin/sh”** 替换 **“ro”**。替换完后按下 **“Ctrl+x”** 或 **“F10”** 来进入单用户模式。 +找到以 `linux` 或 `linux16` 开头的语句,用 `rw init=/sysroot/bin/sh` 替换 `ro`。替换完后按下 `Ctrl+x` 或 `F10` 来进入单用户模式。 -运行下面的命令把环境切换为 “chroot jail”。 +运行下面的命令把环境切换为 “chroot 监狱”。 ``` # chroot /sysroot ``` -如果需要,做出必要的修改。修改完后,执行下面的命令来开启重启时的 SELinux 重新确认。 +如果需要,做出必要的修改。修改完后,执行下面的命令来开启重启时的 SELinux 重新标记。 ``` # touch /.autorelabel @@ -147,7 +153,7 @@ via: https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/ 作者:[Magesh Maruthamuthu][a] 选题:[lujun9972][b] 译者:[lxbwolf](https://github.com/lxbwolf) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 4e0e7c3ea7de5e9d00b0d76eeb3429669056f06f Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Sun, 3 May 2020 23:09:22 +0800 Subject: [PATCH 110/178] PUB @lxbwolf https://linux.cn/article-12181-1.html --- ...ethods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md (98%) diff --git a/translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md b/published/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md similarity index 98% rename from translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md rename to published/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md index a54892c575..4b7edb67e9 100644 --- a/translated/tech/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md +++ b/published/20200430 Three Methods Boot CentOS-RHEL 7-8 Systems in Single User Mode.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (lxbwolf) [#]: reviewer: (wxy) -[#]: 
publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12181-1.html) [#]: subject: (Three Methods Boot CentOS/RHEL 7/8 Systems in Single User Mode) [#]: via: (https://www.2daygeek.com/boot-centos-7-8-rhel-7-8-single-user-mode/) [#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) From fc7fc45fe2732f67cb11524709dc40ef74d4cc87 Mon Sep 17 00:00:00 2001 From: Acceleratorrrr <542383480@qq.com> Date: Sun, 3 May 2020 16:41:24 +0100 Subject: [PATCH 111/178] Update 20200427 How to secure your Linux email services with SSL-TLS.md MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 申请翻译。 --- ...427 How to secure your Linux email services with SSL-TLS.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/sources/tech/20200427 How to secure your Linux email services with SSL-TLS.md b/sources/tech/20200427 How to secure your Linux email services with SSL-TLS.md index 163e6b017f..3dcd2a5dc6 100644 --- a/sources/tech/20200427 How to secure your Linux email services with SSL-TLS.md +++ b/sources/tech/20200427 How to secure your Linux email services with SSL-TLS.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (Acceleratorrrr) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) @@ -7,6 +7,7 @@ [#]: via: (https://opensource.com/article/20/4/securing-linux-email) [#]: author: (Marc Skinner https://opensource.com/users/marc-skinner) + How to secure your Linux email services with SSL/TLS ====== Protect your Linux email services by understanding security From 84362af14f979d83445e0987e65aa3518566d98b Mon Sep 17 00:00:00 2001 From: DarkSun Date: Mon, 4 May 2020 00:55:28 +0800 Subject: [PATCH 112/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200503=2013=20t?= =?UTF-8?q?ips=20for=20getting=20your=20talk=20accepted=20at=20a=20tech=20?= =?UTF-8?q?conference?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200503 13 tips for getting your talk accepted at a tech conference.md --- ...your talk accepted at a tech conference.md | 127 ++++++++++++++++++ 1 file changed, 127 insertions(+) create mode 100644 sources/tech/20200503 13 tips for getting your talk accepted at a tech conference.md diff --git a/sources/tech/20200503 13 tips for getting your talk accepted at a tech conference.md b/sources/tech/20200503 13 tips for getting your talk accepted at a tech conference.md new file mode 100644 index 0000000000..b260116fb6 --- /dev/null +++ b/sources/tech/20200503 13 tips for getting your talk accepted at a tech conference.md @@ -0,0 +1,127 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (13 tips for getting your talk accepted at a tech conference) +[#]: via: (https://opensource.com/article/20/5/tips-conference-proposals) +[#]: author: (Todd Lewis https://opensource.com/users/toddlewis) + +13 tips for getting your talk accepted at a tech conference +====== +Before you respond to an event's call for papers, make sure your talk's +proposal aligns with these best practices. +![All Things Open check-in at registration booth][1] + +As tech conference organizers ramp up for the fall season, you may be seeing calls for papers (CFP) landing in your email box or social media feeds. We at [All Things Open][2] (ATO) have seen a lot of presentation proposals over the years, and we've learned a few things about what makes them successful. 
+ +As we prepare for the eighth annual ATO in October 2020, we thought we'd offer a few best practices for writing successful CFP responses. If you're considering submitting a talk to ATO or another tech event, we hope these tips will help improve the chances that your proposal will be accepted. + +### 1\. Know the event you're submitting a talk to + +This seems like the proverbial _no-brainer_, but some people don't take the time to research an event before they submit a talk. Peruse the conference's website and review the talks, speakers, topics, etc. featured in the last couple of years. You can also find a lot of information simply by googling. The time you invest here will help you avoid a submission that is completely out of context for the event. + +### 2\. Understand what the event is looking for + +Look for information about what the event is looking for and what types of topics or talks it expects will be a good fit. We try to provide as much information as possible about the [ATO conference][3], [why someone would want to speak][4], and [what we're looking for][5] (both general and special interest topics). We also try to make the submission process as easy as possible (no doubt, there is room for improvement), in part because we believe this improves the quality of submissions and makes our review process go more smoothly. + +### 3\. Reach out to the organizer and ask questions + +If you're considering submitting a talk, don't hesitate to reach out and ask the event organizers any questions you have and for guidance specific to the event. If there is no or little response, that should be a red flag. If you have any questions about All Things Open, please reach out directly at [info@allthingsopen.org][6]. + +### 4\. Be clear about what attendees will learn from your talk + +This is one of the most common mistakes we see. Only about 25% of the proposals we receive clearly explain the proposed talk's takeaways. One reason you should include this is that nearly every event attendee makes their schedule based on what they will learn if they go to a session. But for organizers and proposal reviewers, having this information clearly stated upfront is pure gold. It simplifies and speeds up the assessment process, which gets you one step closer to being accepted as a speaker. A paragraph titled "Attendee Takeaways" with bullet points is the holy grail for everyone involved. + +### 5\. Keep recommended word counts in mind + +This is another mistake we see a lot. Many talks are submitted with either a single sentence description in the abstract or an extraordinary long volume of text. Neither is a good idea. The only exception we can think of is when a topic is very popular or topical, and that alone is enough to win the day even if the abstract is extremely short (but this is rare). Most abstracts should be between 75 and 250 words, and perhaps more for an extended workshop with prerequisites (e.g., preexisting knowledge or required downloads). Even then, try to keep your proposal as sharp, concise, and on-point as possible. + +Disregard this advice at your own risk; otherwise, there's a high likelihood that your proposal will be met with one of these reactions from reviewers: "They didn't take the time to write any more than this?" or "Sheesh, there's no way I have the time to read all that. I'm going to give it the lowest score and move on." + +### 6\. 
Choose a good title + +This is a debate we see all the time: Should a talk's title describe what the talk is about, or should it be written to stand out and get attention (e.g., evoking emotion, anchoring to a popular pop culture topic, or asking a compelling question)? There isn't a single correct answer to this question, but we definitely know when a title "works" and when it doesn't. We've seen some very creative titles work well and generate interest, and we've seen very straightforward titles work well, also. + +Here is our rule of thumb: If the talk covers a topic that has been around a while and is not particularly _hot_ right now, try getting creative and spicing it up a bit. If the topic is newer, a more straightforward title describing the talk in plain terms should be good. + +Titles on an event schedule may be the only thing attendees use to decide what talks to attend. So, run your potential talk titles by colleagues and friends, and seek their opinions. Ask: "If you were attending an event and saw this title on the schedule, would it pique your interest?" + +### 7\. Know the basic criteria that reviewers and organizers use to make decisions + +While this isn't a comprehensive list of review criteria, most reviewers and organizers consider one or more of the following when evaluating talk proposals. Therefore, at minimum, consider this list when you're creating a talk and the components that go with it. + + 1. **Timeliness of and estimated interest in the topic:** Is the topic applicable to the session's target audience? Will it deliver value? Is it timely? + 2. **Educational value:** Based on the abstract and speaker, is it clear that attendees will learn something from the talk? As mentioned in item 4 above, including an "Attendee Takeaways" section is really helpful to establish educational value. + 3. **Technical value:** Is the technology you intend to showcase applicable, unique, or being used in a new and creative way? Is there a live demo or a hands-on component? While some topics don't lend themselves to a demo, most people are visual learners and are better off if a presentation includes one (if it's relevant). For this reason, we place a lot of value on demos and hands-on content. + 4. **Diversity:** Yes, there are exceptions, but the majority of events, reviewers, and organizers agree that having a diverse speaker lineup is optimal and results in a better overall event in multiple ways. A topic delivered from a different perspective can often lead to creative breakthroughs for attendees, which is a huge value-add. See item 10 below for more on this. + 5. **Talk difficulty level:** We identify All Things Open talks as introductory, intermediate, or advanced. Having a good mix of talk levels ensures everyone in attendance can access applicable content. See item 9 below for more on this, but in general, it's smart to indicate your talk's level, whether or not the CFP requests it. + + + +### 8\. Stay current on the event's industry or sector + +Submitting a proposal on a relevant topic increases the probability your talk will be accepted. But how do you know what topics are of interest, especially if the CFP doesn't spell it out in simple terms? The best way to know what's timely and interesting is to deeply understand the sector the event focuses on. + +Yes, this requires time and effort, and it implies you enjoy the sector enough to stay current on it, but it will pay off. 
This knowledge will result in a higher _sector IQ_, which will be reflected in your topic, title, and abstract. It will be recognized by reviewers and immediately set you apart from others. At All Things Open, we spend the majority of our time reading about and staying current on the "open" space so that we can feature relevant, substantive, and informed content. Submitting a talk that is relevant, substantive, and informed greatly enhances the chance it will be accepted. + +### 9\. Describe whether the talk is introductory, intermediate, or advanced + +Some CFPs don't ask for this information, but you should offer it anyway. It will make the reviewers and organizer very happy for multiple reasons, including these: + + 1. Unless the event targets attendees with a certain skill or experience level (and most do not), organizers must include content that is appealing to a wide audience, including people of all skill, experience, and expertise levels. Even if an event focuses on a specific type of attendee (perhaps people with higher levels of experience or skills), most want to offer something a little different. Listing the talk level makes this much easier for organizers. + 2. News flash: Reviewers and organizers don't know everything and are not experts in every possible topic area. As a result, reviewers will sometimes look for a few keywords or other criteria, and adding the talk level can "seal the deal" and get your talk confirmed. + + + +### 10\. Tell organizers if you're a member of a historically underrepresented group + +A growing number of events are getting better at recognizing the value of diversity and ensuring their speaker lineup reflects it. If you're part of a group that hasn't typically been included in tech events and leadership, look to see if there is a place to indicate that on the submission form. If not, mention it in a conspicuous place somewhere in the abstract. This does not guarantee approval in any way—your proposal must still be well-written and relevant—but it does give reviewers and organizers pertinent information they may value and take into consideration. + +### 11\. Don't be ashamed of your credentials or speaking experience if it is light + +We talk to a lot of people who would like to deliver a presentation and have a lot to offer, but they never submit a talk because they don't feel they're qualified to speak. _Not true._ Some of the best talks we've seen are from first-time speakers or those very early in their speaking careers. Go ahead and submit the talk, and be honest when discussing your background. Most reviewers and organizers will focus on the substance of the submission over your experience and recognize that new ways of approaching and using technology often come from newbies rather than industry veterans. + +One caveat here: It still pays to know yourself. By this, we mean if you absolutely hate public speaking, have no desire to do it, and are only considering submitting a talk due to, for example, pressure from an employer, the talk is not likely to go well. It's better, to be honest, on the frontend than force something you have no desire to do. + +### 12\. Consider panel sessions carefully + +If you've got an idea for a panel session, please consider it carefully. In more than 10 years of hosting events we've seen some really good panel sessions, but we've seen far more that didn't go so well. 
Perhaps too many people were on the panel and not everyone had a chance to speak, perhaps a single panel member dominated the entire conversation, or perhaps the moderator didn't keep the dialogue and engagement flowing smoothly. Regardless of the issue, panels have the potential to go very wrong. + +That said, panels can still work and deliver a lot of value to attendees. If you do submit a panel session be sure to keep in mind the amount of time allotted for the session and confirm the number of panel members accordingly. Remember, less is always more when it comes to the panel format. Also, be sure the moderator understands the subject matter being discussed and doesn't mind enforcing format parameters and speaking time limits. Finally, let organizers know panel members and the moderator will engage in a pre-conference walk-through/preparation call before the event to ensure a smooth process in front of a live audience. Remember, organizers are well aware panels can be terrific but can also go in the opposite direction and very easily lead to a lot of negative feedback. + +### 13\. This is not an opportunity to sell  + +This is a sensitive topic, but one that absolutely must be mentioned. Over the years we've seen literally hundreds of talks "disqualified" by reviewers because they viewed the talk as a sales pitch. Few things evoke such a visceral response. Yes, there are events, tracks, and session slots where a sales pitch is appropriate (and maybe even required by the company paying your costs). However, make it a priority to know when and where this is appropriate and acceptable. And always, and we mean always, err on the side of making substance the focus of the talk rather than a sales angle.  + +It might sound like a cliche, but when a talk is delivered effectively with a focus on substance, people will **want** to buy what you're selling. And if you're not selling anything, they'll want to follow you on social media and generally engage with you—because you delivered value to them. Meaning: You gave them something they can apply themselves (education) or because your delivery style was entertaining and engaging. With rare exceptions, always focus any abstract on substance, and the rest will take care of itself.  + +### Go for it! + +We greatly admire and respect anyone who submits a talk for consideration—it takes a lot of time, thought, and courage. Therefore, we go to great lengths to thank everyone who goes through the process; we give free event passes to everyone who applies (regardless of approval or rejection), and we make every effort to host Q&A sessions to provide as much guidance as possible on the front end. Again, the more time and consideration speakers put into the submission process, the easier the lives of reviewers and organizers. We need to make all of this as easy as possible. + +While this is not a comprehensive list of best practices, it includes some of the things we think people can benefit from knowing before submitting a talk. There are a lot of people out there with more knowledge and experience, so please share your best tips for submitting conference proposals in the comments, so we can all learn from you. + +* * * + +_[All Things Open][2] is a universe of platforms and events focusing on open source, open tech, and the open web. It hosts the [All Things Open conference][3], the largest open source/tech/web event on the US East Coast. 
The conference regularly hosts thousands of attendees and many of the world's most influential companies from a wide variety of industries and sectors. In 2019, nearly 5,000 people attended from 41 US states and 24 countries. Please direct inquiries about ATO to the team at [info@allthingsopen.org][6]._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/tips-conference-proposals + +作者:[Todd Lewis][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/toddlewis +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ato2016_checkin_conference.jpg?itok=DJtoSS6t (All Things Open check-in at registration booth) +[2]: https://www.allthingsopen.org/ +[3]: https://2020.allthingsopen.org/ +[4]: https://2020.allthingsopen.org/call-for-speakers +[5]: https://www.allthingsopen.org/what-were-looking-for/ +[6]: mailto:info@allthingsopen.org From b6944f93a7459127c3cd0582224702aaef849cbf Mon Sep 17 00:00:00 2001 From: tinyeyeser Date: Mon, 4 May 2020 15:01:11 +0800 Subject: [PATCH 113/178] =?UTF-8?q?=E5=B7=B2=E7=BF=BB=E8=AF=91=20by=20?= =?UTF-8?q?=E5=B0=8F=E7=9C=BC=E5=84=BF?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- ...o avoid man-in-the-middle cyber attacks.md | 94 ------------------- ...o avoid man-in-the-middle cyber attacks.md | 84 +++++++++++++++++ 2 files changed, 84 insertions(+), 94 deletions(-) delete mode 100644 sources/tech/20200407 How to avoid man-in-the-middle cyber attacks.md create mode 100644 translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md diff --git a/sources/tech/20200407 How to avoid man-in-the-middle cyber attacks.md b/sources/tech/20200407 How to avoid man-in-the-middle cyber attacks.md deleted file mode 100644 index c8a72579ef..0000000000 --- a/sources/tech/20200407 How to avoid man-in-the-middle cyber attacks.md +++ /dev/null @@ -1,94 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (tinyeyeser ) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (How to avoid man-in-the-middle cyber attacks) -[#]: via: (https://opensource.com/article/20/4/mitm-attacks) -[#]: author: (Jackie Lam https://opensource.com/users/beenverified) - -How to avoid man-in-the-middle cyber attacks -====== -Understanding MITM attacks is the first step in not being a victim of -this high-tech style of eavesdropping. -![Security monster][1] - -Whether you're sending data on your computer or talking to someone online, you want to assume some level of security and privacy. - -But what if a third party is eavesdropping online, unbeknownst to you? And worse, what if they're impersonating someone from a business you trust in order to gain damaging information? This could put your personal data into the hands of dangerous, would-be thieves. - -Welcome to what's called a man-in-the-middle (MITM) attack. - -### What are man-in-the-middle attacks? - -A man-in-the-middle attack occurs when a cybercriminal inserts themselves into communications between you, the targeted victim, and a device in order to steal sensitive information that can be used for a variety of criminal purposes—most notably identity theft, says Steve J. J. Weisman, founder of Scamicide. 
- -"A man-in-the-middle-attack can also occur when the victim believes he or she is communicating with a legitimate app or website," says Weisman, "when the truth is that the victim is communicating with a phony website or app and thereby providing sensitive information to the criminal." - -One of the oldest forms of cyberattacks, MITM attacks have been around since the 1980s. What's more, they're quite common. As Weisman explains, there are a handful of ways a MITM attack can happen: - - * **Attacking a WiFi router that is not properly secured:** This typically occurs when someone is using public WiFi. "While home routers might be vulnerable, it's more common for criminals to attack public WiFi networks," says Weisman. The goal is to spy on unsuspecting people who are handling sensitive information, such as their online bank accounts, he adds. - * **Hacking email accounts of banks, financial advisers, and other companies:** "Once [the criminals] have hacked these email systems, they send out emails that appear to come from the legitimate bank or other company," Weisman says. "[They ask] for personal information, such as usernames and passwords, under the guise of an emergency. The targeted victim is lured into providing that information." - * **Sending phishing emails:** Thieves might also send emails pretending to be legitimate companies that the targeted victim does business with, asking the recipient for their personal information. "In many instances, the spear-phishing emails will direct the victim to a counterfeit website that appears to be that of a legitimate company with which the victim does business," says Weisman. - * **Using malicious code in legitimate websites:** Attackers can also place malicious code—usually JavaScript—into a legitimate website by way of a web application. "When the victim loads the legitimate page, the malicious code just sits in the background until the user enters sensitive information, such as account login or credit card details, which the malicious code then copies and sends to the attackers' servers," says Nicholas McBride, a cybersecurity consultant. - - - -### What is an example of an MITM attack? - -The Lenovo case is a well-known example of an MITM attack. In 2014 and 2015, the major computer manufacturer sold consumer laptops with preinstalled software that meddled with how a user's browser communicated with websites. Whenever the user's cursor hovered over a product, this software, called VisualDiscovery, sent pop-up ads from retail partners that sold similar products. - -Here's the kicker: This MITM attack allowed VisualDiscovery to access all of the user's personal data, including social security numbers, info about financial transactions, medical info, and logins and passwords. All without the user knowing or granting permission beforehand. The FTC deemed this a deceptive and unfair online scam. Lenovo agreed to pay $8.3 million in a class-action settlement in 2019. - -### How can I protect myself from an online attack? - - * **Avoid using public WiFi:** Weisman recommends never using public WiFi for financial transactions unless you've installed a reliable virtual private network (VPN) client on your device and have a VPN host you can use and trust. Over a VPN connection, your communications are encrypted, so your information can't be stolen. - - * **Be on the lookout:** Be wary of emails or text messages that ask you to update your password or provide your username or personal information. These methods can be used to steal your identity. 
- -If you are unsure of the actual identity of the party sending you the email, you can use tools such as a reverse phone or email search. With a reverse phone number lookup, you may be able to find out more about the identity of an unknown texter. And with a reverse email lookup, you can try to determine who might have sent you a message. - -Generally, if something's actually a problem, you'll hear from someone you know and trust within your company, or from someone you can also go and meet, in person, at your bank or school or other organization. Important account information is never the purview of an unknown technician. - - * **Don't click on links contained in emails:** If someone sends you an email telling you that you need to sign into an account, don't click on the link provided in the email. Instead, navigate to the site yourself, log in as you normally would, and look for an alert there. If you don't see an alert message in your account settings, contact a representative by phone using contact information on the site and _not_ from the email. - - * **Install reliable security software:** If you're on Windows, install good open source antivirus like [ClamAV][2]. On all platforms, keep your software up to date with the latest security patches. - - * **Take alerts seriously:** If you're visiting a site that starts with HTTPS, your browser might alert you to an issue, says McBride. For instance, if the domain name on the site's certificate doesn't match the one you're trying to visit. Don't ignore the alert. Heed it and navigate away from the site for now. Verify that you haven't [mistyped it][3], and if the problem persists, contact the site owner if you can. - - * **Use an ad blocker:** Pop-up ads (also known as _adware attacks_) can be used to intercept your personal information, so use an ad blocker. "The truth is, as an individual user, it's hard to protect against a MITM attack," says McBride, "as it is designed to leave the victim in the dark and to prevent them from noticing that there is anything wrong." - -A good open source ad blocker (or "wide-spectrum blocker," in the developer's words) is [uBlock origin][4]. It's available for both Firefox and Chromium (and all Chromium-based browsers, such as Chrome, Brave, Vivaldi, Edge, and so on), and even Safari. - - - - -### Stay alert - -Remember, you don't have to click anything online right away, and you don't have to follow random people's instructions, no matter how urgent they may seem. The internet will still be there after you step away from the computer and verify the identity of a person or site demanding your attention. - -While MITM attacks can happen to anyone, understanding what they are, knowing how they happen, and actively taking steps to prevent them can safeguard you from being a victim. 
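-As a practical complement to the certificate advice above, here is one way to look at the certificate a site actually presents before you trust it. This is a minimal sketch using the widely available `openssl` command-line tool, and `example.com` is only a placeholder for the site you want to check.
-
-```
-# Print who the certificate was issued to, who issued it, and its validity dates
-# (replace example.com with the site you are about to log into).
-openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null |
-  openssl x509 -noout -subject -issuer -dates
-```
-
-If the subject does not match the domain you typed, or the dates look wrong, treat it the same way as a browser certificate warning and stay off the site for now.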
- -* * * - -_This article was originally published on [BeenVerified.com][5] under a [CC BY-SA 2.0][6] license._ - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/4/mitm-attacks - -作者:[Jackie Lam][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/beenverified -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_password_chaos_engineer_monster.png?itok=J31aRccu (Security monster) -[2]: https://www.clamav.net -[3]: https://opensource.com/article/20/1/stop-typosquatting-attacks -[4]: https://github.com/gorhill/uBlock -[5]: https://www.beenverified.com/crime/what-is-a-man-in-the-middle-attack/ -[6]: https://creativecommons.org/licenses/by-sa/2.0/ diff --git a/translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md b/translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md new file mode 100644 index 0000000000..7ccce7c3bd --- /dev/null +++ b/translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md @@ -0,0 +1,84 @@ +[#]: collector: "lujun9972" +[#]: translator: " " +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " +[#]: subject: "How to avoid man-in-the-middle cyber attacks" +[#]: via: "https://opensource.com/article/20/4/mitm-attacks" +[#]: author: "Jackie Lam https://opensource.com/users/beenverified" + +如何避免中间人攻击 +====== + +首先搞明白到底什么是中间人攻击,才能避免成为此类高科技窃听的受害者。 + +![Security monster][1] + +当你使用电脑发送数据或与某人在线通话的时候,你一定采取了某种程度的安全隐私手段。 + +但如果有第三方在你不知情的情况下窃听,甚至冒充某个你信任的商业伙伴窃取破坏性的信息呢?你的私人数据就这样被放在了危险分子的手中。 + +这就是臭名昭著的中间人攻击。 + +### 到底什么是中间人攻击? + +黑客潜入到你与受害者或是某个设备间的通信过程中,窃取敏感信息——多数是身份信息——进而从事各种违法行为的过程,就是一次中间人攻击。Scamicide公司创始人Steve J. J. Weisman介绍说: + +“中间人攻击也可以发生在受害者与某个合法app或网页中间。当受害者以为自己面对的是正常app或网页时,其实Ta 正在与一个仿冒的app或网页互动,将自己的敏感信息透露给不法分子。” + +中间人攻击诞生于1980年代,是最古老的网络攻击形式之一。但它却更为常见。Weisman解释道,发生中间人攻击的场景有很多种: + + * **攻陷一个未有效加密的WiFi路由器**:该场景多见于人们使用公共WiFi的时候。“虽然家用路由器也很脆弱,但黑客攻击公共WiFi网络的情况更为常见。”Weisman说,“黑客的目标就是从毫无戒心的人们那里窃取在线银行账户这样的敏感信息。” + * **攻陷银行、金融顾问等机构的电子邮件账户**:“一旦黑客攻陷了这些电子邮件系统,他们就会冒充银行或此类公司给受害者发邮件”,Weisman说,”他们以紧急情况的名义索要个人信息,诸如用户名和密码。受害者很容易被诱骗交出这些信息。“ + * **发送钓鱼邮件**:窃贼们还可能冒充成与受害者有合作关系的公司,向其索要个人信息。”在多个案例中,钓鱼邮件会引导受害者访问一个伪造的网页,这个伪造的网页看起来就和受害者常常访问的合法公司网页一模一样。“Weisman说道。 + * **在合法网页中嵌入恶意代码**:攻击者还会把恶意代码——通常是JavaScript——嵌入到一个合法的网页中。”当受害者加载这个合法网页时,恶意代码首先按兵不动,直到用户输入账户登录或是信用卡信息时,恶意代码就会复制这些信息并将其发送至攻击者的服务器。“网络安全专家Nicholas McBride介绍说。 + +### 有哪些中间人攻击的著名案例? + +联想作为主流的计算机制造厂商,在2014到2015年售卖的消费级笔记本电脑中预装了一款叫做 VisualDiscovery 的软件,拦截用户的网页浏览行为。当用户的鼠标在某个产品页面经过时,这款软件就会弹出一个来自合作伙伴的类似产品的广告。 + +这起中间人攻击事件的关键在于:VisualDiscovery 拥有访问用户所有私人数据的权限,包括身份证号、金融交易信息、医疗信息、登录名和密码等等。所有这些访问行为都是在用户不知情和未获得授权的情况下进行的。联邦交易委员会(FTC)认定此次事件为欺诈与不公平竞争。2019年,联想同意为此支付8300万美元的集体诉讼罚款。 + +### 我如何才能避免遭受中间人攻击? 
+ + * **避免使用公共WiFi:**Weisman建议,从来都不要使用公开的WiFi进行金融交易,除非你安装了可靠的VPN客户端并连接至可信任的VPN服务器。通过VPN连接,你的通信是加密的,信息也就不会失窃。 + * **时刻注意:**对要求你更新密码或是提供用户名等私人信息的邮件或文本消息要时刻保持警惕。这些手段很可能被用来窃取你的身份信息。 + +如果不确定收到的邮件来自于确切哪一方,你可以使用诸如电话反查或是邮件反查等工具。通过电话反查,你可以找出未知发件人的更多身份信息。通过邮件反查,你可以尝试确定谁给你发来了这条消息。 + +通常来讲,如果发现某些方面确实有问题,你可以听从公司中某个你认识或是信任的人的意见。或者,你也可以去你的银行、学校或其他某个组织,当面寻求他们的帮助。总之,重要的账户信息绝对不要透露给不认识的“技术人员”。 + + * **不要点击邮件中的链接:**如果有人给你发了一封邮件,说你需要登录某个账户,不要点击邮件中的链接。相反,要通过平常习惯的方式自行去访问,并留意是否有告警信息。如果在账户设置中没有看到告警信息,给客服打电话的时候也_不要_联系邮件中留的电话,而是站点页面中的联系人信息。 + * **安装可靠的安全软件:**如果你使用的是Windows操作系统,安装开源的杀毒软件,如[ClamAV][2]。如果使用的是其他平台,要保持你的软件安装有最新的安全补丁。 + * **认真对待告警信息:**如果你正在访问的页面以HTTPS开头,浏览器可能会出现一则告警信息。例如,站点证书的域名与你尝试访问的站点域名不相匹配。千万不要忽视此类告警信息。听从告警建议,迅速关掉页面。确认域名没有输入错误的情况下,如果情况依旧,要立刻联系站点所有者。 + * **使用广告屏蔽软件:**弹窗广告(也叫广告软件攻击)可被用于窃取个人信息,因此你还可以使用广告屏蔽类软件。对个人用户来说,中间人攻击其实是很难防范的,因为它被设计出来的时候,就是为了让受害者始终蒙在鼓里,意识不到任何异常。有一款不错的开源广告屏蔽软件叫 [uBlock origin][4]。可以同时支持Firefox和Chromium(以及所有基于Chromium的浏览器,例如Chrome、Brave、Vivaldi、Edge等),甚至还支持Safari。 + +### 保持警惕 + +要时刻记住,你并不需要立刻就点击某些链接,你也并不需要跟随某个陌生人的建议,无论这些信息看起来有多么紧急。互联网始终都在。你大可以先离开电脑,去证实一下这些人的真实身份,看看这些”无比紧急“的页面到底是真是假。 + +尽管任何人都可能遭遇中间人攻击,只要弄明白何为中间人攻击,理解中间人攻击如何发生,并采取有效的防范措施,就可以保护自己避免成为其受害者。 + +* * * + +_This article was originally published on [BeenVerified.com][5] under a [CC BY-SA 2.0][6] license._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/mitm-attacks + +作者:[Jackie Lam][a] +选题:[lujun9972][b] +译者:[tinyeyeser](https://github.com/tinyeyeser) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/beenverified +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_password_chaos_engineer_monster.png?itok=J31aRccu "Security monster" +[2]: https://www.clamav.net +[3]: https://opensource.com/article/20/1/stop-typosquatting-attacks +[4]: https://github.com/gorhill/uBlock +[5]: https://www.beenverified.com/crime/what-is-a-man-in-the-middle-attack/ +[6]: https://creativecommons.org/licenses/by-sa/2.0/ From 8ea914515c6c9a403688b8f5ad02c9ba854d4c45 Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Mon, 4 May 2020 17:52:23 +0800 Subject: [PATCH 114/178] Translated (#18330) * Update 20200416 Learning to love systemd.md * Update 20200416 Learning to love systemd.md * Update 20200416 Learning to love systemd.md * Update 20200416 Learning to love systemd.md * Update 20200416 Learning to love systemd.md * Update 20200416 Learning to love systemd.md * Almost Done * Done * Update 20200416 Learning to love systemd.md --- .../tech/20200416 Learning to love systemd.md | 328 ------------------ .../tech/20200416 Learning to love systemd.md | 325 +++++++++++++++++ 2 files changed, 325 insertions(+), 328 deletions(-) delete mode 100644 sources/tech/20200416 Learning to love systemd.md create mode 100644 translated/tech/20200416 Learning to love systemd.md diff --git a/sources/tech/20200416 Learning to love systemd.md b/sources/tech/20200416 Learning to love systemd.md deleted file mode 100644 index 70243db915..0000000000 --- a/sources/tech/20200416 Learning to love systemd.md +++ /dev/null @@ -1,328 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (messon007) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Learning to love systemd) -[#]: via: (https://opensource.com/article/20/4/systemd) -[#]: author: (David Both 
https://opensource.com/users/dboth) - -Learning to love systemd -====== -systemd is the mother of all processes, responsible for bringing the -Linux host up to a state where productive work can be done. -![Penguin driving a car with a yellow background][1] - -systemd—yes, all lower-case, even at the beginning of a sentence—is the modern replacement for init and SystemV init scripts. It is also much more. - -Like most sysadmins, when I think of the init program and SystemV, I think of Linux startup and shutdown and not really much else, like managing services once they are up and running. Like init, systemd is the mother of all processes, and it is responsible for bringing the Linux host up to a state in which productive work can be done. Some of the functions assumed by systemd, which is far more extensive than the old init program, are to manage many aspects of a running Linux host, including mounting filesystems, managing hardware, handling timers, and starting and managing the system services that are required to have a productive Linux host. - -This series of articles, which is based in part on excerpts from my three-volume Linux training course, [_Using and administering Linux: zero to sysadmin_][2], explores systemd's functions both at startup and beginning after startup finishes. - -### Linux boot - -The complete process that takes a Linux host from an off state to a running state is complex, but it is open and knowable. Before getting into the details, I'll give a quick overview from when the host hardware is turned on until the system is ready for a user to log in. Most of the time, "the boot process" is discussed as a single entity, but that is not accurate. There are, in fact, three major parts to the full boot and startup process: - - * **Hardware boot:** Initializes the system hardware - * **Linux boot:** Loads the Linux kernel and then systemd - * **Linux startup:** Where systemd prepares the host for productive work - - - -The Linux startup sequence begins after the kernel has loaded either init or systemd, depending upon whether the distribution uses the old or new startup, respectively. The init and systemd programs start and manage all the other processes and are both known as the "mother of all processes" on their respective systems. - -It is important to separate the hardware boot from the Linux boot from the Linux startup and to explicitly define the demarcation points between them. Understanding these differences and what part each plays in getting a Linux system to a state where it can be productive makes it possible to manage these processes and better determine where a problem is occurring during what most people refer to as "boot." - -The startup process follows the three-step boot process and brings the Linux computer up to an operational state in which it is usable for productive work. The startup process begins when the kernel transfers control of the host to systemd. - -### systemd controversy - -systemd can evoke a wide range of reactions from sysadmins and others responsible for keeping Linux systems up and running. The fact that systemd is taking over so many tasks in many Linux systems has engendered pushback and discord among certain groups of developers and sysadmins. - -SystemV and systemd are two different methods of performing the Linux startup sequence. SystemV start scripts and the init program are the old methods, and systemd using targets is the new method. 
Although most modern Linux distributions use the newer systemd for startup, shutdown, and process management, there are still some that do not. One reason is that some distribution maintainers and some sysadmins prefer the older SystemV method over the newer systemd. - -I think both have advantages. - -#### Why I prefer SystemV - -I prefer SystemV because it is more open. Startup is accomplished using Bash scripts. After the kernel starts the init program, which is a compiled binary, init launches the **rc.sysinit** script, which performs many system initialization tasks. After **rc.sysinit** completes, init launches the **/etc/rc.d/rc** script, which in turn starts the various services defined by the SystemV start scripts in the **/etc/rc.d/rcX.d**, where "X" is the number of the runlevel being started. - -Except for the init program itself, all these programs are open and easily knowable scripts. It is possible to read through these scripts and learn exactly what is taking place during the entire startup process, but I don't think many sysadmins actually do that. Each start script is numbered so that it starts its intended service in a specific sequence. Services are started serially, and only one service starts at a time. - -systemd, developed by Red Hat's Lennart Poettering and Kay Sievers, is a complex system of large, compiled binary executables that are not understandable without access to the source code. It is open source, so "access to the source code" isn't hard, just less convenient. systemd appears to represent a significant refutation of multiple tenets of the Linux philosophy. As a binary, systemd is not directly open for the sysadmin to view or make easy changes. systemd tries to do everything, such as managing running services, while providing significantly more status information than SystemV. It also manages hardware, processes, and groups of processes, filesystem mounts, and much more. systemd is present in almost every aspect of the modern Linux host, making it the one-stop tool for system management. All of this is a clear violation of the tenets that programs should be small and that each program should do one thing and do it well. - -#### Why I prefer systemd - -I prefer systemd as my startup mechanism because it starts as many services as possible in parallel, depending upon the current stage in the startup process. This speeds the overall startup and gets the host system to a login screen faster than SystemV. - -systemd manages almost every aspect of a running Linux system. It can manage running services while providing significantly more status information than SystemV. It also manages hardware, processes and groups of processes, filesystem mounts, and much more. systemd is present in almost every aspect of the modern Linux operating system, making it the one-stop tool for system management. (Does this sound familiar?) - -The systemd tools are compiled binaries, but the tool suite is open because all the configuration files are ASCII text files. Startup configuration can be modified through various GUI and command-line tools, as well as adding or modifying various configuration files to suit the needs of the specific local computing environment. - -#### The real issue - -Did you think I could not like both startup systems? I do, and I can work with either one. - -In my opinion, the real issue and the root cause of most of the controversy between SystemV and systemd is that there is [no choice][3] on the sysadmin level. 
The choice of whether to use SystemV or systemd has already been made by the developers, maintainers, and packagers of the various distributions—but with good reason. Scooping out and replacing an init system, by its extreme, invasive nature, has a lot of consequences that would be hard to tackle outside the distribution design process. - -Despite the fact that this choice is made for me, my Linux hosts boot up and work, which is what I usually care the most about. As an end user and even as a sysadmin, my primary concern is whether I can get my work done, work such as writing my books and this article, installing updates, and writing scripts to automate everything. So long as I can do my work, I don't really care about the start sequence used on my distro. - -I do care when there is a problem during startup or service management. Regardless of which startup system is used on a host, I know enough to follow the sequence of events to find the failure and fix it. - -#### Replacing SystemV - -There have been previous attempts at replacing SystemV with something a bit more modern. For about two releases, Fedora used a thing called Upstart to replace the aging SystemV, but it did not replace init and provided no changes that I noticed. Because Upstart provided no significant changes to the issues surrounding SystemV, efforts in this direction were quickly dropped in favor of systemd. - -Despite the fact that most Linux developers agree that replacing the old SystemV startup is a good idea, many developers and sysadmins dislike systemd for that. Rather than rehash all the so-called issues that people have—or had—with systemd, I will refer you to two good, if somewhat old, articles that should cover most everything. Linus Torvalds, the creator of the Linux kernel, seems disinterested. In a 2014 ZDNet article, _[Linus Torvalds and others on Linux's systemd][4]_, Linus is clear about his feelings. - -> "I don't actually have any particularly strong opinions on systemd itself. I've had issues with some of the core developers that I think are much too cavalier about bugs and compatibility, and I think some of the design details are insane (I dislike the binary logs, for example), but those are details, not big issues." - -In case you don't know much about Linus, I can tell you that if he does not like something, he is very outspoken, explicit, and quite clear about that dislike. He has become more socially acceptable in his manner of addressing his dislike about things. - -In 2013, Poettering wrote a long blog post in which he debunks the [myths about systemd][5] while providing insight into some of the reasons for creating it. This is a very good read, and I highly recommend it. - -### systemd tasks - -Depending upon the options used during the compile process (which are not considered in this series), systemd can have as many as 69 binary executables that perform the following tasks, among others: - - * The systemd program runs as PID 1 and provides system startup of as many services in parallel as possible, which, as a side effect, speeds overall startup times. It also manages the shutdown sequence. - * The systemctl program provides a user interface for service management. - * Support for SystemV and LSB start scripts is offered for backward compatibility. - * Service management and reporting provide more service status data than SystemV. 
- * It includes tools for basic system configuration, such as hostname, date, locale, lists of logged-in users, running containers and virtual machines, system accounts, runtime directories and settings, daemons to manage simple network configuration, network time synchronization, log forwarding, and name resolution. - * It offers socket management. - * systemd timers provide advanced cron-like capabilities to include running a script at times relative to system boot, systemd startup, the last time the timer was started, and more. - * It provides a tool to analyze dates and times used in timer specifications. - * Mounting and unmounting of filesystems with hierarchical awareness allows safer cascading of mounted filesystems. - * It enables the positive creation and management of temporary files, including deletion. - * An interface to D-Bus provides the ability to run scripts when devices are plugged in or removed. This allows all devices, whether pluggable or not, to be treated as plug-and-play, which considerably simplifies device handling. - * Its tool to analyze the startup sequence can be used to locate the services that take the most time. - * It includes journals for storing system log messages and tools for managing the journals. - - - -### Architecture - -Those tasks and more are supported by a number of daemons, control programs, and configuration files. Figure 1 shows many of the components that belong to systemd. This is a simplified diagram designed to provide a high-level overview, so it does not include all of the individual programs or files. Nor does it provide any insight into data flow, which is so complex that it would be a useless exercise in the context of this series of articles. - -![systemd architecture][6] - -A full exposition of systemd would take a book on its own. You do not need to understand the details of how the systemd components in Figure 1 fit together; it's enough to know about the programs and components that enable managing various Linux services and deal with log files and journals. But it's clear that systemd is not the monolithic monstrosity it is purported to be by some of its critics. - -### systemd as PID 1 - -systemd is PID 1. Some of its functions, which are far more extensive than the old SystemV3 init program, are to manage many aspects of a running Linux host, including mounting filesystems and starting and managing system services required to have a productive Linux host. Any of systemd's tasks that are not related to the startup sequence are outside the scope of this article (but some will be explored later in this series). - -First, systemd mounts the filesystems defined by **/etc/fstab**, including any swap files or partitions. At this point, it can access the configuration files located in **/etc**, including its own. It uses its configuration link, **/etc/systemd/system/default.target**, to determine which state or target it should boot the host into. The **default.target** file is a symbolic link to the true target file. For a desktop workstation, this is typically going to be the **graphical.target**, which is equivalent to runlevel 5 in SystemV. For a server, the default is more likely to be the **multi-user.target**, which is like runlevel 3 in SystemV. The **emergency.target** is similar to single-user mode. Targets and services are systemd units. - -The table below (Figure 2) compares the systemd targets with the old SystemV startup runlevels. systemd provides the systemd target aliases for backward compatibility. 
The target aliases allow scripts—and many sysadmins—to use SystemV commands like **init 3** to change runlevels. Of course, the SystemV commands are forwarded to systemd for interpretation and execution. - -**systemd targets** | **SystemV runlevel** | **target aliases** | **Description** ----|---|---|--- -default.target | | | This target is always aliased with a symbolic link to either **multi-user.target** or **graphical.target**. systemd always uses the **default.target** to start the system. The **default.target** should never be aliased to **halt.target**, **poweroff.target**, or **reboot.target**. -graphical.target | 5 | runlevel5.target | **Multi-user.target** with a GUI -| 4 | runlevel4.target | Unused. Runlevel 4 was identical to runlevel 3 in the SystemV world. This target could be created and customized to start local services without changing the default **multi-user.target**. -multi-user.target | 3 | runlevel3.target | All services running, but command-line interface (CLI) only -| 2 | runlevel2.target | Multi-user, without NFS, but all other non-GUI services running -rescue.target | 1 | runlevel1.target | A basic system, including mounting the filesystems with only the most basic services running and a rescue shell on the main console -emergency.target | S | | Single-user mode—no services are running; filesystems are not mounted. This is the most basic level of operation with only an emergency shell running on the main console for the user to interact with the system. -halt.target | | | Halts the system without powering it down -reboot.target | 6 | runlevel6.target | Reboot -poweroff.target | 0 | runlevel0.target | Halts the system and turns the power off - -Each target has a set of dependencies described in its configuration file. systemd starts the required dependencies, which are the services required to run the Linux host at a specific level of functionality. When all the dependencies listed in the target configuration files are loaded and running, the system is running at that target level. In Figure 2, the targets with the most functionality are at the top of the table, with functionality declining towards the bottom of the table. - -systemd also looks at the legacy SystemV init directories to see if any startup files exist there. If so, systemd uses them as configuration files to start the services described by the files. The deprecated network service is a good example of one that still uses SystemV startup files in Fedora. - -Figure 3 (below) is copied directly from the bootup man page. It shows a map of the general sequence of events during systemd startup and the basic ordering requirements to ensure a successful startup. - - -``` -                                         cryptsetup-pre.target -                                                   | - (various low-level                                v -     API VFS mounts:                 (various cryptsetup devices...) -  mqueue, configfs,                                |    | -  debugfs, ...)                                    v    | -  |                                  cryptsetup.target  | -  |  (various swap                                 |    |    remote-fs-pre.target -  |   devices...)                                  
|    |     |        | -  |    |                                           |    |     |        v -  |    v                       local-fs-pre.target |    |     |  (network file systems) -  |  swap.target                       |           |    v     v                 | -  |    |                               v           |  remote-cryptsetup.target  | -  |    |  (various low-level  (various mounts and  |             |              | -  |    |   services: udevd,    fsck services...)   |             |    remote-fs.target -  |    |   tmpfiles, random            |           |             |             / -  |    |   seed, sysctl, ...)          v           |             |            / -  |    |      |                 local-fs.target    |             |           / -  |    |      |                        |           |             |          / -  \\____|______|_______________   ______|___________/             |         / -                              \ /                                |        / -                               v                                 |       / -                        sysinit.target                           |      / -                               |                                 |     / -        ______________________/|\\_____________________           |    / -       /              |        |      |               \          |   / -       |              |        |      |               |          |  / -       v              v        |      v               |          | / -  (various       (various      |  (various            |          |/ -   timers...)      paths...)   |   sockets...)        |          | -       |              |        |      |               |          | -       v              v        |      v               |          | - timers.target  paths.target   |  sockets.target      |          | -       |              |        |      |               v          | -       v              \\_______ | _____/         rescue.service   | -                              \|/                     |          | -                               v                      v          | -                           basic.target         rescue.target    | -                               |                                 | -                       ________v____________________             | -                      /              |              \            | -                      |              |              |            | -                      v              v              v            | -                  display-    (various system   (various system  | -              manager.service     services        services)      | -                      |         required for        |            | -                      |        graphical UIs)       v            v -                      |              |            multi-user.target - emergency.service    |              |              | -         |            \\_____________ | _____________/ -         v                          \|/ - emergency.target                    v -                              graphical.target -``` - -The **sysinit.target** and **basic.target** targets can be considered checkpoints in the startup process. Although one of systemd's design goals is to start system services in parallel, certain services and functional targets must be started before other services and targets can start. These checkpoints cannot be passed until all of the services and targets required by that checkpoint are fulfilled. 
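One way to watch these checkpoints on a live machine is to ask systemd about them directly. The commands below are a quick, illustrative sketch; they assume a host whose service manager is systemd and rely only on the stock `systemctl` and `systemd-analyze` tools.

```
# List what must be in place before the sysinit.target checkpoint is satisfied
systemctl show -p Requires -p Wants -p After sysinit.target

# Confirm whether the basic.target checkpoint was reached during this boot
systemctl status basic.target

# Show the chain of units that gated a checkpoint and how long each one took
systemd-analyze critical-chain basic.target
```

The `critical-chain` output is particularly handy for spotting which unit feeding a checkpoint held it up.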
- -The **sysinit.target** is reached when all of the units it depends on are completed. All of those units, mounting filesystems, setting up swap files, starting udev, setting the random generator seed, initiating low-level services, and setting up cryptographic services (if one or more filesystems are encrypted), must be completed but, within the **sysinit.target**, those tasks can be performed in parallel. - -The **sysinit.target** starts up all of the low-level services and units required for the system to be marginally functional and that are required to enable moving onto the **basic.target**. - -After the **sysinit.target** is fulfilled, systemd then starts all the units required to fulfill the next target. The basic target provides some additional functionality by starting units that are required for all of the next targets. These include setting up things like paths to various executable directories, communication sockets, and timers. - -Finally, the user-level targets, **multi-user.target** or **graphical.target**, can be initialized. The **multi-user.target** must be reached before the graphical target dependencies can be met. The underlined targets in Figure 3 are the usual startup targets. When one of these targets is reached, startup has completed. If the **multi-user.target** is the default, then you should see a text-mode login on the console. If **graphical.target** is the default, then you should see a graphical login; the specific GUI login screen you see depends on your default display manager. - -The bootup man page also describes and provides maps of the boot into the initial RAM disk and the systemd shutdown process. - -systemd also provides a tool that lists dependencies of a complete startup or for a specified unit. A unit is a controllable systemd resource entity that can range from a specific service, such as httpd or sshd, to timers, mounts, sockets, and more. Try the following command and scroll through the results. - - -``` -`systemctl list-dependencies graphical.target` -``` - -Notice that this fully expands the top-level target units list required to bring the system up to the graphical target run mode. Use the **\--all** option to expand all of the other units as well. - - -``` -`systemctl list-dependencies --all graphical.target` -``` - -You can search for strings such as "target," "slice," and "socket" using the search tools of the **less** command. - -So now, try the following. - - -``` -`systemctl list-dependencies multi-user.target` -``` - -and - - -``` -`systemctl list-dependencies rescue.target` -``` - -and - - -``` -`systemctl list-dependencies local-fs.target` -``` - -and - - -``` -`systemctl list-dependencies dbus.service` -``` - -This tool helps me visualize the specifics of the startup dependencies for the host I am working on. Go ahead and spend some time exploring the startup tree for one or more of your Linux hosts. But be careful because the systemctl man page contains this note: - -> _"Note that this command only lists units currently loaded into memory by the service manager. In particular, this command is not suitable to get a comprehensive list at all reverse dependencies on a specific unit, as it won't list the dependencies declared by units currently not loaded."_ - -### Final thoughts - -Even before getting very deep into systemd, it's obvious that it is both powerful and complex. It is also apparent that systemd is not a single, huge, monolithic, and unknowable binary file. 
Rather, it is composed of a number of smaller components and subcommands that are designed to perform specific tasks. - -The next article in this series will explore systemd startup in more detail, as well as systemd configuration files, changing the default target, and how to create a simple service unit. - -### Resources - -There is a great deal of information about systemd available on the internet, but much is terse, obtuse, or even misleading. In addition to the resources mentioned in this article, the following webpages offer more detailed and reliable information about systemd startup. - - * The Fedora Project has a good, practical [guide][7] [to systemd][7]. It has pretty much everything you need to know in order to configure, manage, and maintain a Fedora computer using systemd. - * The Fedora Project also has a good [cheat sheet][8] that cross-references the old SystemV commands to comparable systemd ones. - * For detailed technical information about systemd and the reasons for creating it, check out [Freedesktop.org][9]'s [description of systemd][10]. - * [Linux.com][11]'s "More systemd fun" offers more advanced systemd [information and tips][12]. - - - -There is also a series of deeply technical articles for Linux sysadmins by Lennart Poettering, the designer and primary developer of systemd. These articles were written between April 2010 and September 2011, but they are just as relevant now as they were then. Much of everything else good that has been written about systemd and its ecosystem is based on these papers. - - * [Rethinking PID 1][13] - * [systemd for Administrators, Part I][14] - * [systemd for Administrators, Part II][15] - * [systemd for Administrators, Part III][16] - * [systemd for Administrators, Part IV][17] - * [systemd for Administrators, Part V][18] - * [systemd for Administrators, Part VI][19] - * [systemd for Administrators, Part VII][20] - * [systemd for Administrators, Part VIII][21] - * [systemd for Administrators, Part IX][22] - * [systemd for Administrators, Part X][23] - * [systemd for Administrators, Part XI][24] - - - -Alison Chiaken, a Linux kernel and systems programmer at Mentor Graphics, offers a preview of her... 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/4/systemd - -作者:[David Both][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dboth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc (Penguin driving a car with a yellow background) -[2]: http://www.both.org/?page_id=1183 -[3]: http://www.osnews.com/story/28026/Editorial_Thoughts_on_Systemd_and_the_Freedom_to_Choose -[4]: https://www.zdnet.com/article/linus-torvalds-and-others-on-linuxs-systemd/ -[5]: http://0pointer.de/blog/projects/the-biggest-myths.html -[6]: https://opensource.com/sites/default/files/uploads/systemd-architecture.png (systemd architecture) -[7]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html -[8]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet -[9]: http://Freedesktop.org -[10]: http://www.freedesktop.org/wiki/Software/systemd -[11]: http://Linux.com -[12]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/ -[13]: http://0pointer.de/blog/projects/systemd.html -[14]: http://0pointer.de/blog/projects/systemd-for-admins-1.html -[15]: http://0pointer.de/blog/projects/systemd-for-admins-2.html -[16]: http://0pointer.de/blog/projects/systemd-for-admins-3.html -[17]: http://0pointer.de/blog/projects/systemd-for-admins-4.html -[18]: http://0pointer.de/blog/projects/three-levels-of-off.html -[19]: http://0pointer.de/blog/projects/changing-roots -[20]: http://0pointer.de/blog/projects/blame-game.html -[21]: http://0pointer.de/blog/projects/the-new-configuration-files.html -[22]: http://0pointer.de/blog/projects/on-etc-sysinit.html -[23]: http://0pointer.de/blog/projects/instances.html -[24]: http://0pointer.de/blog/projects/inetd.html diff --git a/translated/tech/20200416 Learning to love systemd.md b/translated/tech/20200416 Learning to love systemd.md new file mode 100644 index 0000000000..34bc8ea314 --- /dev/null +++ b/translated/tech/20200416 Learning to love systemd.md @@ -0,0 +1,325 @@ +[#]: collector: (lujun9972) +[#]: translator: (messon007) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Learning to love systemd) +[#]: via: (https://opensource.com/article/20/4/systemd) +[#]: author: (David Both https://opensource.com/users/dboth) + +学会爱上systemd +====== + +systemd是所有进程的源头,负责将Linux主机启动到可以做生产性任务的状态。 +![Penguin driving a car with a yellow background][1] + +systemd(是的,全小写,即使在句子开头也是小写),是init和SystemV init脚本的现代替代者。它还有更多功能。 + +当我想到init和SystemV时,像大多数系统管理员一样,我想到的是Linux的启动和关闭,而没有太多其他的,例如在服务启动和运行后对其进行管理。像init一样,systemd是所有进程的源头,它负责使Linux主机启动到可以做生产性任务的状态。 systemd设定的一些功能比老的init要广泛得多,它要管理正在运行的Linux主机的许多方面,包括挂载文件系统,管理硬件,处理定时器以及启动和管理生产性主机所需的系统服务。 + +本系列文章是基于我的部分三期Linux培训课程[_使用和管理Linux:从零开始进行学习系统管理_][2]的摘录,探讨了systemd在启动和启动完成后的功能。 + +### Linux启动 + +Linux主机从关机状态到运行状态的完整启动过程很复杂,但它是开放的并且是可知的。在详细介绍之前,我将简要介绍一下从主机硬件被上电到系统准备好用户登录(的过程)。大多数时候,“启动过程”被作为单个概念来讨论,但这是不准确的。实际上,完整的引导和启动过程包含三个主要部分: + +   * **硬件引导:** 初始化系统硬件 + * **Linux引导:** 加载Linux内核和systemd + * **Linux启动:** systemd启动, 为生产工作做准备 + + +Linux启动阶段在内核加载了init或systemd(取决于具体发行版使用的是旧的方式还是还是新的方式)之后开始。init和systemd程序启动并管理所有其他进程,他们在各自的系统上都被称为“所有进程之母”。 + 
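+可以用下面两条命令做一个简单的示意性检查(假设当前主机确实由 systemd 管理):确认 1 号进程就是 systemd,并顺便查看本机默认引导到哪个目标。
+
+```
+ps -p 1 -o comm=        # 预期输出:systemd
+systemctl get-default   # 例如:graphical.target 或 multi-user.target
+```
+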
+将硬件引导与Linux引导及Linux启动区分开,并明确定义它们之间的分界点是很重要的。理解他们的差异以及他们每一个在使Linux系统进入生产准备状态所起的作用,才能够管理这些进程并更好地确定大部分人所谓的“启动”问题出在哪里。 + +启动过程按照三步引导流程使Linux计算机进入可进行生产工作的状态。当内核将主机的控制权转移到systemd时,启动环节开始。 + +### systemd之争 + +systemd引起了系统管理员和其他负责维护Linux系统正常运行人员的广泛回应。systemd正在许多Linux系统中接管大量任务的事实造成了某些开发人群和系统管理员群组之间的阻挠和争议。 + +SystemV和systemd是执行Linux启动环节的两种不同的方法。 SystemV启动脚本和init程序是老的方法,而使用目标(targets)的systemd是新方法。尽管大多数现代Linux发行版都使用较新的systemd进行启动,关机和进程管理,但仍有一些发行版未采用。原因之一是某些发行版维护者和系统管理员喜欢老的SystemV方法,而不是新的systemd。 + +我认为两者都有其优势。 + +#### 为何我更喜欢SystemV + +我更喜欢SystemV,因为它更开放。使用Bash脚本来完成启动。内核启动init程序(编译后的二进制)后,init启动 **rc.sysinit** 脚本,该脚本执行许多系统初始化任务。 **rc.sysinit** 执行完后,init启动 **/etc/rc.d/rc** 脚本,该脚本依次启动 **/etc/rc.d/rcX.d** 中由SystemV启动脚本定义的各种服务。 其中“ X”是待启动的运行级别号。 + +除了init程序本身之外,所有这些程序都是开放且易于理解的脚本。可以通读这些脚本并确切了解整个启动过程中发生的事情,但是我不认为有太多系统管理员会实际这样做。每个启动脚本都被编了号,以便按特定顺序启动预期的服务。服务是串行启动的,一次只能启动一个服务。 + +由Red Hat的Lennart Poettering和Kay Sievers开发的systemd是一个由大的已编译的二进制可执行文件构成的复杂系统,不访问其源码就无法理解。它是开源的,因此“访问其源代码”并不难,只是不太方便。systemd似乎表现出对Linux哲学多个原则的重大驳斥。作为二进制文件,systemd无法被直接打开供系统管理员查看或进行简单更改。systemd试图做所有事情,例如管理正在运行的服务,同时提供比SystemV更多的状态信息。它还管理硬件,进程,进程组,文件系统挂载等。 systemd几乎涉足于现代Linux主机的每方面,使它成为系统管理的一站式工具。所有这些都明显违反了"程序应该小且每个程序都应该只做一件事并且做好"的原则。 + +#### 为何我更喜欢systemd + +我更喜欢用systemd作为启动机制,因为它会根据启动阶段并行地启动尽可能多的服务。这样可以加快整个的启动速度,使得主机系统比SystemV更快地到达登录屏幕。 + +systemd几乎可以管理正在运行的Linux系统的各个方面。它可以管理正在运行的服务,同时提供比SystemV多得多的状态信息。它还管理硬件,进程和进程组,文件系统挂载等。 systemd几乎涉足于现代Linux操作系统的每方面,使其成为系统管理的一站式工具。(听起来熟悉吧?) + +systemd工具是编译后的二进制文件,但该工具包是开放的,因为所有配置文件都是ASCII文本文件。可以通过各种GUI和命令行工具来修改启动配置,也可以添加或修改各种配置文件来满足特定的本地计算环境的需求。 + +#### 真正的问题 + +您认为我不能喜欢两种启动系统吗?我能,我会用它们中的任何一个。 + +我认为,SystemV和systemd之间大多数争议的真正问题和根本原因在于,系统管理阶段[没有选择权][3]。使用SystemV还是systemd已经由各种发行版的开发人员,维护人员和打包人员选择了(但有充分的理由)。由于init极端的侵入性, 挖出(scooping out)并替换init系统会带来很多影响,发行版设计过程之外(的环节)很难处理这些影响。 + +尽管该选择实际上是为我而选的,我通常最关心的是我的Linux主机仍然可以启动并正常工作。作为最终用户,甚至是系统管理员,我主要关心的是我是否可以完成我的工作,例如写我的书和这篇文章,安装更新以及编写脚本来自动化所有事情。只要我能做我的工作,我就不会真正在意发行版中使用的启动系统。 + +在启动或服务管理出现问题时,我会在意。无论主机上使用哪种启动系统,我都足够了解如何沿着事件顺序来查找故障并进行修复。 + +#### 替换SystemV + +以前曾有过用更现代的东西替代SystemV的尝试。在大约两个版本中,Fedora使用了一个叫作Upstart的东西来替换老化的SystemV,但是它没有替换init并且没有我能感知到的变化。由于Upstart并未对SystemV的问题进行任何重大更改,因此这个方向的努力很快就被systemd放弃了。 + +尽管大部分Linux开发人员都认可替换旧的SystemV启动系统是个好主意,但许多开发人员和系统管理员并不喜欢systemd。与其重新讨论人们在systemd中遇到的或曾经遇到过的所有所谓的问题,不如带您去看两篇好文章,尽管有些陈旧,但它们涵盖了大多数内容。Linux内核的创建者Linus Torvalds对systemd似乎不感兴趣。在2014年ZDNet的文章_[Linus Torvalds和其他人对Linux上的systemd的看法][4]_中,Linus清楚地表达了他的感受。 + +>“实际上我对systemd本身没有任何特别强烈的意见。我对一些核心开发人员有一些意见,我认为它们在对待bugs和兼容性方面过于轻率,而且我认为某些设计细节是疯狂的(例如,我不喜欢二进制日志),但这只是细节,不是大问题。” + +如果您对Linus不太了解,我可以告诉您,如果他不喜欢某事,那么他非常直率,坦率,并且非常清楚这种不喜欢。他解决自己对事物不满的方式已经被社会更好地接受了。 + +2013年,Poettering写了一篇很长的博客,其中他在揭穿[systemd的神话][5]的同时透露了创建它的一些原因。这是一本很好的读物,我强烈建议您阅读。 + +### systemd任务 + +根据编译过程中使用的选项(不在本系列中介绍),systemd可以有多达69个二进制可执行文件用于执行任务,其中包括: + + * systemd程序以1号进程(PID 1)运行,并提供使尽可能多服务并行启动的系统启动能力,它额外加快了总体启动时间。它还管理关机顺序。 + * systemctl程序提供了服务管理的用户接口。 + * 支持SystemV和LSB启动脚本,以便向后兼容。 +  * 服务管理和报告提供了比SystemV更多的服务状态数据。 + * 提供基本的系统配置工具,例如主机名,日期,语言环境,已登录用户的列表,正在运行的容器和虚拟机,系统帐户,运行时目录和设置;用于简易网络配置,网络时间同步,日志转发和名称解析的守护程序。 +  * 提供套接字管理。 + * systemd定时器提供类似cron的高级功能,包括在相对于系统启动,systemd启动,定时器上次启动时刻的某个时间点运行脚本。 + * 提供了一个工具来分析定时器规格中使用的日期和时间。 + * 能感知层次的文件系统挂载和卸载可以更安全地级联挂载的文件系统。 + * 允许主动的创建和管理临时文件,包括删除文件。 + * D-Bus的接口提供在插入或移除设备时运行脚本的能力。这允许将所有设备(无论是否可插拔)都被视为即插即用,从而大大简化了设备的处理。 + * 分析启动顺序的工具可用于查找耗时最多的服务。 +  * 包括用于存储系统消息的日志以及管理日志的工具。 + + +### 架构 + +这些和更多的任务通过许多守护程序,控制程序和配置文件来支持。图1显示了许多属于systemd的组件。这是一个简化的图,旨在提供概要描述,因此它并不包括所有独立的程序或文件。它也不提供数据流的视角,数据流是如此复杂,因此在本系列文章的背景下没用。 + +![系统架构][6] + 
+完整的systemd讲解就需要一本书。您不需要了解图1中的systemd组件是如何组合在一起的细节。了解支持各种Linux服务管理以及日志文件和日志处理的程序和组件就够了。 但是很明显,systemd并不是某些批评者所说的那样的庞然大物。 + +### 作为1号进程的systemd + +systemd是1号进程(PID 1)。它的一些功能(比老的SystemV3 init要广泛得多)用于管理正在运行的Linux主机的许多方面,包括挂载文件系统以及启动和管理Linux生产主机所需的系统服务。与启动顺序无关的任何systemd任务都不在本文讨论范围之内(但本系列后面的一些文章将探讨其中的一些任务)。 + +首先,systemd挂载 **/etc/fstab** 所定义的文件系统,包括所有交换文件或分区。此时,它可以访问位于 **/etc** 中的配置文件,包括它自己的配置文件。它使用其配置链接 **/etc/systemd/system/default.target** 来确定将主机引导至哪个状态或目标。 **default.target** 文件是指向真实目标文件的符号链接。对于桌面工作站,通常是 **graphical.target**,它相当于SystemV中的运行级别5。对于服务器,默认值更可能是 **multi-user.target**,相当于SystemV中的运行级别3。 **emergency.target** 类似于单用户模式。目标(targets)和服务(services)是systemd的单位。 + +下表(图2)将systemd目标与老的SystemV启动运行级别进行了比较。systemd提供systemd目标别名以便向后兼容。目标别名允许脚本(以及许多系统管理员)使用SystemV命令(如**init 3**)更改运行级别。当然,SystemV命令被转发给systemd进行解释和执行。 + +**systemd目标** | **SystemV运行级别** | **目标别名** | **描述** +--- | --- | ---- |- +default.target | | |此目标总是通过符号连接的方式成为“多用户目标”或“图形化目标”的别名。systemd始终使用 **default.target** 来启动系统。 ** default.target** 绝不应该设为 **halt.target**,**poweroff.target** 或 **reboot.target** 的别名 +graphic.target | 5 | runlevel5.target |带有GUI的 **Multi-user.target** +| 4 | runlevel4.target |未用。在SystemV中运行级别4与运行级别3相同。可以创建并自定义此目标以启动本地服务,而无需更改默认的 **multi-user.target** +multi-user.target | 3 | runlevel3.target |所有服务在运行,但仅有命令行界面(CLI) +| 2 | runlevel2.target |多用户,没有NFS,其他所有非GUI服务在运行 +rescue.target | 1 | runlevel1.target |基本系统,包括挂载文件系统,运行最基本的服务和主控制台的恢复shell +Emergency.target | S | |单用户模式-没有服务运行;不挂载文件系统。这是最基本的工作级别,只有主控制台上运行的一个紧急Shell供用户与系统交互 +halt.target | | |在不关电源的情况下停止系统 +reboot.target | 6 | runlevel6.target |重启 +poweroff.target | 0 | runlevel0.target |停止系统并关闭电源 + +每个目标在其配置文件中都描述了一个依赖集。systemd启动必须的依赖,这些依赖是运行Linux主机到特定功能级别所需的服务。当目标配置文件中列出的所有依赖项被加载并运行后,系统就在该目标级别运行了。 在图2中,功能最多的目标位于表的顶部,从顶向下,功能逐步递减。 + +systemd还会检查老的SystemV init目录,以确认是否存在任何启动文件。如果有,systemd会将它们作为配置文件以启动它们描述的服务。网络服务是一个很好的例子,在Fedora中它仍然使用SystemV启动文件。 + +图3(如下)是直接从启动手册页复制来的。它显示了systemd启动期间普遍的事件顺序以及确保成功启动的基本顺序要求。 + +``` +                                         cryptsetup-pre.target +                                                   | + (various low-level                                v +     API VFS mounts:                 (various cryptsetup devices...) +  mqueue, configfs,                                |    | +  debugfs, ...)                                    v    | +  |                                  cryptsetup.target  | +  |  (various swap                                 |    |    remote-fs-pre.target +  |   devices...)                                  |    |     |        | +  |    |                                           |    |     |        v +  |    v                       local-fs-pre.target |    |     |  (network file systems) +  |  swap.target                       |           |    v     v                 | +  |    |                               v           |  remote-cryptsetup.target  | +  |    |  (various low-level  (various mounts and  |             |              | +  |    |   services: udevd,    fsck services...)   |             |    remote-fs.target +  |    |   tmpfiles, random            |           |             |             / +  |    |   seed, sysctl, ...)          
v           |             |            / +  |    |      |                 local-fs.target    |             |           / +  |    |      |                        |           |             |          / +  \\____|______|_______________   ______|___________/             |         / +                              \ /                                |        / +                               v                                 |       / +                        sysinit.target                           |      / +                               |                                 |     / +        ______________________/|\\_____________________           |    / +       /              |        |      |               \          |   / +       |              |        |      |               |          |  / +       v              v        |      v               |          | / +  (various       (various      |  (various            |          |/ +   timers...)      paths...)   |   sockets...)        |          | +       |              |        |      |               |          | +       v              v        |      v               |          | + timers.target  paths.target   |  sockets.target      |          | +       |              |        |      |               v          | +       v              \\_______ | _____/         rescue.service   | +                              \|/                     |          | +                               v                      v          | +                           basic.target         rescue.target    | +                               |                                 | +                       ________v____________________             | +                      /              |              \            | +                      |              |              |            | +                      v              v              v            | +                  display-    (various system   (various system  | +              manager.service     services        services)      | +                      |         required for        |            | +                      |        graphical UIs)       v            v +                      |              |            multi-user.target + emergency.service    |              |              | +         |            \\_____________ | _____________/ +         v                          \|/ + emergency.target                    v +                              graphical.target +``` + +**sysinit.target** 和 **basic.target** 目标可以看作启动过程中的检查点。尽管systemd的设计目标之一是并行启动系统服务,但是某些服务和功能目标必须先启动,然后才能启动其他服务和目标。直到该检查点所需的所有服务和目标被满足后才能通过这些检查点。 + +当它依赖的所有单元都完成时,将到达 **sysinit.target**。所有这些单元,挂载文件系统,设置交换文件,启动udev,设置随机数生成器种子,启动低层服务以及配置安全服务(如果一个或多个文件系统是加密的)都必须被完成,但 **sysinit.target** 的这些任务可以并行执行。 + +**sysinit.target** 将启动系统接近正常运行所需的所有低层服务和单元,以及转移到 **basic.target** 所需的服务和单元。 + +在完成 **sysinit.target** 目标之后,systemd会启动实现下一个目标所需的所有单元。基本目标通过启动所有下一目标所需的单元来提供一些其他功能。包括设置如PATHs为各种可执行程序的路径,设置通信套接字和计时器之类。 + +最后,用户级目标 **multi-user.target** 或 **graphical.target** 被初始化。要满足图形目标的依赖必须先达到**multi-user.target**。图3中带下划线的目标是通常的启动目标。当达到这些目标之一时,启动就完成了。如果 **multi-user.target** 是默认设置,那么您应该在控制台上看到文本模式的登录界面。如果 **graphical.target** 是默认设置,那么您应该看到图形的登录界面。您看到的特定的GUI登录界面取决于您默认的显示管理器。 + +引导手册页还描述并提供了引导到初始RAM磁盘和systemd关机过程的地图。 + +systemd还提供了一个工具,该工具列出了完整启动或指定单元的依赖。单元是可控制的systemd资源实体,其范围从特定服务(例如httpd或sshd)到计时器,挂载,套接字等。尝试以下命令并滚动查看结果。 + +``` +`systemctl list-dependencies graphical.target` +``` + +注意,这完全展开了使系统进入图形目标运行模式所需的顶层目标单元列表。 也可以使用 **\-all** 选项来展开所有其他单元。 + +``` +`systemctl list-dependencies --all 
graphical.target` +``` + +您可以使用 **less** 命令来搜索诸如“target”,“slice”和“ socket”之类的字符串。 + +现在尝试下面的方法。 + +``` +`systemctl list-dependencies multi-user.target` +``` + +和 + + +``` +`systemctl list-dependencies rescue.target` +``` + +和 + + +``` +`systemctl list-dependencies local-fs.target` +``` + +和 + + +``` +`systemctl list-dependencies dbus.service` +``` +``` +`systemctl list-dependencies graphic.target` +``` + + + +这个工具帮助我可视化我正用的主机的启动依赖细节。继续花一些时间探索一个或多个Linux主机的启动树。但是要小心,因为systemctl手册页包含以下注释: + +> _“请注意,此命令仅列出当前被服务管理器加载到内存的单元。尤其是,此命令根本不适合用于获取特定单元的全部反向依赖列表,因为它不会列出被单元声明了但是未加载的依赖项。” _ + +### 结尾语 + +即使在深入研究systemd之前,很明显能看出它既强大又复杂。显然,systemd不是单一,庞大,整体且不可知的二进制文件。相反,它是由许多较小的组件和旨在执行特定任务的子命令组成。 + +本系列的下一篇文章将更详细地探讨systemd的启动,以及systemd的配置文件,更改默认的目标以及如何创建简单服务单元。 + +### 资源 + +互联网上有大量关于systemd的信息,但是很多都简短,晦涩甚至是误导。除了本文提到的资源外,以下网页还提供了有关systemd启动的更详细和可靠的信息。 + + * Fedora项目有一个很好的,实用的[guide to systemd][7]。它有你需要知道的通过systemd来配置,管理和维护Fedora主机所需的几乎所有知识。 + * Fedora项目还有一个不错的[cheat sheet][8],将老的SystemV命令与对比的systemd命令相互关联。 +  * 有关systemd及其创建原因的详细技术信息,请查看[Freedesktop.org][9]的[systemd描述][10]。 + * [Linux.com][11]的“systemd的更多乐趣”提供了更高级的systemd [信息和技巧][12]。 + + +还有systemd的设计师和主要开发者Lennart Poettering撰写的针对Linux系统管理员的一系列技术文章。这些文章是在2010年4月至2011年9月之间撰写的,但它们现在和那时一样有用。关于systemd及其生态的其他许多好文都基于这些论文。 + + * [重新思考1号进程][13] + * [systemd之系统管理员, I][14] + * [systemd之系统管理员, II][15] + * [systemd之系统管理员, III][16] + * [systemd之系统管理员, IV][17] + * [systemd之系统管理员, V][18] + * [systemd之系统管理员, VI][19] + * [systemd之系统管理员, VII][20] + * [systemd之系统管理员, VIII][21] + * [systemd之系统管理员, IX][22] + * [systemd之系统管理员, X][23] + * [systemd之系统管理员, XI][24] + + + +Mentor Graphics的Linux内核和系统程序员Alison Chiaken预览了此文... +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/systemd + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[messon007](https://github.com/messon007) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc (Penguin driving a car with a yellow background) +[2]: http://www.both.org/?page_id=1183 +[3]: http://www.osnews.com/story/28026/Editorial_Thoughts_on_Systemd_and_the_Freedom_to_Choose +[4]: https://www.zdnet.com/article/linus-torvalds-and-others-on-linuxs-systemd/ +[5]: http://0pointer.de/blog/projects/the-biggest-myths.html +[6]: https://opensource.com/sites/default/files/uploads/systemd-architecture.png (systemd architecture) +[7]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html +[8]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet +[9]: http://Freedesktop.org +[10]: http://www.freedesktop.org/wiki/Software/systemd +[11]: http://Linux.com +[12]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/ +[13]: http://0pointer.de/blog/projects/systemd.html +[14]: http://0pointer.de/blog/projects/systemd-for-admins-1.html +[15]: http://0pointer.de/blog/projects/systemd-for-admins-2.html +[16]: http://0pointer.de/blog/projects/systemd-for-admins-3.html +[17]: http://0pointer.de/blog/projects/systemd-for-admins-4.html +[18]: http://0pointer.de/blog/projects/three-levels-of-off.html +[19]: http://0pointer.de/blog/projects/changing-roots +[20]: http://0pointer.de/blog/projects/blame-game.html +[21]: 
http://0pointer.de/blog/projects/the-new-configuration-files.html +[22]: http://0pointer.de/blog/projects/on-etc-sysinit.html +[23]: http://0pointer.de/blog/projects/instances.html +[24]: http://0pointer.de/blog/projects/inetd.html From 9a52a7fe5c3b5855d0dd88bfbd7bcdc0ff5b08db Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Mon, 4 May 2020 19:43:48 +0800 Subject: [PATCH 115/178] 20180612 --- ...12 Systemd Services- Reacting to Change.md | 67 +++++++++++++++++-- 1 file changed, 63 insertions(+), 4 deletions(-) diff --git a/sources/tech/20180612 Systemd Services- Reacting to Change.md b/sources/tech/20180612 Systemd Services- Reacting to Change.md index 125b871602..740b695f08 100644 --- a/sources/tech/20180612 Systemd Services- Reacting to Change.md +++ b/sources/tech/20180612 Systemd Services- Reacting to Change.md @@ -1,35 +1,49 @@ -//messon007 translating -Systemd Services: Reacting to Change +Systemd Services: Reacting to Change Systemd服务:响应变更 ====== ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/webcam.png?itok=zzYUs5VK) [I have one of these Compute Sticks][1] (Figure 1) and use it as an all-purpose server. It is inconspicuous and silent and, as it is built around an x86 architecture, I don't have problems getting it to work with drivers for my printer, and that’s what it does most days: it interfaces with the shared printer and scanner in my living room. +[我有其中一个计算棒] [1](图1),并将其用作通用服务器。 它不起眼且安静,并且由于它是基于x86架构构建的,因此我可以将其与打印机驱动程序配合使用时没有任何问题,这就是大多数日子所做的事情:它与我的共享打印机和扫描仪接口 客厅。 ![ComputeStick][3] An Intel ComputeStick. Euro coin for size. +英特尔计算棒。 大小的欧洲硬币。 [Used with permission][4] Most of the time it is idle, especially when we are out, so I thought it would be good idea to use it as a surveillance system. The device doesn't come with its own camera, and it wouldn't need to be spying all the time. I also didn't want to have to start the image capturing by hand because this would mean having to log into the Stick using SSH and fire up the process by writing commands in the shell before rushing out the door. +大多数情况下,它处于空闲状态,尤其是当我们外出时,因此我认为将其用作监视系统是个好主意。 该设备没有自带的摄像头,也不需要一直监视。 我也不想手动开始图像捕获,因为这意味着必须使用SSH登录到Stick,并在冲出门之前在shell中编写命令来启动该过程。 + So I thought that the thing to do would be to grab a USB webcam and have the surveillance system fire up automatically just by plugging it in. Bonus points if the surveillance system fired up also after the Stick rebooted, and it found that the camera was connected. +因此,我认为要做的就是抓住USB网络摄像头,然后仅通过将其插入即可自动启动监视系统。如果在Stick重新启动后监视系统也启动,则奖励积分。 连接的。 + In prior installments, we saw that [systemd services can be started or stopped by hand][5] or [when certain conditions are met][6]. Those conditions are not limited to when the OS reaches a certain state in the boot up or powerdown sequence but can also be when you plug in new hardware or when things change in the filesystem. You do that by combining a Udev rule with a systemd service. +在先前的文章中,我们看到[可以手动启动或停止系统服务] [5]或[在满足某些条件时] [6]。 这些条件不仅限于操作系统在启动或关机顺序中达到某种状态时,还可以在您插入新硬件或文件系统发生变化时进行。 您可以通过将Udev规则与systemd服务相结合来实现。 ### Hotplugging with Udev Udev rules live in the _/etc/udev/rules_ directory and are usually a single line containing _conditions_ and _assignments_ that lead to an _action_. That was a bit cryptic. Let's try again: +Udev规则位于_ / etc / udev / rules_目录中,通常是包含_conditions_和_assignments_的单行,它们导致_action_。 + +那有点神秘。让我们再试一次: Typically, in a Udev rule, you tell systemd what to look for when a device is connected. 
For example, you may want to check if the make and model of a device you just plugged in correspond to the make and model of the device you are telling Udev to wait for. Those are the _conditions_ mentioned earlier. +通常,在Udev规则中,您告诉systemd连接设备时要查找什么。例如,您可能要检查刚插入的设备的品牌和型号是否与您要让Udev等待的设备的品牌和型号相对应。这些是前面提到的条件。 Then you may want to change some stuff so you can use the device easily later. An example of that would be to change the read and write permissions to a device: if you plug in a USB printer, you're going to want users to be able to read information from the printer (the user's printing app would want to know the model, make, and whether it is ready to receive print jobs or not) and write to it, that is, send stuff to print. Changing the read and write permissions for a device is done using one of the _assignments_ you read about earlier. +然后,您可能需要更改一些内容,以便以后可以轻松使用该设备。例如,更改对设备的读写权限:如果插入USB打印机,您将希望用户能够从打印机中读取信息(用户的打印应用程序希望了解模型,制造商,以及是否准备好接受打印作业)并向其写入内容,即发送要打印的内容。更改设备的读写权限是使用您之前阅读的_assignments_之一完成的。 + Finally, you will probably want the system to do something when the conditions mentioned above are met, like start a backup application to copy important files when a certain external hard disk drive is plugged in. That is an example of an _action_ mentioned above. +最后,您可能希望系统在满足上述条件时执行某些操作,例如在插入某个外部硬盘驱动器时启动备份应用程序以复制重要文件。这就是上述_action_的示例。 + With that in mind, ponder this: ``` @@ -45,9 +59,12 @@ ATTRS{idProduct}=="e207" [etc... ] ``` shows the conditions that the device has to meet before doing any of the other stuff you want the system to do. The device has to be added (`ACTION=="add"`) to the machine, it has to be integrated into the `video4linux` subsystem. To make sure the rule is applied only when the correct device is plugged in, you have to make sure Udev correctly identifies the manufacturer (`ATTRS{idVendor}=="03f0"`) and a model (`ATTRS{idProduct}=="e207"`) of the device. +显示在执行系统要执行的其他任何操作之前设备必须满足的条件。 必须将设备添加到(ACTION ==“ add”)机器上,并且必须将其集成到video4linux子系统中。 为了确保仅在插入正确的设备时才应用该规则,您必须确保Udev正确标识制造商(`ATTRS {idVendor} ==“ 03f0”`)和型号(`ATTRS {idProduct} == 设备的“ e207”`)。 In this case, we're talking about this device (Figure 2): +在这种情况下,我们正在讨论此设备(图2): + ![webcam][8] The HP webcam used in this experiment. @@ -55,13 +72,19 @@ The HP webcam used in this experiment. [Used with permission][4] Notice how you use `==` to indicate that these are a logical operation. You would read the above snippet of the rule like this: +注意如何使用“ ==”来表示这些是逻辑操作。 您将阅读以下规则的摘要: ``` if the device is added and the device controlled by the video4linux subsystem and the manufacturer of the device is 03f0 and the model is e207, then... ``` +``` +如果添加了设备并且该设备由video4linux子系统控制 +而设备的制造商是03f0,型号是e207,则... +``` But where do you get all this information? Where do you find the action that triggers the event, the manufacturer, model, and so on? You will probably have to use several sources. The `IdVendor` and `idProduct` you can get by plugging the webcam into your machine and running `lsusb`: +但是,您从哪里获得所有这些信息? 您在哪里找到触发事件的动作,制造商,模型等? 您可能必须使用多个资源。 您可以通过将摄像头插入计算机并运行`lsusb`来获得`IdVendor`和`idProduct`: ``` lsusb @@ -76,11 +99,16 @@ Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub ``` The webcam I’m using is made by HP, and you can only see one HP device in the list above. The `ID` gives you the manufacturer and the model numbers separated by a colon (`:`). If you have more than one device by the same manufacturer and not sure which is which, unplug the webcam, run `lsusb` again and check what's missing. 
+我正在使用的网络摄像头是由HP制造的,并且您在上面的列表中只能看到一台HP设备。 “ ID”为您提供制造商和型号,以冒号(`:`)分隔。 如果同一制造商提供的设备不止一个,并且不确定是哪个设备,请拔下网络摄像头,再次运行`lsusb`并检查缺少的内容。 OR... Unplug the webcam, wait a few seconds, run the command `udevadmin monitor --environment` and then plug the webcam back in again. When you do that with the HP webcam, you get: +要么... + +拔下网络摄像头,等待几秒钟,运行命令“ udevadmin monitor --environment”,然后重新插入网络摄像头。 使用HP网络摄像头进行操作时,您将获得: + ``` udevadmin monitor --environment UDEV [35776.495221] add /devices/pci0000:00/0000:00:1c.3/0000:04:00.0 @@ -124,25 +152,38 @@ XKBVARIANT= That may look like a lot to process, but, check this out: the `ACTION` field early in the list tells you what event just happened, i.e., that a device got added to the system. You can also see the name of the device spelled out on several of the lines, so you can be pretty sure that it is the device you are looking for. The output also shows the manufacturer's ID number (`ID_VENDOR_ID=03f0`) and the model number (`ID_VENDOR_ID=03f0`). +可能要处理很多事情,但是,请检查一下:列表前面的“ ACTION”字段告诉您刚刚发生了什么事件,即设备已添加到系统中。您还可以在几行中看到设备名称的拼写,因此可以确定它是您要查找的设备。输出还显示制造商的ID号(ID_VENDOR_ID = 03f0)和型号(ID_VENDOR_ID = 03f0)。 + This gives you three of the four values the condition part of the rule needs. You may be tempted to think that it a gives you the fourth, too, because there is also a line that says: +这为您提供了规则条件部分需要的四个值中的三个。您可能会想起它也给您第四,因为还有一行这样写: + ``` SUBSYSTEM=input ``` +``` +SUBSYSTEM =输入 +``` Be careful! Although it is true that a USB webcam is a device that provides input (as does a keyboard and a mouse), it is also belongs to the _usb_ subsystem, and several others. This means that your webcam gets added to several subsystems and looks like several devices. If you pick the wrong subsystem, your rule may not work as you want it to, or, indeed, at all. +小心!尽管USB网络摄像头确实是提供输入的设备(键盘和鼠标也是如此),但它也属于_usb_子系统和其他几个子系统。这意味着您的网络摄像头已添加到多个子系统,并且看起来像多个设备。如果选择了错误的子系统,则您的规则可能无法按您希望的那样工作,或者甚至根本无法工作。 So, the third thing you have to check is all the subsystems the webcam has got added to and pick the correct one. To do that, unplug your webcam again and run: +因此,您需要检查的第三件事是网络摄像头已添加到的所有子系统中,并选择了正确的子系统。为此,请再次拔下网络摄像头,然后运行: + ``` ls /dev/video* ``` This will show you all the video devices connected to the machine. If you are using a laptop, most come with a built-in webcam and it will probably show up as `/dev/video0`. Plug your webcam back in and run `ls /dev/video*` again. +这将向您显示连接到本机的所有视频设备。 如果您使用的是笔记本电脑,则大多数笔记本电脑都带有内置摄像头,它可能会显示为`/ dev / video0`。 重新插入网络摄像头,然后再次运行`ls / dev / video *`。 Now you should see one more video device (probably `/dev/video1`). +现在,您应该再看到一个视频设备(可能是“ / dev / video1”)。 Now you can find out all the subsystems it belongs to by running `udevadm info -a /dev/video1`: +现在,您可以通过运行`udevadm info -a / dev / video1`找出它所属的所有子系统: ``` udevadm info -a /dev/video1 @@ -166,20 +207,29 @@ and the attributes from one single parent device. ``` The output goes on for quite a while, but what you're interested is right at the beginning: `SUBSYSTEM=="video4linux"`. This is a line you can literally copy and paste right into your rule. The rest of the output (not shown for brevity) gives you a couple more nuggets, like the manufacturer and mode IDs, again in a format you can copy and paste into your rule. +输出持续了一段时间,但是您感兴趣的只是开始:`SUBSYSTEM ==“ video4linux”`。您可以按实际情况将这行复制并粘贴到规则中。输出的其余部分(为简便起见未显示)为您提供了更多的块,例如制造商和模式ID,它们的格式也可以复制并粘贴到规则中。 Now you have a way of identifying the device and what event should trigger the action univocally, it is time to tinker with the device. 
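The full `udevadm info -a` dump shown above is long, so in practice it can help to filter it down to the handful of keys the rule cares about before copying anything. This is just a convenience on top of the steps above; `/dev/video1` is the node used in this walkthrough, so substitute your own.

```
udevadm info -a /dev/video1 | grep -E 'SUBSYSTEM|idVendor|idProduct' | head -n 12
```

The first `SUBSYSTEM==...` match is the device's own entry (the one to copy into the rule); the `ATTRS{idVendor}` and `ATTRS{idProduct}` matches come from the USB parent and should agree with what `lsusb` reported.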
+现在,您有了一种识别设备的方式,什么事件应该明确触发该操作,现在该对设备进行修改了。 The next section in the rule, `SYMLINK+="mywebcam", TAG+="systemd", MODE="0666"` tells Udev to do three things: First, you want to create symbolic link from the device to (e.g. _/dev/video1_ ) to _/dev/mywebcam_. This is because you cannot predict what the system is going to call the device by default. When you have an in-built webcam and you hotplug a new one, the in-built webcam will usually be _/dev/video0_ while the external one will become _/dev/video1_. However, if you boot your computer with the external USB webcam plugged in, that could be reversed and the internal webcam can become _/dev/video1_ and the external one _/dev/video0_. What this is telling you is that, although your image-capturing script (which you will see later on) always needs to point to the external webcam device, you can't rely on it being _/dev/video0_ or _/dev/video1_. To solve this problem, you tell Udev to create a symbolic link which will never change in the moment the device is added to the _video4linux_ subsystem and you will make your script point to that. +规则的下一部分,`SYMLINK + =“ mywebcam”,TAG + =“ systemd”,MODE =“ 0666”`告诉Udev做三件事:首先,您要创建从设备到的符号链接(例如_ / dev / video1_)到_ / dev / mywebcam_。这是因为您无法预测默认情况下系统将调用什么设备。当您拥有内置摄像头并热插拔新摄像头时,内置摄像头通常为_ / dev / video0_,而外部摄像头通常为_ / dev / video1_。但是,如果您在插入外部USB网络摄像头的情况下引导计算机,则可能会相反,并且内部网络摄像头可能会变成_ / dev / video1_,而外部网络摄像头会变成_ / dev / video0_。这告诉您的是,尽管您的图像捕获脚本(稍后将看到)始终需要指向外部网络摄像头设备,但是您不能依靠它是_ / dev / video0_或_ / dev / video1_。为了解决这个问题,您告诉Udev创建一个符号链接,该链接在将设备添加到_video4linux_子系统的那一刻就不会改变,并且您将使脚本指向该链接。 The second thing you do is add `"systemd"` to the list of Udev tags associated with this rule. This tells Udev that the action that the rule will trigger will be managed by systemd, that is, it will be some sort of systemd service. +您要做的第二件事是将“ systemd”添加到与此规则关联的Udev标记列表中。这告诉Udev,该规则将触发的操作将由systemd管理,即它将是某种systemd服务。 Notice how in both cases you use `+=` operator. This adds the value to a list, which means you can add more than one value to `SYMLINK` and `TAG`. +注意在两种情况下如何使用“ + =”运算符。这会将值添加到列表中,这意味着您可以向“ SYMLINK”和“ TAG”添加多个值。 The `MODE` values, on the other hand, can only contain one value (hence you use the simple `=` assignment operator). What `MODE` does is tell Udev who can read from or write to the device. If you are familiar with `chmod` (and, if you are reading this, you should be), you will also be familiar of [how you can express permissions using numbers][9]. That is what this is: `0666` means " _give read and write privileges to the device to everybody_ ". +另一方面,“ MODE”值只能包含一个值(因此,您可以使用简单的“ =”赋值运算符)。 MODE的作用是告诉Udev谁可以读取或写入设备。如果您熟悉`chmod`(并且应该阅读),那么您还将熟悉[如何使用数字表示权限] [9]。这就是它的意思:“ 0666”的意思是“向所有人授予对设备的读写特权”。 At last, `ENV{SYSTEMD_WANTS}="webcam.service"` tells Udev what systemd service to run. 
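Once the rule has been saved and loaded (that is the next step below) and the camera re-plugged, it is easy to sanity-check that these assignments really took effect. This is only an optional verification sketch; `mywebcam` and `/dev/video1` are the names used in this article, so adjust them to your setup.

```
ls -l /dev/mywebcam                               # should be a symlink to the real videoN node
ls -l /dev/video*                                 # MODE="0666" shows up as crw-rw-rw-
udevadm info --query=symlink --name=/dev/video1   # lists the symlinks udev attached
```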
+最后,`ENV {SYSTEMD_WANTS} =“ webcam.service”`告诉Udev要运行什么systemd服务。 + Save this rule into file called _90-webcam.rules_ (or something like that) in _/etc/udev/rules.d_ and you can load it either by rebooting your machine, or by running: +将此规则保存到_ / etc / udev / rules.d_中名为_90-webcam.rules_(或类似名称)的文件中,您可以通过重新启动计算机或运行以下命令来加载它: ``` sudo udevadm control --reload-rules && udevadm trigger @@ -188,7 +238,7 @@ sudo udevadm control --reload-rules && udevadm trigger ## Service at Last The service the Udev rule triggers is ridiculously simple: - +Udev规则触发的服务非常简单: ``` # webcam.service @@ -198,8 +248,10 @@ ExecStart=/home/[user name]/bin/checkimage.sh ``` Basically, it just runs the _checkimage.sh_ script stored in your personal _bin/_ and pushes it the background. [This is something you saw how to do in prior installments][5]. It may seem something little, but just because it is called by a Udev rule, you have just created a special kind of systemd unit called a _device_ unit. Congratulations. +基本上,它只运行存储在您个人_bin / _中的_checkimage.sh_脚本并将其推入后台。 [这是您在先前的部分中看到的操作方法] [5]。 它看起来似乎很少,但是仅由于它是由Udev规则调用的,因此您刚刚创建了一种特殊的systemd单元,称为_device_ unit。 恭喜你 As for the _checkimage.sh_ script _webcam.service_ calls, there are several ways of grabbing an image from a webcam and comparing it to a prior one to check for changes (which is what _checkimage.sh_ does), but this is how I did it: +至于_checkimage.sh_脚本_webcam.service_调用,有几种方法可以从网络摄像头获取图像并将其与前一个图像进行比较以检查更改(_checkimage.sh_所做的工作),但这就是我的方法 它: ``` #!/bin/bash @@ -226,29 +278,36 @@ done ``` Start by using [MPlayer][10] to grab a frame ( _00000001.png_ ) from the webcam. Notice how we point `mplayer` to the `mywebcam` symbolic link we created in our Udev rule, instead of to `video0` or `video1`. Then you transfer the image to the _monitor/_ directory in your home directory. Then run an infinite loop that does the same thing again and again, but also uses [Image Magick's _compare_ tool][11] to see if there any differences between the last image captured and the one that is already in the _monitor/_ directory. +首先使用[MPlayer] [10]从摄像头抓取一个帧(_00000001.png_)。 注意,我们如何将mplayer指向在Udev规则中创建的mywebcam符号链接,而不是指向video0或video1。 然后,将映像传输到主目录中的_monitor / _目录。 然后运行一个无限循环,一次又一次地执行相同的操作,但是还使用[Image Magick的_compare_工具] [11]来查看最后捕获的图像与_monitor / _目录中的图像之间是否存在差异。 If the images are different, it means something has moved within the webcam's frame. The script overwrites the original image with the new image and continues comparing waiting for some more movement. +如果图像不同,则表示网络摄像头框架内已移动了某些东西。 该脚本将新图像覆盖原始图像,并继续比较以等待更多移动。 ### Plugged With all the bits and pieces in place, when you plug your webcam in, your Udev rule will be triggered and will start the _webcam.service_. The _webcam.service_ will execute _checkimage.sh_ in the background, and _checkimage.sh_ will start taking pictures every half a second. You will know because your webcam's LED will start flashing indicating every time it takes a snap. +一切零碎,将网络摄像头插入电源后,您的Udev规则将被触发并启动_webcam.service_。 _webcam.service_将在后台执行_checkimage.sh_,而_checkimage.sh_将每半秒开始拍照。 您会知道,因为网络摄像头的LED指示灯将开始闪烁,表明每次需要快照。 As always, if something goes wrong, run +与往常一样,如果出现问题,请运行 ``` systemctl status webcam.service ``` to check what your service and script are up to. +检查您的服务和脚本正在做什么。 ### Coming up You may be wondering: Why overwrite the original image? Surely you would want to see what's going on if the system detects any movement, right? 
You would be right, but as you will see in the next installment, leaving things as they are and processing the images using yet another type of systemd unit makes things nice, clean and easy. +您可能想知道:为什么要覆盖原始图像? 当然,如果系统检测到任何移动,您肯定想知道发生了什么,对吗? 您将是对的,但是如您在下一部分中将看到的那样,将它们保持原样,并使用另一种类型的systemd单元处理图像将使事情变得更好,干净和容易。 Just wait and see. +请稍等。 Learn more about Linux through the free ["Introduction to Linux" ][12]course from The Linux Foundation and edX. - +通过Linux基金会和edX的免费[[Linux简介]] [12]课程了解有关Linux的更多信息。 -------------------------------------------------------------------------------- via: https://www.linux.com/blog/intro-to-linux/2018/6/systemd-services-reacting-change From 02bfb14a066651c49f9a0b42e3f6137a11755803 Mon Sep 17 00:00:00 2001 From: "Xiaobin.Liu" Date: Mon, 4 May 2020 19:52:10 +0800 Subject: [PATCH 116/178] APL --- ...0200429 Drop PNG and JPG for your online images- Use WebP.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md b/sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md index d664a9fdfc..f4e6711933 100644 --- a/sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md +++ b/sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (lxbwolf) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From f07a211ab2f50ef80293164cf8752557f61a4c27 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 4 May 2020 22:03:06 +0800 Subject: [PATCH 117/178] PRF @qfzy1223 --- ...ngs to do After Installing Ubuntu 20.04.md | 170 +++++++++--------- 1 file changed, 85 insertions(+), 85 deletions(-) diff --git a/translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md b/translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md index a221ef6a34..e16ba06e54 100644 --- a/translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md +++ b/translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (qfzy1233) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (16 Things to do After Installing Ubuntu 20.04) @@ -10,29 +10,29 @@ 安装完 Ubuntu 20.04 后要做的 16 件事 ====== -_**以下是安装 Ubuntu 20.04 之后需要做的一些调整和事项,它将使你获得更流畅、更好的桌面 Linux 体验。**_ +> 以下是安装 Ubuntu 20.04 之后需要做的一些调整和事项,它将使你获得更流畅、更好的桌面 Linux 体验。 -[Ubuntu 20.04 LTS (长期支持版)带来了许多新的特性][1] 和观感上的变化。 如果你要安装 Ubuntu 20.04 ,让我向你展示一些推荐步骤便于你的使用。 +[Ubuntu 20.04 LTS(长期支持版)带来了许多新的特性][1]和观感上的变化。如果你要安装 Ubuntu 20.04,让我向你展示一些推荐步骤便于你的使用。 -### 安装完 Ubuntu 20.04 LTS “Focal Fossa”后要做的 16 件事 +### 安装完 Ubuntu 20.04 LTS “Focal Fossa” 后要做的 16 件事 ![][2] -我在这里要提到的步骤仅是我的建议。如果一些定制或调整不适合你的需要和兴趣,你可以忽略它们。 +我在这里提到的步骤仅是我的建议。如果一些定制或调整不适合你的需要和兴趣,你可以忽略它们。 同样的,有些步骤看起来很简单,但是对于一个 Ubuntu 新手来说是必要的。 -这里的一些建议适用于启用 GNOME 作为默认桌面Ubuntu 20.04。所以请检查[Ubuntu版本][3]和[桌面环境][4]。 +这里的一些建议适用于启用 GNOME 作为默认桌面 Ubuntu 20.04,所以请检查 [Ubuntu 版本][3]和[桌面环境][4]。 以下列表便是安装了代号为 Focal Fossa 的 Ubuntu 20.04 LTS 之后要做的事。 -#### 1\. 
通过更新和启用额外的 repos 来准备您的系统 +#### 1、通过更新和启用额外的软件仓库来准备你的系统 -安装Ubuntu或任何其他Linux发行版之后,你应该做的第一件事就是更新它。Linux 的可用包数据库工作于本地。这个缓存需要同步以便你能够安装任何软件。 +安装 Ubuntu 或任何其他 Linux 发行版之后,你应该做的第一件事就是更新它。Linux 的运作是建立在本地的可用软件包数据库上,而这个缓存需要同步以便你能够安装软件。 -升级Ubuntu非常简单。你可以运行软件更新从菜单( 按 Win 键并搜索软件更新): +升级 Ubuntu 非常简单。你可以运行软件更新从菜单(按 `Super` 键并搜索 “software updater”): -![Ubuntu 20.04 的软件升级中心][5] +![Ubuntu 20.04 的软件升级器][5] 你也可以在终端使用以下命令更新你的系统: @@ -40,111 +40,111 @@ _**以下是安装 Ubuntu 20.04 之后需要做的一些调整和事项,它将 sudo apt update && sudo apt upgrade ``` -接下来,您应该确保启用了[universe和multiverse存储库][6]。使用这些存储库,你可以访问更多的软件。我还推荐阅读关于[Ubuntu软件库][6]的书籍,以了解它背后的基本概念。 +接下来,你应该确保启用了 [universe(宇宙)和 multiverse(多元宇宙)软件仓库][6]。使用这些软件仓库,你可以访问更多的软件。我还推荐阅读关于 [Ubuntu 软件仓库][6]的文章,以了解它背后的基本概念。 -搜索软件和放大器;更新菜单: +在菜单中搜索 “Software & Updates”: ![软件及更新设置项][7] -请务必选中存储库前面的方框: +请务必选中软件仓库前面的勾选框: -![启用额外的存储库][8] +![启用额外的软件仓库][8] -#### 2\. 安装媒体解码器来播放MP3、MPEG4和其他格式媒体文件 +#### 2、安装媒体解码器来播放 MP3、MPEG4 和其他格式媒体文件 -如果你想播放媒体文件,如MP3, MPEG4, AVI等,你需要安装媒体解码器。由于不同国家的版权问题,Ubuntu在默认情况下不会安装它。 +如果你想播放媒体文件,如 MP3、MPEG4、AVI 等,你需要安装媒体解码器。由于各个国家的版权问题, Ubuntu 在默认情况下不会安装它。 -作为个人,你可以很容易地安装这些媒体编解码器[使用 Ubuntu 额外安装包][9]。这将安装媒体编解码器,Adobe Flash播放器和[微软 True Type 字体在您的Ubuntu系统][10]。 +作为个人,你可以[使用 Ubuntu Restricted Extra 安装包][9]很轻松地安装这些媒体编解码器。这将[在你的 Ubuntu 系统安装][10]媒体编解码器、Adobe Flash 播放器和微软 True Type 字体等。 -你可以通过[点击这个链接][11](它将要求打开软件中心)来安装它,或者使用以下命令: +你可以通过[点击这个链接][11]来安装它(它会要求在软件中心打开它),或者使用以下命令: ``` sudo apt install ubuntu-restricted-extras ``` -如果遇到EULA或许可证界面,请记住使用tab键在选项之间进行选择,然后按enter键确认你的选择。 +如果遇到 EULA 或许可证界面,请记住使用 `tab` 键在选项之间进行选择,然后按回车键确认你的选择。 -![按tab键选择OK并按enter键][12] +![按 tab 键选择 OK 并按回车键][12] -#### 3\. 从软件中心或网络上安装软件 +#### 3、从软件中心或网络上安装软件 -现在已经设置了存储库并更新了包缓存,应该开始安装所需的软件了。 +现在已经设置好了软件仓库并更新了软件包缓存,应该开始安装所需的软件了。 -有几种方法可以在Ubuntu中安装应用程序。最简单和正式的方法是使用软件中心。 +在 Ubuntu 中安装应用程序有几种方法,最简单和正式的方法是使用软件中心。 ![Ubuntu 软件中心][14] -如果你想要一些关于软件的建议,请参考这个扩展的[不同用途的Ubuntu应用程序列表][15]。 +如果你想要一些关于软件的建议,请参考这个[丰富的各种用途的 Ubuntu 应用程序列表][15]。 -一些软件供应商提供 .deb 文件来方便地安装他们的应用程序。你可以从他们的网站获得 deb 文件。例如,要[在 Ubuntu 上安装谷歌 Chrome ][16],你可以从它的网站上获得 deb 文件,双击它开始安装。 +一些软件供应商提供了 .deb 文件来方便地安装他们的应用程序。你可以从他们的网站获得 .deb 文件。例如,要[在 Ubuntu 上安装谷歌 Chrome][16],你可以从它的网站上获得 .deb 文件,双击它开始安装。 -#### 4\. 享受 Steam Proton 和 GameModeEnjoy 上的游戏 +#### 4、享受 Steam Proton 和 GameModeEnjoy 上的游戏 -[ Linux 上的游戏][17] 已经有了长足的发展。你不受限于自带的少数游戏。你可以[在 Ubuntu 上安装 Steam ][18]并享受许多游戏。 +[在 Linux 上进行游戏][17]已经有了长足的发展。你不再受限于自带的少数游戏。你可以[在 Ubuntu 上安装 Steam][18]并享受许多游戏。 -[Steam 新的 Proton 项目][19]可以让你在Linux上玩许多只适用于windows的游戏。除此之外,Ubuntu 20.04还默认安装了[Feral Interactive的游戏][20]。 +[Steam 新的 Proton 项目][19]可以让你在 Linux 上玩许多只适用于 Windows 的游戏。除此之外,Ubuntu 20.04 还默认安装了 [Feral Interactive 的 GameMode][20]。 -游戏模式会自动调整Linux系统的性能,使游戏具有比其他后台进程更高的优先级。 +GameMode 会自动调整 Linux 系统的性能,使游戏具有比其他后台进程更高的优先级。 -这意味着一些支持游戏模式的游戏(如[古墓丽影崛起][21])在 Ubuntu 上的性能应该有所提高。 +这意味着一些支持 GameMode 的游戏(如[古墓丽影·崛起][21])在 Ubuntu 上的性能应该有所提高。 -#### 5\. 管理自动更新(适用于进阶和专家) +#### 5、管理自动更新(适用于进阶用户和专家) -最近,Ubuntu 已经开始自动下载和安装对你的系统至关重要的安全更新。这是一个安全功能,作为一个普通用户,你应该让它保持默认, +最近,Ubuntu 已经开始自动下载并安装对你的系统至关重要的安全更新。这是一个安全功能,作为一个普通用户,你应该让它保持默认开启。 但是,如果你喜欢自己进行配置更新,而这个自动更新经常导致你[“无法锁定管理目录”错误][22],也许你可以改变自动更新行为。 -你可以选择更新是提示,以便它通知你的安全更新是否可用,而不是自动安装。 +你可以选择“立即显示”,这样一有安全更新就会立即通知你,而不是自动安装。 ![管理自动更新设置][23] -#### 6\. 
控制电脑的自动挂起和屏幕锁定 +#### 6、控制电脑的自动挂起和屏幕锁定 -如果你在笔记本电脑上使用Ubuntu 20.04,那么你可能需要注意一些电源和屏幕锁定设置。 +如果你在笔记本电脑上使用 Ubuntu 20.04,那么你可能需要注意一些电源和屏幕锁定设置。 -如果你的笔记本电脑是电池模式,Ubuntu会在20分钟不工作后休眠系统。这样做是为了节省电池电量。就我个人而言,我不喜欢它,因此我禁用了它。 +如果你的笔记本电脑处于电池模式,Ubuntu 会在 20 分钟不活动后休眠系统。这样做是为了节省电池电量。就我个人而言,我不喜欢它,因此我禁用了它。 -类似地,如果您离开系统几分钟,它会自动锁定屏幕。我也不喜欢这种行为,所以我宁愿禁用它。 +类似地,如果你离开系统几分钟,它会自动锁定屏幕。我也不喜欢这种行为,所以我宁愿禁用它。 -![Ubuntu 20.04的电源设置][24] +![Ubuntu 20.04 的电源设置][24] -#### 7\. 享受夜间模式 +#### 7、享受夜间模式 -一个[谈论最多的 Ubuntu 20.04 特性][25]是夜间模式。你可以通过进入设置并在外观部分中选择它来启用夜间模式。 +[Ubuntu 20.04 中最受关注的特性][25]之一是夜间模式。你可以通过进入设置并在外观部分中选择它来启用夜间模式。 -![开启夜间主题 Ubuntu ][26] +![开启夜间主题 Ubuntu][26] -你可能需要做一些额外的调整来启用 Ubuntu 20.04 的深度夜间模式。 +你可能需要做一些[额外的调整来获得完整的 Ubuntu 20.04 夜间模式][27]。 -#### 8\. 控制桌面图标和启动程序 +#### 8、控制桌面图标和启动程序 -如果你想要一个最简的桌面,你可以禁用桌面上的图标。您还可以从左侧禁用启动程序,并在顶部面板中禁用软件状态栏。 +如果你想要一个最简的桌面,你可以禁用桌面上的图标。你还可以从左侧禁用启动程序,并在顶部面板中禁用软件状态栏。 -所有这些都可以通过默认的新 GNOME 扩展来控制。 +所有这些都可以通过默认的新 GNOME 扩展来控制,该程序默认情况下已经可用。 -![][28] +![禁用 Ubuntu 20 04 的 Dock][28] -顺便说一下,你也可以通过设置-外观来改变启动栏的位置到底部或者右边。 +顺便说一下,你也可以通过“设置”->“外观”来将启动栏的位置改变到底部或者右边。 -#### 9\. 使用emojis(表情)和特殊字符,或从搜索中禁用它 +#### 9、使用表情符和特殊字符,或从搜索中禁用它 -Ubuntu提供了一个使用 emojis 或表情符号的简单方法。在默认情况下,有一个专用的应用程序叫做 Characters。它可以给你基本表情符号的[Unicode][29]码。 +Ubuntu 提供了一个使用表情符号的简单方法。在默认情况下,有一个专用的应用程序叫做“字符”。它基本上可以为你提供表情符号的 [Unicode][29]。 -不仅是表情符号,你还可以使用它来获得法语、德语、俄语和拉丁语字符的 unicode 。单击符号你可以复制 unicode ,当你粘贴这段代码时,你所选择的符号便被插入。 +不仅是表情符号,你还可以使用它来获得法语、德语、俄语和拉丁语字符的 unicode。单击符号你可以复制 unicode,当你粘贴该代码时,你所选择的符号便被插入。 -! [Emoji Ubuntu] [30] +![Ubuntu 表情符][30] 你也能在桌面搜索中找到这些特殊的字符和表情符号。也可以从搜索结果中复制它们。 -![Emojis 出现在桌面搜索中][31] +![表情符出现在桌面搜索中][31] 如果你不想在搜索结果中看到它们,你应该禁用搜索功能对它们的访问。下一节将讨论如何做到这一点。 -#### 10\. 掌握桌面搜索 +#### 10、掌握桌面搜索 -GNOME桌面拥有强大的搜索功能。大多数人使用它来搜索已安装的应用程序,但它不仅限于此。 +GNOME 桌面拥有强大的搜索功能,大多数人使用它来搜索已安装的应用程序,但它不仅限于此。 -按超级键(Win键)并搜索一些东西。它将显示与搜索词匹配的任何应用程序,然后是系统设置和软件中心提供的匹配应用程序。 +按 `Super` 键并搜索一些东西,它将显示与搜索词匹配的任何应用程序,然后是系统设置和软件中心提供的匹配应用程序。 ![桌面搜索][32] @@ -152,47 +152,47 @@ GNOME桌面拥有强大的搜索功能。大多数人使用它来搜索已安装 ![Ubuntu搜索的快速计算][33] -你可以通过进入设置来控制搜索的内容和顺序。 +你可以进入“设置”中来控制可以搜索的内容和顺序。 ![][34] -#### 11\. 使用夜灯功能,减少夜间眼睛疲劳 +#### 11、使用夜灯功能,减少夜间眼睛疲劳 如果你在晚上使用电脑或智能手机,你应该使用夜灯功能来减少眼睛疲劳。我觉得这很有帮助。 夜灯的特点是在屏幕上增加了一种黄色的色调,比白光少了一些挤压感。 -你可以在设置->显示并切换到夜灯选项卡。你可以根据自己的喜好设置“黄色”。 +你可以在“设置”->“显示”切换到夜灯选项卡来开启夜光功能。你可以根据自己的喜好设置“黄度”。 ![夜灯功能][35] -#### 12\.使用 2K/4K 显示器? 使用分辨率缩放得到更大的图标和字体 +#### 12、使用 2K/4K 显示器?使用分辨率缩放得到更大的图标和字体 如果你觉得图标、字体、文件夹在你的高分辨率屏幕上看起来都太小了,你可以利用分辨率缩放。 -启用分级缩放可以让你有更多的选项来从100%增加到200%。你可以选择适合自己喜好的缩放尺寸。 +启用分辨率缩放可以让你有更多的选项来从 100% 增加到 200%。你可以选择适合自己喜好的缩放尺寸。 -![启用高分缩放从设置->显示][36] +![在设置->显示中启用高分缩放][36] -#### 13\. 探索GNOME扩展以扩展GNOME桌面的可用性 +#### 13、探索 GNOME 扩展功能以扩展 GNOME 桌面可用性 -GNOME桌面有称为扩展的小插件或附加组件。你应该[学习使用 GNOM E扩展][37]来扩展系统的可用性。 +GNOME 桌面有称为“扩展”的小插件或附加组件。你应该[学会使用 GNOME 扩展][37]来扩展系统的可用性。 -如下图所示,天气扩展顶部面板中显示了天气信息。不起眼但十分有用。您也可以在这里查看一些[最佳 GNOME 扩展][38]。不要全部安装,只使用那些对你有用的。 +如下图所示,天气扩展顶部面板中显示了天气信息。不起眼但十分有用。你也可以在这里查看一些[最佳 GNOME 扩展][38]。不需要全部安装,只使用那些对你有用的。 -![天气 扩展][39] +![天气扩展][39] -#### 14\.启用“勿扰”模式,专注于工作 +#### 14、启用“勿扰”模式,专注于工作 如果你想专注于工作,禁用桌面通知会很方便。你可以轻松地启用“勿扰”模式,并静音所有通知。 ![启用“请勿打扰”清除桌面通知][40] -这些通知仍然会在消息栏中,以便您以后可以阅读它们,但是它们不会在桌面上弹出。 +这些通知仍然会在消息栏中,以便你以后可以阅读它们,但是它们不会在桌面上弹出。 -#### 15\. 清理你的系统 +#### 15、清理你的系统 -这是你安装Ubuntu后不需要马上做的事情。但是记住它会对你有帮助。 +这是你安装 Ubuntu 后不需要马上做的事情。但是记住它会对你有帮助。 随着时间的推移,你的系统将有大量不再需要的包。你可以用这个命令一次性删除它们: @@ -202,23 +202,23 @@ sudo apt autoremove 还有其他[清理 Ubuntu 以释放磁盘空间的方法][41],但这是最简单和最安全的。 -#### 16\. 
根据您的喜好调整和定制 GNOME 桌面 +#### 16、根据你的喜好调整和定制 GNOME 桌面 我强烈推荐[安装 GNOME 设置工具][42]。这将让你可以通过额外的设置来进行定制。 ![Gnome 设置工具][43] -比如,你可以[以百分比形式显示电池容量][44],[修正在touchpad中右键问题][45],改变 Shell 主题,改变鼠标指针速度,显示日期和星期数,改变应用程序窗口行为等。 +比如,你可以[以百分比形式显示电池容量][44]、[修正在触摸板右键问题][45]、改变 Shell 主题、改变鼠标指针速度、显示日期和星期数、改变应用程序窗口行为等。 -定制是没有尽头的,我可能仅使用了它的一小部分功能。这就是为什么我推荐[阅读这些文章]关于[自定义GNOME桌面][46]的[42]。 +定制是没有尽头的,我可能仅使用了它的一小部分功能。这就是为什么我推荐[阅读这些][42]关于[自定义 GNOME 桌面][46]的文章。 -你也可以[在Ubuntu中安装新主题][47],不过就我个人而言,我喜欢这个版本的默认主题。这是我第一次在Ubuntu发行版中使用默认的图标和主题。 +你也可以[在 Ubuntu 中安装新主题][47],不过就我个人而言,我喜欢这个版本的默认主题。这是我第一次在 Ubuntu 发行版中使用默认的图标和主题。 #### 安装 Ubuntu 之后你会做什么? -如果你是Ubuntu的初学者,我建议你[阅读这一系列Ubuntu教程][48]开始学习。 +如果你是 Ubuntu 的初学者,我建议你[阅读这一系列 Ubuntu 教程][48]开始学习。 -这就是我的建议。安装Ubuntu之后你要做什么?分享你最喜欢的东西,我可能根据你的建议来更新这篇文章。 +这就是我的建议。安装 Ubuntu 之后你要做什么?分享你最喜欢的东西,我可能根据你的建议来更新这篇文章。 -------------------------------------------------------------------------------- @@ -227,31 +227,31 @@ via: https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/ 作者:[Abhishek Prakash][a] 选题:[lujun9972][b] 译者:[qfzy1233](https://github.com/qfzy1233) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://itsfoss.com/author/abhishek/ [b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/ubuntu-20-04-release-features/ +[1]: https://linux.cn/article-12146-1.html [2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/things-to-do-after-installing-ubuntu-20-04.jpg?ssl=1 -[3]: https://itsfoss.com/how-to-know-ubuntu-unity-version/ -[4]: https://itsfoss.com/find-desktop-environment/ +[3]: https://linux.cn/article-9872-1.html +[4]: https://linux.cn/article-12124-1.html [5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/software-updater-ubuntu-20-04.jpg?ssl=1 [6]: https://itsfoss.com/ubuntu-repositories/ [7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/software-updates-settings-ubuntu-20-04.jpg?ssl=1 [8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/extra-repositories-ubuntu-20.jpg?ssl=1 -[9]: https://itsfoss.com/install-media-codecs-ubuntu/ -[10]: https://itsfoss.com/install-microsoft-fonts-ubuntu/ -[11]: https://ubuntu-restricted-extras/ +[9]: https://linux.cn/article-11906-1.html +[10]: https://linux.cn/article-12074-1.html +[11]: //ubuntu-restricted-extras/ [12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/02/installing_ubuntu_restricted_extras.jpg?ssl=1 [13]: https://itsfoss.com/remove-install-software-ubuntu/ [14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/software-center-ubuntu-20.png?resize=800%2C509&ssl=1 [15]: https://itsfoss.com/best-ubuntu-apps/ [16]: https://itsfoss.com/install-chrome-ubuntu/ -[17]: https://itsfoss.com/linux-gaming-guide/ +[17]: https://linux.cn/article-7316-1.html [18]: https://itsfoss.com/install-steam-ubuntu-linux/ -[19]: https://itsfoss.com/steam-play/ +[19]: https://linux.cn/article-10054-1.html [20]: https://github.com/FeralInteractive/gamemode [21]: https://en.wikipedia.org/wiki/Rise_of_the_Tomb_Raider [22]: https://itsfoss.com/could-not-get-lock-error/ @@ -259,7 +259,7 @@ via: https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/ [24]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/power-settings-ubuntu-20-04.png?fit=800%2C591&ssl=1 [25]: https://www.youtube.com/watch?v=lpq8pm_xkSE [26]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/enable-dark-theme-ubuntu.png?ssl=1 -[27]: https://itsfoss.com/dark-mode-ubuntu/ +[27]: 
https://linux.cn/article-12098-1.html [28]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/disable-dock-ubuntu-20-04.png?ssl=1 [29]: https://en.wikipedia.org/wiki/List_of_Unicode_characters [30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/emoji-ubuntu.jpg?ssl=1 From 93b384eaa29d5444203f9c5cbf354acee5fe4a42 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 4 May 2020 22:04:01 +0800 Subject: [PATCH 118/178] PUB @qfzy1223 https://linux.cn/article-12183-1.html --- .../20200424 16 Things to do After Installing Ubuntu 20.04.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200424 16 Things to do After Installing Ubuntu 20.04.md (99%) diff --git a/translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md b/published/20200424 16 Things to do After Installing Ubuntu 20.04.md similarity index 99% rename from translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md rename to published/20200424 16 Things to do After Installing Ubuntu 20.04.md index e16ba06e54..ded221ab6c 100644 --- a/translated/tech/20200424 16 Things to do After Installing Ubuntu 20.04.md +++ b/published/20200424 16 Things to do After Installing Ubuntu 20.04.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (qfzy1233) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12183-1.html) [#]: subject: (16 Things to do After Installing Ubuntu 20.04) [#]: via: (https://itsfoss.com/things-to-do-after-installing-ubuntu-20-04/) [#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) From 894a99a02162444c101b9d653bf24f94c1c57ddc Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Mon, 4 May 2020 22:42:37 +0800 Subject: [PATCH 119/178] 20180612 --- ...12 Systemd Services- Reacting to Change.md | 106 +++++++++--------- 1 file changed, 52 insertions(+), 54 deletions(-) diff --git a/sources/tech/20180612 Systemd Services- Reacting to Change.md b/sources/tech/20180612 Systemd Services- Reacting to Change.md index 740b695f08..bdb7d7c44c 100644 --- a/sources/tech/20180612 Systemd Services- Reacting to Change.md +++ b/sources/tech/20180612 Systemd Services- Reacting to Change.md @@ -4,54 +4,53 @@ Systemd Services: Reacting to Change Systemd服务:响应变更 ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/webcam.png?itok=zzYUs5VK) [I have one of these Compute Sticks][1] (Figure 1) and use it as an all-purpose server. It is inconspicuous and silent and, as it is built around an x86 architecture, I don't have problems getting it to work with drivers for my printer, and that’s what it does most days: it interfaces with the shared printer and scanner in my living room. -[我有其中一个计算棒] [1](图1),并将其用作通用服务器。 它不起眼且安静,并且由于它是基于x86架构构建的,因此我可以将其与打印机驱动程序配合使用时没有任何问题,这就是大多数日子所做的事情:它与我的共享打印机和扫描仪接口 客厅。 +[我有一个这样的电脑棒][1](图1),并将其用作通用服务器。它很小且安静,由于它是基于x86架构,因此我为我的打印机安装驱动没有任何问题,而且这就是它大多数时候干的事:与客厅的共享打印机和扫描仪通信。 ![ComputeStick][3] An Intel ComputeStick. Euro coin for size. -英特尔计算棒。 大小的欧洲硬币。 +一个英特尔电脑棒。欧元硬币大小。 [Used with permission][4] Most of the time it is idle, especially when we are out, so I thought it would be good idea to use it as a surveillance system. The device doesn't come with its own camera, and it wouldn't need to be spying all the time. 
I also didn't want to have to start the image capturing by hand because this would mean having to log into the Stick using SSH and fire up the process by writing commands in the shell before rushing out the door. -大多数情况下,它处于空闲状态,尤其是当我们外出时,因此我认为将其用作监视系统是个好主意。 该设备没有自带的摄像头,也不需要一直监视。 我也不想手动开始图像捕获,因为这意味着必须使用SSH登录到Stick,并在冲出门之前在shell中编写命令来启动该过程。 +大多数时候,它都闲着,尤其是当我们外出时,因此我认为用它作监视系统是个好主意。该设备没有自带的摄像头,也不需要一直监视。我也不想手动启动图像捕获,因为这样就意味着在出门前必须通过SSH登录,并在shell中编写命令来启动该进程。 So I thought that the thing to do would be to grab a USB webcam and have the surveillance system fire up automatically just by plugging it in. Bonus points if the surveillance system fired up also after the Stick rebooted, and it found that the camera was connected. -因此,我认为要做的就是抓住USB网络摄像头,然后仅通过将其插入即可自动启动监视系统。如果在Stick重新启动后监视系统也启动,则奖励积分。 连接的。 +因此,我以为应该这么做:抓住USB摄像头,然后只需插入它即可自动启动监视系统。如果Stick重启后发现连接了摄像头也启动监视系统就更加分了。 In prior installments, we saw that [systemd services can be started or stopped by hand][5] or [when certain conditions are met][6]. Those conditions are not limited to when the OS reaches a certain state in the boot up or powerdown sequence but can also be when you plug in new hardware or when things change in the filesystem. You do that by combining a Udev rule with a systemd service. -在先前的文章中,我们看到[可以手动启动或停止系统服务] [5]或[在满足某些条件时] [6]。 这些条件不仅限于操作系统在启动或关机顺序中达到某种状态时,还可以在您插入新硬件或文件系统发生变化时进行。 您可以通过将Udev规则与systemd服务相结合来实现。 +在先前的文章中,我们看到[systemd服务可以手动启动或停止][5]或[在满足某些条件时][6]。这些条件不限于操作系统在启动或关机时序中达到某种状态,还可以在您插入新硬件或文件系统发生变化时进行。您可以通过将Udev规则与systemd服务结合起来实现。 -### Hotplugging with Udev +### Hotplugging with Udev 有Udev(支持)的热插拔 Udev rules live in the _/etc/udev/rules_ directory and are usually a single line containing _conditions_ and _assignments_ that lead to an _action_. +Udev规则位于 _/etc/udev/rules_ 目录中,通常是由导致一个 _动作(action)_ 的 _条件(conditions)_ 和 _赋值(assignments)_ 的单行语句来描述。 That was a bit cryptic. Let's try again: -Udev规则位于_ / etc / udev / rules_目录中,通常是包含_conditions_和_assignments_的单行,它们导致_action_。 - -那有点神秘。让我们再试一次: +有点神秘。让我们再试一次: Typically, in a Udev rule, you tell systemd what to look for when a device is connected. For example, you may want to check if the make and model of a device you just plugged in correspond to the make and model of the device you are telling Udev to wait for. Those are the _conditions_ mentioned earlier. -通常,在Udev规则中,您告诉systemd连接设备时要查找什么。例如,您可能要检查刚插入的设备的品牌和型号是否与您要让Udev等待的设备的品牌和型号相对应。这些是前面提到的条件。 +通常,在Udev规则中,您告诉systemd当连接一个设备时需要查看什么信息。例如,您可能想检查刚插入的设备的品牌和型号是否与您让Udev等待的设备的品牌和型号相对应。这些就是前面提到的条件。 Then you may want to change some stuff so you can use the device easily later. An example of that would be to change the read and write permissions to a device: if you plug in a USB printer, you're going to want users to be able to read information from the printer (the user's printing app would want to know the model, make, and whether it is ready to receive print jobs or not) and write to it, that is, send stuff to print. Changing the read and write permissions for a device is done using one of the _assignments_ you read about earlier. 
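As a standalone illustration of that kind of permission assignment (separate from the webcam rule this article builds up), a rule that does nothing but open up access to an imaginary USB printer could look like the following. The vendor and product IDs are made-up placeholders and the group is just a common choice on desktop distributions, so treat it as a sketch rather than something to copy verbatim.

```
# /etc/udev/rules.d/99-example-printer.rules  (illustrative placeholders only)
SUBSYSTEM=="usb", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="abcd", MODE="0664", GROUP="lp"
```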
- -然后,您可能需要更改一些内容,以便以后可以轻松使用该设备。例如,更改对设备的读写权限:如果插入USB打印机,您将希望用户能够从打印机中读取信息(用户的打印应用程序希望了解模型,制造商,以及是否准备好接受打印作业)并向其写入内容,即发送要打印的内容。更改设备的读写权限是使用您之前阅读的_assignments_之一完成的。 +然后,您可能想要更改一些内容,以便以后可以轻松使用该设备。例如,更改设备的读写权限:如果插入USB打印机,您将希望用户能够从打印机读取信息(用户的打印应用程序需要知道其模型,制造商,以及是否准备好接受打印作业)并向其写入内容,即发送要打印的内容。更改设备的读写权限是通过您之前阅读的 _赋值(assignments)_ 之一完成的。 Finally, you will probably want the system to do something when the conditions mentioned above are met, like start a backup application to copy important files when a certain external hard disk drive is plugged in. That is an example of an _action_ mentioned above. - -最后,您可能希望系统在满足上述条件时执行某些操作,例如在插入某个外部硬盘驱动器时启动备份应用程序以复制重要文件。这就是上述_action_的示例。 +最后,您可能希望系统在满足上述条件时执行某些动作,例如在插入某个外接硬盘时启动备份程序以复制重要文件。这就是上面提到的 _动作(action)_ 的例子。 With that in mind, ponder this: +了解这些之后, 来看看以下几点: ``` ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207", SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", ENV{SYSTEMD_WANTS}="webcam.service" ``` -The first part of the rule, +The first part of the rule, +规则的第一部分, ``` ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", @@ -59,32 +58,32 @@ ATTRS{idProduct}=="e207" [etc... ] ``` shows the conditions that the device has to meet before doing any of the other stuff you want the system to do. The device has to be added (`ACTION=="add"`) to the machine, it has to be integrated into the `video4linux` subsystem. To make sure the rule is applied only when the correct device is plugged in, you have to make sure Udev correctly identifies the manufacturer (`ATTRS{idVendor}=="03f0"`) and a model (`ATTRS{idProduct}=="e207"`) of the device. -显示在执行系统要执行的其他任何操作之前设备必须满足的条件。 必须将设备添加到(ACTION ==“ add”)机器上,并且必须将其集成到video4linux子系统中。 为了确保仅在插入正确的设备时才应用该规则,您必须确保Udev正确标识制造商(`ATTRS {idVendor} ==“ 03f0”`)和型号(`ATTRS {idProduct} == 设备的“ e207”`)。 +表明了执行您想让系统执行的其他动作之前设备必须满足的条件。设备必须被添加到(`ACTION=="add"`)机器上,并且必须添加到 `video4linux` 子系统中。为了确保仅在插入正确的设备时才应用该规则,您必须确保Udev正确识别设备的制造商(`ATTRS{idVendor}=="03f0"`)和型号(`ATTRS{idProduct}=="e207"`)。 In this case, we're talking about this device (Figure 2): - -在这种情况下,我们正在讨论此设备(图2): +在本例中,我们讨论的是这个设备(图2): ![webcam][8] The HP webcam used in this experiment. +这个试验使用的是HP的摄像头。 [Used with permission][4] Notice how you use `==` to indicate that these are a logical operation. You would read the above snippet of the rule like this: -注意如何使用“ ==”来表示这些是逻辑操作。 您将阅读以下规则的摘要: +注意怎样用`==`来表示这是一个逻辑操作。您应该像这样阅读上面的简要规则: ``` if the device is added and the device controlled by the video4linux subsystem and the manufacturer of the device is 03f0 and the model is e207, then... ``` ``` -如果添加了设备并且该设备由video4linux子系统控制 -而设备的制造商是03f0,型号是e207,则... +如果添加了一个设备并且该设备由video4linux子系统控制 +而且该设备的制造商是03f0,型号是e207,那么... ``` But where do you get all this information? Where do you find the action that triggers the event, the manufacturer, model, and so on? You will probably have to use several sources. The `IdVendor` and `idProduct` you can get by plugging the webcam into your machine and running `lsusb`: -但是,您从哪里获得所有这些信息? 您在哪里找到触发事件的动作,制造商,模型等? 您可能必须使用多个资源。 您可以通过将摄像头插入计算机并运行`lsusb`来获得`IdVendor`和`idProduct`: +但是,您从哪里获取的这些信息? 您在哪里找到触发事件的动作,制造商,模型等等?您可能必须使用多个来源。您可以通过将摄像头插入机器并运行`lsusb`来获得`IdVendor`和`idProduct`: ``` lsusb @@ -99,15 +98,15 @@ Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub ``` The webcam I’m using is made by HP, and you can only see one HP device in the list above. The `ID` gives you the manufacturer and the model numbers separated by a colon (`:`). 
If you have more than one device by the same manufacturer and not sure which is which, unplug the webcam, run `lsusb` again and check what's missing. -我正在使用的网络摄像头是由HP制造的,并且您在上面的列表中只能看到一台HP设备。 “ ID”为您提供制造商和型号,以冒号(`:`)分隔。 如果同一制造商提供的设备不止一个,并且不确定是哪个设备,请拔下网络摄像头,再次运行`lsusb`并检查缺少的内容。 +我用的摄像头是HP的,您在上面的列表中只能看到一个HP设备。`ID`告诉了制造商和型号,它们以冒号(`:`)分隔。如果您有同一制造商的多个设备,不确定哪个是哪个设备,请拔下摄像头,再次运行`lsusb`, 看看少了什么。 OR... Unplug the webcam, wait a few seconds, run the command `udevadmin monitor --environment` and then plug the webcam back in again. When you do that with the HP webcam, you get: -要么... +或者... -拔下网络摄像头,等待几秒钟,运行命令“ udevadmin monitor --environment”,然后重新插入网络摄像头。 使用HP网络摄像头进行操作时,您将获得: +拔下摄像头,等待几秒钟,运行命令`udevadmin monitor --environment`,然后重新插入摄像头。当您使用的是HP摄像头时,您将看到: ``` udevadmin monitor --environment @@ -152,38 +151,35 @@ XKBVARIANT= That may look like a lot to process, but, check this out: the `ACTION` field early in the list tells you what event just happened, i.e., that a device got added to the system. You can also see the name of the device spelled out on several of the lines, so you can be pretty sure that it is the device you are looking for. The output also shows the manufacturer's ID number (`ID_VENDOR_ID=03f0`) and the model number (`ID_VENDOR_ID=03f0`). -可能要处理很多事情,但是,请检查一下:列表前面的“ ACTION”字段告诉您刚刚发生了什么事件,即设备已添加到系统中。您还可以在几行中看到设备名称的拼写,因此可以确定它是您要查找的设备。输出还显示制造商的ID号(ID_VENDOR_ID = 03f0)和型号(ID_VENDOR_ID = 03f0)。 +可能看起来有很多信息要处理,但是,看一下这个:列表前面的`ACTION`字段, 它告诉您刚刚发生了什么事件,即一个设备被添加到系统中。您还可以在几行中看到设备名称的拼写,因此可以非常确定它就是您要找的设备。输出里还显示了制造商的ID(`ID_VENDOR_ID = 03f0`)和型号(`ID_VENDOR_ID = 03f0`)。 This gives you three of the four values the condition part of the rule needs. You may be tempted to think that it a gives you the fourth, too, because there is also a line that says: -这为您提供了规则条件部分需要的四个值中的三个。您可能会想起它也给您第四,因为还有一行这样写: +这为您提供了规则条件部分需要的四个值中的三个。您可能也会想到它还给了您第四个,因为还有一行这样写道: ``` SUBSYSTEM=input ``` -``` -SUBSYSTEM =输入 -``` Be careful! Although it is true that a USB webcam is a device that provides input (as does a keyboard and a mouse), it is also belongs to the _usb_ subsystem, and several others. This means that your webcam gets added to several subsystems and looks like several devices. If you pick the wrong subsystem, your rule may not work as you want it to, or, indeed, at all. -小心!尽管USB网络摄像头确实是提供输入的设备(键盘和鼠标也是如此),但它也属于_usb_子系统和其他几个子系统。这意味着您的网络摄像头已添加到多个子系统,并且看起来像多个设备。如果选择了错误的子系统,则您的规则可能无法按您希望的那样工作,或者甚至根本无法工作。 +小心!尽管USB摄像头确实是提供输入的设备(键盘和鼠标也是),但它也属于 _usb_ 子系统和其他几个子系统。这意味着您的摄像头被添加到了多个子系统,并且看起来像多个设备。如果您选择了错误的子系统,那么您的规则可能无法按您期望的那样工作,或者根本无法工作。 So, the third thing you have to check is all the subsystems the webcam has got added to and pick the correct one. To do that, unplug your webcam again and run: -因此,您需要检查的第三件事是网络摄像头已添加到的所有子系统中,并选择了正确的子系统。为此,请再次拔下网络摄像头,然后运行: +因此,您必须检查的第三件事是摄像头添加到的所有子系统,并选择正确的那个。为此,请再次拔下摄像头,然后运行: ``` ls /dev/video* ``` This will show you all the video devices connected to the machine. If you are using a laptop, most come with a built-in webcam and it will probably show up as `/dev/video0`. Plug your webcam back in and run `ls /dev/video*` again. -这将向您显示连接到本机的所有视频设备。 如果您使用的是笔记本电脑,则大多数笔记本电脑都带有内置摄像头,它可能会显示为`/ dev / video0`。 重新插入网络摄像头,然后再次运行`ls / dev / video *`。 +这将向您显示连接到本机的所有视频设备。如果您使用的是笔记本,大多数笔记本都带有内置摄像头,它可能会显示为`/dev/video0`。重新插入摄像头,然后再次运行`ls /dev/video*`。 Now you should see one more video device (probably `/dev/video1`). 
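If the before-and-after `ls /dev/video*` comparison is ambiguous (some cameras register more than one node), the `v4l2-ctl` tool can print which nodes belong to which physical device. This is optional; the package name below is the usual one on Debian/Ubuntu-style systems, and other distributions may call it something else.

```
sudo apt install v4l-utils
v4l2-ctl --list-devices
```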
-现在,您应该再看到一个视频设备(可能是“ / dev / video1”)。 +现在,您应该看到多一个视频设备(可能是`/dev/video1`)。 Now you can find out all the subsystems it belongs to by running `udevadm info -a /dev/video1`: -现在,您可以通过运行`udevadm info -a / dev / video1`找出它所属的所有子系统: +现在,您可以通过运行`udevadm info -a /dev/video1`找出它所属的所有子系统: ``` udevadm info -a /dev/video1 @@ -207,35 +203,36 @@ and the attributes from one single parent device. ``` The output goes on for quite a while, but what you're interested is right at the beginning: `SUBSYSTEM=="video4linux"`. This is a line you can literally copy and paste right into your rule. The rest of the output (not shown for brevity) gives you a couple more nuggets, like the manufacturer and mode IDs, again in a format you can copy and paste into your rule. -输出持续了一段时间,但是您感兴趣的只是开始:`SUBSYSTEM ==“ video4linux”`。您可以按实际情况将这行复制并粘贴到规则中。输出的其余部分(为简便起见未显示)为您提供了更多的块,例如制造商和模式ID,它们的格式也可以复制并粘贴到规则中。 +输出持续了一段时间,但是您感兴趣的只是开始:`SUBSYSTEM =="video4linux"`。您可以将这行按文本复制并粘贴到规则中。输出的其余部分(为简便起见未显示)为您提供了更多的块,例如制造商和模型ID,您也可以以同样的格式复制并粘贴到规则中。 Now you have a way of identifying the device and what event should trigger the action univocally, it is time to tinker with the device. -现在,您有了一种识别设备的方式,什么事件应该明确触发该操作,现在该对设备进行修改了。 +现在,您有了识别设备的方式并明确了什么事件应该触发该动作,该对设备进行修改了。 The next section in the rule, `SYMLINK+="mywebcam", TAG+="systemd", MODE="0666"` tells Udev to do three things: First, you want to create symbolic link from the device to (e.g. _/dev/video1_ ) to _/dev/mywebcam_. This is because you cannot predict what the system is going to call the device by default. When you have an in-built webcam and you hotplug a new one, the in-built webcam will usually be _/dev/video0_ while the external one will become _/dev/video1_. However, if you boot your computer with the external USB webcam plugged in, that could be reversed and the internal webcam can become _/dev/video1_ and the external one _/dev/video0_. What this is telling you is that, although your image-capturing script (which you will see later on) always needs to point to the external webcam device, you can't rely on it being _/dev/video0_ or _/dev/video1_. To solve this problem, you tell Udev to create a symbolic link which will never change in the moment the device is added to the _video4linux_ subsystem and you will make your script point to that. -规则的下一部分,`SYMLINK + =“ mywebcam”,TAG + =“ systemd”,MODE =“ 0666”`告诉Udev做三件事:首先,您要创建从设备到的符号链接(例如_ / dev / video1_)到_ / dev / mywebcam_。这是因为您无法预测默认情况下系统将调用什么设备。当您拥有内置摄像头并热插拔新摄像头时,内置摄像头通常为_ / dev / video0_,而外部摄像头通常为_ / dev / video1_。但是,如果您在插入外部USB网络摄像头的情况下引导计算机,则可能会相反,并且内部网络摄像头可能会变成_ / dev / video1_,而外部网络摄像头会变成_ / dev / video0_。这告诉您的是,尽管您的图像捕获脚本(稍后将看到)始终需要指向外部网络摄像头设备,但是您不能依靠它是_ / dev / video0_或_ / dev / video1_。为了解决这个问题,您告诉Udev创建一个符号链接,该链接在将设备添加到_video4linux_子系统的那一刻就不会改变,并且您将使脚本指向该链接。 + +规则的下一部分,`SYMLINK+="mywebcam", TAG+="systemd", MODE="0666"`告诉Udev做三件事:首先,您要创建设备的符号链接(例如_/dev/video1_)到_/dev/mywebcam_。这是因为您无法预测系统默认情况下会把那个设备叫什么。当您拥有内置摄像头并热插拔一个新的时,内置摄像头通常为_/dev/video0_,而外部摄像头通常为_/dev/video1_。但是,如果您在插入外部USB摄像头的情况下重启计算机,则可能会相反,内部摄像头可能会变成_/dev/video1_,而外部摄像头会变成_/dev/video0_。这想告诉您的是,尽管您的图像捕获脚本(稍后将看到)总是需要指向外部摄像头设备,但是您不能依赖它是_/dev/video0_或_/dev/video1_。为了解决这个问题,您告诉Udev创建一个符号链接,该链接在设备被添加到 _video4linux_ 子系统的那一刻起就不会再变,您将使您的脚本指向该链接。 The second thing you do is add `"systemd"` to the list of Udev tags associated with this rule. This tells Udev that the action that the rule will trigger will be managed by systemd, that is, it will be some sort of systemd service. 
-您要做的第二件事是将“ systemd”添加到与此规则关联的Udev标记列表中。这告诉Udev,该规则将触发的操作将由systemd管理,即它将是某种systemd服务。 +您要做的第二件事是将`"systemd"`添加到与此规则关联的Udev标记列表中。这告诉Udev,该规则触发的动作将由systemd管理,即它将是某种systemd服务。 Notice how in both cases you use `+=` operator. This adds the value to a list, which means you can add more than one value to `SYMLINK` and `TAG`. -注意在两种情况下如何使用“ + =”运算符。这会将值添加到列表中,这意味着您可以向“ SYMLINK”和“ TAG”添加多个值。 +注意在两种情况下该如何使用 `+=` 运算符。这会将值添加到列表中,这意味着您可以向 `SYMLINK` 和 `TAG` 添加多个值。 The `MODE` values, on the other hand, can only contain one value (hence you use the simple `=` assignment operator). What `MODE` does is tell Udev who can read from or write to the device. If you are familiar with `chmod` (and, if you are reading this, you should be), you will also be familiar of [how you can express permissions using numbers][9]. That is what this is: `0666` means " _give read and write privileges to the device to everybody_ ". -另一方面,“ MODE”值只能包含一个值(因此,您可以使用简单的“ =”赋值运算符)。 MODE的作用是告诉Udev谁可以读取或写入设备。如果您熟悉`chmod`(并且应该阅读),那么您还将熟悉[如何使用数字表示权限] [9]。这就是它的意思:“ 0666”的意思是“向所有人授予对设备的读写特权”。 +另一方面,`MODE` 值只能包含一个值(因此,您可以使用简单的 `=` 赋值运算符)。`MODE` 的作用是告诉Udev谁可以读或写该设备。如果您熟悉 `chmod`(您读到此文, 应该会熟悉),您就也会熟悉[如何用数字表示权限][9]。这就是它的含义: `0666` 的含义是 “ _向所有人授予对设备的读写权限_ ”。 At last, `ENV{SYSTEMD_WANTS}="webcam.service"` tells Udev what systemd service to run. -最后,`ENV {SYSTEMD_WANTS} =“ webcam.service”`告诉Udev要运行什么systemd服务。 +最后, `ENV{SYSTEMD_WANTS}="webcam.service"` 告诉Udev要运行什么systemd服务。 Save this rule into file called _90-webcam.rules_ (or something like that) in _/etc/udev/rules.d_ and you can load it either by rebooting your machine, or by running: -将此规则保存到_ / etc / udev / rules.d_中名为_90-webcam.rules_(或类似名称)的文件中,您可以通过重新启动计算机或运行以下命令来加载它: +将此规则保存到 _/etc/udev/rules.d_ 目录名为 _90-webcam.rules_ (或类似的名称)的文件中,您可以通过重启机器或运行以下命令来加载它: ``` sudo udevadm control --reload-rules && udevadm trigger ``` -## Service at Last +## Service at Last 最后描述服务 The service the Udev rule triggers is ridiculously simple: Udev规则触发的服务非常简单: @@ -248,10 +245,10 @@ ExecStart=/home/[user name]/bin/checkimage.sh ``` Basically, it just runs the _checkimage.sh_ script stored in your personal _bin/_ and pushes it the background. [This is something you saw how to do in prior installments][5]. It may seem something little, but just because it is called by a Udev rule, you have just created a special kind of systemd unit called a _device_ unit. Congratulations. -基本上,它只运行存储在您个人_bin / _中的_checkimage.sh_脚本并将其推入后台。 [这是您在先前的部分中看到的操作方法] [5]。 它看起来似乎很少,但是仅由于它是由Udev规则调用的,因此您刚刚创建了一种特殊的systemd单元,称为_device_ unit。 恭喜你 +基本上,它只是运行存储在您个人 _bin/_ 中的 _checkimage.sh_ 脚本并将其放到后台。 [这是您在先前的部分中看过的内容][5]。 它看起来似乎很小,但那只是因为它是被Udev规则调用的,您刚刚创建了一种特殊的systemd单元,称为 _device_ 单元。 恭喜。 As for the _checkimage.sh_ script _webcam.service_ calls, there are several ways of grabbing an image from a webcam and comparing it to a prior one to check for changes (which is what _checkimage.sh_ does), but this is how I did it: -至于_checkimage.sh_脚本_webcam.service_调用,有几种方法可以从网络摄像头获取图像并将其与前一个图像进行比较以检查更改(_checkimage.sh_所做的工作),但这就是我的方法 它: +至于 _webcam.service_ 调用的 _checkimage.sh_ 脚本,有几种方法从摄像头抓取图像并将其与前一个图像进行比较以检查变化(这是 _checkimage.sh_ 所做的事),但这是我的方法: ``` #!/bin/bash @@ -278,15 +275,16 @@ done ``` Start by using [MPlayer][10] to grab a frame ( _00000001.png_ ) from the webcam. Notice how we point `mplayer` to the `mywebcam` symbolic link we created in our Udev rule, instead of to `video0` or `video1`. Then you transfer the image to the _monitor/_ directory in your home directory. 
Then run an infinite loop that does the same thing again and again, but also uses [Image Magick's _compare_ tool][11] to see if there any differences between the last image captured and the one that is already in the _monitor/_ directory. -首先使用[MPlayer] [10]从摄像头抓取一个帧(_00000001.png_)。 注意,我们如何将mplayer指向在Udev规则中创建的mywebcam符号链接,而不是指向video0或video1。 然后,将映像传输到主目录中的_monitor / _目录。 然后运行一个无限循环,一次又一次地执行相同的操作,但是还使用[Image Magick的_compare_工具] [11]来查看最后捕获的图像与_monitor / _目录中的图像之间是否存在差异。 +首先使用[MPlayer][10]从摄像头抓取一帧(_00000001.png_)。注意,我们怎样将 `mplayer` 指向Udev规则中创建的 `mywebcam` 符号链接,而不是指向 `video0` 或 `video1` 。然后,将图像传输到主目录中的 _monitor/_ 目录。然后执行一个无限循环,一次又一次地执行相同的操作,但还使用了[Image Magick的_compare_工具][11]来查看最后捕获的图像与 _monitor/_ 目录中已有的图像之间是否存在差异。 If the images are different, it means something has moved within the webcam's frame. The script overwrites the original image with the new image and continues comparing waiting for some more movement. -如果图像不同,则表示网络摄像头框架内已移动了某些东西。 该脚本将新图像覆盖原始图像,并继续比较以等待更多移动。 +如果图像不同,则表示摄像头的镜框里某些东西动了。该脚本将新图像覆盖原始图像,并继续比较以等待更多变动。 -### Plugged +### Plugged 插线 With all the bits and pieces in place, when you plug your webcam in, your Udev rule will be triggered and will start the _webcam.service_. The _webcam.service_ will execute _checkimage.sh_ in the background, and _checkimage.sh_ will start taking pictures every half a second. You will know because your webcam's LED will start flashing indicating every time it takes a snap. -一切零碎,将网络摄像头插入电源后,您的Udev规则将被触发并启动_webcam.service_。 _webcam.service_将在后台执行_checkimage.sh_,而_checkimage.sh_将每半秒开始拍照。 您会知道,因为网络摄像头的LED指示灯将开始闪烁,表明每次需要快照。 + +所有东西准备好后,当您插入摄像头后,您的Udev规则将被触发并启动 _webcam.service_ 。 _webcam.service_ 将在后台执行 _checkimage.sh_ ,而 _checkimage.sh_ 将开始每半秒拍一次照。您会感觉到,因为摄像头的LED在每次拍照时将开始闪。 As always, if something goes wrong, run 与往常一样,如果出现问题,请运行 @@ -298,23 +296,23 @@ systemctl status webcam.service to check what your service and script are up to. 检查您的服务和脚本正在做什么。 -### Coming up +### Coming up 接下来 You may be wondering: Why overwrite the original image? Surely you would want to see what's going on if the system detects any movement, right? You would be right, but as you will see in the next installment, leaving things as they are and processing the images using yet another type of systemd unit makes things nice, clean and easy. -您可能想知道:为什么要覆盖原始图像? 当然,如果系统检测到任何移动,您肯定想知道发生了什么,对吗? 您将是对的,但是如您在下一部分中将看到的那样,将它们保持原样,并使用另一种类型的systemd单元处理图像将使事情变得更好,干净和容易。 +您可能想知道:为什么要覆盖原始图像? 当然,系统检测到任何动静,您都想知道发生了什么,对吗?您是对的,但是如您在下一部分中将看到的那样,将它们保持原样,并使用另一种类型的systemd单元处理图像将更好,更清晰和更简单。 Just wait and see. 请稍等。 Learn more about Linux through the free ["Introduction to Linux" ][12]course from The Linux Foundation and edX. 
-通过Linux基金会和edX的免费[[Linux简介]] [12]课程了解有关Linux的更多信息。 +通过Linux基金会和edX的免费["Linux简介"][12]课程了解有关Linux的更多信息。 -------------------------------------------------------------------------------- via: https://www.linux.com/blog/intro-to-linux/2018/6/systemd-services-reacting-change 作者:[Paul Brown][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[messon007](https://github.com/messon007) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 5f8264ce45fa65af7206005dffc7793b98ce4eb5 Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Mon, 4 May 2020 22:51:40 +0800 Subject: [PATCH 120/178] Almost done --- ...12 Systemd Services- Reacting to Change.md | 86 +++---------------- 1 file changed, 13 insertions(+), 73 deletions(-) diff --git a/sources/tech/20180612 Systemd Services- Reacting to Change.md b/sources/tech/20180612 Systemd Services- Reacting to Change.md index bdb7d7c44c..22c176ee80 100644 --- a/sources/tech/20180612 Systemd Services- Reacting to Change.md +++ b/sources/tech/20180612 Systemd Services- Reacting to Change.md @@ -1,47 +1,34 @@ -Systemd Services: Reacting to Change Systemd服务:响应变更 +Systemd服务:响应变更 ====== ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/webcam.png?itok=zzYUs5VK) -[I have one of these Compute Sticks][1] (Figure 1) and use it as an all-purpose server. It is inconspicuous and silent and, as it is built around an x86 architecture, I don't have problems getting it to work with drivers for my printer, and that’s what it does most days: it interfaces with the shared printer and scanner in my living room. [我有一个这样的电脑棒][1](图1),并将其用作通用服务器。它很小且安静,由于它是基于x86架构,因此我为我的打印机安装驱动没有任何问题,而且这就是它大多数时候干的事:与客厅的共享打印机和扫描仪通信。 ![ComputeStick][3] -An Intel ComputeStick. Euro coin for size. 一个英特尔电脑棒。欧元硬币大小。 [Used with permission][4] -Most of the time it is idle, especially when we are out, so I thought it would be good idea to use it as a surveillance system. The device doesn't come with its own camera, and it wouldn't need to be spying all the time. I also didn't want to have to start the image capturing by hand because this would mean having to log into the Stick using SSH and fire up the process by writing commands in the shell before rushing out the door. - 大多数时候,它都闲着,尤其是当我们外出时,因此我认为用它作监视系统是个好主意。该设备没有自带的摄像头,也不需要一直监视。我也不想手动启动图像捕获,因为这样就意味着在出门前必须通过SSH登录,并在shell中编写命令来启动该进程。 -So I thought that the thing to do would be to grab a USB webcam and have the surveillance system fire up automatically just by plugging it in. Bonus points if the surveillance system fired up also after the Stick rebooted, and it found that the camera was connected. - 因此,我以为应该这么做:抓住USB摄像头,然后只需插入它即可自动启动监视系统。如果Stick重启后发现连接了摄像头也启动监视系统就更加分了。 -In prior installments, we saw that [systemd services can be started or stopped by hand][5] or [when certain conditions are met][6]. Those conditions are not limited to when the OS reaches a certain state in the boot up or powerdown sequence but can also be when you plug in new hardware or when things change in the filesystem. You do that by combining a Udev rule with a systemd service. 在先前的文章中,我们看到[systemd服务可以手动启动或停止][5]或[在满足某些条件时][6]。这些条件不限于操作系统在启动或关机时序中达到某种状态,还可以在您插入新硬件或文件系统发生变化时进行。您可以通过将Udev规则与systemd服务结合起来实现。 -### Hotplugging with Udev 有Udev(支持)的热插拔 +### 有Udev(支持)的热插拔 -Udev rules live in the _/etc/udev/rules_ directory and are usually a single line containing _conditions_ and _assignments_ that lead to an _action_. 
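Before writing your own rule it can help to skim the rules that already ship with the system; they are real-world examples of the one-line condition/assignment format described here. Paths vary a little between distributions (older ones use /lib/udev/rules.d), so both common locations are listed.

```
ls /etc/udev/rules.d/ /usr/lib/udev/rules.d/ 2>/dev/null | head -n 20
man 7 udev    # reference for all available conditions and assignments
```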
Udev规则位于 _/etc/udev/rules_ 目录中,通常是由导致一个 _动作(action)_ 的 _条件(conditions)_ 和 _赋值(assignments)_ 的单行语句来描述。 -That was a bit cryptic. Let's try again: 有点神秘。让我们再试一次: -Typically, in a Udev rule, you tell systemd what to look for when a device is connected. For example, you may want to check if the make and model of a device you just plugged in correspond to the make and model of the device you are telling Udev to wait for. Those are the _conditions_ mentioned earlier. 通常,在Udev规则中,您告诉systemd当连接一个设备时需要查看什么信息。例如,您可能想检查刚插入的设备的品牌和型号是否与您让Udev等待的设备的品牌和型号相对应。这些就是前面提到的条件。 -Then you may want to change some stuff so you can use the device easily later. An example of that would be to change the read and write permissions to a device: if you plug in a USB printer, you're going to want users to be able to read information from the printer (the user's printing app would want to know the model, make, and whether it is ready to receive print jobs or not) and write to it, that is, send stuff to print. Changing the read and write permissions for a device is done using one of the _assignments_ you read about earlier. 然后,您可能想要更改一些内容,以便以后可以轻松使用该设备。例如,更改设备的读写权限:如果插入USB打印机,您将希望用户能够从打印机读取信息(用户的打印应用程序需要知道其模型,制造商,以及是否准备好接受打印作业)并向其写入内容,即发送要打印的内容。更改设备的读写权限是通过您之前阅读的 _赋值(assignments)_ 之一完成的。 -Finally, you will probably want the system to do something when the conditions mentioned above are met, like start a backup application to copy important files when a certain external hard disk drive is plugged in. That is an example of an _action_ mentioned above. 最后,您可能希望系统在满足上述条件时执行某些动作,例如在插入某个外接硬盘时启动备份程序以复制重要文件。这就是上面提到的 _动作(action)_ 的例子。 -With that in mind, ponder this: 了解这些之后, 来看看以下几点: ``` @@ -49,7 +36,6 @@ ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduc SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", ENV{SYSTEMD_WANTS}="webcam.service" ``` -The first part of the rule, 规则的第一部分, ``` @@ -57,33 +43,24 @@ ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207" [etc... ] ``` -shows the conditions that the device has to meet before doing any of the other stuff you want the system to do. The device has to be added (`ACTION=="add"`) to the machine, it has to be integrated into the `video4linux` subsystem. To make sure the rule is applied only when the correct device is plugged in, you have to make sure Udev correctly identifies the manufacturer (`ATTRS{idVendor}=="03f0"`) and a model (`ATTRS{idProduct}=="e207"`) of the device. 表明了执行您想让系统执行的其他动作之前设备必须满足的条件。设备必须被添加到(`ACTION=="add"`)机器上,并且必须添加到 `video4linux` 子系统中。为了确保仅在插入正确的设备时才应用该规则,您必须确保Udev正确识别设备的制造商(`ATTRS{idVendor}=="03f0"`)和型号(`ATTRS{idProduct}=="e207"`)。 -In this case, we're talking about this device (Figure 2): 在本例中,我们讨论的是这个设备(图2): ![webcam][8] -The HP webcam used in this experiment. 这个试验使用的是HP的摄像头。 [Used with permission][4] -Notice how you use `==` to indicate that these are a logical operation. You would read the above snippet of the rule like this: -注意怎样用`==`来表示这是一个逻辑操作。您应该像这样阅读上面的简要规则: +注意怎样用 `==` 来表示这是一个逻辑操作。您应该像这样阅读上面的简要规则: -``` -if the device is added and the device controlled by the video4linux subsystem -and the manufacturer of the device is 03f0 and the model is e207, then... -``` ``` 如果添加了一个设备并且该设备由video4linux子系统控制 而且该设备的制造商是03f0,型号是e207,那么... ``` -But where do you get all this information? Where do you find the action that triggers the event, the manufacturer, model, and so on? You will probably have to use several sources. 
The `IdVendor` and `idProduct` you can get by plugging the webcam into your machine and running `lsusb`: -但是,您从哪里获取的这些信息? 您在哪里找到触发事件的动作,制造商,模型等等?您可能必须使用多个来源。您可以通过将摄像头插入机器并运行`lsusb`来获得`IdVendor`和`idProduct`: +但是,您从哪里获取的这些信息? 您在哪里找到触发事件的动作,制造商,模型等等?您可能必须使用多个来源。您可以通过将摄像头插入机器并运行 `lsusb` 来获得 `IdVendor` 和 `idProduct` : ``` lsusb @@ -97,16 +74,11 @@ Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub ``` -The webcam I’m using is made by HP, and you can only see one HP device in the list above. The `ID` gives you the manufacturer and the model numbers separated by a colon (`:`). If you have more than one device by the same manufacturer and not sure which is which, unplug the webcam, run `lsusb` again and check what's missing. -我用的摄像头是HP的,您在上面的列表中只能看到一个HP设备。`ID`告诉了制造商和型号,它们以冒号(`:`)分隔。如果您有同一制造商的多个设备,不确定哪个是哪个设备,请拔下摄像头,再次运行`lsusb`, 看看少了什么。 - -OR... - -Unplug the webcam, wait a few seconds, run the command `udevadmin monitor --environment` and then plug the webcam back in again. When you do that with the HP webcam, you get: +我用的摄像头是HP的,您在上面的列表中只能看到一个HP设备。 `ID` 告诉了制造商和型号,它们以冒号( `:` )分隔。如果您有同一制造商的多个设备,不确定哪个是哪个设备,请拔下摄像头,再次运行 `lsusb` , 看看少了什么。 或者... -拔下摄像头,等待几秒钟,运行命令`udevadmin monitor --environment`,然后重新插入摄像头。当您使用的是HP摄像头时,您将看到: +拔下摄像头,等待几秒钟,运行命令 `udevadmin monitor --environment` ,然后重新插入摄像头。当您使用的是HP摄像头时,您将看到: ``` udevadmin monitor --environment @@ -149,11 +121,7 @@ XKBOPTIONS= XKBVARIANT= ``` -That may look like a lot to process, but, check this out: the `ACTION` field early in the list tells you what event just happened, i.e., that a device got added to the system. You can also see the name of the device spelled out on several of the lines, so you can be pretty sure that it is the device you are looking for. The output also shows the manufacturer's ID number (`ID_VENDOR_ID=03f0`) and the model number (`ID_VENDOR_ID=03f0`). - -可能看起来有很多信息要处理,但是,看一下这个:列表前面的`ACTION`字段, 它告诉您刚刚发生了什么事件,即一个设备被添加到系统中。您还可以在几行中看到设备名称的拼写,因此可以非常确定它就是您要找的设备。输出里还显示了制造商的ID(`ID_VENDOR_ID = 03f0`)和型号(`ID_VENDOR_ID = 03f0`)。 - -This gives you three of the four values the condition part of the rule needs. You may be tempted to think that it a gives you the fourth, too, because there is also a line that says: +可能看起来有很多信息要处理,但是,看一下这个:列表前面的 `ACTION` 字段, 它告诉您刚刚发生了什么事件,即一个设备被添加到系统中。您还可以在几行中看到设备名称的拼写,因此可以非常确定它就是您要找的设备。输出里还显示了制造商的ID(`ID_VENDOR_ID = 03f0`)和型号(`ID_VENDOR_ID = 03f0`)。 这为您提供了规则条件部分需要的四个值中的三个。您可能也会想到它还给了您第四个,因为还有一行这样写道: @@ -161,24 +129,18 @@ This gives you three of the four values the condition part of the rule needs. Yo SUBSYSTEM=input ``` -Be careful! Although it is true that a USB webcam is a device that provides input (as does a keyboard and a mouse), it is also belongs to the _usb_ subsystem, and several others. This means that your webcam gets added to several subsystems and looks like several devices. If you pick the wrong subsystem, your rule may not work as you want it to, or, indeed, at all. 小心!尽管USB摄像头确实是提供输入的设备(键盘和鼠标也是),但它也属于 _usb_ 子系统和其他几个子系统。这意味着您的摄像头被添加到了多个子系统,并且看起来像多个设备。如果您选择了错误的子系统,那么您的规则可能无法按您期望的那样工作,或者根本无法工作。 -So, the third thing you have to check is all the subsystems the webcam has got added to and pick the correct one. To do that, unplug your webcam again and run: - 因此,您必须检查的第三件事是摄像头添加到的所有子系统,并选择正确的那个。为此,请再次拔下摄像头,然后运行: ``` ls /dev/video* ``` -This will show you all the video devices connected to the machine. If you are using a laptop, most come with a built-in webcam and it will probably show up as `/dev/video0`. 
Plug your webcam back in and run `ls /dev/video*` again. -这将向您显示连接到本机的所有视频设备。如果您使用的是笔记本,大多数笔记本都带有内置摄像头,它可能会显示为`/dev/video0`。重新插入摄像头,然后再次运行`ls /dev/video*`。 +这将向您显示连接到本机的所有视频设备。如果您使用的是笔记本,大多数笔记本都带有内置摄像头,它可能会显示为 `/dev/video0` 。重新插入摄像头,然后再次运行 `ls /dev/video*` 。 -Now you should see one more video device (probably `/dev/video1`). 现在,您应该看到多一个视频设备(可能是`/dev/video1`)。 -Now you can find out all the subsystems it belongs to by running `udevadm info -a /dev/video1`: 现在,您可以通过运行`udevadm info -a /dev/video1`找出它所属的所有子系统: ``` @@ -202,39 +164,28 @@ and the attributes from one single parent device. [etc...] ``` -The output goes on for quite a while, but what you're interested is right at the beginning: `SUBSYSTEM=="video4linux"`. This is a line you can literally copy and paste right into your rule. The rest of the output (not shown for brevity) gives you a couple more nuggets, like the manufacturer and mode IDs, again in a format you can copy and paste into your rule. 输出持续了一段时间,但是您感兴趣的只是开始:`SUBSYSTEM =="video4linux"`。您可以将这行按文本复制并粘贴到规则中。输出的其余部分(为简便起见未显示)为您提供了更多的块,例如制造商和模型ID,您也可以以同样的格式复制并粘贴到规则中。 -Now you have a way of identifying the device and what event should trigger the action univocally, it is time to tinker with the device. 现在,您有了识别设备的方式并明确了什么事件应该触发该动作,该对设备进行修改了。 -The next section in the rule, `SYMLINK+="mywebcam", TAG+="systemd", MODE="0666"` tells Udev to do three things: First, you want to create symbolic link from the device to (e.g. _/dev/video1_ ) to _/dev/mywebcam_. This is because you cannot predict what the system is going to call the device by default. When you have an in-built webcam and you hotplug a new one, the in-built webcam will usually be _/dev/video0_ while the external one will become _/dev/video1_. However, if you boot your computer with the external USB webcam plugged in, that could be reversed and the internal webcam can become _/dev/video1_ and the external one _/dev/video0_. What this is telling you is that, although your image-capturing script (which you will see later on) always needs to point to the external webcam device, you can't rely on it being _/dev/video0_ or _/dev/video1_. To solve this problem, you tell Udev to create a symbolic link which will never change in the moment the device is added to the _video4linux_ subsystem and you will make your script point to that. +规则的下一部分,`SYMLINK+="mywebcam", TAG+="systemd", MODE="0666"`告诉Udev做三件事:首先,您要创建设备的符号链接(例如 _/dev/video1_ 到 _/dev/mywebcam_ 。这是因为您无法预测系统默认情况下会把那个设备叫什么。当您拥有内置摄像头并热插拔一个新的时,内置摄像头通常为 _/dev/video0_ ,而外部摄像头通常为 _/dev/video1_ 。但是,如果您在插入外部USB摄像头的情况下重启计算机,则可能会相反,内部摄像头可能会变成 _/dev/video1_ ,而外部摄像头会变成 _/dev/video0_ 。这想告诉您的是,尽管您的图像捕获脚本(稍后将看到)总是需要指向外部摄像头设备,但是您不能依赖它是 _/dev/video0_ 或 _/dev/video1_ 。为了解决这个问题,您告诉Udev创建一个符号链接,该链接在设备被添加到 _video4linux_ 子系统的那一刻起就不会再变,您将使您的脚本指向该链接。 -规则的下一部分,`SYMLINK+="mywebcam", TAG+="systemd", MODE="0666"`告诉Udev做三件事:首先,您要创建设备的符号链接(例如_/dev/video1_)到_/dev/mywebcam_。这是因为您无法预测系统默认情况下会把那个设备叫什么。当您拥有内置摄像头并热插拔一个新的时,内置摄像头通常为_/dev/video0_,而外部摄像头通常为_/dev/video1_。但是,如果您在插入外部USB摄像头的情况下重启计算机,则可能会相反,内部摄像头可能会变成_/dev/video1_,而外部摄像头会变成_/dev/video0_。这想告诉您的是,尽管您的图像捕获脚本(稍后将看到)总是需要指向外部摄像头设备,但是您不能依赖它是_/dev/video0_或_/dev/video1_。为了解决这个问题,您告诉Udev创建一个符号链接,该链接在设备被添加到 _video4linux_ 子系统的那一刻起就不会再变,您将使您的脚本指向该链接。 +您要做的第二件事是将 `"systemd"` 添加到与此规则关联的Udev标记列表中。这告诉Udev,该规则触发的动作将由systemd管理,即它将是某种systemd服务。 -The second thing you do is add `"systemd"` to the list of Udev tags associated with this rule. This tells Udev that the action that the rule will trigger will be managed by systemd, that is, it will be some sort of systemd service. 
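If the service later fails to start on hotplug, a quick first check is whether the tag and the `SYSTEMD_WANTS` property were actually applied to the device, and whether systemd picked it up as a device unit. These are optional diagnostics; `mywebcam` is the symlink name used throughout this article, and the grep patterns are only guesses at what the names will contain.

```
udevadm info --query=property --name=/dev/mywebcam | grep -E 'TAGS|SYSTEMD'
systemctl list-units --type=device --all | grep -Ei 'video|webcam'
```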
-您要做的第二件事是将`"systemd"`添加到与此规则关联的Udev标记列表中。这告诉Udev,该规则触发的动作将由systemd管理,即它将是某种systemd服务。 - -Notice how in both cases you use `+=` operator. This adds the value to a list, which means you can add more than one value to `SYMLINK` and `TAG`. 注意在两种情况下该如何使用 `+=` 运算符。这会将值添加到列表中,这意味着您可以向 `SYMLINK` 和 `TAG` 添加多个值。 -The `MODE` values, on the other hand, can only contain one value (hence you use the simple `=` assignment operator). What `MODE` does is tell Udev who can read from or write to the device. If you are familiar with `chmod` (and, if you are reading this, you should be), you will also be familiar of [how you can express permissions using numbers][9]. That is what this is: `0666` means " _give read and write privileges to the device to everybody_ ". 另一方面,`MODE` 值只能包含一个值(因此,您可以使用简单的 `=` 赋值运算符)。`MODE` 的作用是告诉Udev谁可以读或写该设备。如果您熟悉 `chmod`(您读到此文, 应该会熟悉),您就也会熟悉[如何用数字表示权限][9]。这就是它的含义: `0666` 的含义是 “ _向所有人授予对设备的读写权限_ ”。 -At last, `ENV{SYSTEMD_WANTS}="webcam.service"` tells Udev what systemd service to run. - 最后, `ENV{SYSTEMD_WANTS}="webcam.service"` 告诉Udev要运行什么systemd服务。 -Save this rule into file called _90-webcam.rules_ (or something like that) in _/etc/udev/rules.d_ and you can load it either by rebooting your machine, or by running: 将此规则保存到 _/etc/udev/rules.d_ 目录名为 _90-webcam.rules_ (或类似的名称)的文件中,您可以通过重启机器或运行以下命令来加载它: ``` sudo udevadm control --reload-rules && udevadm trigger ``` -## Service at Last 最后描述服务 +## 最后描述服务 -The service the Udev rule triggers is ridiculously simple: Udev规则触发的服务非常简单: ``` # webcam.service @@ -244,10 +195,8 @@ Type=simple ExecStart=/home/[user name]/bin/checkimage.sh ``` -Basically, it just runs the _checkimage.sh_ script stored in your personal _bin/_ and pushes it the background. [This is something you saw how to do in prior installments][5]. It may seem something little, but just because it is called by a Udev rule, you have just created a special kind of systemd unit called a _device_ unit. Congratulations. 基本上,它只是运行存储在您个人 _bin/_ 中的 _checkimage.sh_ 脚本并将其放到后台。 [这是您在先前的部分中看过的内容][5]。 它看起来似乎很小,但那只是因为它是被Udev规则调用的,您刚刚创建了一种特殊的systemd单元,称为 _device_ 单元。 恭喜。 -As for the _checkimage.sh_ script _webcam.service_ calls, there are several ways of grabbing an image from a webcam and comparing it to a prior one to check for changes (which is what _checkimage.sh_ does), but this is how I did it: 至于 _webcam.service_ 调用的 _checkimage.sh_ 脚本,有几种方法从摄像头抓取图像并将其与前一个图像进行比较以检查变化(这是 _checkimage.sh_ 所做的事),但这是我的方法: ``` @@ -274,37 +223,28 @@ do done ``` -Start by using [MPlayer][10] to grab a frame ( _00000001.png_ ) from the webcam. Notice how we point `mplayer` to the `mywebcam` symbolic link we created in our Udev rule, instead of to `video0` or `video1`. Then you transfer the image to the _monitor/_ directory in your home directory. Then run an infinite loop that does the same thing again and again, but also uses [Image Magick's _compare_ tool][11] to see if there any differences between the last image captured and the one that is already in the _monitor/_ directory. 首先使用[MPlayer][10]从摄像头抓取一帧(_00000001.png_)。注意,我们怎样将 `mplayer` 指向Udev规则中创建的 `mywebcam` 符号链接,而不是指向 `video0` 或 `video1` 。然后,将图像传输到主目录中的 _monitor/_ 目录。然后执行一个无限循环,一次又一次地执行相同的操作,但还使用了[Image Magick的_compare_工具][11]来查看最后捕获的图像与 _monitor/_ 目录中已有的图像之间是否存在差异。 -If the images are different, it means something has moved within the webcam's frame. The script overwrites the original image with the new image and continues comparing waiting for some more movement. 
如果图像不同,则表示摄像头的镜框里某些东西动了。该脚本将新图像覆盖原始图像,并继续比较以等待更多变动。 -### Plugged 插线 - -With all the bits and pieces in place, when you plug your webcam in, your Udev rule will be triggered and will start the _webcam.service_. The _webcam.service_ will execute _checkimage.sh_ in the background, and _checkimage.sh_ will start taking pictures every half a second. You will know because your webcam's LED will start flashing indicating every time it takes a snap. +### 插线 所有东西准备好后,当您插入摄像头后,您的Udev规则将被触发并启动 _webcam.service_ 。 _webcam.service_ 将在后台执行 _checkimage.sh_ ,而 _checkimage.sh_ 将开始每半秒拍一次照。您会感觉到,因为摄像头的LED在每次拍照时将开始闪。 -As always, if something goes wrong, run 与往常一样,如果出现问题,请运行 ``` systemctl status webcam.service ``` -to check what your service and script are up to. 检查您的服务和脚本正在做什么。 -### Coming up 接下来 +### 接下来 -You may be wondering: Why overwrite the original image? Surely you would want to see what's going on if the system detects any movement, right? You would be right, but as you will see in the next installment, leaving things as they are and processing the images using yet another type of systemd unit makes things nice, clean and easy. 您可能想知道:为什么要覆盖原始图像? 当然,系统检测到任何动静,您都想知道发生了什么,对吗?您是对的,但是如您在下一部分中将看到的那样,将它们保持原样,并使用另一种类型的systemd单元处理图像将更好,更清晰和更简单。 -Just wait and see. 请稍等。 -Learn more about Linux through the free ["Introduction to Linux" ][12]course from The Linux Foundation and edX. 通过Linux基金会和edX的免费["Linux简介"][12]课程了解有关Linux的更多信息。 -------------------------------------------------------------------------------- From d5ed90bbfa6f756208c30b4ef1810f28e6150fa7 Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Mon, 4 May 2020 22:54:22 +0800 Subject: [PATCH 121/178] translated --- .../tech/20180612 Systemd Services- Reacting to Change.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename {sources => translated}/tech/20180612 Systemd Services- Reacting to Change.md (100%) diff --git a/sources/tech/20180612 Systemd Services- Reacting to Change.md b/translated/tech/20180612 Systemd Services- Reacting to Change.md similarity index 100% rename from sources/tech/20180612 Systemd Services- Reacting to Change.md rename to translated/tech/20180612 Systemd Services- Reacting to Change.md From f43234ffd735d70afe9150860d0c31e75e424224 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 4 May 2020 23:04:29 +0800 Subject: [PATCH 122/178] PRF @lxbwolf --- .../tech/20200502 Mid-stack inlining in Go.md | 69 +++++++++---------- 1 file changed, 33 insertions(+), 36 deletions(-) diff --git a/translated/tech/20200502 Mid-stack inlining in Go.md b/translated/tech/20200502 Mid-stack inlining in Go.md index 9d8edcab43..0f8b22306a 100644 --- a/translated/tech/20200502 Mid-stack inlining in Go.md +++ b/translated/tech/20200502 Mid-stack inlining in Go.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (lxbwolf) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Mid-stack inlining in Go) @@ -10,19 +10,21 @@ Go 中对栈中函数进行内联 ====== -[上一篇文章][1]中我论述了叶子内联是怎样让 Go 编译器减少函数调用的开销的,以及延伸出了跨函数边界的优化的机会。本文中,我要论述内联的限制以及叶子与栈中内联的对比。 +![](https://img.linux.net.cn/data/attachment/album/202005/04/230304avxkxlyoozbiw1bn.jpg) + +[上一篇文章][1]中我论述了叶子内联leaf inlining是怎样让 Go 编译器减少函数调用的开销的,以及延伸出了跨函数边界的优化的机会。本文中,我要论述内联的限制以及叶子内联与栈中内联mid-stack inlining的对比。 ### 内联的限制 -把函数内联到它的调用处消除了调用的开销,为编译器进行其他的优化提供了更好的机会,那么问题来了,既然内联这么好,内联得越多开销就越少,_为什么不尽可能多地内联呢?_ +把函数内联到它的调用处消除了调用的开销,为编译器进行其他的优化提供了更好的机会,那么问题来了,既然内联这么好,内联得越多开销就越少,*为什么不尽可能多地内联呢?* 
-内联用可能的增加程序大小换来了更快的执行时间。限制内联的最主要原因是,创建太多的函数内联的备份会增加编译时间,并且作为边际效应会增加生成的二进制文件的大小。即使把内联带来的进一步的优化机会考虑在内,太激进的内联也可能会增加生成的二进制文件的大小和编译时间。 +内联可能会以增加程序大小换来更快的执行时间。限制内联的最主要原因是,创建许多函数的内联副本会增加编译时间,并导致生成更大的二进制文件的边际效应。即使把内联带来的进一步的优化机会考虑在内,太激进的内联也可能会增加生成的二进制文件的大小和编译时间。 -内联收益最大的是[小函数][2],相对于调用它们的开销来说,这些函数做很少的工作。随着函数大小的增长,函数内部做的工作与函数调用的开销相比省下的时间越来越少。函数越大通常越复杂,因此对它们内联后进行优化与不内联相比的收益没有(对小函数进行内联)那么大。 +内联收益最大的是[小函数][2],相对于调用它们的开销来说,这些函数做很少的工作。随着函数大小的增长,函数内部做的工作与函数调用的开销相比省下的时间越来越少。函数越大通常越复杂,因此优化其内联形式相对于原地优化的好处会减少。 ### 内联预算 -在编译过程中,每个函数的内联能力是用_内联预算_计算的。开销的计算过程可以巧妙地内化,像一元和二元等简单操作,在抽象语法数(Abstract Syntax Tree,AST)中通常是每个节点一个单元,更复杂的操作如 `make` 可能单元更多。考虑下面的例子: +在编译过程中,每个函数的内联能力是用*内联预算*计算的 [^1]。开销的计算过程可以巧妙地内化,像一元和二元等简单操作,在抽象语法数Abstract Syntax Tree(AST)中通常是每个节点一个单位,更复杂的操作如 `make` 可能单位更多。考虑下面的例子: ```go package main @@ -79,28 +81,26 @@ func main() { ./inl.go:39:7: inlining call to small func() string { s := "hello, world!"; return s } ``` -编译器根据函数 `func small()` 的开销(7)决定可以对它内联,而`func large()` 的开销太大,编译器决定不进行内联。`func main()` 被标记为适合内联的,分配了 68 的开销;其中 `small` 占用 7,调用 `small` 函数占用 57,剩余的(4)是它自己的开销。 +编译器根据函数 `func small()` 的开销(7)决定可以对它内联,而 `func large()` 的开销太大,编译器决定不进行内联。`func main()` 被标记为适合内联的,分配了 68 的开销;其中 `small` 占用 7,调用 `small` 函数占用 57,剩余的(4)是它自己的开销。 可以用 `-gcflag=-l` 参数控制内联预算的等级。下面是可使用的值: * `-gcflags=-l=0` 默认的内联等级。 - * `-gcflags=-l` (或 `-gcflags=-l=1`) 取消内联。 - * `-gcflags=-l=2` 和 `-gcflags=-l=3` 现在已经不使用了。不影响 `-gcflags=-l=0` - * `-gcflags=-l=4` 减少非叶子函数和通过接口调用的函数的开销。[2][4] + * `-gcflags=-l`(或 `-gcflags=-l=1`)取消内联。 + * `-gcflags=-l=2` 和 `-gcflags=-l=3` 现在已经不使用了。和 `-gcflags=-l=0` 相比没有区别。 + * `-gcflags=-l=4` 减少非叶子函数和通过接口调用的函数的开销。[^2] +#### 不确定语句的优化 +一些函数虽然内联的开销很小,但由于太复杂它们仍不适合进行内联。这就是函数的不确定性,因为一些操作的语义在内联后很难去推导,如 `recover`、`break`。其他的操作,如 `select` 和 `go` 涉及运行时的协调,因此内联后引入的额外的开销不能抵消内联带来的收益。 -#### 难以理解的优化 - -一些函数虽然内联的开销很小,但由于太复杂它们仍不适合进行内联。这就是函数的不确定性,因为一些操作的语义在内联后很难去推导,如 `recover`,`break`。其他的操作,如 `select` 和 `go` 涉及运行时的协调,因此内联后引入的额外的开销不能抵消内联带来的收益。 - -难理解的语句也包括 `for` 和 `range`,这些语句不一定开销很大,但目前为止还没有对它们进行优化。 +不确定的语句也包括 `for` 和 `range`,这些语句不一定开销很大,但目前为止还没有对它们进行优化。 ### 栈中函数优化 -在过去,Go 编译器只对叶子函数进行内联 — 只有那些不调用其他函数的函数才有资格。在上一段难以理解的的语句的探讨内容中,一次函数调用会让这个函数失去内联的资格。 +在过去,Go 编译器只对叶子函数进行内联 —— 只有那些不调用其他函数的函数才有资格。在上一段不确定的语句的探讨内容中,一次函数调用就会让这个函数失去内联的资格。 -进入栈中进行内联,就像它的名字一样,能内联在函数调用栈中间的函数,不需要先让它下面的所有的函数都被标记为有资格内联的。栈中内联是 David Lazar 在 Go 1.9 中引入的,并在随后的版本中做了改进。[这篇文章][5]深入探究保留栈追踪的表现和被深度内联后的代码路径里的 `runtime.Caller` 们的难点。 +进入栈中进行内联,就像它的名字一样,能内联在函数调用栈中间的函数,不需要先让它下面的所有的函数都被标记为有资格内联的。栈中内联是 David Lazar 在 Go 1.9 中引入的,并在随后的版本中做了改进。[这篇文稿][5]深入探究了保留栈追踪行为和被深度内联后的代码路径里的 `runtime.Callers` 的难点。 在前面的例子中我们看到了栈中函数内联。内联后,`func main()` 包含了 `func small()` 的函数体和对 `func large()` 的一次调用,因此它被判定为非叶子函数。在过去,这会阻止它被继续内联,虽然它的联合开销小于内联预算。 @@ -134,15 +134,16 @@ func main() { } ``` -在这个例子中, `r.Area()` 是个简单的函数,调用了两个函数。`r.Width()` 可以被内联,`r.Height()` 这里用 `//go:noinline` 指令标注了,不能被内联。[3][6] +在这个例子中, `r.Area()` 是个简单的函数,调用了两个函数。`r.Width()` 可以被内联,`r.Height()` 这里用 `//go:noinline` 指令标注了,不能被内联。[^3] ```bash -% go build -gcflags='-m=2' square.go +% go build -gcflags='-m=2' square.go # command-line-arguments -./square.go:12:6: cannot inline (*Rectangle).Height: marked go:noinline +./square.go:12:6: cannot inline (*Rectangle).Height: marked go:noinline ./square.go:17:6: can inline (*Rectangle).Width with cost 2 as: method(*Rectangle) func() int { return 6 } -./square.go:21:6: can inline (*Rectangle).Area with cost 67 as: method(*Rectangle) func() int { return r.Height() * r.Width() } ./square.go:21:61: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } -./square.go:23:6: cannot inline 
main: function too complex: cost 150 exceeds budget 80 +./square.go:21:6: can inline (*Rectangle).Area with cost 67 as: method(*Rectangle) func() int { return r.Height() * r.Width() } +./square.go:21:61: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } +./square.go:23:6: cannot inline main: function too complex: cost 150 exceeds budget 80 ./square.go:25:20: inlining call to (*Rectangle).Area method(*Rectangle) func() int { return r.Height() * r.Width() } ./square.go:25:20: inlining call to (*Rectangle).Width method(*Rectangle) func() int { return 6 } ``` @@ -151,33 +152,29 @@ func main() { #### 快速路径内联 -关于栈中内联的效果最令人吃惊的例子是 2019 年 [Carlo Alberto Ferraris][7] 通过允许把 `sync.Mutex.Lock()` 的快速路径,非竞争的情况,内联到它的调用方来[提升它的性能][7]。在这个修改之前,`sync.Mutex.Lock()` 是个很大的函数,包含很多难以理解的条件,使得它没有资格被内联。即使锁可用时,调用者也要付出调用 `sync.Mutex.Lock()` 的代价。 +关于栈中内联的效果最令人吃惊的例子是 2019 年 [Carlo Alberto Ferraris][7] 通过允许把 `sync.Mutex.Lock()` 的快速路径(非竞争的情况)内联到它的调用方来[提升它的性能][7]。在这个修改之前,`sync.Mutex.Lock()` 是个很大的函数,包含很多难以理解的条件,使得它没有资格被内联。即使锁可用时,调用者也要付出调用 `sync.Mutex.Lock()` 的代价。 -Carlo 把 `sync.Mutex.Lock()` 分成了两个函数(他自己称为*外联*)。外部的 `sync.Mutex.Lock()` 方法现在调用 `sync/atomic.CompareAndSwapInt32()` 且如果 CAS(Compare and Swap)成功了之后立即返回给调用者。如果 CAS 失败,函数会走到 `sync.Mutex.lockSlow()` 慢速路径,需要对锁进行注册,暂停 goroutine。[4][8] +Carlo 把 `sync.Mutex.Lock()` 分成了两个函数(他自己称为外联outlining)。外部的 `sync.Mutex.Lock()` 方法现在调用 `sync/atomic.CompareAndSwapInt32()` 且如果 CAS(比较并交换Compare and Swap)成功了之后立即返回给调用者。如果 CAS 失败,函数会走到 `sync.Mutex.lockSlow()` 慢速路径,需要对锁进行注册,暂停 goroutine。[^4] ```bash % go build -gcflags='-m=2 -l=0' sync 2>&1 | grep '(*Mutex).Lock' -../go/src/sync/mutex.go:72:6: can inline (*Mutex).Lock with cost 69 as: method(*Mutex) func() { if "sync/atomic".CompareAndSwapInt32(&m.state, 0, mutexLocked) { if race.Enabled {  }; return  }; m.lockSlow() } +../go/src/sync/mutex.go:72:6: can inline (*Mutex).Lock with cost 69 as: method(*Mutex) func() { if "sync/atomic".CompareAndSwapInt32(&m.state, 0, mutexLocked) { if race.Enabled { }; return }; m.lockSlow() } ``` 通过把函数分割成一个简单的不能再被分割的外部函数,和(如果没走到外部函数就走到的)一个处理慢速路径的复杂的内部函数,Carlo 组合了栈中函数内联和[编译器对基础操作的支持][9],减少了非竞争锁 14% 的开销。之后他在 `sync.RWMutex.Unlock()` 重复这个技巧,节省了另外 9% 的开销。 - 1. 不同发布版本中,在考虑该函数是否适合内联时,Go 编译器对同一函数的预算是不同的。[][10] - 2. 时刻记着编译器的作者警告过[“更高的内联等级(比 -l 更高)可能导致 bug 或不被支持”][11]。 Caveat emptor。[][12] - 3. 编译器有足够的能力来内联像 `strconv.ParseInt` 的复杂函数。作为一个实验,你可以尝试去掉 `//go:noinline` 注释,使用 `-gcflags=-m=2` 编译后观察。[][13] - 4. `race.Enable` 表达式是通过传递给 `go` 工具的 `-race` 参数控制的一个常量。对于普通编译,它的值是 `false`,此时编译器可以完全省略代码路径。[][14] +[^1]: 不同发布版本中,在考虑该函数是否适合内联时,Go 编译器对同一函数的预算是不同的。 +[^2]: 时刻记着编译器的作者警告过[“更高的内联等级(比 -l 更高)可能导致错误或不被支持”][11]。 Caveat emptor。 +[^3]: 编译器有足够的能力来内联像 `strconv.ParseInt` 的复杂函数。作为一个实验,你可以尝试去掉 `//go:noinline` 注释,使用 `-gcflags=-m=2` 编译后观察。 +[^4]: `race.Enable` 表达式是通过传递给 `go` 工具的 `-race` 参数控制的一个常量。对于普通编译,它的值是 `false`,此时编译器可以完全省略代码路径。 - - -#### 相关文章: +### 相关文章: 1. [Go 中的内联优化][15] 2. [goroutine 的栈为什么会无限增长?][16] 3. [栈追踪和 errors 包][17] 4. 
[零值是什么,为什么它很有用?][18] - - -------------------------------------------------------------------------------- via: https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go @@ -185,13 +182,13 @@ via: https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go 作者:[Dave Cheney][a] 选题:[lujun9972][b] 译者:[lxbwolf](https://github.com/lxbwolf) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://dave.cheney.net/author/davecheney [b]: https://github.com/lujun9972 -[1]: https://dave.cheney.net/2020/04/25/inlining-optimisations-in-go +[1]: https://linux.cn/article-12176-1.html [2]: https://medium.com/@joshsaintjacque/small-functions-considered-awesome-c95b3fd1812f [3]: tmp.FyRthF1bbF#easy-footnote-bottom-1-4076 (The budget the Go compiler applies to each function when considering if it is eligible for inlining changes release to release.) [4]: tmp.FyRthF1bbF#easy-footnote-bottom-2-4076 (Keep in mind that the compiler authors warn that “Additional levels of inlining (beyond -l) may be buggy and are not supported”. Caveat emptor.) From 2656cf5b960cea155dcadbc7d04abde8e1cd1f18 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Mon, 4 May 2020 23:05:02 +0800 Subject: [PATCH 123/178] PUB @lxbwolf https://linux.cn/article-12184-1.html --- .../tech => published}/20200502 Mid-stack inlining in Go.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200502 Mid-stack inlining in Go.md (99%) diff --git a/translated/tech/20200502 Mid-stack inlining in Go.md b/published/20200502 Mid-stack inlining in Go.md similarity index 99% rename from translated/tech/20200502 Mid-stack inlining in Go.md rename to published/20200502 Mid-stack inlining in Go.md index 0f8b22306a..588286626a 100644 --- a/translated/tech/20200502 Mid-stack inlining in Go.md +++ b/published/20200502 Mid-stack inlining in Go.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (lxbwolf) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12184-1.html) [#]: subject: (Mid-stack inlining in Go) [#]: via: (https://dave.cheney.net/2020/05/02/mid-stack-inlining-in-go) [#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) From 534199ced4194f1d29907b5446dde9ef418a25d8 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 5 May 2020 00:52:45 +0800 Subject: [PATCH 124/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200504=204=20co?= =?UTF-8?q?ol=20new=20projects=20to=20try=20in=20COPR=20for=20May=202020?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200504 4 cool new projects to try in COPR for May 2020.md --- ...ew projects to try in COPR for May 2020.md | 108 ++++++++++++++++++ 1 file changed, 108 insertions(+) create mode 100644 sources/tech/20200504 4 cool new projects to try in COPR for May 2020.md diff --git a/sources/tech/20200504 4 cool new projects to try in COPR for May 2020.md b/sources/tech/20200504 4 cool new projects to try in COPR for May 2020.md new file mode 100644 index 0000000000..d1b4801083 --- /dev/null +++ b/sources/tech/20200504 4 cool new projects to try in COPR for May 2020.md @@ -0,0 +1,108 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 cool new projects to try in COPR for May 2020) +[#]: via: 
(https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-april-2020/) +[#]: author: (Dominik Turecek https://fedoramagazine.org/author/dturecek/) + +4 cool new projects to try in COPR for May 2020 +====== + +![][1] + +COPR is a [collection][2] of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software. + +This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the [COPR User Documentation][3] for how to get started. + +### Ytop + +[Ytop][4] is a command-line system monitor similar to _htop_. The main difference between them is that _ytop_, on top of showing processes and their CPU and memory usage, shows graphs of system CPU, memory, and network usage over time. Additionally, _ytop_ shows disk usage and temperatures of the machine. Finally, _ytop_ supports multiple color schemes as well as an option to create new ones. + +![][5] + +#### Installation instructions + +The [repo][6] currently provides _ytop_ for Fedora 30, 31, 32, and Rawhide, as well as EPEL 7. To install _ytop_, use these commands [with _sudo_][7]: + +``` +sudo dnf copr enable atim/ytop +sudo dnf install ytop +``` + +### Ctop + +[Ctop][8] is yet another command-line system monitor. However, unlike _htop_ and _ytop_, _ctop_ focuses on showing resource usage of containers. _Ctop_ shows both an overview of CPU, memory, network and disk usage of all containers running on your machine, and more comprehensive information about a single container, including graphs of resource usage over time. Currently, _ctop_ has support for Docker and runc containers. + +![][9] + +#### Installation instructions + +The [repo][10] currently provides _ctop_ for Fedora 31, 32 and Rawhide, EPEL 7, as well as for other distributions. To install _ctop_, use these commands: + +``` +sudo dnf copr enable fuhrmann/ctop +sudo dnf install ctop +``` + +### Shortwave + +[Shortwave][11] is a program for listening to radio stations. Shortwave uses a community database of radio stations [www.radio-browser.info][12]. In this database, you can discover or search for radio stations, add them to your library, and listen to them. Additionally, Shortwave provides information about currently playing song and can record the songs as well. + +![][13] + +#### Installation instructions + +The [repo][14] currently provides Shortwave for Fedora 31, 32, and Rawhide. To install Shortwave, use these commands: + +``` +sudo dnf copr enable atim/shortwave +sudo dnf install shortwave +``` + +### Setzer + +[Setzer][15] is a LaTeX editor that can build pdf documents and view them as well. It provides templates for various types of documents, such as articles or presentation slides. Additionally, Setzer has buttons for a lot of special symbols, math symbols and greek letters. + +![][16] + +#### Installation instructions + +The [repo][17] currently provides Setzer for Fedora 30, 31, 32, and Rawhide. 
To install Setzer, use these commands: + +``` +sudo dnf copr enable lyessaadi/setzer +sudo dnf install setzer +``` + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/4-cool-new-projects-to-try-in-copr-for-april-2020/ + +作者:[Dominik Turecek][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/dturecek/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg +[2]: https://copr.fedorainfracloud.org/ +[3]: https://docs.pagure.org/copr.copr/user_documentation.html# +[4]: https://github.com/cjbassi/ytop +[5]: https://fedoramagazine.org/wp-content/uploads/2020/04/ytop.png +[6]: https://copr.fedorainfracloud.org/coprs/atim/ytop/ +[7]: https://fedoramagazine.org/howto-use-sudo/ +[8]: https://github.com/bcicen/ctop +[9]: https://fedoramagazine.org/wp-content/uploads/2020/04/ctop.png +[10]: https://copr.fedorainfracloud.org/coprs/fuhrmann/ctop/ +[11]: https://github.com/ranfdev/shortwave +[12]: http://www.radio-browser.info/gui/#!/ +[13]: https://fedoramagazine.org/wp-content/uploads/2020/04/shortwave.png +[14]: https://copr.fedorainfracloud.org/coprs/atim/shortwave/ +[15]: https://www.cvfosammmm.org/setzer/ +[16]: https://fedoramagazine.org/wp-content/uploads/2020/04/setzer.png +[17]: https://copr.fedorainfracloud.org/coprs/lyessaadi/setzer/ From 5b41bb1e3dff2f6762798b165d21cdb5f306f732 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 5 May 2020 00:54:10 +0800 Subject: [PATCH 125/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200505=20Browse?= =?UTF-8?q?=20the=20Peer-to-peer=20Web=20With=20Beaker=20Browser?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md --- ...he Peer-to-peer Web With Beaker Browser.md | 124 ++++++++++++++++++ 1 file changed, 124 insertions(+) create mode 100644 sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md diff --git a/sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md b/sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md new file mode 100644 index 0000000000..82129f00a5 --- /dev/null +++ b/sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md @@ -0,0 +1,124 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Browse the Peer-to-peer Web With Beaker Browser) +[#]: via: (https://itsfoss.com/beaker-browser/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Browse the Peer-to-peer Web With Beaker Browser +====== + +The Internet as we know it has existed unchanged (more or less) for the last 50 years. People across the globe use their devices to retrieve data from huge servers dotted around the world. + +A group of dedicated technologists wants to change that to make the internet a place where people can connect and share information directly instead of relying on a central server (decentralization). + +There are a bunch of such decentralized services that we have already covered on It’s FOSS. [LBRY as YouTube alternative][1], [Mastodon as Twitter alternative][2] are just a couple of such examples. 

And today I am going to cover another such product called [Beaker Browser][3], which is essentially for browsing the peer-to-peer web.

![Beaker Browser][4]

### What is the ‘peer-to-peer Web’?

According to [one of the devs][5] behind the Beaker browser, “The P2P Web is an experimental set of technologies…to give users more control over the Web.”

Further, they say that the peer-to-peer Web has three main principles: anybody can be a server; multiple computers can serve the same site; there is no back end.

As you can see from those principles, the idea of the peer-to-peer Web is very similar to BitTorrent, where files are seeded by multiple peers and those peers share the bandwidth load. This reduces the overall bandwidth that a person needs to provide for their site.

![Beaker Browser Settings][6]

The other major part of the peer-to-peer Web is creator control of their ideas. In this day and age, platforms are controlled by large corporations that try to use your data for their benefit. Beaker returns control to the content creators.

### Browsing the decentralized web with Beaker

The [Beaker Browser][3] first came into existence in 2016. The project (and the technology that surrounds it) was created by a team of three at [Blue Link Labs][7]. The Beaker Browser uses the [Dat protocol][8] to share data between computers. All websites that use the Dat protocol start with `dat://` instead of `http://`.

The strengths of the Dat protocol are:

  * Fast – Archives sync from multiple sources at once.
  * Secure – All updates are signed and integrity-checked.
  * Resilient – Archives can change hosts without changing their URLs.
  * Versioned – Changes are written to an append-only version log.
  * Decentralized – Any device can host any archive.


![Beaker Browser Seeding][9]

The Beaker Browser is essentially a cut-down version of Chromium with built-in support for `dat://` addresses. It can still visit regular `http://` sites.

Each time you visit a dat site, the content for that site is downloaded to your computer as you request it. For example, a picture of Linus Torvalds on the about page of a site is not downloaded until you navigate to that page.

Also, once you visit a dat website, “[you temporarily][10] re-upload or seed whichever files you’ve downloaded from the website.” You can also choose to seed the website to help its creator.

![Beaker Browser Menu][11]

Since the whole idea of Beaker is to create a more open web, you can easily view the source of any website. Unlike most browsers, where you just see the source code of the current page you are viewing, Beaker shows you the entire structure of the site in a GitHub-like view. You can even fork the site and host your own version of it.

Besides visiting dat-based websites, you can also create your own site. In the Beaker Browser menu, there is an option to create a new website or an empty project. If you select the option to create a new website, Beaker will build a little demo site that you can edit with the browser’s built-in editor.

However, if you are like me and prefer to use Markdown, you can choose to create an empty project. Beaker will create the structure of a site and assign it a `dat://` address. Create an `index.md` file and you are good to go. There is a [short tutorial][12] with more info. You can also use the create empty project option to build a web app.
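
To give a rough idea of what that looks like, a starter `index.md` for such an empty project can be as small as the following. Only the file name follows the convention mentioned above; the content is just an invented placeholder:

```
# My peer-to-peer site

This page is served straight from my own computer over dat://, with no hosting provider involved.
```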

![Beaker Browser Website Template][13]

Since Beaker acts as a web server and site seeder, any time you close it or turn off your computer, your site will become unavailable. Thankfully, you don’t have to run your computer or the browser constantly. You can also use a seeding service named [Hashbase][14] or you can set up a [`homebase`][15] seeding server.

Beaker is [available][16] for Linux, Windows, and macOS. If you do start playing around with Beaker, be sure to take a quick look at [their guides][17].

### Beaker Browser is not for everyone but it has a purpose

When I first got this assignment, I had high hopes for the Beaker Browser. As it stands now, it’s still very experimental. A number of the dat sites that I tried to visit were unavailable because the user was not seeding their site. Beaker does have an option to notify you when that site is back online.

![Beaker Browser No Peer][18]

Another problem is that Beaker is a really stripped-down version of Chromium. There is no option to install extensions or themes. Instead, you are stuck with a white theme and a very limited toolset. I would not use this as my main browser, and having access to the world of dat websites is not enough of a reason to keep it installed on my system.

I looked to see if there is an extension for Firefox that would add support for the `dat://` protocol. I did find such an extension, but it also required the installation of a couple of other pieces of software. It’s just easier to install Beaker.

As it stands now, Beaker is not for me. Maybe in the future, more people will start using Beaker or the dat protocol will gain support from other browsers. Then it might be interesting. Right now, it’s kinda empty.

As part of my time with Beaker, I created a [website][19] using the built-in tools. Don’t worry, I made sure that it’s seeded.

![Beaker Browser Site Source][20]

What are your thoughts on the Beaker Browser? What are your thoughts on the peer-to-peer web? Please let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News, or [Reddit][21].
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/beaker-browser/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/lbry/ +[2]: https://itsfoss.com/mastodon-open-source-alternative-twitter/ +[3]: https://beakerbrowser.com/ +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser.jpg?resize=800%2C426&ssl=1 +[5]: https://pfrazee.hashbase.io/blog/what-is-the-p2p-web +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-bowser-setting.jpg?resize=800%2C573&ssl=1 +[7]: https://bluelinklabs.com/ +[8]: https://www.datprotocol.com/ +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-bowser-seedding.jpg?resize=800%2C466&ssl=1 +[10]: https://beakerbrowser.com/docs/faq/ +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser-menu.jpg?ssl=1 +[12]: https://beakerbrowser.com/docs/guides/create-a-markdown-site +[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser-website-template.jpg?resize=800%2C459&ssl=1 +[14]: https://hashbase.io/ +[15]: https://github.com/beakerbrowser/homebase +[16]: https://beakerbrowser.com/install/ +[17]: https://beakerbrowser.com/docs/guides/ +[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser-no-peer.jpg?resize=800%2C424&ssl=1 +[19]: https://41bfbd06731e8d9c5d5676e8145069c69b254e7a3b710ddda4f6e9804529690c/ +[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-bowser-source.jpg?resize=800%2C544&ssl=1 +[21]: https://reddit.com/r/linuxusersgroup From dda8976c94bbbf30066d4b8e6e720802a1e165fb Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 5 May 2020 00:55:05 +0800 Subject: [PATCH 126/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200504=20Unders?= =?UTF-8?q?tanding=20systemd=20at=20startup=20on=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200504 Understanding systemd at startup on Linux.md --- ...derstanding systemd at startup on Linux.md | 445 ++++++++++++++++++ 1 file changed, 445 insertions(+) create mode 100644 sources/tech/20200504 Understanding systemd at startup on Linux.md diff --git a/sources/tech/20200504 Understanding systemd at startup on Linux.md b/sources/tech/20200504 Understanding systemd at startup on Linux.md new file mode 100644 index 0000000000..2d0a5ef7b6 --- /dev/null +++ b/sources/tech/20200504 Understanding systemd at startup on Linux.md @@ -0,0 +1,445 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Understanding systemd at startup on Linux) +[#]: via: (https://opensource.com/article/20/5/systemd-startup) +[#]: author: (David Both https://opensource.com/users/dboth) + +Understanding systemd at startup on Linux +====== +systemd's startup provides important clues to help you solve problems +when they occur. +![People at the start line of a race][1] + +In [_Learning to love systemd_][2], the first article in this series, I looked at systemd's functions and architecture and the controversy around its role as a replacement for the old SystemV init program and startup scripts. 
In this second article, I'll start exploring the files and tools that manage the Linux startup sequence. I'll explain the systemd startup sequence, how to change the default startup target (runlevel in SystemV terms), and how to manually switch to a different target without going through a reboot. + +I'll also look at two important systemd tools. The first is the **systemctl** command, which is the primary means of interacting with and sending commands to systemd. The second is **journalctl**, which provides access to the systemd journals that contain huge amounts of system history data such as kernel and service messages (both informational and error messages). + +Be sure to use a non-production system for testing and experimentation in this and future articles. Your test system needs to have a GUI desktop (such as Xfce, LXDE, Gnome, KDE, or another) installed. + +I wrote in my previous article that I planned to look at creating a systemd unit and adding it to the startup sequence in this article. Because this article became longer than I anticipated, I will hold that for the next article in this series. + +### Exploring Linux startup with systemd + +Before you can observe the startup sequence, you need to do a couple of things to make the boot and startup sequences open and visible. Normally, most distributions use a startup animation or splash screen to hide the detailed messages that would otherwise be displayed during a Linux host's startup and shutdown. This is called the Plymouth boot screen on Red Hat-based distros. Those hidden messages can provide a great deal of information about startup and shutdown to a sysadmin looking for information to troubleshoot a bug or to just learn about the startup sequence. You can change this using the GRUB (Grand Unified Boot Loader) configuration. + +The main GRUB configuration file is **/boot/grub2/grub.cfg**, but, because this file can be overwritten when the kernel version is updated, you do not want to change it. Instead, modify the **/etc/default/grub** file, which is used to modify the default settings of **grub.cfg**. + +Start by looking at the current, unmodified version of the **/etc/default/grub** file: + + +``` +[root@testvm1 ~]# cd /etc/default ; cat grub +GRUB_TIMEOUT=5 +GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)" +GRUB_DEFAULT=saved +GRUB_DISABLE_SUBMENU=true +GRUB_TERMINAL_OUTPUT="console" +GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_testvm1-swap rd.lvm. +lv=fedora_testvm1/root rd.lvm.lv=fedora_testvm1/swap rd.lvm.lv=fedora_ +testvm1/usr rhgb quiet" +GRUB_DISABLE_RECOVERY="true" +[root@testvm1 default]# +``` + +Chapter 6 of the [GRUB documentation][3] contains a list of all the possible entries in the **/etc/default/grub** file, but I focus on the following: + + * I change **GRUB_TIMEOUT**, the number of seconds for the GRUB menu countdown, from five to 10 to give a bit more time to respond to the GRUB menu before the countdown hits zero. + * I delete the last two parameters on **GRUB_CMDLINE_LINUX**, which lists the command-line parameters that are passed to the kernel at boot time. One of these parameters, **rhgb** stands for Red Hat Graphical Boot, and it displays the little Fedora icon animation during the kernel initialization instead of showing boot-time messages. The other, the **quiet** parameter, prevents displaying the startup messages that document the progress of the startup and any errors that occur. I delete both **rhgb** and **quiet** because sysadmins need to see these messages. 
If something goes wrong during boot, the messages displayed on the screen can point to the cause of the problem. + + + +After you make these changes, your GRUB file will look like: + + +``` +[root@testvm1 default]# cat grub +GRUB_TIMEOUT=10 +GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)" +GRUB_DEFAULT=saved +GRUB_DISABLE_SUBMENU=true +GRUB_TERMINAL_OUTPUT="console" +GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_testvm1-swap rd.lvm. +lv=fedora_testvm1/root rd.lvm.lv=fedora_testvm1/swap rd.lvm.lv=fedora_ +testvm1/usr" +GRUB_DISABLE_RECOVERY="false" +[root@testvm1 default]# +``` + +The **grub2-mkconfig** program generates the **grub.cfg** configuration file using the contents of the **/etc/default/grub** file to modify some of the default GRUB settings. The **grub2-mkconfig** program sends its output to **STDOUT**. It has a **-o** option that allows you to specify a file to send the datastream to, but it is just as easy to use redirection. Run the following command to update the **/boot/grub2/grub.cfg** configuration file: + + +``` +[root@testvm1 grub2]# grub2-mkconfig > /boot/grub2/grub.cfg +Generating grub configuration file ... +Found linux image: /boot/vmlinuz-4.18.9-200.fc28.x86_64 +Found initrd image: /boot/initramfs-4.18.9-200.fc28.x86_64.img +Found linux image: /boot/vmlinuz-4.17.14-202.fc28.x86_64 +Found initrd image: /boot/initramfs-4.17.14-202.fc28.x86_64.img +Found linux image: /boot/vmlinuz-4.16.3-301.fc28.x86_64 +Found initrd image: /boot/initramfs-4.16.3-301.fc28.x86_64.img +Found linux image: /boot/vmlinuz-0-rescue-7f12524278bd40e9b10a085bc82dc504 +Found initrd image: /boot/initramfs-0-rescue-7f12524278bd40e9b10a085bc82dc504.img +done +[root@testvm1 grub2]# +``` + +Reboot your test system to view the startup messages that would otherwise be hidden behind the Plymouth boot animation. But what if you need to view the startup messages and have not disabled the Plymouth boot animation? Or you have, but the messages stream by too fast to read? (Which they do.) + +There are a couple of options, and both involve log files and systemd journals—which are your friends. You can use the **less** command to view the contents of the **/var/log/messages** file. This file contains boot and startup messages as well as messages generated by the operating system during normal operation. You can also use the **journalctl** command without any options to view the systemd journal, which contains essentially the same information: + + +``` +[root@testvm1 grub2]# journalctl +\-- Logs begin at Sat 2020-01-11 21:48:08 EST, end at Fri 2020-04-03 08:54:30 EDT. -- +Jan 11 21:48:08 f31vm.both.org kernel: Linux version 5.3.7-301.fc31.x86_64 ([mockbuild@bkernel03.phx2.fedoraproject.org][4]) (gcc version 9.2.1 20190827 (Red Hat 9.2.1-1) (GCC)) #1 SMP Mon Oct > +Jan 11 21:48:08 f31vm.both.org kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.3.7-301.fc31.x86_64 root=/dev/mapper/VG01-root ro resume=/dev/mapper/VG01-swap rd.lvm.lv=VG01/root rd> +Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' +Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' +Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' +Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256 +Jan 11 21:48:08 f31vm.both.org kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. 
+Jan 11 21:48:08 f31vm.both.org kernel: BIOS-provided physical RAM map: +Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable +Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved +Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved +Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000dffeffff] usable +Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000dfff0000-0x00000000dfffffff] ACPI data +Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved +Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved +Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved +Jan 11 21:48:08 f31vm.both.org kernel: BIOS-e820: [mem 0x0000000100000000-0x000000041fffffff] usable +Jan 11 21:48:08 f31vm.both.org kernel: NX (Execute Disable) protection: active +Jan 11 21:48:08 f31vm.both.org kernel: SMBIOS 2.5 present. +Jan 11 21:48:08 f31vm.both.org kernel: DMI: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006 +Jan 11 21:48:08 f31vm.both.org kernel: Hypervisor detected: KVM +Jan 11 21:48:08 f31vm.both.org kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 +Jan 11 21:48:08 f31vm.both.org kernel: kvm-clock: cpu 0, msr 30ae01001, primary cpu clock +Jan 11 21:48:08 f31vm.both.org kernel: kvm-clock: using sched offset of 8250734066 cycles +Jan 11 21:48:08 f31vm.both.org kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns +Jan 11 21:48:08 f31vm.both.org kernel: tsc: Detected 2807.992 MHz processor +Jan 11 21:48:08 f31vm.both.org kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved +Jan 11 21:48:08 f31vm.both.org kernel: e820: remove [mem 0x000a0000-0x000fffff] usable +<snip> +``` + +I truncated this datastream because it can be hundreds of thousands or even millions of lines long. (The journal listing on my primary workstation is 1,188,482 lines long.) Be sure to try this on your test system. If it has been running for some time—even if it has been rebooted many times—huge amounts of data will be displayed. Explore this journal data because it contains a lot of information that can be very useful when doing problem determination. Knowing what this data looks like for a normal boot and startup can help you locate problems when they occur. + +I will discuss systemd journals, the **journalctl** command, and how to sort through all of that data to find what you want in more detail in a future article in this series. + +After GRUB loads the kernel into memory, it must first extract itself from the compressed version of the file before it can perform any useful work. After the kernel has extracted itself and started running, it loads systemd and turns control over to it. + +This is the end of the boot process. At this point, the Linux kernel and systemd are running but unable to perform any productive tasks for the end user because nothing else is running, there's no shell to provide a command line, no background processes to manage the network or other communication links, and nothing that enables the computer to perform any productive function. + +Systemd can now load the functional units required to bring the system up to a selected target run state. 
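
Before digging into targets, here is one practical aside about the journal output shown above: you do not have to page through the entire journal to find boot-related messages. The commands below are standard journalctl filters, shown only as a quick illustration; I will cover the journal in much more detail later in this series, and the output will of course differ on your own host:


```
[root@testvm1 ~]# journalctl --list-boots     # one line per boot recorded in the journal
[root@testvm1 ~]# journalctl -b               # messages from the current boot only
[root@testvm1 ~]# journalctl -b -p err        # current boot, errors and anything more severe
```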
+ +### Targets + +A systemd target represents a Linux system's current or desired run state. Much like SystemV start scripts, targets define the services that must be present for the system to run and be active in that state. Figure 1 shows the possible run-state targets of a Linux system using systemd. As seen in the first article of this series and in the systemd bootup man page (man bootup), there are other intermediate targets that are required to enable various necessary services. These can include **swap.target**, **timers.target**, **local-fs.target**, and more. Some targets (like **basic.target**) are used as checkpoints to ensure that all the required services are up and running before moving on to the next-higher level target. + +Unless otherwise changed at boot time in the GRUB menu, systemd always starts the **default.target**. The **default.target** file is a symbolic link to the true target file. For a desktop workstation, this is typically going to be the **graphical.target**, which is equivalent to runlevel 5 in SystemV. For a server, the default is more likely to be the **multi-user.target**, which is like runlevel 3 in SystemV. The **emergency.target** file is similar to single-user mode. Targets and services are systemd units. + +The following table, which I included in the previous article in this series, compares the systemd targets with the old SystemV startup runlevels. The systemd target aliases are provided by systemd for backward compatibility. The target aliases allow scripts—and sysadmins—to use SystemV commands like **init 3** to change runlevels. Of course, the SystemV commands are forwarded to systemd for interpretation and execution. + +**systemd targets** | **SystemV runlevel** | **target aliases** | **Description** +---|---|---|--- +default.target | | | This target is always aliased with a symbolic link to either **multi-user.target** or **graphical.target**. systemd always uses the **default.target** to start the system. The **default.target** should never be aliased to **halt.target**, **poweroff.target**, or **reboot.target**. +graphical.target | 5 | runlevel5.target | **Multi-user.target** with a GUI +| 4 | runlevel4.target | Unused. Runlevel 4 was identical to runlevel 3 in the SystemV world. This target could be created and customized to start local services without changing the default **multi-user.target**. +multi-user.target | 3 | runlevel3.target | All services running, but command-line interface (CLI) only +| 2 | runlevel2.target | Multi-user, without NFS, but all other non-GUI services running +rescue.target | 1 | runlevel1.target | A basic system, including mounting the filesystems with only the most basic services running and a rescue shell on the main console +emergency.target | S | | Single-user mode—no services are running; filesystems are not mounted. This is the most basic level of operation with only an emergency shell running on the main console for the user to interact with the system. +halt.target | | | Halts the system without powering it down +reboot.target | 6 | runlevel6.target | Reboot +poweroff.target | 0 | runlevel0.target | Halts the system and turns the power off + +Each target has a set of dependencies described in its configuration file. systemd starts the required dependencies, which are the services required to run the Linux host at a specific level of functionality. When all of the dependencies listed in the target configuration files are loaded and running, the system is running at that target level. 
If you want, you can review the systemd startup sequence and runtime targets in the first article in this series, [_Learning to love systemd_][2]. + +### Exploring the current target + +Many Linux distributions default to installing a GUI desktop interface so that the installed systems can be used as workstations. I always install from a Fedora Live boot USB drive with an Xfce or LXDE desktop. Even when I'm installing a server or other infrastructure type of host (such as the ones I use for routers and firewalls), I use one of these installations that installs a GUI desktop. + +I could install a server without a desktop (and that would be typical for data centers), but that does not meet my needs. It is not that I need the GUI desktop itself, but the LXDE installation includes many of the other tools I use that are not in a default server installation. This means less work for me after the initial installation. + +But just because I have a GUI desktop does not mean it makes sense to use it. I have a 16-port KVM that I can use to access the KVM interfaces of most of my Linux systems, but the vast majority of my interaction with them is via a remote SSH connection from my primary workstation. This way is more secure and uses fewer system resources to run **multi-user.target** compared to **graphical.target.** + +To begin, check the default target to verify that it is the **graphical.target**: + + +``` +[root@testvm1 ~]# systemctl get-default +graphical.target +[root@testvm1 ~]# +``` + +Now verify the currently running target. It should be the same as the default target. You can still use the old method, which displays the old SystemV runlevels. Note that the previous runlevel is on the left; it is **N** (which means None), indicating that the runlevel has not changed since the host was booted. The number 5 indicates the current target, as defined in the old SystemV terminology: + + +``` +[root@testvm1 ~]# runlevel +N 5 +[root@testvm1 ~]# +``` + +Note that the runlevel man page indicates that runlevels are obsolete and provides a conversion table. + +You can also use the systemd method. 
There is no one-line answer here, but it does provide the answer in systemd terms: + + +``` +[root@testvm1 ~]# systemctl list-units --type target +UNIT                   LOAD   ACTIVE SUB    DESCRIPTION                 +basic.target           loaded active active Basic System               +cryptsetup.target      loaded active active Local Encrypted Volumes     +getty.target           loaded active active Login Prompts               +graphical.target       loaded active active Graphical Interface         +local-fs-pre.target    loaded active active Local File Systems (Pre)   +local-fs.target        loaded active active Local File Systems         +multi-user.target      loaded active active Multi-User System           +network-online.target  loaded active active Network is Online           +network.target         loaded active active Network                     +nfs-client.target      loaded active active NFS client services         +nss-user-lookup.target loaded active active User and Group Name Lookups +paths.target           loaded active active Paths                       +remote-fs-pre.target   loaded active active Remote File Systems (Pre)   +remote-fs.target       loaded active active Remote File Systems         +rpc_pipefs.target      loaded active active rpc_pipefs.target           +slices.target          loaded active active Slices                     +sockets.target         loaded active active Sockets                     +sshd-keygen.target     loaded active active sshd-keygen.target         +swap.target            loaded active active Swap                       +sysinit.target         loaded active active System Initialization       +timers.target          loaded active active Timers                     + +LOAD   = Reflects whether the unit definition was properly loaded. +ACTIVE = The high-level unit activation state, i.e. generalization of SUB. +SUB    = The low-level unit activation state, values depend on unit type. + +21 loaded units listed. Pass --all to see loaded but inactive units, too. +To show all installed unit files use 'systemctl list-unit-files'. +``` + +This shows all of the currently loaded and active targets. You can also see the **graphical.target** and the **multi-user.target**. The **multi-user.target** is required before the **graphical.target** can be loaded. In this example, the **graphical.target** is active. + +### Switching to a different target + +Making the switch to the **multi-user.target** is easy: + + +``` +`[root@testvm1 ~]# systemctl isolate multi-user.target` +``` + +The display should now change from the GUI desktop or login screen to a virtual console. Log in and list the currently active systemd units to verify that **graphical.target** is no longer running: + + +``` +`[root@testvm1 ~]# systemctl list-units --type target` +``` + +Be sure to use the **runlevel** command to verify that it shows both previous and current "runlevels": + + +``` +[root@testvm1 ~]# runlevel +5 3 +``` + +### Changing the default target + +Now, change the default target to the **multi-user.target** so that it will always boot into the **multi-user.target** for a console command-line interface rather than a GUI desktop interface. As the root user on your test host, change to the directory where the systemd configuration is maintained and do a quick listing: + + +``` +[root@testvm1 ~]# cd /etc/systemd/system/ ; ll +drwxr-xr-x. 2 root root 4096 Apr 25  2018  basic.target.wants +<snip> +lrwxrwxrwx. 
1 root root   36 Aug 13 16:23  default.target -> /lib/systemd/system/graphical.target +lrwxrwxrwx. 1 root root   39 Apr 25  2018  display-manager.service -> /usr/lib/systemd/system/lightdm.service +drwxr-xr-x. 2 root root 4096 Apr 25  2018  getty.target.wants +drwxr-xr-x. 2 root root 4096 Aug 18 10:16  graphical.target.wants +drwxr-xr-x. 2 root root 4096 Apr 25  2018  local-fs.target.wants +drwxr-xr-x. 2 root root 4096 Oct 30 16:54  multi-user.target.wants +<snip> +[root@testvm1 system]# +``` + +I shortened this listing to highlight a few important things that will help explain how systemd manages the boot process. You should be able to see the entire list of directories and links on your virtual machine. + +The **default.target** entry is a symbolic link (symlink, soft link) to the directory **/lib/systemd/system/graphical.target**. List that directory to see what else is there: + + +``` +`[root@testvm1 system]# ll /lib/systemd/system/ | less` +``` + +You should see files, directories, and more links in this listing, but look specifically for **multi-user.target** and **graphical.target**. Now display the contents of **default.target**, which is a link to **/lib/systemd/system/graphical.target**: + + +``` +[root@testvm1 system]# cat default.target +#  SPDX-License-Identifier: LGPL-2.1+ +# +#  This file is part of systemd. +# +#  systemd is free software; you can redistribute it and/or modify it +#  under the terms of the GNU Lesser General Public License as published by +#  the Free Software Foundation; either version 2.1 of the License, or +#  (at your option) any later version. + +[Unit] +Description=Graphical Interface +Documentation=man:systemd.special(7) +Requires=multi-user.target +Wants=display-manager.service +Conflicts=rescue.service rescue.target +After=multi-user.target rescue.service rescue.target display-manager.service +AllowIsolate=yes +[root@testvm1 system]# +``` + +This link to the **graphical.target** file describes all of the prerequisites and requirements that the graphical user interface requires. I will explore at least some of these options in the next article in this series. + +To enable the host to boot to multi-user mode, you need to delete the existing link and create a new one that points to the correct target. Make the [PWD][5] **/etc/systemd/system**, if it is not already: + + +``` +[root@testvm1 system]# rm -f default.target +[root@testvm1 system]# ln -s /lib/systemd/system/multi-user.target default.target +``` + +List the **default.target** link to verify that it links to the correct file: + + +``` +[root@testvm1 system]# ll default.target +lrwxrwxrwx 1 root root 37 Nov 28 16:08 default.target -> /lib/systemd/system/multi-user.target +[root@testvm1 system]# +``` + +If your link does not look exactly like this, delete it and try again. List the content of the **default.target** link: + + +``` +[root@testvm1 system]# cat default.target +#  SPDX-License-Identifier: LGPL-2.1+ +# +#  This file is part of systemd. +# +#  systemd is free software; you can redistribute it and/or modify it +#  under the terms of the GNU Lesser General Public License as published by +#  the Free Software Foundation; either version 2.1 of the License, or +#  (at your option) any later version. 
+ +[Unit] +Description=Multi-User System +Documentation=man:systemd.special(7) +Requires=basic.target +Conflicts=rescue.service rescue.target +After=basic.target rescue.service rescue.target +AllowIsolate=yes +[root@testvm1 system]# +``` + +The **default.target**—which is really a link to the **multi-user.target** at this point—now has different requirements in the **[Unit]** section. It does not require the graphical display manager. + +Reboot. Your virtual machine should boot to the console login for virtual console 1, which is identified on the display as tty1. Now that you know how to change the default target, change it back to the **graphical.target** using a command designed for the purpose. + +First, check the current default target: + + +``` +[root@testvm1 ~]# systemctl get-default +multi-user.target +[root@testvm1 ~]# systemctl set-default graphical.target +Removed /etc/systemd/system/default.target. +Created symlink /etc/systemd/system/default.target → /usr/lib/systemd/system/graphical.target. +[root@testvm1 ~]# +``` + +Enter the following command to go directly to the **graphical.target** and the display manager login page without having to reboot: + + +``` +`[root@testvm1 system]# systemctl isolate default.target` +``` + +I do not know why the term "isolate" was chosen for this sub-command by systemd's developers. My research indicates that it may refer to running the specified target but "isolating" and terminating all other targets that are not required to support the target. However, the effect is to switch targets from one run target to another—in this case, from the multi-user target to the graphical target. The command above is equivalent to the old init 5 command in SystemV start scripts and the init program. + +Log into the GUI desktop, and verify that it is working as it should. + +### Summing up + +This article explored the Linux systemd startup sequence and started to explore two important systemd tools, **systemctl** and **journalctl**. It also explained how to switch from one target to another and to change the default target. + +The next article in this series will create a new systemd unit and configure it to run during startup. It will also look at some of the configuration options that help determine where in the sequence a particular unit will start, for example, after networking is up and running. + +### Resources + +There is a great deal of information about systemd available on the internet, but much is terse, obtuse, or even misleading. In addition to the resources mentioned in this article, the following webpages offer more detailed and reliable information about systemd startup. + + * The Fedora Project has a good, practical [guide][6] [to systemd][6]. It has pretty much everything you need to know in order to configure, manage, and maintain a Fedora computer using systemd. + * The Fedora Project also has a good [cheat sheet][7] that cross-references the old SystemV commands to comparable systemd ones. + * For detailed technical information about systemd and the reasons for creating it, check out [Freedesktop.org][8]'s [description of systemd][9]. + * [Linux.com][10]'s "More systemd fun" offers more advanced systemd [information and tips][11]. + + + +There is also a series of deeply technical articles for Linux sysadmins by Lennart Poettering, the designer and primary developer of systemd. These articles were written between April 2010 and September 2011, but they are just as relevant now as they were then. 
Much of everything else good that has been written about systemd and its ecosystem is based on these papers. + + * [Rethinking PID 1][12] + * [systemd for Administrators, Part I][13] + * [systemd for Administrators, Part II][14] + * [systemd for Administrators, Part III][15] + * [systemd for Administrators, Part IV][16] + * [systemd for Administrators, Part V][17] + * [systemd for Administrators, Part VI][18] + * [systemd for Administrators, Part VII][19] + * [systemd for Administrators, Part VIII][20] + * [systemd for Administrators, Part IX][21] + * [systemd for Administrators, Part X][22] + * [systemd for Administrators, Part XI][23] + + + +Alison Chiaken, a Linux kernel and systems programmer at Mentor Graphics, offers a preview of her... + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/systemd-startup + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/start_line.jpg?itok=9reaaW6m (People at the start line of a race) +[2]: https://opensource.com/article/20/4/systemd +[3]: http://www.gnu.org/software/grub/manual/grub +[4]: mailto:mockbuild@bkernel03.phx2.fedoraproject.org +[5]: https://en.wikipedia.org/wiki/Pwd +[6]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html +[7]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet +[8]: http://Freedesktop.org +[9]: http://www.freedesktop.org/wiki/Software/systemd +[10]: http://Linux.com +[11]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/ +[12]: http://0pointer.de/blog/projects/systemd.html +[13]: http://0pointer.de/blog/projects/systemd-for-admins-1.html +[14]: http://0pointer.de/blog/projects/systemd-for-admins-2.html +[15]: http://0pointer.de/blog/projects/systemd-for-admins-3.html +[16]: http://0pointer.de/blog/projects/systemd-for-admins-4.html +[17]: http://0pointer.de/blog/projects/three-levels-of-off.html +[18]: http://0pointer.de/blog/projects/changing-roots +[19]: http://0pointer.de/blog/projects/blame-game.html +[20]: http://0pointer.de/blog/projects/the-new-configuration-files.html +[21]: http://0pointer.de/blog/projects/on-etc-sysinit.html +[22]: http://0pointer.de/blog/projects/instances.html +[23]: http://0pointer.de/blog/projects/inetd.html From af3e2a3b0f259fa36d13ba2590ceaf744aca4760 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 5 May 2020 00:56:36 +0800 Subject: [PATCH 127/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200504=20Define?= =?UTF-8?q?=20and=20optimize=20data=20partitions=20in=20Apache=20Cassandra?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200504 Define and optimize data partitions in Apache Cassandra.md --- ...ize data partitions in Apache Cassandra.md | 150 ++++++++++++++++++ 1 file changed, 150 insertions(+) create mode 100644 sources/tech/20200504 Define and optimize data partitions in Apache Cassandra.md diff --git a/sources/tech/20200504 Define and optimize data partitions in Apache Cassandra.md b/sources/tech/20200504 Define and optimize data partitions in Apache Cassandra.md new file mode 100644 index 0000000000..d28f0daee0 --- 
/dev/null
+++ b/sources/tech/20200504 Define and optimize data partitions in Apache Cassandra.md
@@ -0,0 +1,150 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Define and optimize data partitions in Apache Cassandra)
+[#]: via: (https://opensource.com/article/20/5/apache-cassandra)
+[#]: author: (Anil Inamdar https://opensource.com/users/anil-inamdar)
+
+Define and optimize data partitions in Apache Cassandra
+======
+Apache Cassandra is built for speed and scalability; here's how to get
+the most out of those benefits.
+![Person standing in front of a giant computer screen with numbers, data][1]
+
+Apache Cassandra is a database. But it's not just any database; it's a replicating database designed and tuned for scalability, high availability, low latency, and performance. Cassandra can help your data survive regional outages, hardware failure, and what many admins would consider excessive amounts of data.
+
+Having a thorough command of data partitions enables you to achieve superior Cassandra cluster design, performance, and scalability. In this article, I'll examine how to define partitions and how Cassandra uses them, as well as the most critical best practices and known issues you ought to be aware of.
+
+To set the scene: partitions are chunks of data that serve as the atomic unit for key database-related functions like data distribution, replication, and indexing. Distributed data systems commonly distribute incoming data into these partitions, performing the partitioning with simple mathematical functions such as identity or hashing, and using a "partition key" to group data by partition. For example, consider a case where server logs arrive as incoming data. Using the "identity" partitioning function and the timestamps of each log (rounded to the hour value) for the partition key, we can partition this data such that each partition holds one hour of the logs.
+
+### Data partitions in Cassandra
+
+Cassandra operates as a distributed system and adheres to the data partitioning principles described above. With Cassandra, data partitioning relies on an algorithm configured at the cluster level, and a partition key configured at the table level.
+
+![Cassandra data partition][2]
+
+Cassandra Query Language (CQL) uses the familiar SQL table, row, and column terminologies. In the example diagram above, the table configuration includes the partition key within its primary key, with the format: Primary Key = Partition Key + [Clustering Columns].
+
+A primary key in Cassandra represents both a unique data partition and a data arrangement inside a partition. Data arrangement information is provided by optional clustering columns. Each unique partition key represents a set of table rows managed in a server, as well as all servers that manage its replicas.
+
+### Defining primary keys in CQL
+
+The following four examples demonstrate how a primary key can be represented in CQL syntax. The sets of rows produced by these definitions are generally considered a partition.
+
+#### Definition 1 (partition key: log_hour, clustering columns: none)
+
+
+```
+CREATE TABLE server_logs(
+   log_hour TIMESTAMP PRIMARY KEY,
+   log_level text,
+   message text,
+   server text
+   )
+```
+
+Here, all rows that share a **log_hour** go into the same partition.
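+
+As a quick, hypothetical illustration of Definition 1 (the timestamp and message values below are invented), the write supplies the partition key, and a read that filters on the same **log_hour** is served from a single partition:
+
+
+```
+INSERT INTO server_logs (log_hour, log_level, message, server)
+VALUES ('2020-05-04 10:00:00+0000', 'ERROR', 'disk full', 'server_1');
+
+-- Single-partition read: the partition key is fully specified
+SELECT log_level, message, server
+FROM server_logs
+WHERE log_hour = '2020-05-04 10:00:00+0000';
+```
+
+Note that with only **log_hour** in the primary key, a second insert for the same hour simply overwrites the first row; the later definitions add clustering columns so that one partition can hold many ordered rows.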
+
+#### Definition 2 (partition key: log_hour, clustering columns: log_level)
+
+
+```
+CREATE TABLE server_logs(
+   log_hour TIMESTAMP,
+   log_level text,
+   message text,
+   server text,
+   PRIMARY KEY (log_hour, log_level)
+   )
+```
+
+This definition uses the same partition key as Definition 1, but here all rows in each partition are arranged in ascending order by **log_level**.
+
+#### Definition 3 (partition key: log_hour, server, clustering columns: none)
+
+
+```
+CREATE TABLE server_logs(
+   log_hour TIMESTAMP,
+   log_level text,
+   message text,
+   server text,
+   PRIMARY KEY ((log_hour, server))
+   )
+```
+
+In this definition, all rows share a **log_hour** for each distinct **server** as a single partition.
+
+#### Definition 4 (partition key: log_hour, server, clustering columns: log_level)
+
+
+```
+CREATE TABLE server_logs(
+   log_hour TIMESTAMP,
+   log_level text,
+   message text,
+   server text,
+   PRIMARY KEY ((log_hour, server), log_level)
+   ) WITH CLUSTERING ORDER BY (log_level DESC);
+```
+
+This definition uses the same partition as Definition 3 but arranges the rows within a partition in descending order by **log_level**.
+
+### How Cassandra uses the partition key
+
+Cassandra relies on the partition key to determine which node to store data on and where to locate data when it's needed. Cassandra performs these read and write operations by looking at a partition key in a table, and using tokens (a long value in the range -2^63 to +2^63-1) for data distribution and indexing. These tokens are mapped to partition keys by using a partitioner, which applies a partitioning function that converts any partition key to a token. Through this token mechanism, every node of a Cassandra cluster owns a set of data partitions. The partition key then enables data indexing on each node.
+
+![Cassandra cluster with 3 nodes and token-based ownership][3]
+
+A Cassandra cluster with three nodes and token-based ownership. This is a simplistic representation: the actual implementation uses [Vnodes][4].
+
+### Data partition impacts on Cassandra clusters
+
+Careful partition key design is crucial to achieving the ideal partition size for the use case. Getting it right allows for even data distribution and strong I/O performance. Partition size has several impacts on Cassandra clusters you need to be aware of:
+
+  * Read performance—In order to find partitions in SSTable files on disk, Cassandra uses data structures that include caches, indexes, and index summaries. Partitions that are too large reduce the efficiency of maintaining these data structures – and will negatively impact performance as a result. Cassandra releases have made strides in this area: in particular, version 3.6 and above of the Cassandra engine introduce storage improvements that deliver better performance for large partitions and resilience against memory issues and crashes.
+  * Memory usage—Large partitions place greater pressure on the JVM heap, increasing its size while also making the garbage collection mechanism less efficient.
+  * Cassandra repairs—Large partitions make it more difficult for Cassandra to perform its repair maintenance operations, which keep data consistent by comparing data across replicas.
+  * Tombstone eviction—Not as mean as it sounds, Cassandra uses unique markers known as "tombstones" to mark data for deletion. Large partitions can make that deletion process more difficult if there isn't an appropriate data deletion pattern and compaction strategy in place.
+
+
+
+While these impacts may make it tempting to simply design partition keys that yield especially small partitions, the data access pattern is also highly influential on ideal partition size (for more information, read this in-depth guide to [Cassandra data modeling][5]). The data access pattern can be defined as how a table is queried, including all of the table's **select** queries. Ideally, CQL select queries should have just one partition key in the **where** clause—that is to say, Cassandra is most efficient when queries can get needed data from a single partition, instead of many smaller ones.
+
+### Best practices for partition key design
+
+Following best practices for partition key design helps you get to an ideal partition size. As a rule of thumb, the maximum partition size in Cassandra should stay under 100MB. Ideally, it should be under 10MB. While Cassandra versions 3.6 and newer make larger partition sizes more viable, careful testing and benchmarking must be performed for each workload to ensure a partition key design supports desired cluster performance.
+
+Specifically, these best practices should be considered as part of any partition key design:
+
+  * The goal for a partition key must be to fit an ideal amount of data into each partition for supporting the needs of its access pattern.
+  * A partition key should disallow unbounded partitions: those that may grow indefinitely in size over time. For instance, in the **server_logs** examples above, using the server column as a partition key would create unbounded partitions as the number of server logs continues to increase. In contrast, using **log_hour** limits each partition to an hour of data.
+  * A partition key should also avoid creating a partition skew, in which partitions grow unevenly, and some are able to grow without limit over time. In the **server_logs** examples, using the server column in a scenario where one server generates considerably more logs than others would produce a partition skew. To avoid this, a useful technique is to introduce another attribute from the table to force an even distribution, even if it's necessary to create a dummy column to do so.
+  * It's helpful to partition time-series data with a partition key that uses a time element as well as other attributes. This protects against unbounded partitions, enables access patterns to use the time attribute in querying specific data, and allows for time-bound data deletion. The examples above each demonstrate this by using the **log_hour** time attribute.
+
+
+
+Several tools are available to help test, analyze, and monitor Cassandra partitions to check that a chosen schema is efficient and effective. By carefully designing partition keys to align well with the data and needs of the solution at hand, and following best practices to optimize partition size, you can utilize data partitions that more fully deliver on the scalability and performance potential of a Cassandra deployment.
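+
+For a running cluster, a couple of **nodetool** commands give a quick view of how partitions are actually behaving; the keyspace and table names below are placeholders for your own schema, the exact output fields vary by Cassandra version, and older releases expose the same information under the names **cfstats** and **cfhistograms**:
+
+
+```
+# Mean and maximum compacted partition size for one table
+nodetool tablestats logging.server_logs
+
+# Distribution (percentiles) of partition sizes and cell counts
+nodetool tablehistograms logging server_logs
+```
+
+Oversized or heavily skewed partitions usually show up in these reports long before they cause memory or repair problems.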
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/apache-cassandra + +作者:[Anil Inamdar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/anil-inamdar +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr (Person standing in front of a giant computer screen with numbers, data) +[2]: https://opensource.com/sites/default/files/uploads/apache_cassandra_1_0.png (Cassandra data partition) +[3]: https://opensource.com/sites/default/files/uploads/apache_cassandra_2_0.png (Cassandra cluster with 3 nodes and token-based ownership) +[4]: https://www.instaclustr.com/cassandra-vnodes-how-many-should-i-use/ +[5]: https://www.instaclustr.com/resource/6-step-guide-to-apache-cassandra-data-modelling-white-paper/ From 748a4355a789e45127ff4708f1461671719c19d6 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Tue, 5 May 2020 00:57:36 +0800 Subject: [PATCH 128/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200504=20Create?= =?UTF-8?q?=20interactive=20learning=20games=20for=20kids=20with=20open=20?= =?UTF-8?q?source?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200504 Create interactive learning games for kids with open source.md --- ...earning games for kids with open source.md | 123 ++++++++++++++++++ 1 file changed, 123 insertions(+) create mode 100644 sources/tech/20200504 Create interactive learning games for kids with open source.md diff --git a/sources/tech/20200504 Create interactive learning games for kids with open source.md b/sources/tech/20200504 Create interactive learning games for kids with open source.md new file mode 100644 index 0000000000..f6ade34857 --- /dev/null +++ b/sources/tech/20200504 Create interactive learning games for kids with open source.md @@ -0,0 +1,123 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create interactive learning games for kids with open source) +[#]: via: (https://opensource.com/article/20/5/jclic-games-kids) +[#]: author: (Peter Cheer https://opensource.com/users/petercheer) + +Create interactive learning games for kids with open source +====== +Help your students learn by creating fun puzzles and games in JClic, an +easy Java-based app. +![Family learning and reading together at night in a room][1] + +Schools are closed in many countries around the world to slow the spread of COVID-19. This has suddenly thrown many parents and teachers into homeschooling. Fortunately, there are plenty of educational resources on the internet to use or adapt, although their licenses vary. You can try searching for Creative Commons Open Educational Resources, but if you want to create your own materials, there are many options for that to. + +If you want to create digital educational activities with puzzles or tests, two easy-to-use, open source, cross-platform applications that fit the bill are eXeLearning and JClic. My earlier article on [eXeLearning][2] is a good introduction to that program, so here I'll look at [JClic][3]. 
It is an open source software project for creating various types of interactive activities such as associations, text-based activities, crosswords, and other puzzles with text, graphics, and multimedia elements. + +Although it's been around since the 1990s, JClic never developed a large user base in the English-speaking world. It was created in Catalonia by the [Catalan Educational Telematic Network][4] (XTEC). + +### About JClic + +JClic is a Java-based application that's available in many Linux repositories and can be downloaded from [GitHub][5]. It runs on Linux, macOS, and Windows, but because it is a Java program, you must have a Java runtime environment [installed][6]. + +The program's interface has not really changed much over the years, even while features have been added or dropped, such as introducing HTML5 export functionality to replace Java Applet technology for web-based deployment. It hasn't needed to change much, though, because it's very effective at what it does. + +### Creating a JClic project + +Many teachers from many countries have used JClic to create interactive materials for a wide variety of ability levels, subjects, languages, and curricula. Some of these materials have been collected in an [downloadable activities library][7]. Although few activities are in English, you can get a sense of the possibilities JClic offers. + +As JClic has a visual, point-and-click program interface, it is easy enough to learn that a new user can quickly concentrate on content creation. [Documentation][8] is available on GitHub. + +The screenshots below are from one of the JClic projects I created to teach basic Excel skills to learners in Papua New Guinea. + +A JClic project is created in its authoring tool and consists of the following four elements: + +#### 1\. Metadata about the project + +![JClic metadata][9] + +#### 2\. A library of the graphical and other resources it uses + +![JClic media][10] + +#### 3\. A series of one or more activities + +![JClic activities][11] + +JClic can produce seven different activity types: + + * Associations where the user discovers the relationships between two information sets + * Memory games where the user discovers pairs of identical elements or relations (which are hidden) between them + * Exploration activities involving the identification and information, based on a single Information set + * Puzzles where the user reconstructs information that is initially presented in a disordered form; the activity can include graphics, text, sound, or a combination of them + * Written-response activities that are solved by writing text, either a single word or a sentence + * Text activities that are based on words, phrases, letters, and paragraphs of text that need to be completed, understood, corrected, or ordered; these activities can contain images and windows with active content + * Word searches and crosswords + + + +Because of variants in the activities, there are 16 possible activity types. + +#### 4\. A timeline to sequence the activities + +![JClic timeline][12] + +### Using JClic content + +Projects can run in JClic's player (part of the Java application you used to create the project), or they can be exported to HTML5 so they can run in a web browser. + +The one thing I don't like about JClic is that its default HTML5 export function assumes you'll be online when running a project. 
If you want a project to work offline as needed, you must download a compiled and minified HTML5 player from [Github][13], and place it in the same folder as your JClic project. + +Next, open the **index.html** file in a text editor and replace this line: + + +``` +`` +``` + +With: + + +``` +`` +``` + +Now the HTML5 version of your project runs in a web browser, whether the user is online or not. + +JClic also provides a reports function that can store test scores in an ODBC-compliant database. I have not explored this feature, as my tests and puzzles are mostly used for self-assessment and to prompt reflection by the learner, rather than as part of a formal scheme, so the scores are not very important. If you would like to learn about it, there is [documentation][14] on running JClic Reports Server with Tomcat and MySQL (or [mariaDB][15]). + +### Conclusion + +JClic offers a wide range of activity types that provide plenty of room to be creative in designing content to fit your subject area and type of learner. JClic is a valuable addition for anyone who needs a quick and easy way to develop educational resources. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/jclic-games-kids + +作者:[Peter Cheer][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/petercheer +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/family_learning_kids_night_reading.png?itok=6K7sJVb1 (Family learning and reading together at night in a room) +[2]: https://opensource.com/article/18/5/exelearning +[3]: https://clic.xtec.cat/legacy/en/jclic/index.html +[4]: https://clic.xtec.cat/legacy/en/index.html +[5]: https://github.com/projectestac/jclic +[6]: https://adoptopenjdk.net/installation.html +[7]: https://clic.xtec.cat/repo/ +[8]: https://github.com/projectestac/jclic/wiki/JClic_Guide +[9]: https://opensource.com/sites/default/files/uploads/metadata.png (JClic metadata) +[10]: https://opensource.com/sites/default/files/uploads/media.png (JClic media) +[11]: https://opensource.com/sites/default/files/uploads/activities.png (JClic activities) +[12]: https://opensource.com/sites/default/files/uploads/sequence.png (JClic timeline) +[13]: http://projectestac.github.io/jclic.js/ +[14]: https://github.com/projectestac/jclic/wiki/Jclic-Reports-Server-with-Tomcat-and-MySQL-on-Ubuntu +[15]: https://mariadb.org/ From 10232b2ab1eeb386c76c8c827d1c5c133aa547bd Mon Sep 17 00:00:00 2001 From: "Xiaobin.Liu" Date: Tue, 5 May 2020 01:24:09 +0800 Subject: [PATCH 129/178] TSL --- ...nd JPG for your online images- Use WebP.md | 144 ------------------ ...nd JPG for your online images- Use WebP.md | 143 +++++++++++++++++ 2 files changed, 143 insertions(+), 144 deletions(-) delete mode 100644 sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md create mode 100644 translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md diff --git a/sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md b/sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md deleted file mode 100644 index f4e6711933..0000000000 --- a/sources/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md +++ /dev/null @@ -1,144 +0,0 @@ -[#]: collector: (lujun9972) 
-[#]: translator: (lxbwolf) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Drop PNG and JPG for your online images: Use WebP) -[#]: via: (https://opensource.com/article/20/4/webp-image-compression) -[#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas) - -Drop PNG and JPG for your online images: Use WebP -====== -Get started with this open source image editing tool to save time and -space. -![Painting art on a computer screen][1] - -WebP is an image format developed by Google in 2010 that provides superior lossless and lossy compression for images on the web. Using WebP, web developers can create smaller, richer images that improve site speed. A faster loading website is critical to the user experience and for the website's marketing effectiveness. - -For optimal loading across all devices and users, images on your site should not be larger than 500 KB in file size. - -WebP lossless images are often at least 25% smaller in size compared to PNGs. WebP lossy images are often anywhere from 25-34% smaller than comparable JPEG images at equivalent SSIM (structural similarity) quality index. - -Lossless WebP supports transparency, as well. For cases when lossy RGB compression is acceptable, lossy WebP also supports transparency, typically providing three times smaller file sizes compared to PNG. - -Google reports a 64% reduction in file size for images converted from animated GIFs to lossy WebP, and a 19% reduction when converted to lossless WebP. - -The WebP file format is based on the RIFF (resource interchange file format) document format. The file signature is **52 49 46 46** (RIFF), as you can see with [hexdump][2]: - - -``` -$ hexdump --canonical pixel.webp -00000000  52 49 46 46 26 00 00 00  [...]  |RIFF&...WEBPVP8 | -00000010  1a 00 00 00 30 01 00 9d  [...]  |....0....*......| -00000020  0e 25 a4 00 03 70 00 fe  [...]  |.%...p...`....| -0000002e -``` - -The standalone libwebp library serves as a reference implementation for the WebP specification and is available from Google's [Git repository][3] or as a tarball. - -The WebP format is compatible with 80% of the web browsers in use worldwide. At the time of this writing, it is not compatible with Apple's Safari browser. The workaround for this is to serve up a JPG/PNG alongside a WebP, and there are methods and Wordpress plugins to do that. - -### Why does this matter? - -Part of my job is to design and maintain our organization's website. Since the website is a marketing tool and site speed is a critical aspect of the user experience, I have been working to improve the speed, and reducing image sizes by converting them to WebP has been a good solution. - -To test the speed of one of the pages, I turned to **web.dev**, which is powered by Lighthouse, released under the Apache 2.0 license, and can be found at . - -According to its official description, "Lighthouse is an open source, automated tool for improving the quality of web pages. You can run it against any web page—public or requiring authentication. It has audits for performance, accessibility, progressive web apps, SEO, and more. You can run Lighthouse in Chrome DevTools, from the command line, or as a Node module. You give Lighthouse a URL to audit, it runs a series of audits against the page, and then it generates a report on how well the page did. From there, use the failing audits as indicators on how to improve the page. Each audit has a reference doc explaining why the audit is important, as well as how to fix it." 
- -### Creating a smaller WebP image - -The page I tested returned three images. In the report it generates, it provides recommendations and targets. I chose the "app-graphic" image, which, it reported, is 650 KB. By converting it to WebP, I should save 589 KB, reducing the image to 61 KB. I converted the image in Photoshop and saved it with the default WebP settings, and it returned a file size of 44.9 KB. Better than expected! As the screenshot from Photoshop shows, the images look identical in visual quality. - -![WebP vs JPG comparison][4] - -On the left: 650 KB (actual size). On the right: 589 KB (target size after conversion). - -Of course, the open source image editor [GIMP][5] also supports WebP as an export format. It offers several options for quality and compression profile: - -![GIMP dialog for exporting webp, as a webp][6] - -A zoomed-in look of another image: - -![WebP vs PNG comparison][7] - -PNG (left) and WebP (right), both converted from a JPG, shows the WebP, although smaller in size, is superior in visual quality. - -### Convert to an image to WebP - -To convert images on Linux from JPG/PNG to WebP, you can also use the command-line: - -Use **cwebp** on the command line to convert PNG or JPG image files to WebP format. You can convert a PNG image file to a WebP image with a quality range of 80 with the command: - - -``` -`cwebp -q 80 image.png -o image.webp` -``` - -Alternatively, you can also use [Image Magick][8], which is probably available in your distribution's software repository. The subcommand for conversion is **convert**, and all that's needed is an input and output file: - - -``` -`convert pixel.png pixel.webp` -``` - -### Convert an image to WebP with an editor - -To convert images to WebP with a photo editor, use [GIMP][9]. From version 2.10 on, it supports WebP natively. - -If you're a Photoshop user, you need a plugin to convert the files, as Photoshop does not include it natively. WebPShop 0.2.1, released under the Apache License 2.0 license, is a Photoshop module for opening and saving WebP images, including animations, and can be found at: . - -To use the plugin, put the file found in the **bin** folder inside your Photoshop plugin directory: - -Windows x64—C:\Program Files\Adobe\Adobe Photoshop\Plug-ins\WebPShop.8bi - -Mac—Applications/Adobe Photoshop/Plug-ins/WebPShop.plugin - -### WebP on Wordpress - -Many websites are built using Wordpress (that's what I use). So, how does Wordpress handle uploading WebP images? At the time of this writing, it doesn't. But, there are, of course, plugins to enable it so you can serve up both WebP alongside PNG/JPG images (for the Apple crowd). - -Or there are these [instructions][10] from [Marius Hosting][11]: - -"How about directly uploading WebP images to Wordpress? This is easy. Just add some text line on your theme functions.php file. Wordpress does not natively support viewing and uploading WebP files, but I will explain to you how you can make it work in a few simple steps. Log in to your Wordpress admin area and go to Appearance/Theme Editor and find functions.php. Copy and paste the code below at the end of the file and save it.  - - -``` -`//** *Enable upload for webp image files.*/ function webp_upload_mimes($existing_mimes) { $existing_mimes['webp'] = 'image/webp'; return $existing_mimes; } add_filter('mime_types', 'webp_upload_mimes');` -``` - -If you want to see the thumbnail image preview when you go to Media/Library, you have to add the code below in the same functions.php file. 
To find the functions.php file, go to Appearance/Theme Editor and find functions.php, then copy and paste the code below at the end of the file and save it." - - -``` -`//** * Enable preview / thumbnail for webp image files.*/ function webp_is_displayable($result, $path) { if ($result === false) { $displayable_image_types = array( IMAGETYPE_WEBP ); $info = @getimagesize( $path ); if (empty($info)) { $result = false; } elseif (!in_array($info[2], $displayable_image_types)) { $result = false; } else { $result = true; } } return $result; } add_filter('file_is_displayable_image', 'webp_is_displayable', 10, 2);` -``` - -### WebP and the future - -WebP is a robust and optimized format. It looks better, it has better compression ratio, and it has all the features of most other common image formats. There's no need to wait—start using it now. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/4/webp-image-compression - -作者:[Jeff Macharyas][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jeffmacharyas -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen) -[2]: https://opensource.com/article/19/8/dig-binary-files-hexdump -[3]: https://storage.googleapis.com/downloads.webmproject.org/releases/webp/index.html -[4]: https://opensource.com/sites/default/files/uploads/webp-vs-jpg-app-graphic.png (WebP vs JPG comparison) -[5]: http://gimp.org -[6]: https://opensource.com/sites/default/files/webp-gimp.webp (GIMP dialog for exporting webp, as a webp) -[7]: https://opensource.com/sites/default/files/uploads/xcompare-png-left-webp-right.png (WebP vs PNG comparison) -[8]: https://imagemagick.org -[9]: https://en.wikipedia.org/wiki/GIMP -[10]: https://mariushosting.com/how-to-upload-webp-files-on-wordpress/ -[11]: https://mariushosting.com/ diff --git a/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md b/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md new file mode 100644 index 0000000000..3186cf5b00 --- /dev/null +++ b/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md @@ -0,0 +1,143 @@ +[#]: collector: "lujun9972" +[#]: translator: "lxbwolf" +[#]: reviewer: " " +[#]: publisher: " " +[#]: url: " " +[#]: subject: "Drop PNG and JPG for your online images: Use WebP" +[#]: via: "https://opensource.com/article/20/4/webp-image-compression" +[#]: author: "Jeff Macharyas https://opensource.com/users/jeffmacharyas" + +线上图片请抛弃 PNG 和 JPG 后缀:使用 WebP +====== +了解一下这个开源的图片编辑工具来节省时间和空间。 +![Painting art on a computer screen][1] + +WebP 是 2010 年 Google 开发的一种图片格式,能提供对网络图片的无损压缩和有损压缩。网络开发者们可以使用 WebP 来创建更小更丰富的图片,以此来提高网站的速度。更快的加载速度对于网站的用户体验和网站市场的效能是至关重要的。 + +为了提供领先于所有的设备和用户的图片加载能力,你网站上的图片文件大小不应该超过 500 KB。 + +WebP 无损图片通常比 PNG 图片文件小至少 25%。在相同的 SSIM(structural similarity,结构相似性)质量指标下,WebP 有损图片通常比 JPEG 图片小 25% 到 34%。 + +无损 WebP 也支持透明度。在有损 RGB 压缩可接受的情况下,有损 WebP 也支持透明度,PNG 文件的大小通常为 WebP 文件大小的四倍。 + +Google 报告把动图 GIF 文件转换为有损 WebP后文件大小减少了 64%,转换为无损 WebP 后文件大小减少了 19%。 + +WebP 文件格式是一种基于 RIFF(resource interchange file format,资源交换文件格式)的文档格式。你可以用 [hexdump][2] 看到文件的签名是 **52 49 46 46** (RIFF): + + +``` +$ hexdump --canonical pixel.webp +00000000  
52 49 46 46 26 00 00 00  [...]  |RIFF&...WEBPVP8 | +00000010  1a 00 00 00 30 01 00 9d  [...]  |....0....*......| +00000020  0e 25 a4 00 03 70 00 fe  [...]  |.%...p...`....| +0000002e +``` + +独立的 libwebp 库以 WebP 技术规范的引用实现的方式提供服务,可以从 Google 的 [Git 仓库][3] 或下载 tar 包获得。 + +全球在用的 80% 的 web 浏览器兼容 WebP 格式。本文撰写时,Apple 的 Safari 浏览器还不兼容。对于不兼容的情况,应变方法是在 WebP 图片旁边准备一张 JPG/PNG 图片,我们有很多种方法和 Wordpress 插件来实现。 + +### 为什么这很重要? + +我的部分工作是设计和维护我们组织的网站。由于网站是个市场工具并且网站速度是衡量用户体验的重要指标,我一直致力于提高网站速度,通过把图片转换为 WebP 来减少图片大小是一个有效的解决方案。 + +我使用了 **web.dev** 来检测其中一个网页,该工具是由 Lighthouse 提供服务的,遵循 Apache 2.0 证书,可以在 找到。 + +根据官方描述,”LIghthouse 是一个开源的,旨在提升网页质量的自动化工具。你可以在任何网页上运行它 — 公共的或需要鉴权的。它有性能、可用性、积极的 web 应用、SEO和其他项目的审计。你可以使用命令行、作为一个 Node 模块或在 Chrome DevTools 里运行 Lighthouse。你输入一个 URL 给 Lighthouse,它对这个网页运行一系列的审计规则,之后生成这个网页的审计结果报告。从报告的失败审计条目中可以知道应该怎么优化网页。每条审计都有对应的文档解释为什么该项目是重要的,以及如何修复它。“ + +### 创建更小的 WebP 图片 + +我测试的页面返回了三张图片。在它生成的报告中,它提供了推荐和目标格式。我选择了它报告有 650 KB 的 ”app-graphic“ 图片。通过把它转换为 WebP 格式,预计可以图片大小降到 61 KB,节省 589 KB。我在 Photoshop 中把它转换了,用默认的 WebP 设置参数保存它,它的文件大小为 44.9 KB。比预期的还要好!从下面的 Photoshop 截图中可以看出,两张图在视觉上完全一样。 + +![WebP vs JPG comparison][4] + +左图:650 KB(实际大小)。右图: 589 KB(转换之后的目标大小)。 + +当然,也可以用开源图片编辑工具 [GIMP][5] 把图片导出为 WebP。它提供了几个质量和压缩的参数: + +![GIMP dialog for exporting webp, as a webp][6] + +另一张图拉近视野后: + +![WebP vs PNG comparison][7] + +PNG(左图)和 WebP(右图),都是从 JPG 转换而来,两图对比可以看出 WebP 不仅在文件大小更小,在视觉质量上也更优秀。 + +### 把图片转换为 WebP + +你也可以用 Linux 的命令行工具把图片从 JPG/PNG 转换为 WebP: + +在命令行使用 **cwebp** 把 PNG 或 JPG 图片文件转换为 WebP 格式。你可以用下面的命令把 PNG 图片文件转换为质量参数为 80 的 WebP 图片。 + + +``` +`cwebp -q 80 image.png -o image.webp` +``` + +你还可以用 [Image Magick][8],这个工具可能在你的发行版本软件仓库中可以找到。转换的子命令是 **convert**,它需要的所有参数就是输入和输出文件: + + +``` +`convert pixel.png pixel.webp` +``` + +### 使用编辑器把图片转换为 WebP + +使用 [GIMP][9] 图片编辑器来把图片转换为 WebP。从 2.10 版本开始,它原生地支持 WebP。 + +如果你是 Photoshop 用户,由于 Photoshop 默认不包含 WebP,因此你需要一个转换插件。遵循 Apache License 2.0 证书发行的 WebPShop 0.2.1 是一个用户打开和保存包括动图在内的 WebP 图片的 Photoshop 模块,在 可以找到。 + +为了能正常使用它,你需要把它放进 Photoshop 插件目录下的 **bin** 文件夹: + +Windows x64—C:\Program Files\Adobe\Adobe Photoshop\Plug-ins\WebPShop.8bi + +Mac—Applications/Adobe Photoshop/Plug-ins/WebPShop.plugin + +### Wordpress 上的 WebP + +很多网站是用 Wordpress 搭建的(我的网站就是)。因此,Wordpress 怎么上传 WebP 图片?本文撰写时,它还不支持。但是,当然已经有插件来满足这种需求,因此你可以在你的网站上同时准备 WebP 和 PNG/JPG 图片(为 Apple 用户)。 + +在 [Marius Hosting][11] 有下面的指示: + +”直接向 Wordpress 上传 WebP 图片会怎样?这很简单。向你的主题 functions.php 文件添加几行内容就可以了。Wordpress 默认不支持展示和上传 WebP 文件,但是我会向你陈述怎么通过几个简单的步骤来让它支持。登录进你的 Wordpress 管理员界面,进入 Appearance/Theme Editor 找到 functions.php。拷贝下面的代码粘贴到文件最后并保存。 + + +``` +`//** *Enable upload for webp image files.*/ function webp_upload_mimes($existing_mimes) { $existing_mimes['webp'] = 'image/webp'; return $existing_mimes; } add_filter('mime_types', 'webp_upload_mimes');` +``` + +"如果你想在 Media/Library 看缩略图预览,那么你需要把下面的代码也添加到 functions.php 文件。为了找到 functions.php 文件,进入 Appearance/Theme Editor 并搜索 functions.php,然后拷贝下面的代码粘贴到文件最后并保存。“ + + +``` +`//** * Enable preview / thumbnail for webp image files.*/ function webp_is_displayable($result, $path) { if ($result === false) { $displayable_image_types = array( IMAGETYPE_WEBP ); $info = @getimagesize( $path ); if (empty($info)) { $result = false; } elseif (!in_array($info[2], $displayable_image_types)) { $result = false; } else { $result = true; } } return $result; } add_filter('file_is_displayable_image', 'webp_is_displayable', 10, 2);` +``` + +### WebP 和未来 + +WebP 是鲁棒的和最优的格式。它看起来更好,有更好的压缩率,它拥有其他大部分常见图片格式的所有特性。不必再等了,现在就使用它把。 + 
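+
+最后,如果想把整个目录下的图片一次性批量转换,也可以用一个简单的 shell 循环来调用 cwebp(下面只是一个示意脚本:目录、文件类型和质量参数请按自己的需要调整):
+
+
+```
+# 把当前目录下所有 PNG 和 JPG 图片批量转换为质量参数为 80 的 WebP
+for f in *.png *.jpg; do
+    [ -e "$f" ] || continue   # 跳过没有匹配到任何文件的模式
+    cwebp -q 80 "$f" -o "${f%.*}.webp"
+done
+```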
+-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/webp-image-compression + +作者:[Jeff Macharyas][a] +选题:[lujun9972][b] +译者:[lxbwolf](https://github.com/lxbwolf) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jeffmacharyas +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ "Painting art on a computer screen" +[2]: https://opensource.com/article/19/8/dig-binary-files-hexdump +[3]: https://storage.googleapis.com/downloads.webmproject.org/releases/webp/index.html +[4]: https://opensource.com/sites/default/files/uploads/webp-vs-jpg-app-graphic.png "WebP vs JPG comparison" +[5]: http://gimp.org +[6]: https://opensource.com/sites/default/files/webp-gimp.webp "GIMP dialog for exporting webp, as a webp" +[7]: https://opensource.com/sites/default/files/uploads/xcompare-png-left-webp-right.png "WebP vs PNG comparison" +[8]: https://imagemagick.org +[9]: https://en.wikipedia.org/wiki/GIMP +[10]: https://mariushosting.com/how-to-upload-webp-files-on-wordpress/ +[11]: https://mariushosting.com/ From 791a3b784fe53c15897a82c879f44ba8adec921f Mon Sep 17 00:00:00 2001 From: "Xiaobin.Liu" Date: Tue, 5 May 2020 11:58:11 +0800 Subject: [PATCH 130/178] update --- ...nd JPG for your online images- Use WebP.md | 24 +++++++++---------- 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md b/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md index 3186cf5b00..690f4c6c39 100644 --- a/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md +++ b/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md @@ -1,11 +1,11 @@ -[#]: collector: "lujun9972" -[#]: translator: "lxbwolf" -[#]: reviewer: " " -[#]: publisher: " " -[#]: url: " " -[#]: subject: "Drop PNG and JPG for your online images: Use WebP" -[#]: via: "https://opensource.com/article/20/4/webp-image-compression" -[#]: author: "Jeff Macharyas https://opensource.com/users/jeffmacharyas" +[#]: collector: (lujun9972) +[#]: translator: (lxbwolf) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Drop PNG and JPG for your online images: Use WebP) +[#]: via: (https://opensource.com/article/20/4/webp-image-compression) +[#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas) 线上图片请抛弃 PNG 和 JPG 后缀:使用 WebP ====== @@ -130,13 +130,13 @@ via: https://opensource.com/article/20/4/webp-image-compression [a]: https://opensource.com/users/jeffmacharyas [b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ "Painting art on a computer screen" +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/painting_computer_screen_art_design_creative.png?itok=LVAeQx3_ (Painting art on a computer screen) [2]: https://opensource.com/article/19/8/dig-binary-files-hexdump [3]: https://storage.googleapis.com/downloads.webmproject.org/releases/webp/index.html -[4]: https://opensource.com/sites/default/files/uploads/webp-vs-jpg-app-graphic.png "WebP vs JPG comparison" +[4]: 
https://opensource.com/sites/default/files/uploads/webp-vs-jpg-app-graphic.png (WebP vs JPG comparison) [5]: http://gimp.org -[6]: https://opensource.com/sites/default/files/webp-gimp.webp "GIMP dialog for exporting webp, as a webp" -[7]: https://opensource.com/sites/default/files/uploads/xcompare-png-left-webp-right.png "WebP vs PNG comparison" +[6]: https://opensource.com/sites/default/files/webp-gimp.webp (GIMP dialog for exporting webp, as a webp) +[7]: https://opensource.com/sites/default/files/uploads/xcompare-png-left-webp-right.png (WebP vs PNG comparison) [8]: https://imagemagick.org [9]: https://en.wikipedia.org/wiki/GIMP [10]: https://mariushosting.com/how-to-upload-webp-files-on-wordpress/ From fcd061d231551038992107c11607e3efe1c5970b Mon Sep 17 00:00:00 2001 From: qfzy1233 Date: Tue, 5 May 2020 15:36:41 +0800 Subject: [PATCH 131/178] translating by qfzy1233 --- ...Lubuntu 20.04 Review- Lightweight, Minimalistic, Polished.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200428 Lubuntu 20.04 Review- Lightweight, Minimalistic, Polished.md b/sources/tech/20200428 Lubuntu 20.04 Review- Lightweight, Minimalistic, Polished.md index c6857014ef..73e60dc17a 100644 --- a/sources/tech/20200428 Lubuntu 20.04 Review- Lightweight, Minimalistic, Polished.md +++ b/sources/tech/20200428 Lubuntu 20.04 Review- Lightweight, Minimalistic, Polished.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (qfzy1233) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 067db949bb992056f19c8b1462b3805cc1b96ab2 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 5 May 2020 17:13:15 +0800 Subject: [PATCH 132/178] APL --- .../tech/20200429 The life-changing magic of git rebase -i.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200429 The life-changing magic of git rebase -i.md b/sources/tech/20200429 The life-changing magic of git rebase -i.md index 8afb9d7f8e..c6d4d904b4 100644 --- a/sources/tech/20200429 The life-changing magic of git rebase -i.md +++ b/sources/tech/20200429 The life-changing magic of git rebase -i.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From ed5ce13ff18bc78f96d122c333d93dabbae5c606 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 5 May 2020 21:30:08 +0800 Subject: [PATCH 133/178] TSL --- ...he life-changing magic of git rebase -i.md | 80 ------------------- ...he life-changing magic of git rebase -i.md | 79 ++++++++++++++++++ 2 files changed, 79 insertions(+), 80 deletions(-) delete mode 100644 sources/tech/20200429 The life-changing magic of git rebase -i.md create mode 100644 translated/tech/20200429 The life-changing magic of git rebase -i.md diff --git a/sources/tech/20200429 The life-changing magic of git rebase -i.md b/sources/tech/20200429 The life-changing magic of git rebase -i.md deleted file mode 100644 index c6d4d904b4..0000000000 --- a/sources/tech/20200429 The life-changing magic of git rebase -i.md +++ /dev/null @@ -1,80 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (The life-changing magic of git rebase -i) -[#]: via: (https://opensource.com/article/20/4/git-rebase-i) -[#]: author: (Dave Neary https://opensource.com/users/dneary) - -The life-changing magic of git rebase -i -====== -Make everyone think you write perfect code the first time (and make your -patches easier 
to review and merge). -![Hands programming][1] - -Software development is messy. So many wrong turns, typos to fix, quick hacks and kludges to correct later, off-by-one errors you find late in the process. With version control, you have a pristine record of every wrong turn and correction made during the process of creating the "perfect" final product—a patch ready to submit upstream. Like the outtakes from movies, they are a little embarrassing and sometimes amusing. - -Wouldn't it be great if you could use version control to save your work regularly at waypoints, and then when you have something you are ready to submit for review, you could hide all of that private drafting work and just submit a single, perfect patch? Meet **git rebase -i**, the perfect way to rewrite history and make everyone think that you produce perfect code the first time! - -### What does git rebase do? - -In case you're not familiar with the intricacies of Git, here is a brief overview. Under the covers, Git associates different versions of your project with a unique identifier, which is made up of a hash of the parent node's unique identifier, and the difference between the new version and its parent node. This creates a tree of revisions, and each person who checks out the project gets their own copy. Different people can take the project in different directions, each starting from potentially different branch points. - -![Master branch vs. private branch][2] - -The master branch in the "origin" repo on the left and the private branch on your personal copy on the right. - -There are two ways to integrate your work back with the master branch in the original repository: one is to use **git merge**, and the other is to use **git rebase**. They work in very different ways. - -When you use **git merge**, a new commit is created on the master branch that includes all of the changes from origin plus all of your local changes. If there are any conflicts (for example, if someone else has changed a file you are also working with), these will be marked, and you have an opportunity to resolve the conflicts before committing this merge commit to your local repository. When you push your changes back to the parent repository, all of your local work will appear as a branch for other users of the Git repository. - -But **git rebase** works differently. It rewinds your commits and replays those commits again from the tip of the master branch. This results in two main changes. First, since your commits are now branching off a different parent node, their hashes will be recalculated, and anyone who has cloned your repository may now have a broken copy of the repository. Second, you do not have a merge commit, so any merge conflicts are identified as your changes are being replayed onto the master branch, and you need to fix them before proceeding with the rebase. When you push your changes now, your work does not appear on a branch, and it looks as though you wrote all of your changes off the very latest commit to the master branch. - -![Merge commits preserve history, and rebase rewrites history.][3] - -Merge commits (left) preserve history, while rebase (right) rewrites history. - -However, both of these options come with a downside: everyone can see all your scribbles and edits as you worked through problems locally before you were ready to share your code. This is where the **\--interactive** (or **-i** for short) flag to **git rebase** comes into the picture. 
- -### Introducing git rebase -i - -The big advantage of **git rebase** is that it rewrites history. But why stop at just pretending you branched off a later point? There is a way to go even further and rewrite how you arrived at your ready-to-propose code: **git rebase -i**, an interactive **git rebase**. - -This feature is the "magic time machine" function in Git. The flag allows you to make sophisticated changes to revision history while doing a rebase. You can hide your mistakes! Merge many small changes into one pristine feature patch! Reorder how things appear in revision history! - -![output of git rebase -i][4] - -When you run **git rebase -i**, you get an editor session listing all of the commits that are being rebased and a number of options for what you can do to them. The default choice is **pick**. - - * **Pick** maintains the commit in your history. - * **Reword** allows you to change a commit message, perhaps to fix a typo or add additional commentary. - * **Edit** allows you to make changes to the commit while in the process of replaying the branch. - * **Squash** merges multiple commits into one. - * You can reorder commits by moving them around in the file. - - - -When you are finished, simply save the final result, and the rebase will execute. At each stage where you have chosen to modify a commit (either with **reword**, **edit**, **squash**, or when there is a conflict), the rebase stops and allows you to make the appropriate changes before continuing. - -The example above results in "One-liner bug fix" and "Integrate new header everywhere" being merged into one commit, and "New header for docs website" and "D'oh - typo. Fixed" into another. Like magic, the work that went into the other commits is still there on your branch, but the associated commits have disappeared from your history! - -This makes it easy to submit a clean patch to an upstream project using **git send-email** or by creating a pull request against the parent repository with your newly tidied up patchset. This has a number of advantages, including that it makes your code easier to review, easier to accept, and easier to merge. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/20/4/git-rebase-i - -作者:[Dave Neary][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/dneary -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S (Hands programming) -[2]: https://opensource.com/sites/default/files/uploads/master-private-branches.png (Master branch vs. private branch) -[3]: https://opensource.com/sites/default/files/uploads/merge-commit-vs-rebase.png (Merge commits preserve history, and rebase rewrites history.) 
-[4]: https://opensource.com/sites/default/files/uploads/git-rebase-i.png (output of git rebase -i) diff --git a/translated/tech/20200429 The life-changing magic of git rebase -i.md b/translated/tech/20200429 The life-changing magic of git rebase -i.md new file mode 100644 index 0000000000..91739d2789 --- /dev/null +++ b/translated/tech/20200429 The life-changing magic of git rebase -i.md @@ -0,0 +1,79 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The life-changing magic of git rebase -i) +[#]: via: (https://opensource.com/article/20/4/git-rebase-i) +[#]: author: (Dave Neary https://opensource.com/users/dneary) + +完美生活:git rebase -i +====== + +> 让大家觉得你一次就能写出完美的代码,并让你的补丁更容易审核和合并。 + +![Hands programming][1] + +软件开发是混乱的。有很多错误的转折、有需要修复的错别字、有需要修正的错误、有需要稍后纠正临时和粗陋的代码,还有在开发过程中以后发现的错位问题。有了版本控制,在创建“完美”的最终产品(即准备提交给上游的补丁)的过程中,你会有一个记录着每一个错误的转折和修正的原始记录。就像电影中的片段一样,它们有些尴尬,有时还很有趣。就像电影中的花絮一样,它们会让人有点尴尬,有时也会让人觉得好笑。 + +如果你能使用版本控制来定期保存你的工作线索,然后当你准备提交审核的东西时,可以隐藏所有这些私人草稿工作,只需提交一份单一的、完美的补丁就可以了,那不是很好吗?`git rebase -i`,是重写历史记录的完美方法,可以让大家觉得你一次就写出了完美的代码。 + +### git rebase 的作用是什么? + +如果你不熟悉 Git 的复杂性,这里简单介绍一下。在幕后,Git 将项目的不同版本与唯一标识符关联起来,这个标识符由父节点的唯一标识符的哈希以及新版本与其父节点的差异组成。这样就形成了一棵修订树,每个检出项目的人都会得到自己的副本。不同的人可以把项目往不同的方向发展,每个人的出发点都可能是不同的分支点。 + +![Master branch vs. private branch][2] + +*左边是 `origin` 版本库中的主分支,右边是你个人副本中的私有分支。* + +有两种方法可以将你的工作与原始版本库中的主分支整合起来:一种是使用 合并:`git merge`,另一种是使用变基:`git rebase`。它们的工作方式非常不同。 + +当你使用 `git merge` 时,会在主分支上创建一个新的提交,其中包括所有来自 `origin` 的修改和所有本地的修改。如果有任何冲突(例如,如果别人修改了你也在修改的文件),则将这些冲突标记出来,并且你有机会在将该“合并提交”提交到本地版本库之前解决这些冲突。当你将更改推送回父版本库时,所有的本地工作都会以分支的形式出现在 Git 仓库的其他用户面前。 + +但是 `git rebase` 的工作方式不同。它回滚你的提交,并从主分支的顶端再次重放这些提交。这导致了两个主要的变化。首先,由于你的提交现在从一个不同的父节点分支出来,它们的哈希值会被重新计算,并且任何克隆了你的仓库的人都可能会有一个残破的仓库副本。第二,你没有一个合并提交,所以在将更改重放到主分支上时会识别出任何合并冲突,所以任何合并冲突都会被识别出来,因此,你需要在进行变基rebase之前修复它们。当你现在推送你的修改时,你的工作不会出现在分支上,并且看起来像是你把所有的修改都写到了主分支的最新的提交上。 + +![Merge commits preserve history, and rebase rewrites history.][3] + +*合并提交(左)保留了历史,而变基(右)重写历史。* + +然而,这两种方式都有一个坏处:在你准备好分享代码之前,每个人都可以看到你在本地处理问题时的所有涂鸦和编辑。这就是 `git rebase` 的 `--interactive`(或简写 `-i`)标志的作用。 + +### 介绍 git rebase -i + +`git rebase` 的最大优点是它重写了历史。但是,为什么仅止于假装你从后面的点分支出来呢?有一种更进一步方法可以重写你是如何准备就绪这些代码的:`git rebase -i`,交互式的 `git rebase`。 + +这个功能就是 Git 中的 "神奇的时间机器” 功能。这个标志允许你在做变基时对修订历史记录进行复杂的修改。你可以隐藏你的错误! 将许多小的修改合并到一个原始的功能补丁中! 重新排序修改历史记录中的内容 + +![output of git rebase -i][4] + +当你运行 `git rebase -i` 时,你会得到一个编辑器会话,其中列出了所有正在被变基的提交,并有一些选项可以对它们做什么。默认的选择是 `pick`。 + + * `Pick`:会在你的历史记录中保留该提交。 + * `Reword`:允许你修改提交信息,可能是修复一个错别字或添加额外的注释。 + * `Edit`:允许你在重放分支的过程中对提交进行修改。 + * `Squash`:可以将多个提交合并为一个。 + * 你可以通过移动文件中的提交来重新排序。 + +当你完成后,只需保存最终结果,变基就会执行。在每个阶段,当你选择了修改提交(无论是用 `reword`、`edit`、`squash` 还是有冲突时),变基会停止,并允许你在继续提交之前进行适当的修改。 + +上面这个例子的结果是 “One-liner bug fix” 和 “Integate new header everywhere” 被合并到一个提交中,而 “New header for docs website” 和 “D'oh - typo. 
Fixed” 合并到另一个提交中。就像变魔术一样,其他提交的工作还在你的分支中,但相关的提交已经从你的历史记录中消失了!这样一来,你就可以很容易地提交干净的提交。 + +这使得使用 `git send-email` 或者用你新整理好的补丁集在父版本库中创建一个拉取请求来提交一个干净的补丁给上游项目变得很容易。这有很多好处,包括让你的代码更容易审核,更容易接受,也更容易合并。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/4/git-rebase-i + +作者:[Dave Neary][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dneary +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S (Hands programming) +[2]: https://opensource.com/sites/default/files/uploads/master-private-branches.png (Master branch vs. private branch) +[3]: https://opensource.com/sites/default/files/uploads/merge-commit-vs-rebase.png (Merge commits preserve history, and rebase rewrites history.) +[4]: https://opensource.com/sites/default/files/uploads/git-rebase-i.png (output of git rebase -i) From c65cba5b8db89928c2ce6b829bcbed9a9e5a4b73 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 5 May 2020 22:36:16 +0800 Subject: [PATCH 134/178] PRF @robsean --- ...re SFTP Server with Chroot in Debian 10.md | 84 ++++++++++--------- 1 file changed, 43 insertions(+), 41 deletions(-) diff --git a/translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md b/translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md index a30b6ee817..4fe8afb9a2 100644 --- a/translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md +++ b/translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md @@ -7,61 +7,63 @@ [#]: via: (https://www.linuxtechi.com/configure-sftp-chroot-debian10/) [#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) -如何在 Debian 10 中使用 Chroot 配置 SFTP 服务 +如何在 Debian 10 中配置 Chroot 环境的 SFTP 服务 ====== -**SFTP** 代表安全文件传输协议 / SSH 文件传输协议,它是最常用的一个方法,用于通过ssh将文件从本地系统安全地传输到远程服务器,反之亦然。sftp 的主要优点是,除 ‘**openssh-server**’ 之外,我们不需要安装任何额外的软件包,在大多数的 Linux 发行版中,‘openssh-server’ 软件包是默认安装的一部分。sftp 的另外一个好处是,我们可以允许用户使用 sftp ,而不允许使用 ssh 。 +SFTP 意思是“安全文件传输协议Secure File Transfer Protocol” 或 “SSH 文件传输协议SSH File Transfer Protocol”,它是最常用的用于通过 `ssh` 将文件从本地系统安全地传输到远程服务器的方法,反之亦然。`sftp` 的主要优点是,除 `openssh-server` 之外,我们不需要安装任何额外的软件包,在大多数的 Linux 发行版中,`openssh-server` 软件包是默认安装的一部分。`sftp` 的另外一个好处是,我们可以允许用户使用 `sftp` ,而不允许使用 `ssh` 。 -[![配置-sftp-debian10][1]][2] - -当前 Debian 10 ,代号‘Buster’,已经发布,在这篇文章中,我们将演示如何在 Debian 10 系统中使用 Chroot ‘Jail’ 类似的环境配置 sftp 。在这里,Chroot Jail 类似环境意味着,用户不能超出各自的 home 目录,或者用户不能从各自的 home 目录更改目录。下面实验的详细情况: - - * OS = Debian 10 - * IP 地址 = 192.168.56.151 +![](https://img.linux.net.cn/data/attachment/album/202005/05/223518ip4mbdi4nggbdtgu.jpg) +当前发布的 Debian 10 代号为 ‘Buster’,在这篇文章中,我们将演示如何在 Debian 10 系统中在 “监狱式的” Chroot 环境中配置 `sftp`。在这里,Chroot 监狱式环境意味着,用户不能超出各自的家目录,或者用户不能从各自的家目录更改目录。下面实验的详细情况: +* OS = Debian 10 +* IP 地址 = 192.168.56.151 让我们跳转到 SFTP 配置步骤, -### 步骤:1) 为 sftp 使用 groupadd 命令创建一个组 +### 步骤 1、使用 groupadd 命令给 sftp 创建一个组 -打开终端,使用下面的 groupadd 命令创建一个名为的“**sftp_users**”组, +打开终端,使用下面的 `groupadd` 命令创建一个名为的 `sftp_users` 组: ``` root@linuxtechi:~# groupadd sftp_users ``` -### 步骤:2) 添加用户到组 ‘sftp_users’ 并设置权限 +### 步骤 2、添加用户到组 sftp_users 并设置权限 -假设你想创建新的用户,并且想添加该用户到 ‘sftp_users’ 组中,那么运行下面的命令, +假设你想创建新的用户,并且想添加该用户到 `sftp_users` 组中,那么运行下面的命令, -**语法:** # useradd -m -G sftp_users 
+**语法:** -让我们假设用户名是 ’Jonathan’ +``` +# useradd -m -G sftp_users <用户名> +``` + +让我们假设用户名是 `jonathan`: ``` root@linuxtechi:~# useradd -m -G sftp_users jonathan ``` -使用下面的 chpasswd 命令设置密码, +使用下面的 `chpasswd` 命令设置密码: ``` -root@linuxtechi:~# echo "jonathan:" | chpasswd +root@linuxtechi:~# echo "jonathan:<输入密码>" | chpasswd ``` -假设你想添加现有的用户到 ‘sftp_users’ 组中,那么运行下面的 usermod 命令,让我们假设已经存在的用户名称是 ‘chris’ +假设你想添加现有的用户到 `sftp_users` 组中,那么运行下面的 `usermod` 命令,让我们假设已经存在的用户名称是 `chris`: ``` root@linuxtechi:~# usermod -G sftp_users chris ``` -现在设置用户所需的权限, +现在设置用户所需的权限: ``` root@linuxtechi:~# chown root /home/jonathan /home/chris/ ``` -在各用户的 home 目录中都创建一个上传目录,并设置正确地所有权, +在各用户的家目录中都创建一个上传目录,并设置正确地所有权: ``` root@linuxtechi:~# mkdir /home/jonathan/upload @@ -72,14 +74,14 @@ root@linuxtechi:~# chown chris /home/chris/upload **注意:** 像 Jonathan 和 Chris 之类的用户可以从他们的本地系统上传文件和目录。 -### 步骤:3) 编辑 sftp 配置文件 (/etc/ssh/sshd_config) +### 步骤 3、编辑 sftp 配置文件 /etc/ssh/sshd_config -正如我们已经陈述的,sftp 操作是通过 ssh 完成的,所以它的配置文件是 “**/etc/ssh/sshd_config**“, 在做任何更改前,我建议首先备份文件,然后再编辑该文件,接下来添加下面的内容, +正如我们已经陈述的,`sftp` 操作是通过 `ssh` 完成的,所以它的配置文件是 `/etc/ssh/sshd_config`,在做任何更改前,我建议首先备份文件,然后再编辑该文件,接下来添加下面的内容: ``` root@linuxtechi:~# cp /etc/ssh/sshd_config /etc/ssh/sshd_config-org root@linuxtechi:~# vim /etc/ssh/sshd_config -……… +...... #Subsystem sftp /usr/lib/openssh/sftp-server Subsystem sftp internal-sftp @@ -88,28 +90,28 @@ Match Group sftp_users AllowTcpForwarding no ChrootDirectory %h ForceCommand internal-sftp -………… +...... ``` 保存并退出文件。 -为使上述更改生效,使用下面的 systemctl 命令来重新启动 ssh 服务 +为使上述更改生效,使用下面的 `systemctl` 命令来重新启动 `ssh` 服务: ``` root@linuxtechi:~# systemctl restart sshd ``` -在上面的 ‘sshd_config’ 文件中,我们已经注释掉了以 “Subsystem”开头的行,并添加了新的条目 “Subsystem sftp internal-sftp” 和新的行,像, +在上面的 `sshd_config` 文件中,我们已经注释掉了以 `Subsystem` 开头的行,并添加了新的条目 `Subsystem sftp internal-sftp` 和新的行。而 -“**Match Group sftp_users”** –> 它意味着如果用户是 ‘sftp_users’ 组中的一员,那么将应用下面提到的规则到这个条目。 +`Match Group sftp_users` –> 它意味着如果用户是 `sftp_users` 组中的一员,那么将应用下面提到的规则到这个条目。 -“**ChrootDierctory %h**” –> 它意味着用户只能在他们自己各自的 home 目录中更改目录,而不能超出他们各自的 home 目录。或者换句话说,我们可以说用户是不允许更改目录的。他们将在他们的目录中获得 jai 类似环境,并且不能访问其他用户的目录和系统的目录。 +`ChrootDierctory %h` –> 它意味着用户只能在他们自己各自的家目录中更改目录,而不能超出他们各自的家目录。或者换句话说,我们可以说用户是不允许更改目录的。他们将在他们的目录中获得监狱一样的环境,并且不能访问其他用户的目录和系统的目录。 -“**ForceCommand internal-sftp**” –> 它意味着用户仅被限制到 sftp 命令。 +`ForceCommand internal-sftp` –> 它意味着用户仅被限制到只能使用 `sftp` 命令。 -### 步骤:4) 测试和验证 sftp +### 步骤 4、测试和验证 sftp -登录到你的 sftp 服务器的同一个网络上的任何其它的 Linux 系统,然后通过我们在 ‘sftp_users’ 组中映射的用户来尝试 ssh sftp 服务。 +登录到你的 `sftp` 服务器的同一个网络上的任何其它的 Linux 系统,然后通过我们放入 `sftp_users` 组中的用户来尝试 ssh 和 sftp 服务。 ``` [root@linuxtechi ~]# ssh root@linuxtechi @@ -121,7 +123,7 @@ Write failed: Broken pipe [root@linuxtechi ~]# ``` -以上操作证实用户不允许 SSH ,现在使用下面的命令尝试 sftp , +以上操作证实用户不允许 `ssh` ,现在使用下面的命令尝试 `sftp`: ``` [root@linuxtechi ~]# sftp root@linuxtechi @@ -133,7 +135,7 @@ drwxr-xr-x 2 root 1001 4096 Sep 14 07:52 debian10-pkgs drwxr-xr-x 2 1001 1002 4096 Sep 14 08:29 upload ``` -让我们使用 sftp ‘**get**‘ 命令来尝试下载一个文件 +让我们使用 sftp 的 `get` 命令来尝试下载一个文件: ``` sftp> get devops-actions.txt @@ -147,9 +149,9 @@ Couldn't stat remote file: No such file or directory sftp> ``` -上面的输出证实我们能从我们的 sftp 服务器下载文件到本地机器,除此之外,我们也必需测试用户不能更改目录。 +上面的输出证实我们能从我们的 sftp 服务器下载文件到本地机器,除此之外,我们也必须测试用户不能更改目录。 -让我们在 **upload**”目录下尝试上传一个文件, +让我们在 `upload` 目录下尝试上传一个文件: ``` sftp> cd upload/ @@ -163,17 +165,17 @@ sftp> 这证实我们已经成功地从我们的本地系统上传一个文件到 sftp 服务中。 -现在使用 winscp 工具来测试 SFTP 服务,输入 sftp 服务器 ip 地址和用户的凭证, +现在使用 winscp 工具来测试 sftp 服务,输入 sftp 服务器 IP 地址和用户的凭证: -[![Winscp-sftp-debian10][1]][3] +![][3] -在 
Login 上单击,然后尝试下载和上传文件 +在 “Login” 上单击,然后尝试下载和上传文件: -[![下载-文件-winscp-debian10-sftp][1]][4] +![][4] -现在,在 upload 文件夹中尝试上传文件, +现在,在 `upload` 文件夹中尝试上传文件: -[![使用-winscp-Debian10-sftp-上传-文件][1]][5] +![][5] 上面的窗口证实上传是完好地工作的,这就是这篇文章的全部。如果这些步骤能帮助你在 Debian 10 中使用 chroot 环境配置 SFTP 服务器s,那么请分享你的反馈和评论。 @@ -184,7 +186,7 @@ via: https://www.linuxtechi.com/configure-sftp-chroot-debian10/ 作者:[Pradeep Kumar][a] 选题:[lujun9972][b] 译者:[robsean](https://github.com/robsean) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From f2866dc8c6d1f412e0ca0c7324089a85e7a96e69 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 5 May 2020 22:39:46 +0800 Subject: [PATCH 135/178] PUB @robsean https://linux.cn/article-12186-1.html --- ...How to Configure SFTP Server with Chroot in Debian 10.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) rename {translated/tech => published}/20190915 How to Configure SFTP Server with Chroot in Debian 10.md (98%) diff --git a/translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md b/published/20190915 How to Configure SFTP Server with Chroot in Debian 10.md similarity index 98% rename from translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md rename to published/20190915 How to Configure SFTP Server with Chroot in Debian 10.md index 4fe8afb9a2..51a75f3521 100644 --- a/translated/tech/20190915 How to Configure SFTP Server with Chroot in Debian 10.md +++ b/published/20190915 How to Configure SFTP Server with Chroot in Debian 10.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (robsean) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12186-1.html) [#]: subject: (How to Configure SFTP Server with Chroot in Debian 10) [#]: via: (https://www.linuxtechi.com/configure-sftp-chroot-debian10/) [#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) From 06b002218761300f5686e3c1a766ae099b216df2 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 5 May 2020 23:21:47 +0800 Subject: [PATCH 136/178] PRF @wxy --- ...0 ways to analyze binary files on Linux.md | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/translated/tech/20200430 10 ways to analyze binary files on Linux.md b/translated/tech/20200430 10 ways to analyze binary files on Linux.md index 80577dfbcf..c5dbfc6f61 100644 --- a/translated/tech/20200430 10 ways to analyze binary files on Linux.md +++ b/translated/tech/20200430 10 ways to analyze binary files on Linux.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (10 ways to analyze binary files on Linux) @@ -12,11 +12,11 @@ > 这些简单的命令和工具可以帮助你轻松完成分析二进制文件的任务。 -![Tux with binary code background][1] +![](https://img.linux.net.cn/data/attachment/album/202005/05/232115nn0oduodo4oztv0a.jpg) “这个世界上有 10 种人:懂二进制的人和不懂二进制的人。” -我们每天都在与二进制文件打交道,但我们对二进制文件却知之甚少。我所说的二进制,是指你每天运行的可执行文件,从你的命令行工具到成熟的应用程序都是。 +我们每天都在与二进制文件打交道,但我们对二进制文件却知之甚少。我所说的二进制,是指你每天运行的可执行文件,从命令行工具到成熟的应用程序都是。 Linux 提供了一套丰富的工具,让分析二进制文件变得轻而易举。无论你的工作角色是什么,如果你在 Linux 上工作,了解这些工具的基本知识将帮助你更好地理解你的系统。 @@ -26,7 +26,7 @@ Linux 提供了一套丰富的工具,让分析二进制文件变得轻而易 它的作用:帮助确定文件类型。 -这将是你进行二进制分析的出发点。我们每天都在与文件打交道。并非所有的文件都是可执行类型,除此之外还有各种各样的文件类型。在你开始之前,你需要了解要分析的文件类型。它是二进制文件、库文件、ASCII 文本文件、视频文件、图片文件、PDF、数据文件等等。 
+这将是你进行二进制分析的起点。我们每天都在与文件打交道,并非所有的文件都是可执行类型,除此之外还有各种各样的文件类型。在你开始之前,你需要了解要分析的文件类型。是二进制文件、库文件、ASCII 文本文件、视频文件、图片文件、PDF、数据文件等文件吗? `file` 命令将帮助你确定你所处理的文件类型。 @@ -66,11 +66,11 @@ $ ### ltrace -它的作用:一个库调用跟踪器。 +它的作用:库调用跟踪器。 我们现在知道如何使用 `ldd` 命令找到一个可执行程序所依赖的库。然而,一个库可以包含数百个函数。在这几百个函数中,哪些是我们的二进制程序正在使用的实际函数? -`ltrace` 命令可以显示在运行时从库中调用的所有函数。在下面的例子中,你可以看到被调用的函数名称,以及传递给该函数的参数。你也可以在输出的最右边看到这些函数返回的内容。 +`ltrace` 命令可以显示运行时从库中调用的所有函数。在下面的例子中,你可以看到被调用的函数名称,以及传递给该函数的参数。你也可以在输出的最右边看到这些函数返回的内容。 ``` $ ltrace ls @@ -95,7 +95,7 @@ $ 它的作用:以 ASCII、十进制、十六进制或八进制显示文件内容。 -通常情况下,当你用一个应用程序打开一个文件,而它不知道如何处理该文件时,就会出现这种情况。尝试用 `vim` 打开一个可执行文件或视频文件,你会看到的只是屏幕上抛出的乱码。 +通常情况下,当你用一个应用程序打开一个文件,而它不知道如何处理该文件时,就会出现这种情况。尝试用 `vim` 打开一个可执行文件或视频文件,你屏幕上会看到的只是抛出的乱码。 在 `hexdump` 中打开未知文件,可以帮助你看到文件的具体内容。你也可以选择使用一些命令行选项来查看用 ASCII 表示的文件数据。这可能会帮助你了解到它是什么类型的文件。 @@ -132,7 +132,7 @@ $ strings /bin/ls ELF(可执行和可链接文件格式Executable and Linkable File Format)是可执行文件或二进制文件的主流格式,不仅是 Linux 系统,也是各种 UNIX 系统的主流文件格式。如果你已经使用了像 `file` 命令这样的工具,它告诉你文件是 ELF 格式,那么下一步就是使用 `readelf` 命令和它的各种选项来进一步分析文件。 -在使用 `readelf` 命令时,有一个实际的 ELF 规范的参考是非常有用的。你可以在[这里][2]找到规范。  +在使用 `readelf` 命令时,有一份实际的 ELF 规范的参考是非常有用的。你可以在[这里][2]找到该规范。  ``` $ readelf -h /bin/ls @@ -163,9 +163,9 @@ $ 它的作用:从对象文件中显示信息。 -二进制文件是通过你编写源码的创建的,这些源码会通过一个叫做编译器的工具进行编译。这个编译器会生成相当于源代码的机器语言指令,然后由 CPU 执行,以执行特定的任务。这些机器语言代码可以通过被称为汇编语言的助记词来解读。汇编语言是一组指令,它可以帮助你理解由程序所进行并最终在 CPU 上执行的操作。 +二进制文件是通过你编写的源码创建的,这些源码会通过一个叫做编译器的工具进行编译。这个编译器会生成相对于源代码的机器语言指令,然后由 CPU 执行特定的任务。这些机器语言代码可以通过被称为汇编语言的助记词来解读。汇编语言是一组指令,它可以帮助你理解由程序所进行并最终在 CPU 上执行的操作。 -`objdump` 实用程序读取二进制或可执行文件,并将汇编语言指令转储到屏幕上。汇编语言知识对于理解 `objdump` 命令的输出是至关重要的。 +`objdump` 实用程序读取二进制或可执行文件,并将汇编语言指令转储到屏幕上。汇编语言知识对于理解 `objdump` 命令的输出至关重要。 请记住:汇编语言是特定于体系结构的。 @@ -174,7 +174,6 @@ $ objdump -d /bin/ls | head /bin/ls: file format elf64-x86-64 - Disassembly of section .init: 0000000000402150 <_init@@Base>: @@ -219,7 +218,7 @@ $ 它的作用:列出对象文件中的符号。 -如果你所使用的二进制文件没有被剥离,`nm` 命令将为你提供在编译过程中嵌入到二进制文件中的有价值的信息。`nm` 可以帮助你从二进制文件中识别变量和函数。你可以想象一下,如果你无法访问二进制文件的源代码,这将是多么有用。 +如果你所使用的二进制文件没有被剥离,`nm` 命令将为你提供在编译过程中嵌入到二进制文件中的有价值的信息。`nm` 可以帮助你从二进制文件中识别变量和函数。你可以想象一下,如果你无法访问二进制文件的源代码时,这将是多么有用。 为了展示 `nm`,我们快速编写了一个小程序,用 `-g` 选项编译,我们会看到这个二进制文件没有被剥离。 @@ -264,7 +263,7 @@ $ 分析这些路径的唯一方法是在运行时环境,在任何给定的位置停止或暂停程序,并能够分析信息,然后再往下执行。 -这就是调试器的作用,在 Linux 上,`gdb` 就是调试器的事实标准。它可以帮助你加载程序,在特定的地方设置断点,分析内存和 CPU 的寄存器,还有更多的功能。它是对上面提到的其他工具的补充,可以让你做更多的运行时分析。 +这就是调试器的作用,在 Linux 上,`gdb` 就是调试器的事实标准。它可以帮助你加载程序,在特定的地方设置断点,分析内存和 CPU 的寄存器,以及更多的功能。它是对上面提到的其他工具的补充,可以让你做更多的运行时分析。 有一点需要注意的是,一旦你使用 `gdb` 加载一个程序,你会看到它自己的 `(gdb)` 提示符。所有进一步的命令都将在这个 `gdb` 命令提示符中运行,直到你退出。 @@ -290,7 +289,8 @@ Missing separate debuginfos, use: debuginfo-install glibc-2.17-260.el7_6.6.x86_6 Continuing. 
Hello world![Inferior 1 (process 29620) exited normally] (gdb) q -$``` +$ +``` ### 结语 @@ -303,7 +303,7 @@ via: https://opensource.com/article/20/4/linux-binary-analysis 作者:[Gaurav Kamathe][a] 选题:[lujun9972][b] 译者:[wxy](https://github.com/wxy) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 2fc33688b14ace7d116b6ce18319b1c4820f600b Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Tue, 5 May 2020 23:22:14 +0800 Subject: [PATCH 137/178] PUB @wxy https://linux.cn/article-12187-1.html --- .../20200430 10 ways to analyze binary files on Linux.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200430 10 ways to analyze binary files on Linux.md (99%) diff --git a/translated/tech/20200430 10 ways to analyze binary files on Linux.md b/published/20200430 10 ways to analyze binary files on Linux.md similarity index 99% rename from translated/tech/20200430 10 ways to analyze binary files on Linux.md rename to published/20200430 10 ways to analyze binary files on Linux.md index c5dbfc6f61..35fedf607a 100644 --- a/translated/tech/20200430 10 ways to analyze binary files on Linux.md +++ b/published/20200430 10 ways to analyze binary files on Linux.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12187-1.html) [#]: subject: (10 ways to analyze binary files on Linux) [#]: via: (https://opensource.com/article/20/4/linux-binary-analysis) [#]: author: (Gaurav Kamathe https://opensource.com/users/gkamathe) From 56e24096621276c48184a8f01b1bbb0a3ec65973 Mon Sep 17 00:00:00 2001 From: "Xiaobin.Liu" Date: Tue, 5 May 2020 23:45:17 +0800 Subject: [PATCH 138/178] APL --- .../20200505 Browse the Peer-to-peer Web With Beaker Browser.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md b/sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md index 82129f00a5..319b8a18c5 100644 --- a/sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md +++ b/sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (lxbwolf) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 0bb37bb6aec7a6661c022baa6fa47a9bae504636 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 6 May 2020 00:54:57 +0800 Subject: [PATCH 139/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200506=20Three?= =?UTF-8?q?=20Methods=20to=20Check=20Uptime=20of=20MySQL/MariaDB=20Databas?= =?UTF-8?q?e=20Server=20on=20Linux?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200506 Three Methods to Check Uptime of MySQL-MariaDB Database Server on Linux.md --- ... 
MySQL-MariaDB Database Server on Linux.md | 143 ++++++++++++++++++ 1 file changed, 143 insertions(+) create mode 100644 sources/tech/20200506 Three Methods to Check Uptime of MySQL-MariaDB Database Server on Linux.md diff --git a/sources/tech/20200506 Three Methods to Check Uptime of MySQL-MariaDB Database Server on Linux.md b/sources/tech/20200506 Three Methods to Check Uptime of MySQL-MariaDB Database Server on Linux.md new file mode 100644 index 0000000000..be88629967 --- /dev/null +++ b/sources/tech/20200506 Three Methods to Check Uptime of MySQL-MariaDB Database Server on Linux.md @@ -0,0 +1,143 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Three Methods to Check Uptime of MySQL/MariaDB Database Server on Linux) +[#]: via: (https://www.2daygeek.com/check-mysql-mariadb-database-server-uptime-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +Three Methods to Check Uptime of MySQL/MariaDB Database Server on Linux +====== + +We all know the purpose of the uptime command in Linux. + +This is used to check the **[uptime of the Linux system][1]** and how long the system runs without restarting. + +The Linux admin job is to keep the system up and running. + +If you want to check how long other services like **[Apache][2]**, MySQL, MariaDB, sftp, etc., are running on Linux, how do you do that? + +Each service has their own command to check the uptime of service. + +But you can also use other commands for this purpose. + +### Method-1: How to Check the Uptime of a MySQL/MariaDB Database Server on Linux Using the ps Command + +The **[ps command][3]** stands for process status. This is one of the most basic commands that shows the system running processes with details. + +To do so, you first need to find the PID of **[MySQL][4]**/MariaDB using the **[pidof command][5]**. + +``` +# pidof mysqld | cut -d" " -f1 + +2412 +``` + +Once you have the MySQL/[**MariaDB**][6] PID, use the “etime” option with the ps command and get the uptime. + + * **etime:** elapsed time since the process was started, in the form of [[DD-]hh:]mm:ss. + + + +``` +# ps -p 2412 -o etime + + ELAPSED +2-08:49:30 +``` + +Alternatively, use the “lstart” option with the ps command to get the uptime of a given PID. + +``` +# ps -p 2412 -o lstart + + STARTED +Sat May 2 03:02:15 2020 +``` + +The MySQL/MariaDB process has been running for 2 days, 03 hours, 02 minutes and 15 seconds. + +### Method-2: How to Check the Uptime of a MySQL/MariaDB Database Server on Linux Using the Systemctl Command + +The **[systemctl command][7]** is used to control the systemd system and service manager. + +systemd is a new init system and system manager, that was adopted by most of Linux distributions now over the traditional SysVinit manager. + +``` +# systemctl status mariadb +or +# systemctl status mysql + +● mariadb.service - MariaDB 10.1.44 database server + Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled) + Drop-In: /etc/systemd/system/mariadb.service.d + └─migrated-from-my.cnf-settings.conf + Active: active (running) since Sat 2020-05-02 03:02:18 UTC; 2 days ago + Docs: man:mysqld(8) + https://mariadb.com/kb/en/library/systemd/ + Process: 2448 ExecStartPost=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS) + Process: 2388 ExecStartPre=/bin/sh -c [ ! -e /usr/bin/galera_recovery ] && VAR= || VAR=/usr/bin/galera_recovery; [ $? 
-eq 0 ] && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1 (code=exited, status=0/SUCCESS) + Process: 2386 ExecStartPre=/bin/sh -c systemctl unset-environment _WSREP_START_POSITION (code=exited, status=0/SUCCESS) + Main PID: 2412 (mysqld) + Status: "Taking your SQL requests now…" + CGroup: /system.slice/mariadb.service + └─2412 /usr/sbin/mysqld + +May 03 21:41:26 ns2.2daygeek.com mysqld[2412]: 2020-05-03 21:41:26 140328136861440 [Warning] Host name '1.1.1.1' could not be resolved: … not known +May 04 02:00:46 ns2.2daygeek.com mysqld[2412]: 2020-05-04 2:00:46 140328436418304 [Warning] IP address '1.1.1.1' has been resolved to the host name '2…ss itself. +May 04 03:01:31 ns2.2daygeek.com mysqld[2412]: 2020-05-04 3:01:31 140328436111104 [Warning] IP address '1.1.1.1' could not be resolved: Temporary fai…resolution +May 04 04:03:06 ns2.2daygeek.com mysqld[2412]: 2020-05-04 4:03:06 140328136861440 [Warning] IP address '1.1.1.1' could not be resolved: Name or ser… not known +May 04 07:23:54 ns2.2daygeek.com mysqld[2412]: 2020-05-04 7:23:54 140328435189504 [Warning] IP address '1.1.1.1' could not be resolved: Name or service not known +May 04 08:03:31 ns2.2daygeek.com mysqld[2412]: 2020-05-04 8:03:31 140328436418304 [Warning] IP address '1.1.1.1' could not be resolved: Name or service not known +May 04 08:25:56 ns2.2daygeek.com mysqld[2412]: 2020-05-04 8:25:56 140328135325440 [Warning] IP address '1.1.1.1' could not be resolved: Name or service not known +Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable. +Hint: Some lines were ellipsized, use -l to show in full. +``` + +### Method-3: How to Check the Uptime of a MySQL/MariaDB Database Server on Linux Using the MySQLAdmin Command + +**[MySQLAdmin][8]** is a command-line utility for MySQL Server that is installed when installing the MySQL package. + +The MySQLAdmin client allows you to perform some basic administrative functions on the MySQL server. + +It is used to create a database, drop a database, set a root password, change the root password, check MySQL status, verify MySQL functionality, monitor mysql processes, and verify the configuration of the server. + +``` +# mysqladmin -u root -pPassword version + +mysqladmin Ver 8.42 Distrib 5.7.27, for Linux on x86_64 +Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved. + +Oracle is a registered trademark of Oracle Corporation and/or its +affiliates. Other names may be trademarks of their respective +owners. 
+ +Server version 5.7.27 +Protocol version 10 +Connection Localhost via UNIX socket +UNIX socket /var/lib/mysql/mysql.sock +Uptime: 1 day 10 hours 44 min 13 sec +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/check-mysql-mariadb-database-server-uptime-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/linux-system-server-uptime-check/ +[2]: https://www.2daygeek.com/check-find-apache-httpd-web-server-uptime-linux/ +[3]: https://www.2daygeek.com/linux-ps-command-find-running-process-monitoring/ +[4]: https://www.2daygeek.com/category/mysql/ +[5]: https://www.2daygeek.com/check-find-parent-process-id-pid-ppid-linux/ +[6]: https://www.2daygeek.com/category/mariadb/ +[7]: https://www.2daygeek.com/sysvinit-vs-systemd-cheatsheet-systemctl-command-usage/ +[8]: https://www.2daygeek.com/linux-mysqladmin-command-administrate-mysql-mariadb-server/ From fa8df9c7325af368c5190b99fc02e19f09f2c868 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 6 May 2020 01:16:16 +0800 Subject: [PATCH 140/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200506=20After?= =?UTF-8?q?=20More=20Than=203=20Years,=20Inkscape=201.0=20is=20Finally=20H?= =?UTF-8?q?ere=20With=20Tons=20of=20Feature=20Improvements?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md --- ... Here With Tons of Feature Improvements.md | 115 ++++++++++++++++++ 1 file changed, 115 insertions(+) create mode 100644 sources/tech/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md diff --git a/sources/tech/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md b/sources/tech/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md new file mode 100644 index 0000000000..b036d7a56a --- /dev/null +++ b/sources/tech/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md @@ -0,0 +1,115 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements) +[#]: via: (https://itsfoss.com/inkscape-1-release/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements +====== + +Even though I’m not an expert, it is safe to say that Inkscape is one of the [best vector graphics editors][1]. + +Not just limited to the reason that it is free and open-source software – but it is indeed a useful application for digital artists creating something on it. + +The last release (version 0.92) was about 3 years ago. And, now, finally, [Inkscape announced its 1.0 release][2] – with a bunch of new features, additions, and improvements. + +### Inkscape 1.0: What’s New? 
+ +![Inkscape 1.0][3] + +Here, let me highlight the important key changes that you need to know about Inkscape 1.0 release: + +#### First native macOS application + +It’s always good to have a proper cross-platform support for amazing tools like Inkscape. And, with the latest release, a native macOS application has been made available as well. + +Do note that the macOS app is still a **preview** version and has room for a lot of improvements. However, with a better system integration without needing [XQuartz][4], it should be a promising progress for macOS users. + +#### Performance Improvements + +Any kind of application/tool benefits from a significant performance boost. And, so does Inkscape. + +With its 1.0 release, they mention that you will be able to notice the smoother performance when using Inkscape for all the creative work you do. + +Except on macOS (which is still a “preview” version), Inkscape should run just fine on Linux and Windows. + +#### Improved UI and HiDPI Support + +![][5] + +In their release notes, they’ve mentioned: + +> A major milestone was achieved in enabling Inkscape to use a more recent version of the software used to build the editor’s user interface (namely GTK+3). Users with HiDPI (high resolution) screens can thank teamwork that took place during the 2018 Boston Hackfest for setting the updated-GTK wheels in motion. + +So, starting from GTK +3 user interface to the HiDPI support for high-resolution screens, it is a wonderful upgrade. + +Not to forget, you get more customization options to tweak the look and feel as well. + +#### New Feature Additions + +![][6] + +On paper, the list of new features sounds good. Depending on your expertise and what you prefer, the latest additions should come in handy. + +Here’s an overview of the new features: + + * New and improved Live Path Effect (LPE) features + * A new searchable LPE selection dialog + * Freestyle drawing users can now mirror and rotate the canvas + * The new PowerPencil mode of the Pencil tool provides pressure-dependent width and it is finally possible to create closed paths. + * New path effects that will appeal to the artistic user include Offset, PowerClip, and PowerMask LPEs. + * Ability to create a duplicate guide, aligning grids to the page, the Measure tool’s path length indicator, and the inverted Y-axis. + * Ability to export PDFs with clickable links and metadata + * New palettes and mesh gradients that work in the web browser + + + +While I’ve tried to compile the list of the key features added to this release, you can get all the nitty gritty details in their [release notes][7]. + +#### Other Important Changes + +Along with all the major changes, Inkscape 1.0 now supports Python 3. And, with that going forward, you might notice some extensions that don’t work with the latest version. + +So, if your work depends on the workflow of your extensions, I suggest you to take a closer look at their [release notes][7] to get all the technical details. + +### Download & Install Inkscape 1.0 on Linux + +Inkscape 1.0 is available in AppImage and Snap format for Linux. You can download it from Inkscape’s website. + +[Download Inkscape 1.0 for Linux][8] + +If you aren’t aware, you can check [how to use AppImage file on Linux][9] to get started. You may also refer to [this Snap guide][10]. + +Ubuntu users can find the snap version of Inskcape 1.0 in the Ubuntu Software Center. + +I used the AppImage file on [Pop OS 20.04][11] and it worked just fine to get started. 
You can test drive all the features in detail to see how it works out for you. + +Have you tried it yet? Let me know your thoughts in the comments below. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/inkscape-1-release/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/vector-graphics-editors-linux/ +[2]: https://inkscape.org/news/2020/05/04/introducing-inkscape-10/ +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/inkscape-1-0.jpg?ssl=1 +[4]: https://en.wikipedia.org/wiki/XQuartz +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/inkscape-ui-customization.jpg?ssl=1 +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/inkscape-live-path-effects.jpg?ssl=1 +[7]: https://wiki.inkscape.org/wiki/index.php/Release_notes/1.0 +[8]: https://inkscape.org/release/1.0/gnulinux/ +[9]: https://itsfoss.com/use-appimage-linux/ +[10]: https://itsfoss.com/install-snap-linux/ +[11]: https://itsfoss.com/pop-os-20-04-review/ From 73f3fbc0b669c5f0b41702b3c47ee465a5588d9a Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 6 May 2020 01:17:47 +0800 Subject: [PATCH 141/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200505=2011=20D?= =?UTF-8?q?evOps=20lessons=20from=20My=20Little=20Pony?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200505 11 DevOps lessons from My Little Pony.md --- ...5 11 DevOps lessons from My Little Pony.md | 96 +++++++++++++++++++ 1 file changed, 96 insertions(+) create mode 100644 sources/tech/20200505 11 DevOps lessons from My Little Pony.md diff --git a/sources/tech/20200505 11 DevOps lessons from My Little Pony.md b/sources/tech/20200505 11 DevOps lessons from My Little Pony.md new file mode 100644 index 0000000000..f550f871a1 --- /dev/null +++ b/sources/tech/20200505 11 DevOps lessons from My Little Pony.md @@ -0,0 +1,96 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (11 DevOps lessons from My Little Pony) +[#]: via: (https://opensource.com/article/20/5/devops-lessons) +[#]: author: (Moshe Zadka https://opensource.com/users/moshez) + +11 DevOps lessons from My Little Pony +====== +What you never thought you could learn about DevOps for Twilight Sparkle +and her friends. +![My Little Pony][1] + +In 2010, the My Little Pony franchise was rebooted with the animated show _My Little Pony: Friendship is Magic_. The combination of accessibility to children with the sophisticated themes the show tackled garnered a following that cut across ages. I was swept up in the wave and discovered there is a lot to learn about DevOps from the show. + +### Discovering technical debt + +The show begins with Twilight Sparkle reading obscure documentation, only to realize that Equestria, where the show is set, is due to suffer a calamity. Though someone named Nightmare Moon has been imprisoned for a thousand years, there is a prophecy she will return. + +#### Lesson 1: Technical debt matters. + +Nightmare Moon is a perfect stand-in for technical debt. Document it. Pay attention to the signs of risk no matter how infrequently they occur. Have a plan to resolve it. 
+ +Twilight Sparkle goes to her manager with the news, only to be told that it is not a current priority. She is sent to Ponyville to prepare for the coming celebration, instead. + +#### Lesson 2: Communication with management is key. + +Twilight Sparkle communicated her priority (the risk of technical debt) but did not convince her management that it was more important than the celebration (of the next release or a new customer). + +We all need to make clear what the business case is for resolving critical issues. It is also not straightforward to explain technical debt in business terms. If management does not agree on the severity, find new ways to communicate the risk, and team up with others who speak that language. + +### When technical debt becomes an outage + +As the prophecy has foreseen, Nightmare Moon returns and declares eternal night. (In this DevOps story, this marks the beginning of a catastrophic outage.) Twilight quickly understands that she cannot resolve the issue by herself, and she recruits the ponies who will become, with her, the "Mane Six." They each stand for a different element of harmony—Applejack stands for Honesty, Fluttershy for Kindness, Pinkie Pie for Laughter, Rarity for Generosity, Rainbow Dash for Loyalty, and Twilight Sparkle herself for Magic. This team-building is full of lessons: + +#### Lesson 3: Few are the issues that can be resolved by one person. + +When facing an outage, reach out to other people with complementary skills who can help you. It is best if they are different than you: different backgrounds leads to differing perspectives, and that can lead to better problem-solving. + +#### Lesson 4: When resolving an outage, honest communication is key. + +Throughout the struggle against the eternal night, the Mane Six have to speak openly and honestly about what's not working. Their [blameless communication][2] is part of problem-solving. + +#### Lesson 5: When resolving an outage, kindness to yourself and to others is crucial. + +Though tempers flare hot in the land of Equestria, we all benefit from coming back to working together. + +#### Lesson 6: Laughter is important. + +Even when everything comes crashing down, remember to take a break, drink a glass of water, and take a deep breath. Stressing out does not help anything. + +#### Lesson 7: Be generous. + +Even if you are not on-call right now, if your help is needed to resolve a problem, help out as you hope your colleagues will do for you. + +#### Lesson 8: Be loyal. + +An outage is not a time to settle rivalries between teams. Focus on how to collaborate and resolve the outage as a team. + +#### Lesson 9: Though people skills are important, you have to understand the technology on a deep level. + +Keep your skills sharp. Expertise is not only the ability to learn; it is knowing when that information is needed. Part of being an expert is practice. + +### Growing into a culture of continual improvement + +After the issue is resolved, Princess Celestia realizes that the Mane Six are crucial to the long-term survival of Equestria, and tells Twilight Sparkle to stay in Ponyville and keep researching the magic of friendship. + +#### Lesson 10: After an outage is resolved, conduct a review, take concrete lessons, and act on them. + +I could go on, episode by episode, detailing lessons relevant for DevOps, but I will wrap up with one of my favorite ones. In the "Winter Wrap-Up" episode, all the ponies in Ponyville help in preparing for the spring. 
As per tradition, they do not use magic, leaving Twilight Sparkle to wonder how she can contribute. Eventually, she realizes that she can help by making a checklist to make sure everything is done in the right order. + +#### Lesson 11: When automation is impossible or inadvisable, write a solid checklist, and follow it. Do not depend on your memory. + +Twilight Sparkle and the Mane Six overcome great obstacles as a team, and now have a system to improve as a team. + +### A story of DevOps + +This story reflects how many organizations slowly adopt DevOps. The transition from recognizing a fear of technical debt toward addressing it is not simple. With courageous leadership, teamwork, and a willingness to improve, all organizations can come out on the other side with a similar story to Twilight Sparkle and her friends. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/devops-lessons + +作者:[Moshe Zadka][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/moshez +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/my-little-pony.jpg?itok=X-rwAGuE (My Little Pony) +[2]: https://opensource.com/article/19/4/psychology-behind-blameless-retrospective From a0e4f624b4db19d7fde9a83a4a8ccc29bca27ec6 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 6 May 2020 01:18:47 +0800 Subject: [PATCH 142/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200505=208=20op?= =?UTF-8?q?en=20source=20video=20games=20to=20play?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200505 8 open source video games to play.md --- ...00505 8 open source video games to play.md | 116 ++++++++++++++++++ 1 file changed, 116 insertions(+) create mode 100644 sources/tech/20200505 8 open source video games to play.md diff --git a/sources/tech/20200505 8 open source video games to play.md b/sources/tech/20200505 8 open source video games to play.md new file mode 100644 index 0000000000..ac0577d96b --- /dev/null +++ b/sources/tech/20200505 8 open source video games to play.md @@ -0,0 +1,116 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (8 open source video games to play) +[#]: via: (https://opensource.com/article/20/5/open-source-fps-games) +[#]: author: (Aman Gaur https://opensource.com/users/amangaur) + +8 open source video games to play +====== +These games are fun and free to play, a way to connect with friends, and +an opportunity to make an old favorite even better. +![Gaming on a grid with penguin pawns][1] + +Video games are a big business. That's great for the industry's longevity—not to mention for all the people working in programming and graphics. But it can take a lot of work, time, and money to keep up with all the latest gaming crazes. If you feel like playing a few quick rounds of a video game without investing in a new console or game franchise, then you'll be happy to know that there are plenty of open source combat games you can download, play, share, and even modify (if you're inclined to programming) for free. + +First-person shooters (FPS) are one of the most popular categories of video games. 
They are centered around the perspective of the protagonist (the player), and they often offer weapon-based advancement. As you get better at the game, you survive longer, you get better weapons, and you increase your power. FPS games have a distinct look and feel, which is reflected in the category's name: players see everything—their weapons and the game world—in first person, as if they're looking through their player character's eyes. + +If you want to give one a try, check out the following eight great open source FPS games. + +### Xonotic + +![Xonotic][2] + +[Xonotic][3] is a fast-paced, arena-based FPS game. It is a popular game in the open source world. One reason could be the fact that it has never been a mainstream game. It offers a variety of weapons and enemies that are thrown right at you mercilessly from the start. Demanding quick action and response, it is an experience that will keep you on the edge of your seats. The game is available under the GPLv3+ license. + +### Wolfenstein Enemy Territory + +![Wolfenstein Enemy Territory][4] + +Wolfenstein has been a major franchise in gaming for many years. If you are a fan of gore and glory, then you've probably already heard of this game (if not, you'll love it once you try it). [Wolfenstein Enemy Territory][5] is an early iteration of the popular World War II game. It became free to play in 2003, and its [source code][6] is provided under the GPLv3. To play, however, you must own the game data (or recreate it yourself) separately (which remains under its original EULA). + +### Doom + +![Doom][7] + +[Doom][8] is a wildly popular game that was also an early example of games on Linux—way back in 2004. There are many iterations of the game, many of which have been released as open source. The game is about acquiring a teleportation device that's been captured by demons, so the violence, while gory, is low on realism. The source code for the game was provided under the GPL, but many versions require that you own the game for the game assets. There are dozens of ports and adaptations, including [Freedoom][9] (with free assets), [Dhewm3][10], [RBDoom-3-BFG][11], and many more. Try a few and pick your favorite! + +### Smokin' Guns + +![Smokin' Guns][12] + +If you're a fan of the Old West and six-shooters, this FPS is for you. From cowboys to gunslingers and with a captivating background score, [Smokin' Guns][13] has it all. It's a semi-realistic simulation of the old spaghetti western. On your way through the game, you face multiple enemies and get multiple weapons, so there's always the promise of excitement and danger around the corner. The game is free and open source under the terms of the GPLv2. + +### Nexuiz + +![Nexuiz][14] + +[Nexuiz][15] (classic) is another great FPS that's free to play on multiple platforms. The game is based on the Quake engine and has been made open source under the GNU GPLv2. The game offers multiple modes, including online, LAN party, and bot training. The game features sophisticated weapons and fast action. It's brutal and exciting, with an objective: kill as many opponents as possible before they get you. + +Note that the open source version of Nexuiz is not the same as the version built on CryEngine3 that is sold on Steam. + +### .kkrieger + +![kkrieger][16] + +[.Kkrieger][17] was developed in 2004 by .theprodukkt, a German demogroup. The game was developed using an unreleased (at the time) engine known as Werkkzeug. This game might feel a little slow to many, but it still offers an intense experience. 
The approaching enemies are slow, but their sheer number makes it confusing to know which one to take down first. It's an onslaught, and you have to shoot through layers of enemies before you reach the final boss. It was released in a rather raw form on [GitHub][18] by its creators under a BSD license with some public domain components. + +### Warsow + +![Warsow][19] + +If you've ever played Borderlands 2, then imagine [Warsow][20] as an arena-style Borderlands. The game is built on a modernized Quake II engine, and its plot takes a simple approach: Kill as many opponents as possible. The team with the most number of kills wins. Despite its simplicity, it features amazing weaponry and lots of great trick moves, like circle jumping, bunny hopping, double jumping, ramp sliding, and so on. It makes for an engaging multiplayer session, and it's been recognized by multiple online leagues as a worthy game for their competitions. Get the source code from [GitHub][21] or install the game from your software repository. + +### World of Padman + +![World of Padman][22] + +[The World of Padman][23] may be the last game on this list, but it's one of the most unique. Designed by PadWorld Entertainment, World of Padman takes a different twist graphically and introduces you to quirky and whimsical characters in a colorful (albeit cartoonishly violent) world. It's based on the ioquake3 engine, and its unique style and uproarious gameplay have earned it a featured place in multiple gaming magazines. You can download the source code from [GitHub][24]. + +### Give one a shot + +A game that becomes open source can act as a template for something great, whether it's a wholly open source version of an old classic, a remix of a beloved game, or an entirely new platform built on an old reliable engine. + +Open source gaming is important for many reasons: it provides users with a fun diversion, a way to connect with friends, and an opportunity for programmers and designers to hack within an existing framework. If titles like Doom weren't made open source, a little bit of video game history would be lost. Instead, it endures and has the opportunity to grow even more. + +Try an open source game, and watch your six. 
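+If you would rather sample a couple of these titles straight from your distribution's repositories, the sketch below shows one possible route on Debian- and Ubuntu-style systems. The package names are assumptions — they may be absent or named differently in your repositories — and the project download pages linked above remain the canonical source.
+
+```
+# Assumed package names; verify availability for your distribution first.
+# xonotic: the arena FPS; freedoom: free Doom game data;
+# prboom-plus: one of several engines that can run the Freedoom data.
+sudo apt install xonotic freedoom prboom-plus
+
+# The installed games normally appear in the desktop application menu afterwards.
+```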
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/open-source-fps-games + +作者:[Aman Gaur][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/amangaur +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/game_pawn_grid_linux.png?itok=4gERzRkg (Gaming on a grid with penguin pawns) +[2]: https://opensource.com/sites/default/files/uploads/xonotic.jpg (Xonotic) +[3]: https://www.xonotic.org/download/ +[4]: https://opensource.com/sites/default/files/uploads/wolfensteinenemyterritory.jpg (Wolfenstein Enemy Territory) +[5]: https://www.splashdamage.com/games/wolfenstein-enemy-territory/ +[6]: https://github.com/id-Software/Enemy-Territory +[7]: https://opensource.com/sites/default/files/uploads/doom.jpg (Doom) +[8]: https://github.com/id-Software/DOOM +[9]: https://freedoom.github.io/ +[10]: https://dhewm3.org/ +[11]: https://github.com/RobertBeckebans/RBDOOM-3-BFG/ +[12]: https://opensource.com/sites/default/files/uploads/smokinguns.jpg (Smokin' Guns) +[13]: https://www.smokin-guns.org/downloads +[14]: https://opensource.com/sites/default/files/uploads/nexuiz.jpg (Nexuiz) +[15]: https://sourceforge.net/projects/nexuiz/ +[16]: https://opensource.com/sites/default/files/uploads/kkrieger.jpg (kkrieger) +[17]: https://web.archive.org/web/20120204065621/http://www.theprodukkt.com/kkrieger +[18]: https://github.com/farbrausch/fr_public +[19]: https://opensource.com/sites/default/files/uploads/warsow.jpg (Warsow) +[20]: https://www.warsow.net/download +[21]: https://github.com/Warsow +[22]: https://opensource.com/sites/default/files/uploads/padman.jpg (World of Padman) +[23]: https://worldofpadman.net/en/ +[24]: https://github.com/PadWorld-Entertainment From 20388ad705e6854d78dd86f496f84b9d6b4e80c2 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Wed, 6 May 2020 01:19:38 +0800 Subject: [PATCH 143/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200505=20Analyz?= =?UTF-8?q?ing=20data=20science=20code=20with=20R=20and=20Emacs?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200505 Analyzing data science code with R and Emacs.md --- ...zing data science code with R and Emacs.md | 133 ++++++++++++++++++ 1 file changed, 133 insertions(+) create mode 100644 sources/tech/20200505 Analyzing data science code with R and Emacs.md diff --git a/sources/tech/20200505 Analyzing data science code with R and Emacs.md b/sources/tech/20200505 Analyzing data science code with R and Emacs.md new file mode 100644 index 0000000000..ebcfadbe92 --- /dev/null +++ b/sources/tech/20200505 Analyzing data science code with R and Emacs.md @@ -0,0 +1,133 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Analyzing data science code with R and Emacs) +[#]: via: (https://opensource.com/article/20/5/r-emacs-data-science) +[#]: author: (Peter Prevos https://opensource.com/users/danderzei) + +Analyzing data science code with R and Emacs +====== +Emacs' versatility and extensibility bring the editor's full power into +play for writing data science code. 
+![metrics and data shown on a computer screen][1] + +Way back in 2012, _Harvard Business Review_ published an article that proclaimed "data scientist" to be the [sexiest job][2] of the 21st century. Interest in data science has exploded since then. Many great open source projects, such as [Python][3] and the [R language][4] for statistical computing, have facilitated the rapid developments in how we analyze data. + +I started my career using pencil and paper and moved to spreadsheets. Now the R language is my weapon of choice when I need to create value from data. Emacs is another one of my favorite tools. This article briefly explains how to use the [Emacs Speaks Statistics][5] (ESS) package to get started with developing R projects in this venerable editor. + +The vast majority of R developers use the [RStudio][6] IDE to manage their projects. RStudio is a powerful open source editor with specialized functionality to develop data science projects. RStudio is a great integrated development environment (IDE), but its editing functions are limited. + +Using Emacs to write data science code means that you have access to the full power of this extensible editor. I prefer using Emacs for my data science projects because I can do many other tasks within the same application, leveraging the multifunctionality of this venerable editor. If you are just getting started with Emacs, then please first read Seth Kenlon's [Emacs getting started][7] article. + +### Setting up Emacs for R + +Emacs is an almost infinitely extensible text editor, which unfortunately means that many things don't work the way you want them to out of the box. Before you can write and execute R scripts, you need to install some packages and configure them. The ESS package provides an interface between Emacs and R. Other packages, such as [Company][8] and [highlight-parentheses][9] help with completion and balancing parentheses. + +Emacs uses a version of Lisp for configuration. The lines of [Emacs Lisp][10] code below install the required extensions and define a minimal configuration to get you started. These lines were tested for GNU Emacs version 26.3. + +Copy these lines and save them in a file named **init.el** in your **.emacs.d** folder. This is the folder that Emacs uses to store configurations, including the [init file][11]. If you already have an init file, then you can append these lines to your config. This minimal configuration is enough to get you started. + + +``` +;; Elisp file for R coding with Emacs + +;; Add MELPA repository and initialise the package manager +(require 'package) +(add-to-list 'package-archives +             '("melpa" . "")) +(package-initialize) + +;; Install use-package,in case it does not exist yet +;; The use-package software will install all other packages as required +(unless (package-installed-p 'use-package) +  (package-refresh-contents) +  (package-install 'use-package)) + +;; ESS configurationEmacs Speaks Statistics +(use-package ess +  :ensure t +) + +;; Auto completion +(use-package company +  :ensure t +  :config +  (setq company-idle-delay 0) +  (setq company-minimum-prefix-length 2) +  (global-company-mode t) +) + +; Parentheses +(use-package highlight-parentheses +  :ensure t +  :config +  (progn +    (highlight-parentheses-mode) +    (global-highlight-parentheses-mode)) +  ) +``` + +### Using the R console + +To start an R console session, press **M-x R** and hit **Enter** (**M** is the Emacs way to denote the **Alt** or **Command** key). 
ESS will ask you to nominate a working directory, which defaults to the folder of the current buffer. You can use more than one console in the same Emacs session by repeating the R command. + +Emacs opens a new buffer for your new R console. You can also use the **Up** and **Down** arrow keys to go to previous lines and re-run them. Use the **Ctrl** and **Up/Down** arrow keys to recycle old commands. + +The Company ("complete anything") package manages autocompletion in both the console and R scripts. When entering a function, the mini-buffer at the bottom of the screen shows the relevant parameters. When the autocompletion dropdown menu appears, you can press **F1** to view the chosen option's Help file before you select it. + +The [highlight-parentheses][9] package does what its name suggests. Several other Emacs packages are available to help you balance parentheses and other structural elements in your code. + +### Writing R scripts + +Emacs recognizes R mode for any buffer with a **.R** extension (the file extension is case-sensitive). Open or create a new file with the **C-x C-f** shortcut and type the path and file name. You can start writing your code and use all of the powerful editing techniques that Emacs provides. + +Several functions are available to evaluate the code. You can evaluate each line separately with **C-<return>**, while **C-c C-c** will evaluate a contiguous region. Keying **C-c C-b** will evaluate the whole buffer. + +When you evaluate some code, Emacs will use any running console or ask you to open a new console to run the code. + +The output of any plotting functions appears in a window outside of Emacs. If you prefer to view the output within Emacs, then you need to save the output to disk and open the resulting file in a separate buffer. + +![Literate programming in Org mode, the ESS buffer, and graphics output.][12] + +Literate programming in Org mode, the ESS buffer, and graphics output. + +### Advanced use + +This article provides a brief introduction to using R in Emacs. Many parameters can be fine-tuned to make Emacs behave according to your preferences, but it would take too much space to cover them here. The [ESS manual][13] describes these in detail. You can also extend functionality with additional packages. + +Org mode can integrate R code, providing a productive platform for literate programming. If you prefer to use RMarkdown, the [Polymode][14] package has you covered. + +Emacs has various packages to make your editing experience more efficient. The best part of using Emacs to write R code is that the program is more than just an IDE; it is a malleable computer system that you can configure to match your favorite workflow. + +Learning how to configure Emacs can be daunting. The best way to learn quickly is to copy ideas from people who share their configurations. Miles McBain manages a [list of Emacs configurations][15] that could be useful if you want to explore using the R language in Emacs further. 
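+As a final practical note, if you are starting from a machine that has neither R nor Emacs installed, the initial setup is small. The commands below are only a sketch for Debian- or Ubuntu-style systems; the package names are assumptions for that family of distributions, and `my-analysis.R` is just a hypothetical file name.
+
+```
+# Install Emacs and R themselves; ESS, Company, and highlight-parentheses are
+# pulled from MELPA by the init file shown earlier, not from the distribution.
+sudo apt install emacs r-base
+
+# Create the configuration folder, place the Elisp from this article into
+# ~/.emacs.d/init.el, then open an R script.
+mkdir -p ~/.emacs.d
+emacs my-analysis.R
+```
+
+On the first launch, use-package will download and compile ESS and the other packages from MELPA, which can take a minute; after that, `M-x R` and the evaluation keybindings behave as described above.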
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/r-emacs-data-science + +作者:[Peter Prevos][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/danderzei +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen) +[2]: https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century +[3]: https://www.python.org/ +[4]: https://www.r-project.org/ +[5]: https://ess.r-project.org/ +[6]: https://opensource.com/article/18/2/getting-started-RStudio-IDE +[7]: https://opensource.com/article/20/3/getting-started-emacs +[8]: https://company-mode.github.io/ +[9]: https://github.com/tsdh/highlight-parentheses.el +[10]: https://en.wikipedia.org/wiki/Emacs_Lisp +[11]: https://www.gnu.org/software/emacs/manual/html_node/emacs/Init-File.html +[12]: https://opensource.com/sites/default/files/uploads/r-ess-screenshot.jpg (Literate programming in Org mode, the ESS buffer, and graphics output.) +[13]: https://ess.r-project.org/index.php?Section=documentation&subSection=manuals +[14]: https://github.com/polymode/polymode +[15]: https://github.com/MilesMcBain/esscss From bbb7192034188dc2b21697981518e40fc3701d04 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 6 May 2020 08:22:48 +0800 Subject: [PATCH 144/178] Rename sources/tech/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md to sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md --- ...scape 1.0 is Finally Here With Tons of Feature Improvements.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => news}/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md (100%) diff --git a/sources/tech/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md b/sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md similarity index 100% rename from sources/tech/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md rename to sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md From 1a966c068ded416f55cdfc4bd145d11dcfb4cf5f Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Wed, 6 May 2020 08:25:33 +0800 Subject: [PATCH 145/178] Rename sources/tech/20200505 11 DevOps lessons from My Little Pony.md to sources/talk/20200505 11 DevOps lessons from My Little Pony.md --- .../20200505 11 DevOps lessons from My Little Pony.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => talk}/20200505 11 DevOps lessons from My Little Pony.md (100%) diff --git a/sources/tech/20200505 11 DevOps lessons from My Little Pony.md b/sources/talk/20200505 11 DevOps lessons from My Little Pony.md similarity index 100% rename from sources/tech/20200505 11 DevOps lessons from My Little Pony.md rename to sources/talk/20200505 11 DevOps lessons from My Little Pony.md From 40bb59f0f21f43ea281031b80bc42dc2edffc89c Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 
6 May 2020 08:35:34 +0800 Subject: [PATCH 146/178] translated --- ...and Folders on Desktop Screen in Ubuntu.md | 100 ------------------ ...and Folders on Desktop Screen in Ubuntu.md | 100 ++++++++++++++++++ 2 files changed, 100 insertions(+), 100 deletions(-) delete mode 100644 sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md create mode 100644 translated/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md diff --git a/sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md b/sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md deleted file mode 100644 index 32eeaa9c95..0000000000 --- a/sources/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md +++ /dev/null @@ -1,100 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Using Files and Folders on Desktop Screen in Ubuntu) -[#]: via: (https://itsfoss.com/add-files-on-desktop-ubuntu/) -[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) - -Using Files and Folders on Desktop Screen in Ubuntu -====== - -_**This beginner tutorial discusses a few difficulties you may face while adding files and folders on the desktop screen on Ubuntu.**_ - -I know a few people who are habitual of putting all the important/frequently used files on the desktop screen for quick access. - -![][1] - -I am not a fan of a cluttered desktop screen but I can imagine that it might actually be helpful to some people. - -For the past few releases, it has been difficult to add files on the desktop screen in Ubuntu’s default GNOME desktop. It’s not really Ubuntu’s fault. - -The [GNOME][2] developers thinks that there is no place for icons and files on the desktop screen. There is no need of putting files on the desktop when you can easily search for it in the menu. And that’s part true. - -This is why the newer version of [GNOME’s File Manager Nautilus][3] doesn’t support icons and files on the desktop very well. - -That said, it’s not impossible to add files and folders on the desktop. Let me show you how you can still use it. - -### Adding files and folders on the desktop screen in Ubuntu - -![][4] - -I am using Ubuntu 20.04 in this tutorial. The steps may or may not vary for other Ubuntu versions. - -#### Add the files and folders to the “Desktop folder” - -If you open the file manager, you should see an entry called Desktop in the left sidebar or in the folders list. This folder represents your desktop screen (in a way). - -![Desktop folder can be used to add files to the desktop screen][5] - -Anything you add to this folder will be reflected on the desktop screen. - -![Anything added to the Desktop folder will be reflected on the desktop screen][6] - -If you delete files from this ‘Desktop folder’, it will be removed from the desktop screen as well. - -#### Drag and drop files to desktop screen doesn’t work - -Now, if you try to drag and drop files from the file manager on the desktop, it won’t work. It’s not a bug, it’s a feature that irks a lot of people. - -A workaround would be to open two instances of the file manager. Open Desktop folder in one of them and then drag and drop files to this folder and they will be added on the desktop. - -I know that’s not ideal but you don’t have a lot of choices here. 
- -#### You cannot use Ctrl+C and Ctrl+V to copy-paste on the desktop, use the right click menu - -To add salt to injury, you cannot use Ctrl+V the famous keyboard shortcut to paste files on the desktop screen. - -But you can still use the right click context menu and select Paste from there to put the copied files on the desktop. You can even create new folders this way. - -![Right click menu can be used for copy-pasting files to desktop][7] - -Does it make sense? Not to me but that’s how it is in Ubuntu 20.04. - -#### You cannot delete files and folder using the Delete key, use the right click menu again - -What’s worse is that you cannot use the delete key or shift delete key to remove files from the desktop screen. But you can still right click on the files or folders and select “Move to trash” to delete the file. - -![Delete files from desktop using right click][8] - -Alright, so now you know that at least there is a way to add files on the desktop with some restrictions. But it doesn’t end here unfortunately. - -You cannot search for files with their names on the desktop screen. Normally, if you start typing ‘abc’, files starting with ‘abc’ are highlighted. You don’t get it here. - -I don’t know why so many restrictions have been put on adding files on the desktop. Thankfully, I don’t use it a lot otherwise I have been way too frustrated. - -If interested, you may read about [adding application shortcut on the desktop in Ubuntu][9] as well. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/add-files-on-desktop-ubuntu/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[b]: https://github.com/lujun9972 -[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/files-on-desktop-ubuntu.jpg?ssl=1 -[2]: https://www.gnome.org/ -[3]: https://wiki.gnome.org/Apps/Files -[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/adding-files-desktop-ubuntu.png?ssl=1 -[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/desktop-folder-ubuntu.png?ssl=1 -[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/adding-files-desktop-screen-ubuntu.jpg?ssl=1 -[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/adding-new-files-ubuntu-desktop.jpg?ssl=1 -[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/delete-files-from-desktop-ubuntu.jpg?ssl=1 -[9]: https://itsfoss.com/ubuntu-desktop-shortcut/ diff --git a/translated/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md b/translated/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md new file mode 100644 index 0000000000..984bb4dedb --- /dev/null +++ b/translated/tech/20200428 Using Files and Folders on Desktop Screen in Ubuntu.md @@ -0,0 +1,100 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using Files and Folders on Desktop Screen in Ubuntu) +[#]: via: (https://itsfoss.com/add-files-on-desktop-ubuntu/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +在 Ubuntu 桌面中使用文件和文件夹 +====== + +_**此初学者教程讨论了 在Ubuntu 桌面上添加文件和文件夹时可能遇到的一些困难。**_ + +我认识一些习惯将所有重要/常用文件放在桌面上以便快速访问的人。 + +![][1] + +我不喜欢杂乱的桌面,但是我可以想象它实际上可能对某些人有所帮助。 + +在过去的几个版本中,很难在 Ubuntu 的默认 GNOME 桌面上添加文件。这并不是 Ubuntu 的错。 + +[GNOME][2] 
的开发者认为,桌面上没有图标和文件的位置。当你可以在菜单中轻松搜索文件时,无需将文件放在桌面上。这部分是事实。 + +这就是为什么 [GNOME 的文件管理器 Nautilus][3]的较新版本不能很好地支持桌面上的图标和文件的原因。 + +也就是说,在桌面上添加文件和文件夹并非没有可能。让我告诉你如何做。 + +### 在 Ubuntu 的桌面上添加文件和文件夹 + +![][4] + +我在本教程中使用的是 Ubuntu 20.04。对于其他 Ubuntu 版本,步骤可能会有所不同。 + +#### 将文件和文件夹添加到“桌面文件夹” + +如果打开文件管理器,你应该在左侧边栏或文件夹列表中看到一个名为“桌面”的条目。此文件夹(以某种方式)代表你的桌面。 + +![Desktop folder can be used to add files to the desktop screen][5] + +你添加到此文件夹的所有内容都会反应在桌面上。 + +![Anything added to the Desktop folder will be reflected on the desktop screen][6] + +如果你从“桌面文件夹”中删除文件,那么文件也会从桌面中删除。 + +#### 将文件拖放到桌面不起作用 + +现在,如果你尝试在桌面上从文件管理器拖放文件,它会不起使用。这不是一个 bug,它是一个使很多人恼火的功能。 + +一种临时方案是打开两个文件管理器。在其中一个打开“桌面”文件夹,然后将文件拖放到该文件夹​​中,它们将被添加到桌面上。 + +我知道这并不理想,但是你没有太多选择。 + +#### 你不能使用 Ctrl+C 和 Ctrl+V 在桌面上复制粘贴,请使用右键单击菜单 + +更恼人的是,你不能使用 Ctrl+V(著名的键盘快捷键)将文件粘贴到桌面上。 + +但是,你仍然可以使用右键单击,然后选择“粘贴”,将文件复制到桌面上。你甚至可以通过这种方式创建新文件夹。 + +![Right click menu can be used for copy-pasting files to desktop][7] + +是否有意义?对我来说不是,但这就是 Ubuntu 20.04 的方式。 + +#### 你无法使用 Delete 键删除文件和文件夹,请再次使用右键菜单 + +更糟糕的是,你无法使用 Delete 键或 Shift+Delete 键从桌面上删除文件。但是你仍然可以右键单击文件或文件夹,然后选择“移至回收站”来删除文件。 + +![Delete files from desktop using right click][8] + +好了,你现在知道至少有一种方法可以在桌面上添加文件,但有一些限制。不幸的是,这还没有结束。 + +你无法在桌面上用名称搜索文件。通常,如果你开始输入 “abc”,那么以 “abc” 开头的文件会高亮显示。你并不明白。 + +我不知道为什么在桌面上添加文件受到了如此多的限制。值得庆幸的是,我不会经常使用它,否则我会感到非常沮丧。 + +如果有兴趣,你也可以阅读[在 Ubuntu 桌面上添加应用快捷方式][9]这篇文章。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/add-files-on-desktop-ubuntu/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/files-on-desktop-ubuntu.jpg?ssl=1 +[2]: https://www.gnome.org/ +[3]: https://wiki.gnome.org/Apps/Files +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/adding-files-desktop-ubuntu.png?ssl=1 +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/desktop-folder-ubuntu.png?ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/adding-files-desktop-screen-ubuntu.jpg?ssl=1 +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/adding-new-files-ubuntu-desktop.jpg?ssl=1 +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/delete-files-from-desktop-ubuntu.jpg?ssl=1 +[9]: https://itsfoss.com/ubuntu-desktop-shortcut/ From bc09a3b5a8ce1d3a1e221f6f9dc6e8af9b04238f Mon Sep 17 00:00:00 2001 From: geekpi Date: Wed, 6 May 2020 08:39:07 +0800 Subject: [PATCH 147/178] translating --- .../20200501 Using mergerfs to increase your virtual storage.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200501 Using mergerfs to increase your virtual storage.md b/sources/tech/20200501 Using mergerfs to increase your virtual storage.md index 06f26abf51..734af93148 100644 --- a/sources/tech/20200501 Using mergerfs to increase your virtual storage.md +++ b/sources/tech/20200501 Using mergerfs to increase your virtual storage.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From bfd297b16f67a07453d9db0fe6f0e15755626336 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 6 May 2020 08:39:28 +0800 Subject: [PATCH 148/178] APL --- ...ape 1.0 is Finally Here With Tons of Feature Improvements.md | 2 +- 1 
file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md b/sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md index b036d7a56a..feb249b973 100644 --- a/sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md +++ b/sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (wxy) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 8cfbdd4cb35e28c472c41ee509314be828b5048a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 6 May 2020 09:41:33 +0800 Subject: [PATCH 149/178] TSL&PRF --- ... Here With Tons of Feature Improvements.md | 115 ------------------ ... Here With Tons of Feature Improvements.md | 115 ++++++++++++++++++ 2 files changed, 115 insertions(+), 115 deletions(-) delete mode 100644 sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md create mode 100644 translated/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md diff --git a/sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md b/sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md deleted file mode 100644 index feb249b973..0000000000 --- a/sources/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md +++ /dev/null @@ -1,115 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (wxy) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements) -[#]: via: (https://itsfoss.com/inkscape-1-release/) -[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) - -After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements -====== - -Even though I’m not an expert, it is safe to say that Inkscape is one of the [best vector graphics editors][1]. - -Not just limited to the reason that it is free and open-source software – but it is indeed a useful application for digital artists creating something on it. - -The last release (version 0.92) was about 3 years ago. And, now, finally, [Inkscape announced its 1.0 release][2] – with a bunch of new features, additions, and improvements. - -### Inkscape 1.0: What’s New? - -![Inkscape 1.0][3] - -Here, let me highlight the important key changes that you need to know about Inkscape 1.0 release: - -#### First native macOS application - -It’s always good to have a proper cross-platform support for amazing tools like Inkscape. And, with the latest release, a native macOS application has been made available as well. - -Do note that the macOS app is still a **preview** version and has room for a lot of improvements. However, with a better system integration without needing [XQuartz][4], it should be a promising progress for macOS users. - -#### Performance Improvements - -Any kind of application/tool benefits from a significant performance boost. And, so does Inkscape. - -With its 1.0 release, they mention that you will be able to notice the smoother performance when using Inkscape for all the creative work you do. 
- -Except on macOS (which is still a “preview” version), Inkscape should run just fine on Linux and Windows. - -#### Improved UI and HiDPI Support - -![][5] - -In their release notes, they’ve mentioned: - -> A major milestone was achieved in enabling Inkscape to use a more recent version of the software used to build the editor’s user interface (namely GTK+3). Users with HiDPI (high resolution) screens can thank teamwork that took place during the 2018 Boston Hackfest for setting the updated-GTK wheels in motion. - -So, starting from GTK +3 user interface to the HiDPI support for high-resolution screens, it is a wonderful upgrade. - -Not to forget, you get more customization options to tweak the look and feel as well. - -#### New Feature Additions - -![][6] - -On paper, the list of new features sounds good. Depending on your expertise and what you prefer, the latest additions should come in handy. - -Here’s an overview of the new features: - - * New and improved Live Path Effect (LPE) features - * A new searchable LPE selection dialog - * Freestyle drawing users can now mirror and rotate the canvas - * The new PowerPencil mode of the Pencil tool provides pressure-dependent width and it is finally possible to create closed paths. - * New path effects that will appeal to the artistic user include Offset, PowerClip, and PowerMask LPEs. - * Ability to create a duplicate guide, aligning grids to the page, the Measure tool’s path length indicator, and the inverted Y-axis. - * Ability to export PDFs with clickable links and metadata - * New palettes and mesh gradients that work in the web browser - - - -While I’ve tried to compile the list of the key features added to this release, you can get all the nitty gritty details in their [release notes][7]. - -#### Other Important Changes - -Along with all the major changes, Inkscape 1.0 now supports Python 3. And, with that going forward, you might notice some extensions that don’t work with the latest version. - -So, if your work depends on the workflow of your extensions, I suggest you to take a closer look at their [release notes][7] to get all the technical details. - -### Download & Install Inkscape 1.0 on Linux - -Inkscape 1.0 is available in AppImage and Snap format for Linux. You can download it from Inkscape’s website. - -[Download Inkscape 1.0 for Linux][8] - -If you aren’t aware, you can check [how to use AppImage file on Linux][9] to get started. You may also refer to [this Snap guide][10]. - -Ubuntu users can find the snap version of Inskcape 1.0 in the Ubuntu Software Center. - -I used the AppImage file on [Pop OS 20.04][11] and it worked just fine to get started. You can test drive all the features in detail to see how it works out for you. - -Have you tried it yet? Let me know your thoughts in the comments below. 
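For readers who have never launched an AppImage before, the process comes down to marking the download as executable and running it. This is a minimal sketch in which the filename is a placeholder, since the actual download carries a version-specific name:

```
# Make the downloaded AppImage executable, then launch it
chmod +x ~/Downloads/Inkscape-1.0.AppImage
~/Downloads/Inkscape-1.0.AppImage

# Ubuntu users can also install the snap build from the command line
sudo snap install inkscape
```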
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/inkscape-1-release/ - -作者:[Ankush Das][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/ankush/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/vector-graphics-editors-linux/ -[2]: https://inkscape.org/news/2020/05/04/introducing-inkscape-10/ -[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/inkscape-1-0.jpg?ssl=1 -[4]: https://en.wikipedia.org/wiki/XQuartz -[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/inkscape-ui-customization.jpg?ssl=1 -[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/inkscape-live-path-effects.jpg?ssl=1 -[7]: https://wiki.inkscape.org/wiki/index.php/Release_notes/1.0 -[8]: https://inkscape.org/release/1.0/gnulinux/ -[9]: https://itsfoss.com/use-appimage-linux/ -[10]: https://itsfoss.com/install-snap-linux/ -[11]: https://itsfoss.com/pop-os-20-04-review/ diff --git a/translated/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md b/translated/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md new file mode 100644 index 0000000000..06c61b776b --- /dev/null +++ b/translated/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md @@ -0,0 +1,115 @@ +[#]: collector: (lujun9972) +[#]: translator: (wxy) +[#]: reviewer: (wxy) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements) +[#]: via: (https://itsfoss.com/inkscape-1-release/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +经过了 3 年,Inkscape 1.0 终于发布了 +====== + +![](https://img.linux.net.cn/data/attachment/album/202005/06/094055fvnh9nnnbybwl4jn.jpg) + +虽然我不是这方面的专业人员,但可以肯定地说,Inkscape 是[最好的矢量图形编辑器][1]之一。 + +不仅仅因为它是自由开源软件,而且对于数字艺术家来说,它是一个非常有用的应用程序。 + +上一次发布(0.92 版本)是在 3 年前。现在,终于,[Inkscape 宣布了它的 1.0 版本][2] —— 增加了很多新的功能和改进。 + +### Inkscape 1.0 里的新东西 + +![Inkscape 1.0][3] + +在这里,让我重点介绍一下 Inkscape 1.0 版本中重要关键变化。 + +#### 首个原生 macOS 应用 + +对于像 Inkscape 这样的神奇工具来说,适当的跨平台支持总是好的。在这个最新的版本中,它推出了原生的 macOS 应用。 + +请注意,这个 macOS 应用仍然是一个**预览版**,还有很多改进的空间。不过,在无需 [XQuartz][4] 的情况下就做到了更好的系统集成,对于 macOS 用户来说,应该是一个值得期许的进步。 + +#### 性能提升 + +不管是什么应用程序/工具,都会从显著的性能提升中受益,而 Inkscape 也是如此。 + +随着其 1.0 版本的发布,他们提到,当你使用 Inkscape 进行各种创意工作时,你会发现性能更加流畅。 + +除了在 macOS 上(仍为“预览版”),Inkscape 在 Linux 和 Windows 上的运行都是很好的。 + +#### 改进的 UI 和 HiDPI 支持 + +![][5] + +他们在发布说明中提到: + +> ……达成了一个重要的里程碑,使 Inkscape 能够使用最新的软件(即 GTK+3)来构建编辑器的用户界面。拥有 HiDPI(高分辨率)屏幕的用户要感谢 2018 年波士顿黑客节期间的团队合作,让更新后的 GTK 轮子开始运转起来。 + +从 GTK+3 的用户界面到高分辨率屏幕的 HiDPI 支持,这都是一次精彩的升级。 + +更不要忘了,你还可以获得更多的自定义选项来调整外观和感受。 + +#### 新增功能 + +![][6] + +即便是从纸面上看,这些列出新功能都看起来不错。根据你的专业知识和你的喜好,这些新增功能应该会派上用场。 + +以下是新功能的概述: + + * 新改进过的实时路径效果(LPE)功能。 + * 新的可搜索的 LPE 选择对话框。 + * 自由式绘图用户现在可以对画布进行镜像和旋转。 + * 铅笔工具的新的 PowerPencil 模式提供了压感的宽度,并且终于可以创建封闭路径了。 + * 包括偏移、PowerClip 和 PowerMask LPE 在内的新路径效果会吸引艺术类用户。 + * 能够创建复制引导、将网格对齐到页面上、测量工具的路径长度指示器和反向 Y 轴。 + * 能够导出带有可点击链接和元数据的 PDF 文件。 + * 新的调色板和网状渐变,可在网页浏览器中使用。 + +虽然我已经尝试着整理了这个版本中添加的关键功能列表,但你可以在他们的[发布说明][7]中获得全部细节。 + +#### 其他重要变化 + +作为重大变化之一,Inkscape 1.0 现在支持 Python 3。而且,随着这一变化,你可能会注意到一些扩展程序无法在最新版本中工作。 + +所以,如果你的工作依赖于某个扩展程序的工作流程,我建议你仔细看看他们的[发布说明][7],了解所有的技术细节。 + +### 在 Linux 上下载和安装 Inkscape 1.0 + +Inkscape 1.0 
有用于 Linux 的 AppImage 和 Snap 软件包,你可以从 Inkscape 的网站上下载。 + +- [下载 Inkscape 1.0 for Linux][8] + +如果你还不知道,可以查看[如何在 Linux 上使用 AppImage 文件][9]来入门。你也可以参考[这个 Snap 指南][10]。 + +Ubuntu 用户可以在 Ubuntu 软件中心找到 Inskcape 1.0 的 Snap 版本。 + +我在 [Pop!_OS 20.04][11] 上使用了 AppImage 文件,工作的很好。你可以详细体验所有的功能,看看它的效果如何。 + +你试过了吗?请在下面的评论中告诉我你的想法。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/inkscape-1-release/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/vector-graphics-editors-linux/ +[2]: https://inkscape.org/news/2020/05/04/introducing-inkscape-10/ +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/inkscape-1-0.jpg?ssl=1 +[4]: https://en.wikipedia.org/wiki/XQuartz +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/inkscape-ui-customization.jpg?ssl=1 +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/inkscape-live-path-effects.jpg?ssl=1 +[7]: https://wiki.inkscape.org/wiki/index.php/Release_notes/1.0 +[8]: https://inkscape.org/release/1.0/gnulinux/ +[9]: https://itsfoss.com/use-appimage-linux/ +[10]: https://itsfoss.com/install-snap-linux/ +[11]: https://itsfoss.com/pop-os-20-04-review/ From cf259313357b80fa60fc812766ac71fb373810c9 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 6 May 2020 09:48:22 +0800 Subject: [PATCH 150/178] PUB @wxy https://linux.cn/article-12188-1.html --- ...e 1.0 is Finally Here With Tons of Feature Improvements.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/news => published}/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md (98%) diff --git a/translated/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md b/published/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md similarity index 98% rename from translated/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md rename to published/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md index 06c61b776b..fcf6c3b3ca 100644 --- a/translated/news/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md +++ b/published/20200506 After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (wxy) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12188-1.html) [#]: subject: (After More Than 3 Years, Inkscape 1.0 is Finally Here With Tons of Feature Improvements) [#]: via: (https://itsfoss.com/inkscape-1-release/) [#]: author: (Ankush Das https://itsfoss.com/author/ankush/) From a6da4f7e618643ce6beb0c799a33028d7b403c0f Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Wed, 6 May 2020 21:38:22 +0800 Subject: [PATCH 151/178] Almost done --- ... 
Create a SDN on Linux with open source.md | 64 +++++++++++++------ 1 file changed, 45 insertions(+), 19 deletions(-) diff --git a/sources/tech/20200417 Create a SDN on Linux with open source.md b/sources/tech/20200417 Create a SDN on Linux with open source.md index 5e69bb4db6..fe22d47bbe 100644 --- a/sources/tech/20200417 Create a SDN on Linux with open source.md +++ b/sources/tech/20200417 Create a SDN on Linux with open source.md @@ -7,27 +7,33 @@ [#]: via: (https://opensource.com/article/20/4/quagga-linux) [#]: author: (M Umer https://opensource.com/users/noisybotnet) -Create a SDN on Linux with open source +Create a SDN on Linux with open source 在Linux上使用开源代码创建SDN ====== -Make your Linux system act like a router with the open source routing -stack Quagga. +Make your Linux system act like a router with the open source routing stack Quagga. +使用开源路由协议栈Quagga,使您的Linux系统成为一台路由器。 ![Coding on a computer][1] Network routing protocols fall into two main categories: interior gateway protocols and exterior gateway protocols. Interior gateway protocols are used by routers to share information within a single autonomous system. If you are running Linux, you can make your system behave as a router through the open source (GPLv2) routing stack [Quagga][2]. +网络路由协议分为两大类:内部网关协议和外部网关协议。路由器使用内部网关协议在单个自治系统内共享信息。如果您用的是Linux,则可以通过开源(GPLv2)路由协议栈[Quagga][2]使其表现得像一台路由器。 -### What is Quagga? +### What is Quagga? 什么是Quagga? Quagga is a [routing software suite][3] and a fork of [GNU Zebra][4]. It provides implementations of all major routing protocols such as Open Shortest Path First (OSPF), Routing Information Protocol (RIP), Border Gateway Protocol (BGP), and Intermediate System to Intermediate System (IS-IS) for Unix-like platforms. Although Quagga implements the routing protocols for both IPv4 and IPv6, it doesn't act as a complete router. A true router not only implements all the routing protocols but also has the ability to forward network traffic. Quagga only implements the routing stack, and the job of forwarding network traffic is handled by the Linux kernel. -### Architecture +Quagga是[路由软件包][3],并且是[GNU Zebra][4]的一个分支。它为类Unix平台提供了所有主流路由协议的实现,例如开放最短路径优先(OSPF),路由信息协议(RIP),边界网关协议(BGP)和中间系统到中间系统协议(IS-IS)。 + +尽管Quagga为IPv4和IPv6都实现了路由协议,但它却不是一个完整的路由器。真正的路由器不仅实现了所有路由协议,而且还有转发网络流量的能力。 Quagga仅仅实现了路由协议栈,而转发网络流量的工作由Linux内核处理。 + +### Architecture 架构 Quagga implements the different routing protocols through protocol-specific daemons. The daemon name is the same as the routing protocol followed by the letter "d." Zebra is the core and a protocol-independent daemon that provides an [abstraction layer][5] to the kernel and presents the Zserv API over TCP sockets to Quagga clients. Each protocol-specific daemon is responsible for running the relevant protocol and building the routing table based on the information exchanged. +Quagga通过协议特定的守护程序实现不同的路由协议。守护程序名称与路由协议相同,加了字母“d”作为后缀。Zebra是核心的协议无关的守护进程,它为内核提供了一个[抽象层][5],并通过TCP套接字向Quagga客户端提供Zserv API。每个协议特定的守护程序负责运行相关的协议并基于交换的信息来建立路由表。 ![Quagga architecture][6] -### Setup +### Setup 环境 This tutorial implements the OSPF protocol to configure dynamic routing using Quagga. The setup includes two CentOS 7.7 hosts, named Alpha and Beta. Both hosts share access to the **192.168.122.0/24** network. 
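Once the configuration shown in the rest of this article is in place on both hosts, Quagga's integrated shell `vtysh` offers a quick way to confirm that the OSPF adjacency actually formed. This is only a sketch: the exact output varies with your setup, and it assumes the quagga package installed `vtysh`, which it normally does.

```
# On either host, query the OSPF daemon through Quagga's integrated shell
vtysh -c 'show ip ospf neighbor'   # the other router should be listed as Full
vtysh -c 'show ip route ospf'      # routes learned via OSPF

# Kernel routing table view, as used later in the article
ip route show
```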
@@ -41,26 +47,38 @@ Gateway: 192.168.122.1 IP: 192.168.122.50/24 Gateway: 192.168.122.1 -### Install the package +本教程通过Quagga实现的OSPF协议来配置动态路由。该环境包括两个名为Alpha和Beta的CentOS 7.7主机。两台主机共享访问 **192.168.122.0/24** 网络。 + +**主机Alpha** + +IP:192.168.122.100/24 +网关:192.168.122.1 + +**主机Beta** + +IP:192.168.122.50/24 +网关:192.168.122.1 + +### Install the package 安装软件包 First, install the Quagga package on both hosts. It is available in the CentOS base repo: - +首先,在两台主机上安装Quagga软件包。它存在于CentOS基础仓库中: ``` `yum install quagga -y` ``` -### Enable IP forwarding +### Enable IP forwarding 使能IP转发 Next, enable IP forwarding on both hosts since that will performed by the Linux kernel: - +接下来,在两台主机上使能IP转发,因为它将由Linux内核来执行: ``` sysctl -w net.ipv4.ip_forward = 1 sysctl -p ``` -### Configuration +### Configuration 配置 Now, go into the **/etc/quagga** directory and create the configuration files for your setup. You need three files: @@ -71,7 +89,14 @@ Now, go into the **/etc/quagga** directory and create the configuration files fo On host Alpha, +现在,进入 **/etc/quagga** 目录并为您的设置创建配置文件。您需要三个文件: + * **zebra.conf**:Quagga的守护程序配置文件,您可以在其中定义接口及其IP地址和IP转发 + * **ospfd.conf**:协议配置文件,您可以在其中定义将通过OSPF协议提供的网络 + * **守护程序**:您将在其中指定需要运行的相关的协议守护程序 + + +在主机Alpha上, ```  [root@alpha]# cat /etc/quagga/zebra.conf @@ -100,7 +125,7 @@ ospfd=yes ``` On host Beta, - +在主机Beta上, ``` [root@beta quagga]# cat zebra.conf @@ -128,10 +153,10 @@ zebra=yes ospfd=yes ``` -### Configure the firewall +### Configure the firewall 配置防火墙 To use the OSPF protocol, you must allow it in the firewall: - +要使用OSPF协议,必须允许它通过防火墙: ``` firewall-cmd --add-protocol=ospf –permanent @@ -140,7 +165,7 @@ firewall-cmd –reload ``` Now, start the zebra and ospfd daemons. - +现在,启动zebra和ospfd守护程序。 ``` # systemctl start zebra @@ -148,7 +173,7 @@ Now, start the zebra and ospfd daemons. ``` Look at the route table on both hosts using: - +用下面命令在两个主机上查看路由表: ``` [root@alpha ~]# ip route show   @@ -159,7 +184,7 @@ default via 192.168.122.1 dev eth0 proto static metric 100 ``` You can see that the routing table on Alpha contains an entry of **10.10.10.0/24** via **192.168.122.50** offered through protocol **zebra**. Similarly, on host Beta, the table contains an entry of network **10.12.13.0/24** via **192.168.122.100**. - +您可以看到Alpha上的路由表包含通过 **192.168.122.50** 到达 **10.10.10.0/24** 的条目,它是通过协议 **zebra** 获取的。同样,在主机Beta上,该表包含通过 **192.168.122.100** 到达网络 **10.12.13.0/24** 的条目。 ``` [root@beta ~]# ip route show @@ -169,9 +194,10 @@ default via 192.168.122.1 dev eth0 proto static metric 100 192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.50 metric 100 ``` -### Conclusion +### Conclusion 结论 As you can see, the setup and configuration are relatively simple. To add complexity, you can add more network interfaces to the router to provide routing for more networks. You can also implement BGP and RIP protocols using the same method. +如您所见,环境和配置相对简单。要增加复杂性,您可以向路由器添加更多网络接口,以为更多网络提供路由。您也可以使用相同的方法来实现BGP和RIP协议。 -------------------------------------------------------------------------------- @@ -179,7 +205,7 @@ via: https://opensource.com/article/20/4/quagga-linux 作者:[M Umer][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) +译者:[messon007](https://github.com/messon007) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From a22332810e4218e601d4230ae509484e6b067768 Mon Sep 17 00:00:00 2001 From: messon007 <306809057@qq.com> Date: Wed, 6 May 2020 21:41:09 +0800 Subject: [PATCH 152/178] translated --- ... 
Create a SDN on Linux with open source.md | 54 ++++--------------- 1 file changed, 9 insertions(+), 45 deletions(-) rename {sources => translated}/tech/20200417 Create a SDN on Linux with open source.md (62%) diff --git a/sources/tech/20200417 Create a SDN on Linux with open source.md b/translated/tech/20200417 Create a SDN on Linux with open source.md similarity index 62% rename from sources/tech/20200417 Create a SDN on Linux with open source.md rename to translated/tech/20200417 Create a SDN on Linux with open source.md index fe22d47bbe..97272e9f2d 100644 --- a/sources/tech/20200417 Create a SDN on Linux with open source.md +++ b/translated/tech/20200417 Create a SDN on Linux with open source.md @@ -7,45 +7,26 @@ [#]: via: (https://opensource.com/article/20/4/quagga-linux) [#]: author: (M Umer https://opensource.com/users/noisybotnet) -Create a SDN on Linux with open source 在Linux上使用开源代码创建SDN +在Linux上使用开源代码创建SDN ====== -Make your Linux system act like a router with the open source routing stack Quagga. 使用开源路由协议栈Quagga,使您的Linux系统成为一台路由器。 ![Coding on a computer][1] -Network routing protocols fall into two main categories: interior gateway protocols and exterior gateway protocols. Interior gateway protocols are used by routers to share information within a single autonomous system. If you are running Linux, you can make your system behave as a router through the open source (GPLv2) routing stack [Quagga][2]. 网络路由协议分为两大类:内部网关协议和外部网关协议。路由器使用内部网关协议在单个自治系统内共享信息。如果您用的是Linux,则可以通过开源(GPLv2)路由协议栈[Quagga][2]使其表现得像一台路由器。 -### What is Quagga? 什么是Quagga? - -Quagga is a [routing software suite][3] and a fork of [GNU Zebra][4]. It provides implementations of all major routing protocols such as Open Shortest Path First (OSPF), Routing Information Protocol (RIP), Border Gateway Protocol (BGP), and Intermediate System to Intermediate System (IS-IS) for Unix-like platforms. - -Although Quagga implements the routing protocols for both IPv4 and IPv6, it doesn't act as a complete router. A true router not only implements all the routing protocols but also has the ability to forward network traffic. Quagga only implements the routing stack, and the job of forwarding network traffic is handled by the Linux kernel. +### Quagga是什么? Quagga是[路由软件包][3],并且是[GNU Zebra][4]的一个分支。它为类Unix平台提供了所有主流路由协议的实现,例如开放最短路径优先(OSPF),路由信息协议(RIP),边界网关协议(BGP)和中间系统到中间系统协议(IS-IS)。 尽管Quagga为IPv4和IPv6都实现了路由协议,但它却不是一个完整的路由器。真正的路由器不仅实现了所有路由协议,而且还有转发网络流量的能力。 Quagga仅仅实现了路由协议栈,而转发网络流量的工作由Linux内核处理。 -### Architecture 架构 +### 架构 -Quagga implements the different routing protocols through protocol-specific daemons. The daemon name is the same as the routing protocol followed by the letter "d." Zebra is the core and a protocol-independent daemon that provides an [abstraction layer][5] to the kernel and presents the Zserv API over TCP sockets to Quagga clients. Each protocol-specific daemon is responsible for running the relevant protocol and building the routing table based on the information exchanged. Quagga通过协议特定的守护程序实现不同的路由协议。守护程序名称与路由协议相同,加了字母“d”作为后缀。Zebra是核心的协议无关的守护进程,它为内核提供了一个[抽象层][5],并通过TCP套接字向Quagga客户端提供Zserv API。每个协议特定的守护程序负责运行相关的协议并基于交换的信息来建立路由表。 ![Quagga architecture][6] -### Setup 环境 - -This tutorial implements the OSPF protocol to configure dynamic routing using Quagga. The setup includes two CentOS 7.7 hosts, named Alpha and Beta. Both hosts share access to the **192.168.122.0/24** network. 
- -**Host Alpha:** - -IP: 192.168.122.100/24 -Gateway: 192.168.122.1 - -**Host Beta:** - -IP: 192.168.122.50/24 -Gateway: 192.168.122.1 +### 环境 本教程通过Quagga实现的OSPF协议来配置动态路由。该环境包括两个名为Alpha和Beta的CentOS 7.7主机。两台主机共享访问 **192.168.122.0/24** 网络。 @@ -59,18 +40,16 @@ IP:192.168.122.100/24 IP:192.168.122.50/24 网关:192.168.122.1 -### Install the package 安装软件包 +### 安装软件包 -First, install the Quagga package on both hosts. It is available in the CentOS base repo: 首先,在两台主机上安装Quagga软件包。它存在于CentOS基础仓库中: ``` `yum install quagga -y` ``` -### Enable IP forwarding 使能IP转发 +### 使能IP转发 -Next, enable IP forwarding on both hosts since that will performed by the Linux kernel: 接下来,在两台主机上使能IP转发,因为它将由Linux内核来执行: ``` @@ -78,17 +57,8 @@ sysctl -w net.ipv4.ip_forward = 1 sysctl -p ``` -### Configuration 配置 +### 配置 -Now, go into the **/etc/quagga** directory and create the configuration files for your setup. You need three files: - - * **zebra.conf**: Quagga's daemon configuration file, which is where you'll define the interfaces and their IP addresses and IP forwarding - * **ospfd.conf**: The protocol configuration file, which is where you'll define the networks that will be offered through the OSPF protocol - * **daemons**: Where you'll specify the relevant protocol daemons that are required to run - - - -On host Alpha, 现在,进入 **/etc/quagga** 目录并为您的设置创建配置文件。您需要三个文件: * **zebra.conf**:Quagga的守护程序配置文件,您可以在其中定义接口及其IP地址和IP转发 @@ -124,7 +94,6 @@ zebra=yes ospfd=yes ``` -On host Beta, 在主机Beta上, ``` @@ -153,9 +122,8 @@ zebra=yes ospfd=yes ``` -### Configure the firewall 配置防火墙 +### 配置防火墙 -To use the OSPF protocol, you must allow it in the firewall: 要使用OSPF协议,必须允许它通过防火墙: ``` @@ -164,7 +132,6 @@ firewall-cmd --add-protocol=ospf –permanent firewall-cmd –reload ``` -Now, start the zebra and ospfd daemons. 现在,启动zebra和ospfd守护程序。 ``` @@ -172,7 +139,6 @@ Now, start the zebra and ospfd daemons. # systemctl start ospfd ``` -Look at the route table on both hosts using: 用下面命令在两个主机上查看路由表: ``` @@ -183,7 +149,6 @@ default via 192.168.122.1 dev eth0 proto static metric 100 192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.100 metric 100 ``` -You can see that the routing table on Alpha contains an entry of **10.10.10.0/24** via **192.168.122.50** offered through protocol **zebra**. Similarly, on host Beta, the table contains an entry of network **10.12.13.0/24** via **192.168.122.100**. 您可以看到Alpha上的路由表包含通过 **192.168.122.50** 到达 **10.10.10.0/24** 的条目,它是通过协议 **zebra** 获取的。同样,在主机Beta上,该表包含通过 **192.168.122.100** 到达网络 **10.12.13.0/24** 的条目。 ``` @@ -194,9 +159,8 @@ default via 192.168.122.1 dev eth0 proto static metric 100 192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.50 metric 100 ``` -### Conclusion 结论 +### 结论 -As you can see, the setup and configuration are relatively simple. To add complexity, you can add more network interfaces to the router to provide routing for more networks. You can also implement BGP and RIP protocols using the same method. 
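As a sketch of that extension, and assuming the same quagga package, which ships `bgpd` and `ripd` alongside `ospfd`, enabling another protocol follows the pattern already shown: switch the daemon on, give it a config file, and start it.

```
# /etc/quagga/daemons: enable the additional protocol daemons
zebra=yes
ospfd=yes
bgpd=yes     # Border Gateway Protocol
ripd=yes     # Routing Information Protocol

# Each daemon also needs its own config file (e.g. /etc/quagga/bgpd.conf);
# then start it the same way as ospfd:
#   systemctl start bgpd
#   systemctl start ripd
```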
如您所见,环境和配置相对简单。要增加复杂性,您可以向路由器添加更多网络接口,以为更多网络提供路由。您也可以使用相同的方法来实现BGP和RIP协议。 -------------------------------------------------------------------------------- From 139f7f0551537cd8d82a18e33d64a17d1ae4fc02 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 6 May 2020 23:16:10 +0800 Subject: [PATCH 153/178] PRF @robsean --- ...7 How to compress files on Linux 5 ways.md | 76 +++++++++---------- 1 file changed, 36 insertions(+), 40 deletions(-) diff --git a/translated/tech/20200417 How to compress files on Linux 5 ways.md b/translated/tech/20200417 How to compress files on Linux 5 ways.md index e9ef6af572..e10a0bd8a4 100644 --- a/translated/tech/20200417 How to compress files on Linux 5 ways.md +++ b/translated/tech/20200417 How to compress files on Linux 5 ways.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (robsean) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (How to compress files on Linux 5 ways) @@ -9,18 +9,20 @@ 在 Linux 上压缩文件的 5 种方法 ====== -在 Linux 系统上有很多可以用于压缩文件的工具,但是它们表现的行为或产生相同程度的压缩等级并不相同,在这篇文章中,我们比较其中的五个工具。 -Getty Images -在 Linux 上有不少用于压缩文件的命令。最新最有效的一个方法是 **xz** ,但是所有的方法都有节省磁盘空间和为后期使用维护备份文件的优点。在这篇文章中,我们将比较压缩命令并指出显著的不同 。 +> 在 Linux 系统上有很多可以用于压缩文件的工具,但它们的表现并不都是一样的,也不是所有的压缩效果都是一样的。在这篇文章中,我们比较其中的五个工具。 + +![](https://img.linux.net.cn/data/attachment/album/202005/06/231536tgxma941yb8dgl53.jpg) + +在 Linux 上有不少用于压缩文件的命令。最新最有效的一个方法是 `xz`,但是所有的方法都有节省磁盘空间和维护备份文件供以后使用的优点。在这篇文章中,我们将比较这些压缩命令并指出显著的不同。 ### tar -tar 命令不是专门的压缩命令。它通常用于将多个文件拉入一单个文件中,以便容易地传输到另一个系统,或者备份文件为一个相关的组。它也提供压缩作为一个功能,这是很明智的,附加的 **z** 压缩选项能够实现压缩文件。 +`tar` 命令不是专门的压缩命令。它通常用于将多个文件拉入一个单个的文件中,以便容易地传输到另一个系统,或者将文件作为一个相关的组进行备份。它也提供压缩的功能,这就很有意义了,附加一个 `z` 压缩选项能够实现压缩文件。 -当压缩过程被附加到一个使用 **z** 选项的 **tar** 命令时,tar 使用 **gzip** 来进行压缩。 +当使用 `z` 选项为 `tar` 命令附加压缩过程时,`tar` 使用 `gzip` 来进行压缩。 -你可以使用 **tar** 来压缩一个单个文件,就像压缩一个组一样容易,尽管这种操作与直接使用 **gzip** 相比没有特别的优势。为此,要使用 **tar** ,只需要使用一个 “tar cfz newtarfile filename” 命令来像你标识一个组一样标识文件,像这样: +就像压缩一组文件一样,你可以使用 `tar` 来压缩单个文件,尽管这种操作与直接使用 `gzip` 相比没有特别的优势。要使用 `tar` 这样做,只需要使用 `tar cfz newtarfile filename` 命令来标识要压缩的文件,就像标识一组文件一样,像这样: ``` $ tar cfz bigfile.tgz bigfile @@ -33,13 +35,11 @@ $ ls -l bigfile* -rw-rw-r-- 1 shs shs 21608325 Apr 16 16:08 bigfile.tgz ``` -注意,文件的大小显著减少。 +注意,文件的大小显著减少了。 -如果你喜欢,你可以使用 **tar.gz** 扩展名,这可能会使文件的特征更加明显,但是大多数的 Linux 用户将很可能会意识到与 **tgz** 的意思是相同的东西 – **tar** 和 **gz** 的组合来显示文件是一个压缩的 tar 文件。在压缩完成后,将留下原始文件和压缩文件。 +如果你愿意,你可以使用 `tar.gz` 扩展名,这可能会使文件的特征更加明显,但是大多数的 Linux 用户将很可能会意识到与 `tgz` 的意思是一样的 – `tar` 和 `gz` 的组合来显示文件是一个压缩的 tar 文件。在压缩完成后,你将同时得到原始文件和压缩文件。 -为收集很多文件在一起并在一个命令中压缩生成的 “tar ball” ,使用相同的语法,但是要指明将要被包含的文件来作为一个组,而不是单个文件。这里有一个示例: - -[][1] +要将很多文件收集在一起并在一个命令中压缩出 “tar ball”,使用相同的语法,但要指定要包含的文件为一组,而不是单个文件。这里有一个示例: ``` $ tar cfz bin.tgz bin/* @@ -50,7 +50,7 @@ $ tar cfz bin.tgz bin/* ### zip -**zip** 命令创建一个压缩文件,与此同时保留原始文件的完整性。语法像使用 **tar** 一样简单,只是你必需记住,你的原始文件名称应该是命令行上的最后一个参数。 +`zip` 命令创建一个压缩文件,与此同时保留原始文件的完整性。语法像使用 `tar` 一样简单,只是你必需记住,你的原始文件名称应该是命令行上的最后一个参数。 ``` $ zip ./bigfile.zip bigfile @@ -62,7 +62,7 @@ $ ls -l bigfile bigfile.zip ### gzip -**gzip** 命令非常容易使用。你只需要键入 "gzip" ,紧随其后的是你想要压缩的文件名称。不像上述描述的命令,**gzip** 将“就地”加密文件。换句话说,原始文件将被加密文件替换。 +`gzip` 命令非常容易使用。你只需要键入 `gzip`,紧随其后的是你想要压缩的文件名称。不像上述描述的命令,`gzip` 将“就地”加密文件。换句话说,原始文件将被加密文件替换。 ``` $ gzip bigfile @@ -72,7 +72,7 @@ $ ls -l bigfile* ### bzip2 -像使用 **gzip** 命令一样,**bzip2** 将在你选的“合适位置”压缩文件,只留下原始文件保持原样离开。 +像使用 `gzip` 命令一样,`bzip2` 将在你选择的文件“就地”压缩,不留下原始文件。 ``` $ bzip bigfile @@ -82,7 +82,7 @@ $ ls -l bigfile* ### xz -压缩命令组中的一个相对较新的成员,**xz** 
就如何更好的压缩文件而言是领跑者。像先前的两个命令一样,你只需要将文件名称补给到命令中。再强调一次,原始文件被就地压缩。 +`xz` 是压缩命令团队中的一个相对较新的成员,在压缩文件的能力方面,它是一个领跑者。像先前的两个命令一样,你只需要将文件名称提供给命令。再强调一次,原始文件被就地压缩。 ``` $ xz bigfile @@ -90,17 +90,17 @@ $ ls -l bigfile* -rw-rw-r-- 1 shs shs 13427236 Apr 15 17:30 bigfile.xz ``` -对于大文件来说,你可能会注意到 **xz** 将比其它的压缩命令花费更多的运行时间,但是压缩的结果却是非常令人赞叹的。 +对于大文件来说,你可能会注意到 `xz` 将比其它的压缩命令花费更多的运行时间,但是压缩的结果却是非常令人赞叹的。 -### 考虑对比性 +### 对比 -大多数人都听说过 "文件大小不是万能的"。所以,让我们比较一下文件大小以及一些当你计划如何压缩文件时的问题。 +大多数人都听说过“大小不是一切”。所以,让我们比较一下文件大小以及一些当你计划如何压缩文件时的问题。 -下面显示的统计数据都与压缩单个文件相关,在上面显示的示例中使用 – bigfile – 。这个文件是一个大的且相当随机的文本文件。压缩率在一定程度上取决于文件的内容。 +下面显示的统计数据都与压缩单个文件相关,在上面显示的示例中使用 `bigfile`。这个文件是一个大的且相当随机的文本文件。压缩率在一定程度上取决于文件的内容。 #### 大小减缩率 -在比较期间,上面显示的各种压缩命产生下面的结果。百分比表示压缩文件对比原始文件。 +当比较时,上面显示的各种压缩命产生下面的结果。百分比表示压缩文件与原始文件的比较效果。 ``` -rw-rw-r-- 1 shs shs 103270400 Apr 16 14:01 bigfile @@ -112,15 +112,15 @@ $ ls -l bigfile* -rw-rw-r-- 1 shs shs 21606889 Apr 16 13:59 bigfile.zip ~21% ``` -**xz** 命令获胜,最终只有压缩文件大小的13%,但是这些所有的压缩命令都相当显著地减少原始文件的大小。 +`xz` 命令获胜,最终只有压缩文件 13% 的大小,但是所有这些压缩命令都相当显著地减少原始文件的大小。 #### 是否替换原始文件 -**bzip2**,**gzip** 和 **xz** 命令都将使用压缩文件替换原始文件。**tar** 和 **zip** 命令不替换。 +`bzip2`、`gzip` 和 `xz` 命令都用压缩文件替换原始文件。`tar` 和 `zip` 命令不替换。 #### 运行时间 -**xz** 命令似乎比其它命令需要花费更多的时间来加密文件。对于 bigfile 来说,近似时间是: +`xz` 命令似乎比其它命令需要花费更多的时间来加密文件。对于 `bigfile` 来说,大概的时间是: ``` 命令 运行时间 @@ -135,27 +135,25 @@ xz 50.4 秒 #### 文件权限 -不管你对压缩文件设置什么权限,压缩文件的权限将基于你的 **umask** 设置,除 **bzip2** 维持原始文件的权限外。 +不管你对压缩文件设置什么权限,压缩文件的权限将基于你的 `umask` 设置,但 `bzip2` 除外,它保留了原始文件的权限。 #### 与 Windows 的兼容性 -**zip** 命令将创建一个可被使用的文件(例如,解压缩),在 Windows 系统上以及 Linux 和其它 Unix 系统上,无需安装其它可能可用或不可用的工具。 +`zip` 命令创建的文件可以在 Windows 系统以及 Linux 和其他 Unix 系统上使用(即解压),而无需安装其他工具,无论这些工具可能是可用还是不可用的。 ### 解压缩文件 -解压缩文件的命令类似于这些压缩文件的命令。这些命令将在我们运行上述压缩命令后用于解压缩 bigfile 。 +解压文件的命令与压缩文件的命令类似。在我们运行上述压缩命令后,这些命令用于解压缩 `bigfile`: - * tar: **tar xf bigfile.tgz** - * zip: **unzip bigfile.zip** - * gzip: **gunzip bigfile.gz** - * bzip2: **bunzip2 bigfile.gz2** - * xz: **xz -d bigfile.xz** 或 **unxz bigfile.xz** + * tar: `tar xf bigfile.tgz` + * zip: `unzip bigfile.zip` + * gzip: `gunzip bigfile.gz` + * bzip2: `bunzip2 bigfile.gz2` + * xz: `xz -d bigfile.xz` 或 `unxz bigfile.xz` +### 自己运行压缩对比 - -### 对比你自己运行的压缩 - -如果你想自己运行一些测试,抓取一个大的且可以替换的文件,并使用上面显示的每个命令来压缩它 – 最好使用一个新的子目录。你可能必需先安装 **xz** ,如果你想在测试中包含它的话。这个脚本可能更容易地压缩,但是将可能花费几分钟来完成。 +如果你想自己运行一些测试,抓取一个大的且可以替换的文件,并使用上面显示的每个命令来压缩它 —— 最好使用一个新的子目录。你可能需要先安装 `xz`,如果你想在测试中包含它的话。这个脚本可能更容易地进行压缩,但是可能需要花费几分钟完成。 ``` #!/bin/bash @@ -187,16 +185,14 @@ ls -l $filename.* mv $filename-2 $filename ``` -加入 [Facebook][2] 和 [LinkedIn][3] 网络世界社区来评论那些最重要的话题。 - -------------------------------------------------------------------------------- via: https://www.networkworld.com/article/3538471/how-to-compress-files-on-linux-5-ways.html 作者:[Sandra Henry-Stocker][a] 选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) +译者:[robsean](https://github.com/robsean) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From 08728494a92d2f8e81d6106bee61679ec764b75a Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Wed, 6 May 2020 23:16:42 +0800 Subject: [PATCH 154/178] PUB @robsean https://linux.cn/article-12190-1.html --- .../20200417 How to compress files on Linux 5 ways.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200417 How to compress files on Linux 5 ways.md (99%) diff --git a/translated/tech/20200417 How to compress files on Linux 5 ways.md b/published/20200417 How to compress 
files on Linux 5 ways.md similarity index 99% rename from translated/tech/20200417 How to compress files on Linux 5 ways.md rename to published/20200417 How to compress files on Linux 5 ways.md index e10a0bd8a4..d0cb239d11 100644 --- a/translated/tech/20200417 How to compress files on Linux 5 ways.md +++ b/published/20200417 How to compress files on Linux 5 ways.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (robsean) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12190-1.html) [#]: subject: (How to compress files on Linux 5 ways) [#]: via: (https://www.networkworld.com/article/3538471/how-to-compress-files-on-linux-5-ways.html) [#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) From 43d4e5ad70eac5daeb4d038a58b70c7ae1e58585 Mon Sep 17 00:00:00 2001 From: "Xiaobin.Liu" Date: Wed, 6 May 2020 23:41:46 +0800 Subject: [PATCH 155/178] TSL 20200505 Browse the Peer-to-peer Web With Beaker Browser --- ...he Peer-to-peer Web With Beaker Browser.md | 124 ------------------ ...he Peer-to-peer Web With Beaker Browser.md | 124 ++++++++++++++++++ 2 files changed, 124 insertions(+), 124 deletions(-) delete mode 100644 sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md create mode 100644 translated/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md diff --git a/sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md b/sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md deleted file mode 100644 index 319b8a18c5..0000000000 --- a/sources/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md +++ /dev/null @@ -1,124 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (lxbwolf) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Browse the Peer-to-peer Web With Beaker Browser) -[#]: via: (https://itsfoss.com/beaker-browser/) -[#]: author: (John Paul https://itsfoss.com/author/john/) - -Browse the Peer-to-peer Web With Beaker Browser -====== - -The Internet as we know it has existed unchanged (more or less) for the last 50 years. People across the globe use their devices to retrieve data from huge servers dotted around the world. - -A group of dedicated technologists wants to change that to make the internet a place where people can connect and share information directly instead of relying on a central server (decentralization). - -There are a bunch of such decentralized services that we have already covered on It’s FOSS. [LBRY as YouTube alternative][1], [Mastodon as Twitter alternative][2] are just a couple of such examples. - -And today I am going to cover another such product called [Beaker Browser][3] which is essentially for browsing the peer to peer web. - -![Beaker Browser][4] - -### What is the ‘peer-to-peer Web’? - -According to [one of the devs][5] behind the Beaker browser, “The P2P Web is an experimental set of technologies…to give users more control over the Web.” - -Further, they say that the peer-to-peer Web has three main principles: anybody can be a server; multiple computers can serve the same site; there is no back end. - -As you can see from those principles. the idea of the peer-to-peer Web is very similar to BitTorrent where files are seeded by multiple peers and those peers share the bandwidth load. This reduces the overall bandwidth that a person needs to provide for their site. 
- -![Beaker Browser Settings][6] - -The other major part of the peer-to-peer Web is creator control of their ideas. In this day and age, platforms being controlled by large corporations, who try to use your data for their benefit. Beaker returns control to the content creators. - -### Browsing the decentralized web with Beaker - -The [Beaker Browser][3] first came into existence in 2016. The project (and the technology that surrounds it) is created by a team of three at [Blue Link Labs][7]. The Beaker Browser uses the [Dat protocol][8] to share data between computers. All websites that use the Dat protocol start with `dat://` instead of `http://`. - -The strengths of the Dat protocol are: - - * Fast – Archives sync from multiple sources at once. - * Secure – All updates are signed and integrity-checked. - * Resilient – Archives can change hosts without changing their URLs. - * Versioned – Changes are written to an append-only version log. - * Decentralized – Any device can host any archive. - - - -![Beaker Browser Seeding][9] - -The Beaker Browser is essentially a cut down version of Chromium with built-in support for `dat://`addresses. It can still visit regular `http://` sites. - -Each time you visit a dat site, the content for that site is downloaded to your computer as you request it. For example, a picture of Linux Torvalds on the about page of a site is not downloaded until you navigate to that page. - -Also, once you visit a dat website, “[you temporarily][10] re-upload or seed whichever files you’ve downloaded from the website.” You can also choose to seed the website to help its creator. - -![Beaker Browser Menu][11] - -Since the whole idea of Beaker is to create a more open web, you can easily view the source of any website. Unlike most browsers where you just see the source code the current page, you are viewing, Beaker shows you the entire structure of the site in a GitHub-like view. You can even fork the site and host your version of it. - -Besides visiting dat-based websites, you can also create your own site. In the Beaker Browser menu, there is an option to create a new website or an empty project. If you select the option to create a new website, Beaker will build a little demo site that you can edit with the browser’s built-in editor. - -However, if you are like me and prefer to use Markdown, you can choose to create an empty project. Beaker will create the structure of a site and assign it a `dat://`address. Create an `index.md` file and you are good to go. There is a [short tutorial][12] with more info. You can also use the create empty project option to build a web app. - -![Beaker Browser Website Template][13] - -Since Beaker acts as a web server and site seeder, any time you close it or turn off your computer your site will become unavailable. Thankfully, you don’t have to run your computer or the browser constantly. You can also use a seeding service named [Hashbase][14] or you can set up a [`homebase`][15] seeding server. - -Though Beaker is [available][16] for Linux, Windows, and macOS. If you do start playing around Beaker, be sure to take a quick look at [their gui][17][d][17][es][17]. - -### Beaker Browser is not for everyone but it has a purpose - -When I first got this assignment, I had high hopes for the Beaker Browser. As it stands now, it’s still very experimental. A number of the dat sites that I tried to visit were unavailable because the user was not seeding their site. Beaker does have an option to notify you when that site is back online. 
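That availability problem is exactly what the `homebase` seeding server mentioned above is meant to solve, since it keeps re-uploading your archives even when your own machine is off. The snippet below is a rough, unverified sketch: the npm package name and the config location are assumptions taken from the project's repository, so check the homebase README before relying on them.

```
# Assumed install path via npm (verify against the homebase repository)
npm install -g @beaker/homebase

# homebase reads a YAML config (commonly ~/.homebase.yml) listing the
# dat:// archives it should keep seeding; point it at your site's dat URL,
# then leave the process running:
homebase
```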
- -![Beaker Browser No Peer][18] - -Another problem is that Beaker is a really stripped down version of Chromium. There is no option to install extensions or themes. Instead, you are stuck with a white theme and a very limited toolset. I would not use this as my main browser and having access to the world of dat websites is not enough of a reason to keep it installed on my system. - -I looked to see if there is an extension for Firefox that would add support for the `dat://` protocol. I did find such an extension, but it also required the installation of a couple of other pieces of software. It’s just easier to install Beaker. - -As it stands now, Beaker is not for me. Maybe in the future, more people will start using Beaker or the dat protocol will gain support by other browsers. Then it might be interesting. Right now, it’s kinda empty. - -As part of my time with Beaker, I created a [website][19] using the built-in tools. Don’t worry, I made sure that it’s seeded. - -![Beaker Bowser Site Source][20] - -What are your thoughts on the Beaker Brower? What are your thoughts on the peer-to-peer web? Please let us know in the comments below. - -If you found this article interesting, please take a minute to share it on social media, Hacker News, or [Reddit][21]. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/beaker-browser/ - -作者:[John Paul][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/john/ -[b]: https://github.com/lujun9972 -[1]: https://itsfoss.com/lbry/ -[2]: https://itsfoss.com/mastodon-open-source-alternative-twitter/ -[3]: https://beakerbrowser.com/ -[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser.jpg?resize=800%2C426&ssl=1 -[5]: https://pfrazee.hashbase.io/blog/what-is-the-p2p-web -[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-bowser-setting.jpg?resize=800%2C573&ssl=1 -[7]: https://bluelinklabs.com/ -[8]: https://www.datprotocol.com/ -[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-bowser-seedding.jpg?resize=800%2C466&ssl=1 -[10]: https://beakerbrowser.com/docs/faq/ -[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser-menu.jpg?ssl=1 -[12]: https://beakerbrowser.com/docs/guides/create-a-markdown-site -[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser-website-template.jpg?resize=800%2C459&ssl=1 -[14]: https://hashbase.io/ -[15]: https://github.com/beakerbrowser/homebase -[16]: https://beakerbrowser.com/install/ -[17]: https://beakerbrowser.com/docs/guides/ -[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser-no-peer.jpg?resize=800%2C424&ssl=1 -[19]: https://41bfbd06731e8d9c5d5676e8145069c69b254e7a3b710ddda4f6e9804529690c/ -[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-bowser-source.jpg?resize=800%2C544&ssl=1 -[21]: https://reddit.com/r/linuxusersgroup diff --git a/translated/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md b/translated/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md new file mode 100644 index 0000000000..4a83f1435d --- /dev/null +++ b/translated/tech/20200505 Browse the Peer-to-peer Web With Beaker Browser.md @@ -0,0 +1,124 @@ +[#]: collector: (lujun9972) +[#]: translator: (lxbwolf) +[#]: reviewer: ( ) +[#]: publisher: ( ) 
+[#]: url: ( ) +[#]: subject: (Browse the Peer-to-peer Web With Beaker Browser) +[#]: via: (https://itsfoss.com/beaker-browser/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +使用 Beaker 浏览器浏览 P2P Web +====== + +我们所认识的因特网在过去 50 年中变化不大。全球的网民使用它们的设备从遍布在世界各地的服务器上检索数据。 + +一群专业的技术专家想改变现状,使互联网变成人们可以连接和直接分享信息的地方,而不是依赖一个中心服务器(去中心化)。 + +我们已经在 It’s FOSS 讨论过很多这样的去中心化的服务。[YouTube 竞品:LBRY][1],[Twitter竞品:Mastodon][2] 是其中的两个例子。 + +今天我将要介绍另一个这样的产品,名为 [Beaker 浏览器][3],它的设计目标是浏览 P2P web 数据。 + +![Beaker Browser][4] + +### ’peer-to-peer Web‘ 是什么? + +根据 Beaker 浏览器的[开发者之一][5]的描述,”P2P Web 是一项实验性的技术...提高我们掌控 Web 的能力。“ + +还有,它们说 P2P Web 有三个主要原则:任何一点都可以成为服务器;多台计算机可以为同一个网站提供服务;没有后端。 + +从这些原则中你可以看出,P2P Web 的思想与 BitTorrent 很像,多个点把文件作为种子,这些点共享带宽负载。这减少了一个用户需要提供给他的网站的总带宽。 + +![Beaker Browser Settings][6] + +P2P Web 另一个重要的方面是创作者对于他们自己的想法的控制能力。当今年代,平台都是由庞大的组织控制的,往往拿你的数据为他们所用。Beaker 把数据的控制能力返还给了内容创造者。 + +### 使用 Beaker 浏览去中心化的 web + +[Beaker 浏览器][3] 是在 2016 年被创建的。该项目(及其周边技术)由[蓝链实验室][7]的三人团队创建。Beaker 浏览器使用 [Dat 协议][8]在计算机之间共享数据。使用 Dat 协议的网站以 `dat://` 而不是 `http://` 开头。 + +Dat 协议的优势如下: + + * 快速 – 档案能立即从多个源同步。 + * 安全 – 所有的更新都是有签名和被完整检查的。 + * 灵活 – 可以在不修改档案 URL 的情况下迁移主机。 + * 版本控制 – 每次修改都被写到只能追加的版本日志中。 + * 去中心化 – 任何设备都可以作为承载档案的主机。 + + + +![Beaker Browser Seeding][9] + +Beaker 浏览器本质上是阉割版的 Chromium,原生支持 `dat://` 地址,也可以访问普通的 `http://` 站点。 + +每次访问一个 dat 站点,在你请求时该站点的内容才会下载到你的计算机。例如,只有在你浏览到站点的 about 页面时,才会下载该页面上的 Linux Torvalds 的图片。 + +当你浏览一个 dat 网站时,”[你暂时][10]重新上传或种下你从网站上下载的所有文件。“你也可以选择为网站做种来帮助创造者。 + +![Beaker Browser Menu][11] + +由于 Beaker 的志向就是创建一个更开放的网络,因此你可以很容易地查看任何网站的源码。不像在大多数浏览器上你只能看到当前浏览的页面的源码那样,使用 Beaker 你能以类似 GitHub 的视图查看整个站点的结构。你甚至可以 fork 这个站点,自己维护。 + +除了浏览基于 dat 的网站外,你还可以创建自己的站点。在 Beaker 浏览器的菜单里,有创建新网站和创建空项目的选项。如果你选择了创建一个新网站,Beaker 会搭建一个小的 demo 站点,你可以使用浏览器里自带的编辑器来编辑。 + +然而,如果你像我一样更喜欢用 Markdown,你可以选择创建一个空项目。Beaker 会创建一个站点的结构,赋给它一个 `dat://` 地址。创建一个 `index.md` 文件后你就可以很好地工作了。里面有个[简短教程][12],你可以看到更多信息。你也可以用创建空项目的选项搭建一个 web app。 + +![Beaker Browser Website Template][13] + +由于 Beaker 的角色是个 web 服务器和站点播种者,当你关闭它或关机后你的站点就不可用了。幸运的是,你不必一直开着你的计算机或浏览器。你也可以使用名为 [Hashbase][14] 的播种服务或者你可以搭建一个 [`homebase`][15] 播种服务器。 + +虽然 Beaker [适用于][16] Linux,Windows 和 macOS,但是在搞 Beaker 之前,还是要查阅下[各平台的教程][17]。 + +### Beaker 浏览器不是每个人都能用,但它有这个意图 + +当第一次被分配到这个任务时,我对 Beaker 浏览器有极高的热情。(但是)像它现在的地位一样,Beaker 浏览器仍是实验性的。我尝试浏览过的很多 dat 站点还不可用,因为用户并没有为站点做种。当站点恢复可用时 Beaker 确实可以选择通知你。 + +![Beaker Browser No Peer][18] + +另一个问题是,Beaker 是真正阉割版的 Chromium。不能安装扩展或主题。你只能使用白色主题和极少的工具集。我不会把 Beaker 浏览器作为常用浏览器,而且能访问 dat 网站并不是把它留在系统上的充分条件。 + +我曾经寻找一个能支持 `dat://` 协议的 Firefox 扩展。我确实找到了这样一款扩展,但它需要安装一系列其他的软件。相比而言,安装 Beaker 比安装那些软件容易点。 + +就像它现在的地位,Beaker 不适合我。也许在将来更多的人使用 Beaker 或者其他浏览器支持 dat 协议。那时会很有趣。目前而言,聊胜于无。 + +在使用 Beaker 的时间里,我用内建的工具创建了一个[网站][19]。不要担心,我已经为它做种了。 + +![Beaker Bowser Site Source][20] + +你怎么看 Beaker 浏览器?你怎么看 P2P web?请尽情在下面评论。 + +如果你觉得本文有意思,请花点时间把它分享到社交媒体,Hacker News 或 [Reddit][21]。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/beaker-browser/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[lxbwolf](https://github.com/lxbwolf) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/lbry/ +[2]: https://itsfoss.com/mastodon-open-source-alternative-twitter/ +[3]: https://beakerbrowser.com/ +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser.jpg?resize=800%2C426&ssl=1 +[5]: 
https://pfrazee.hashbase.io/blog/what-is-the-p2p-web +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-bowser-setting.jpg?resize=800%2C573&ssl=1 +[7]: https://bluelinklabs.com/ +[8]: https://www.datprotocol.com/ +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-bowser-seedding.jpg?resize=800%2C466&ssl=1 +[10]: https://beakerbrowser.com/docs/faq/ +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser-menu.jpg?ssl=1 +[12]: https://beakerbrowser.com/docs/guides/create-a-markdown-site +[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser-website-template.jpg?resize=800%2C459&ssl=1 +[14]: https://hashbase.io/ +[15]: https://github.com/beakerbrowser/homebase +[16]: https://beakerbrowser.com/install/ +[17]: https://beakerbrowser.com/docs/guides/ +[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-browser-no-peer.jpg?resize=800%2C424&ssl=1 +[19]: https://41bfbd06731e8d9c5d5676e8145069c69b254e7a3b710ddda4f6e9804529690c/ +[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/beaker-bowser-source.jpg?resize=800%2C544&ssl=1 +[21]: https://reddit.com/r/linuxusersgroup From f49b6bc5bed2a1aaf9aef634fc30512a89e011ea Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 7 May 2020 01:07:02 +0800 Subject: [PATCH 156/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200507=20Ubuntu?= =?UTF-8?q?=20Studio=20To=20Replace=20Xfce=20With=20KDE=20Plasma=20Desktop?= =?UTF-8?q?=20Environment?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200507 Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment.md --- ...fce With KDE Plasma Desktop Environment.md | 71 +++++++++++++++++++ 1 file changed, 71 insertions(+) create mode 100644 sources/tech/20200507 Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment.md diff --git a/sources/tech/20200507 Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment.md b/sources/tech/20200507 Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment.md new file mode 100644 index 0000000000..586232d723 --- /dev/null +++ b/sources/tech/20200507 Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment) +[#]: via: (https://itsfoss.com/ubuntu-studio-opts-for-kde/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment +====== + +[Ubuntu Studio][1] is a popular [official flavour of Ubuntu][2] tailored for creative content creators involved in audio production, video, graphics, photography, and book publishing. It offers a lot of multimedia content creation applications out of the box with the best possible experience. + +After the recent 20.04 LTS release, the Ubuntu Studio team highlighted something very important in their [official announcement][3]. And, probably not everyone noticed the key information i.e Ubuntu Studio’s future. + +Ubuntu Studio 20.04 will be the last version to ship with the [Xfce desktop environment][4]. All the future releases will be using [KDE Plasma][5] instead. + +### Why is Ubuntu Studio ditching XFCE? + +![][6] + +As per their clarification, Ubuntu Studio isn’t focused on any particular look/feel but aims to provide the best user experience possible. 
And, KDE proves to be a better option. + +> Plasma has proven to have better tools for graphics artists and photographers, as can be seen in Gwenview, Krita, and even the file manager Dolphin. Additionally, it has Wacom tablet support better than any other desktop environment. + +> It has become so good that the majority of the Ubuntu Studio team is now using Kubuntu with Ubuntu Studio added-on via Ubuntu Studio Installer as their daily driver. With so many of us using Plasma, the timing just seems right to focus on a transition to Plasma with our next release. + +Of course every desktop environment has been tailored for something different. And, here, they think that KDE Plasma will be the most suitable desktop environment replacing XFCE to provide a better user experience to all the users. + +While I’m not sure how the users will react to this as every user has a different set of preferences. If the existing users won’t have a problem with KDE, it isn’t going to be a big deal. + +It is worth noting that Ubuntu Studio also mentioned why KDE is potentially a superior choice for them: + +> The Plasma desktop environment has, without Akonadi, become just as light in resource usage as Xfce, perhaps even lighter. Other audio-focused Linux distributions, such as Fedora Jam and KXStudio, have historically used the KDE Plasma desktop environment and done well with the audio. + +Also, they’ve highlighted [an article by Jason Evangelho at Forbes][7] where some benchmarks reveal that KDE is almost as light as Xfce. Even though that’s a good sign – we still have to wait for the users to test-drive the KDE-powered Ubuntu Studio. Only then we’ll be able to observe whether Ubuntu Studio’s decision to ditch XFCE desktop environment was the right thing to do. + +### What will change for Ubuntu Studio users after this change? + +The overall workflow may get affected (or improve) moving forward with KDE on Ubuntu Studio 20.10 and later. + +However, the upgrade process (from 20.04 to 20.10) will result in broken systems. So, a fresh install of Ubuntu Studio 20.10 or later versions will be the only way to go. + +They’ve also mentioned that they will be constantly evaluating for any duplication with the pre-installed apps. So, I believe more details will follow in the coming days. + +Ubuntu Studio is second distribution that has changed its main desktop environment in recent times. Earlier, [Lubuntu][8] switched to LXQt from LXDE. + +What do you think about this change? Feel free to share your thoughts in the comments below. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/ubuntu-studio-opts-for-kde/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://ubuntustudio.org/ +[2]: https://itsfoss.com/which-ubuntu-install/ +[3]: https://ubuntustudio.org/2020/04/ubuntu-studio-20-04-lts-released/ +[4]: https://xfce.org +[5]: https://kde.org/plasma-desktop +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-studio-kde-xfce.jpg?ssl=1 +[7]: https://www.forbes.com/sites/jasonevangelho/2019/10/23/bold-prediction-kde-will-steal-the-lightweight-linux-desktop-crown-in-2020 +[8]: https://itsfoss.com/lubuntu-20-04-review/ From 8357ae31341fdc38a453af4aa36a1bd18f93eb19 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 7 May 2020 01:08:01 +0800 Subject: [PATCH 157/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200507=20Ubuntu?= =?UTF-8?q?=20Cinnamon=20Remix=2020.04=20Review:=20The=20Perfect=20Blend?= =?UTF-8?q?=20of=20Ubuntu=20With=20Cinnamon?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200507 Ubuntu Cinnamon Remix 20.04 Review- The Perfect Blend of Ubuntu With Cinnamon.md --- ...e Perfect Blend of Ubuntu With Cinnamon.md | 157 ++++++++++++++++++ 1 file changed, 157 insertions(+) create mode 100644 sources/tech/20200507 Ubuntu Cinnamon Remix 20.04 Review- The Perfect Blend of Ubuntu With Cinnamon.md diff --git a/sources/tech/20200507 Ubuntu Cinnamon Remix 20.04 Review- The Perfect Blend of Ubuntu With Cinnamon.md b/sources/tech/20200507 Ubuntu Cinnamon Remix 20.04 Review- The Perfect Blend of Ubuntu With Cinnamon.md new file mode 100644 index 0000000000..952e2ea946 --- /dev/null +++ b/sources/tech/20200507 Ubuntu Cinnamon Remix 20.04 Review- The Perfect Blend of Ubuntu With Cinnamon.md @@ -0,0 +1,157 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Ubuntu Cinnamon Remix 20.04 Review: The Perfect Blend of Ubuntu With Cinnamon) +[#]: via: (https://itsfoss.com/ubuntu-cinnamon-remix-review/) +[#]: author: (Dimitrios Savvopoulos https://itsfoss.com/author/dimitrios/) + +Ubuntu Cinnamon Remix 20.04 Review: The Perfect Blend of Ubuntu With Cinnamon +====== + +GNOME 3 was introduced in 2011, and the GNOME Shell immediately generated both positive and negative responses. Many users and developers liked the original GNOME interface enough that a few groups forked it and one of those, Linux Mint team, created the [Cinnamon desktop environment][1]. + +The Cinnamon desktop became the identity of Linux Mint. For years, Cinnamon has been synonymous to [Linux Mint][2]. It has changed slightly in the past few years as the popularity for Cinnamon grew. Now other distributions have also started offering Cinnamon desktop environment. [Manjaro][3] is one such example. + +A few months back, we introduced you to a [new Ubuntu flavor that provides an out of the box Cinnamon desktop experience][4]. let’s take a deeper look at [Ubuntu Cinnamon Remix][5] today. + +### Why Ubuntu Cinnamon Remix and not Linux Mint? 
+
+It is true that Linux Mint is based on Ubuntu, and many Linux Mint users will have the question: Does it make any sense to switch over to Ubuntu when Linux Mint is such a mature project and the user experience will remain more or less the same?
+
+Ubuntu Cinnamon Remix has a number of small differences from Linux Mint, but it has one key difference that a Linux enthusiast can't ignore.
+
+Linux Mint is based on "LTS" (Long-Term Support) versions of Ubuntu, meaning it stays behind Canonical's 6-month update cadence. Ubuntu Cinnamon Remix benefits from a newer kernel along with the other feature upgrades of the 6-month cycle and more recent software.
+
+Another key difference is that Ubuntu Cinnamon Remix will "inherit" [Snap support][6], and Linux Mint embraces [FlatPak][7]. Ubuntu Cinnamon Remix uses Ubuntu Software Center instead of Mint Software Manager.
+
+That said, I am a huge fan of Cinnamon. So I chose to review this mix of Ubuntu and Cinnamon, and here I share my experience with it.
+
+### Experiencing Ubuntu Cinnamon Remix
+
+Given any chance, I will always mention how fast the [Calamares installer][8] is, and thanks go to the Ubuntu Cinnamon Remix Team for choosing it.
+
+![Calamares Installer][9]
+
+A fresh installation of Ubuntu Cinnamon Remix consumes approximately 750 MB of RAM. This is very similar to Linux Mint Cinnamon.
+
+![An idle Cinnamon takes 750 MB of RAM][10]
+
+I was also impressed by the beautiful [Kimmo theme][11] and the orange-toned Ubuntu wallpaper, which seems to be the result of a very meticulous effort.
+
+![Ubuntu Cinammon Remix 20.04 Desktop][12]
+
+#### Enough tools to get you started
+
+As with any other Ubuntu distribution, Ubuntu Cinnamon Remix is packed with the essential productivity tools, to name a few:
+
+  * Firefox Web Browser
+  * Thunderbird – Email Client
+  * LibreOffice suite
+  * Celluloid – Multimedia player
+  * [GIMP][13] – Image processing software
+  * Synaptic Package Manager
+  * Gnome Software Center
+  * [Gparted][14] – Partition Manager
+
+
+
+Using Ubuntu Cinnamon Remix as my main runner for a few days fulfilled my high expectations. Ubuntu is rock-solid stable, very fast, and I didn't face a single issue in my day-to-day tasks.
+
+#### Ubuntu for Linux Mint Lovers
+
+Are you enthusiastic about Ubuntu Cinnamon but used to the Linux Mint theme? Click below to see how you can get a full Linux Mint theme pack and how to configure it to keep the Ubuntu heritage.
+
+Give Ubuntu Cinnamon Remix the real Mint touch
+
+First, you have to download and unpack the following, easily done via the terminal.
+
+**Get the Linux Mint-X icon pack**
+
+```
+wget http://packages.linuxmint.com/pool/main/m/mint-x-icons/mint-x-icons_1.5.5_all.deb
+```
+
+**Get the Linux Mint-Y icon pack**
+
+```
+wget http://packages.linuxmint.com/pool/main/m/mint-y-icons/mint-y-icons_1.3.9_all.deb
+```
+
+**Get the Linux Mint Themes**
+
+```
+wget http://packages.linuxmint.com/pool/main/m/mint-themes/mint-themes_1.8.4_all.deb
+```
+
+**Install the downloaded content**
+
+```
+sudo dpkg -i ./mint-x-icons_1.5.5_all.deb ./mint-y-icons_1.3.9_all.deb ./mint-themes_1.8.4_all.deb
+```
+
+When done, click on the Menu button at the bottom left corner and type themes. You can also find themes in system settings.
+
+![Accessing Themes][15]
+
+Once opened, replace the Kimmo icons and theme as shown below. The Linux Mint default "Green" is the plain Mint-Y, but the orange colour is a perfect selection for Ubuntu. 
+
+![Linux Mint Theme Settings][16]
+
+#### A treat for Cinnamon fans
+
+Let's accept it, aesthetics are important. Cinnamon has a clean and elegant look, easy-to-read fonts, and nice colour-contrast themes. Cinnamon offers an uncluttered desktop, with desktop icons easily configured simply by accessing the Desktop menu under System Settings. You can also choose whether the desktop icons are shown only on the primary monitor, only on the secondary monitor, or on both. This also applies to setups with more than two monitors.
+
+![Ubuntu Cinnamon Remix Desklets][17]
+
+Desklets and applets are small, single-purpose applications that can be added to your desktop or your panel, respectively. Among the many you can choose from, the most commonly used are a CPU or resource monitor, a weather applet, sticky notes, and a calendar.
+
+The Cinnamon Control Center provides centralized access to many of the desktop configuration options. By accessing the themes section, you can choose the desktop's basic scheme and icons, window borders, mouse pointers, and the look of controls. Fonts can have a great impact on the overall desktop look, and Cinnamon makes changing them easier than ever.
+
+The Cinnamon Control Center makes the configuration simple enough for a new user, compared to KDE Plasma, which can confuse a new user due to its massive number of configuration options.
+
+![][18]
+
+The Cinnamon Panel contains the menu used to launch programs, a basic system tray, and an application selector. The panel is easy to configure, and adding new program launchers is simply done by locating the program you want to add in the main Menu, right-clicking on the icon, and selecting "Add to panel." You can also add the launcher icon to the desktop, and to the Cinnamon "Favourites" launcher bar. If you don't like the order of the icons on your panel, just right-click on the panel bar, enter the panel's Edit mode, and rearrange the icons.
+
+#### **Conclusions**
+
+Whether you decide to "spice" up your desktop or are thinking of moving from [Windows to Linux][19], the Cinnamon Community has made plenty of spices for you.
+
+Traditional yet elegant, customizable but simple, Ubuntu Cinnamon Remix is an interesting project with a promising future, and for existing fans of the Cinnamon Desktop who love Ubuntu, this is probably a no-brainer.
+
+What do you think of Ubuntu Cinnamon Remix? Have you used it already? 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/ubuntu-cinnamon-remix-review/ + +作者:[Dimitrios Savvopoulos][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/dimitrios/ +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Cinnamon_(desktop_environment) +[2]: https://www.linuxmint.com/ +[3]: https://manjaro.org/ +[4]: https://itsfoss.com/ubuntudde/ +[5]: https://ubuntucinnamon.org/ +[6]: https://snapcraft.io/ +[7]: https://flatpak.org/ +[8]: https://calamares.io/ +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/Calamares-Installer.png?resize=800%2C426&ssl=1 +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/htop-running-on-Ubuntu-Cinnamon-Remix-20.04.png?ssl=1 +[11]: https://github.com/Ubuntu-Cinnamon-Remix/kimmo-gtk-theme +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/04/Ubuntu-Cinammon-Remix-20.04-desktop.png?resize=800%2C450&ssl=1 +[13]: https://itsfoss.com/gimp-2-10-release/ +[14]: https://itsfoss.com/gparted/ +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/04/accessing-themes.png?ssl=1 +[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/04/Linux-Mint-theme-settings.png?ssl=1 +[17]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-cinnamon-remix-desklets.jpg?fit=800%2C450&ssl=1 +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/ubuntu-cinnamon-control.jpg?fit=800%2C450&ssl=1 +[19]: https://itsfoss.com/windows-like-linux-distributions/ From 6b681ab52dea3daf62aa77906b9286bf12846c15 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 7 May 2020 01:12:15 +0800 Subject: [PATCH 158/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200506=206=20op?= =?UTF-8?q?en=20source=20alternatives=20to=20Wunderlist?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200506 6 open source alternatives to Wunderlist.md --- ... open source alternatives to Wunderlist.md | 124 ++++++++++++++++++ 1 file changed, 124 insertions(+) create mode 100644 sources/tech/20200506 6 open source alternatives to Wunderlist.md diff --git a/sources/tech/20200506 6 open source alternatives to Wunderlist.md b/sources/tech/20200506 6 open source alternatives to Wunderlist.md new file mode 100644 index 0000000000..84c5cc828b --- /dev/null +++ b/sources/tech/20200506 6 open source alternatives to Wunderlist.md @@ -0,0 +1,124 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (6 open source alternatives to Wunderlist) +[#]: via: (https://opensource.com/article/20/5/alternatives-list) +[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike) + +6 open source alternatives to Wunderlist +====== +Love lists? Check out this handy list of open source apps for managing +all your lists! +![a checklist for a team][1] + +Wunderlist is an app for lists, loved by many, but gone for good as of May 6, 2020. The website encourages existing users to download and use Microsoft To Do in its place. That's tempting because it makes it easy to import all of those lists you've made over the years. Then again, maybe it's a chance to _Marie Kondo_ those lists and pare things down. Do you really need 30 lists? (Apparently, I've decided that I do, so I won't judge.) 
+
+I have lists for all sorts of things, from "Plants for the garden 2020" to "Gifts for the husband." Some are checklists, some are To Do lists, and some are lists for list's sake.
+
+For my husband and me, the most useful list is our shared grocery list. We both have the app on our phones, we both add things to the list, we review it together but separately on our phones before he goes shopping (yes, you read that correctly), and he checks things off as he puts them in the cart. It makes the whole thing surprisingly efficient, and I think we save some money because we're into sticking to THE LIST.
+
+While its users loved it, Wunderlist isn't entirely unique. There are a gazillion list apps out there. With Wunderlist, I've specifically enjoyed its combination of simplicity and design, and that it managed to implement useful features like sharing and collaboration with others, dynamic checkboxes for lists, and a great user experience across both mobile and web interfaces. I've also enjoyed using it for a list that isn't an "active" document: a list I don't review weekly or make regular progress on—like my many lists I've used for brainstorming an idea (including that novel I've been meaning to write...).
+
+From the many wonderful articles we've published over the years, I've curated a list of open source alternatives to Wunderlist that may work for your needs, from simple task management and to-do lists to complex note-taking and process management. Or, if you are that person scribbling tasks and notes on paper scraps and post-it notes that are lying... er, around somewhere and everywhere... this might be a good time to try one of these digital options out.
+
+### Tasks—works with OwnCloud
+
+> Tasks is a free and open source app you can install from [F-droid][2]. Tasks is a mobile-only application, but it's extremely flexible in what it can sync to. You can save your lists to NextCloud or OwnCloud, Google Tasks, Apple Reminders, and just about any CalDAV server you have an account on.
+>
+> The default view of Tasks is a daily view, so any task you enter is assumed to be a task from today onward. If you're like me and you want to maintain several unique lists, you can do that with Tags. When you create a tag, you create a category for tasks. You can assign a colour and an icon so each list of tasks is unique.
+>
+> It takes a little getting used to, but tagging has many advantages. Because all tasks are tagged, you can view groups of tasks by clicking the tag you want to filter for, but you can also filter by day and even by place. That means that when you go grocery shopping, your grocery list becomes the active default list, and your everyday life list becomes active again when you return home.
+>
+> By syncing your data to one of your online accounts, you can share lists with loved ones, collaborators, and colleagues.
+>
+> Another great feature is that if you have the same tasks every morning when you get to work, or the same 20 items in your weekly grocery list, you can create tasks that repeat on a regular basis.
+
+Reviewed by Seth Kenlon
+
+![Screenshot of Tasks interface][3]
+
+### OpenTasks—best for long lists
+
+> [OpenTasks][4] is an excellent task management tool for creating individual tasks with a wide variety of settings. It supports a wide range of fields when creating a task, ranging from basic things, such as name and description, to more complex items, such as choosing if the task is private, public, or confidential. The biggest thing that sets OpenTasks apart from the alternatives is its use of tabs on the app's main screen. These tabs quickly allow you to see the tasks due, tasks starting soon, tasks sorted by priority, and tasks sorted by current progress towards completion. Many of the other apps support doing things like these, but OpenTasks makes accessing these lists quick and easy.
+
+[Read the full OpenTasks review][5] by Joshua Allen Holm
+
+![OpenTasks in Google Play store][6]
+
+### Mirakel—great for nested lists
+
+> [Mirakel][7] is a task management app with a modern user interface and support for just about every format you might want in such a program. At Mirakel's basic level, it supports multiple lists, which are referred to as "meta lists." Creating an individual task has a plethora of options with deadlines, reminders, progress tracking, tags, notes, sub-tasks, and file attachments, all comprising a part of a task's entry.
+
+[Read the full Mirakel review][5] by Joshua Allen Holm
+
+![Screenshot from website of Mirakel app][8]
+
+### Todo—simple and effective, works anywhere
+
+> [Todo.txt][9] is one of the two to-do list and task management apps that I keep coming back to over and over again (the other is Org mode). And what keeps me coming back is that it is simple, portable, understandable, and has many great add-ons that don't break it if one machine has them and the others don't. And since it is a Bash shell script, I have never found a system that cannot support it. Read more about [how to install and use Todo.txt][10].
+
+[Read the full todo.txt review][10] by Kevin Sonney
+
+![Drop-down menu for Todo.txt][11]
+
+Drop-down menu for Todo.txt
+
+### Joplin—best for private lists
+
+> [Joplin][12] is a NodeJS application that runs and stores information locally, allows you to encrypt your tasks, and supports multiple sync methods. Joplin can run as a console or graphical application on Windows, Mac, and Linux. Joplin also has mobile apps for Android and iOS, meaning you can take your notes with you without a major hassle. Joplin even allows you to format your notes with Markdown, HTML, or plain text.
+
+[Read the full Joplin review][13] by Kevin Sonney
+
+![Joplin graphical version ][14]
+
+### CherryTree—great alternative to Evernote / OneNote / Keep
+
+> [CherryTree][15] is a GPLv3-licensed application that organizes information in nodes. Each node can have child nodes, allowing you to easily organize your lists and thoughts. And, child nodes can have their own children with independent properties.
+
+[Read the full CherryTree review][16] by Ben Cotton
+
+![CherryTree's hierarchical note layout][17]
+
+### Bonus: Wekan—for fans of Kanban
+
+> Kanban boards are a mainstay of today's agile processes. And many of us (myself included) use them to organize not just our work but also our personal lives. I know several artists who use apps like Trello to keep track of their commission lists as well as what's in progress and what's complete. But these apps are often linked to a work account or a commercial service. Enter [Wekan][18], an open source kanban board you can run locally or on the service of your choice. Wekan offers much of the same functionality as other Kanban apps, such as creating boards, lists, swimlanes, and cards, dragging and dropping between lists, assigning to users, labeling cards, and doing pretty much everything else you'd expect in a modern kanban board. 
+ +[Read the full Wekan review][19]* by Kevin Sonney* + +![Wekan kanban board][20] + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/alternatives-list + +作者:[Jen Wike Huger][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jen-wike +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk (a checklist for a team) +[2]: https://f-droid.org/en/packages/org.tasks/ +[3]: https://opensource.com/sites/default/files/uploads/screenshot_tasks_resized.jpg (Screenshot of Tasks interface) +[4]: https://play.google.com/store/apps/details?id=org.dmfs.tasks +[5]: https://opensource.com/article/17/1/task-management-time-tracking-android +[6]: https://opensource.com/sites/default/files/uploads/opentasks_rezied.jpg (OpenTasks in Google Play store) +[7]: https://mirakel.azapps.de/ +[8]: https://opensource.com/sites/default/files/uploads/mirakel_web_resized.jpg (Screenshot from website of Mirakel app) +[9]: http://todotxt.org/ +[10]: https://opensource.com/article/20/1/open-source-to-do-list +[11]: https://opensource.com/sites/default/files/uploads/todo.txtmenu_3.png (Drop-down menu for Todo.txt) +[12]: https://joplin.cozic.net/ +[13]: https://opensource.com/article/19/1/productivity-tool-joplin +[14]: https://opensource.com/sites/default/files/uploads/joplin-1.png (Joplin graphical version ) +[15]: https://www.giuspen.com/cherrytree/ +[16]: https://opensource.com/article/19/5/cherrytree-notetaking +[17]: https://opensource.com/sites/default/files/uploads/cherrytree.png (CherryTree's hierarchical note layout) +[18]: https://wekan.github.io/ +[19]: https://opensource.com/article/19/1/productivity-tool-wekan +[20]: https://opensource.com/sites/default/files/uploads/wekan-board.png (Wekan kanban board) From 290c3681d6c02ae1a9d4c7b25e907ec1338867d6 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 7 May 2020 01:12:50 +0800 Subject: [PATCH 159/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200506=20Speed?= =?UTF-8?q?=20up=20administration=20of=20Kubernetes=20clusters=20with=20k9?= =?UTF-8?q?s?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200506 Speed up administration of Kubernetes clusters with k9s.md --- ...tration of Kubernetes clusters with k9s.md | 311 ++++++++++++++++++ 1 file changed, 311 insertions(+) create mode 100644 sources/tech/20200506 Speed up administration of Kubernetes clusters with k9s.md diff --git a/sources/tech/20200506 Speed up administration of Kubernetes clusters with k9s.md b/sources/tech/20200506 Speed up administration of Kubernetes clusters with k9s.md new file mode 100644 index 0000000000..03f563a952 --- /dev/null +++ b/sources/tech/20200506 Speed up administration of Kubernetes clusters with k9s.md @@ -0,0 +1,311 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Speed up administration of Kubernetes clusters with k9s) +[#]: via: (https://opensource.com/article/20/5/kubernetes-administration) +[#]: author: (Jessica Cherry https://opensource.com/users/cherrybomb) + +Speed up administration of Kubernetes clusters with k9s +====== +Check out this cool terminal UI for Kubernetes 
administration. +![Dogs playing chess][1] + +Usually, my articles about Kubernetes administration are full of kubectl commands for administration for your clusters. Recently, however, someone pointed me to the [k9s][2] project for a fast way to review and resolve day-to-day issues in Kubernetes. It's been a huge improvement to my workflow and I'll show you how to get started in this tutorial.  + +Installation can be done on a Mac, in Windows, and Linux. Instructions for each operating system can be found [here][2]. Be sure to complete installation to be able to follow along. + +I will be using Linux and Minikube, which is a lightweight way to run Kubernetes on a personal computer. Install it following [this tutorial][3] or by using the [documentation][4]. + +### Setting the k9s configuration file + +Once you've installed the k9s app, it's always good to start with the help command. + + +``` +$ k9s help +K9s is a CLI to view and manage your Kubernetes clusters. + +Usage: +  k9s [flags] +  k9s [command] + +Available Commands: +  help        Help about any command +  info        Print configuration info +  version     Print version/build info + +Flags: +  -A, --all-namespaces                 Launch K9s in all namespaces +      --as string                      Username to impersonate for the operation +      --as-group stringArray           Group to impersonate for the operation +      --certificate-authority string   Path to a cert file for the certificate authority +      --client-certificate string      Path to a client certificate file for TLS +      --client-key string              Path to a client key file for TLS +      --cluster string                 The name of the kubeconfig cluster to use +  -c, --command string                 Specify the default command to view when the application launches +      --context string                 The name of the kubeconfig context to use +      --demo                           Enable demo mode to show keyboard commands +      --headless                       Turn K9s header off +  -h, --help                           help for k9s +      --insecure-skip-tls-verify       If true, the server's caCertFile will not be checked for validity +      --kubeconfig string              Path to the kubeconfig file to use for CLI requests +  -l, --logLevel string                Specify a log level (info, warn, debug, error, fatal, panic, trace) (default "info") +  -n, --namespace string               If present, the namespace scope for this CLI request +      --readonly                       Disable all commands that modify the cluster +  -r, --refresh int                    Specify the default refresh rate as an integer (sec) (default 2) +      --request-timeout string         The length of time to wait before giving up on a single server request +      --token string                   Bearer token for authentication to the API server +      --user string                    The name of the kubeconfig user to use + +Use "k9s [command] --help" for more information about a command. +``` + +As you can see, there is a lot of functionality we can configure with k9s. The only step we need to take place to get off the ground is to write a configuration file. The **info** command will point us to where the application is looking for it. 
+ + +``` +$ k9s info + ____  __.________ +|    |/ _/   __   \\______ +|      < \\____    /  ___/ +|    |  \   /    /\\___ \ +|____|__ \ /____//____  > +        \/            \/ + +Configuration:   /Users/jess/.k9s/config.yml +Logs:            /var/folders/5l/c1y1gcw97szdywgf9rk1100m0000gn/T/k9s-jess.log +Screen Dumps:    /var/folders/5l/c1y1gcw97szdywgf9rk1100m0000gn/T/k9s-screens-jess +``` + + By default, k9s expects a configuration file and will fail to run without one. The command will return without any message, but if we look at the log file we see an error. + + +``` +$ tail -1 /var/folders/5l/c1y1gcw97szdywgf9rk1100m0000gn/T/k9s-mbbroberg.log +10:56AM FTL Unable to connect to api server error="Missing or incomplete configuration info.  Please point to an existing, complete config file:\n\n  1. Via the command-line flag --kubeconfig\n  2. Via the KUBECONFIG environment variable\n  3. In your home directory as ~/.kube/config\n\nTo view or setup config directly use the 'config' command." +``` + +To add a file, make the directory if it doesn't already exist and then add one. + + +``` +$ mkdir -p ~/.k9s/ +$ touch ~/.k9s/config.yml +``` + +For this introduction, we will use the default config.yml recommendations from the k9s repository. The maintainers note that this format is subject to change, so we can [check here][5] for the latest version. + + +``` +k9s: +  refreshRate: 2 +  headless: false +  readOnly: false +  noIcons: false +  logger: +    tail: 200 +    buffer: 500 +    sinceSeconds: 300 +    fullScreenLogs: false +    textWrap: false +    showTime: false +  currentContext: minikube +  currentCluster: minikube +  clusters: +    minikube: +      namespace: +        active: "" +        favorites: +       - all +        - kube-system +        - default +      view: +        active: dp +  thresholds: +    cpu: +      critical: 90 +      warn: 70 +    memory: +      critical: 90 +      warn: 70 +``` + +We set k9s to look for a local minikube configuration, so I'm going to confirm minikube is online and ready to go.  + + +``` +$ minikube status +host: Running +kubelet: Running +apiserver: Running +kubeconfig: Configured +``` + +### Running k9s to explore a Kubernetes cluster + +### With a configuration file set and pointing at our local cluster, we can now run the **k9s** command. + + +``` +`$ k9s` +``` + +Once you start it up, the k9s text-based user interface (UI) will pop up. With no flag for a namespace, it will show you the pods in the default namespace. + +![K9s screenshot][6] + +If you run in an environment with a lot of pods, the default view can be overwhelming. Alternatively, we can focus on a given namespace. Exit the application and run **k9s -n <namespace>** where _<namespace>_ is an existing namespace. In the picture below, I ran **k9s -n minecraft,** and it shows my broken pod + +![K9s screenshot][7] + +So once you have k9s up and running, there are a bunch of things you can do quickly.  + +Navigating k9s happens through shortcut keys. We can always use arrow keys and the enter key to choose items listed. There are quite a few other universal keystrokes to navigate to different views: + + * **0**—Show all pods in all namespaces + + + +![K9s screenshot][8] + + * **d**—Describe the selected pod + + + +![K9s screenshot][9] + + * **l**—Show logs for the selected pod pod + + + +![Using k9s to show Kubernetes pod logs][10] + +You may notice that k9s is set to use [Vim command keys][11], including moving up and down using **J** and **K** keys. 
Good luck exiting, emacs users :) + +### Viewing different Kubernetes resources quickly + +Need to get to something that's not a pod? Yea I do too. There are a number of shortcuts that are available when we enter a colon (":") key. From there, you can use the following commands to navigate around there. + + * **:svc**—Jump to a services view. + + + +![K9s screenshot][12] + + * **:deploy**—Jump to a deployment view. + + + +![K9s screenshot][13] + + * **:rb**—Jump to a Rolebindings view for [role-based access control (RBAC)][14] management. + + + +![K9s screenshot][15] + + * **:namespace**—Jump back to the namespaces view. + + + +![K9s screenshot][16] + + * **:cj**—Jump to the cronjobs view to see what jobs are scheduled in the cluster. + + + +![K9s screenshot][17] + +The most used tool for this application will be the keyboard; to go up or down on any page, use the arrow keys. If you need to quit, remember to use Vim keybindings. Type **:q** and hit enter to leave. + +### Example of troubleshooting Kubernetes with k9s + +How does k9s help when something goes wrong? To walk through an example, I let several pods die due to misconfiguration. Below you can see my terrible hello deployment that's crashing. Once we highlight it, we press **d** to run a _describe_ command to see what is causing the failure. + +![K9s screenshot][18] + +![K9s screenshot][19] + +Skimming the events does not tell us a reason for the failure. Next, I hit the **esc** key and go check the logs by highlighting the pod and entering **<shift-l>**. + +![K9s screenshot][20] + +Unfortunately, the logs don't offer anything helpful either (probably because the deployment was never correctly configured), and the pod will not come up. + +I then **esc** to step back out, and I will see if deleting the pod will take care of this issue. To do so, I highlight the pod and use **<ctrl-d>**. Thankfully, k9s prompts users before deletion.  + +![K9s screenshot][21] + +While I did delete the pod, the deployment resource still exists, so a new pod will come back up. It will also continue to restart and crash for whatever reason (we don't know yet). + +Here is the point where I would repeat reviewing logs, describing resources, and use the **e** shortcut to even edit a running pod to troubleshoot the behavior. In this particular case, the failing pod is not configured to run in this environment. So let's delete the deployment to stop crash-then-reboot loop we are in. + +We can get to deployments by typing **:deploy** and clicking enter. From there we highlight and press **<ctrl-d>** to delete. + +![K9s screenshot][22] + +![K9s screenshot][23] + +And poof the deployment is gone! It only took a few keystrokes to clean up this failed deployment. + +### k9s is incredibly customizable + +So this application has a ton of customization options, down to the color scheme of the UI. Here are a few editable options you may be interested in: + + * Adjust where you put the config.yml file (so you can store it in [version control][24]) + * Add [custom aliases][25] to an **alias.yml** file + * Create [custom hotkeys][26] in a **hotkey.yml** file + * Explore available [plugins][27] or write your own + + + +The entire application is configured in YAML files, so customization will feel familiar to any Kubernetes administrator. + +### Simplify your life with k9s + +I'm prone to administrating over my team's systems in a very manual way, more for brain training than anything else. 
When I first heard about k9s, I thought, "This is just lazy Kubernetes," so I dismissed it and went back to doing my manual intervention everywhere. I actually started using it daily while working through my backlog, and I was blown away at how much faster it was to use than kubectl alone. Now I'm a convert.  + +It's important to know your tools and master the "hard way" of doing something. It is also important to remember, as far as administration goes, it's important to work smarter, not harder. Using k9s is the way I live up to that objective. I guess we can call it lazy Kubernetes administration, and that's okay. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/kubernetes-administration + +作者:[Jessica Cherry][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cherrybomb +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/game-dogs-chess-play-lead.png?itok=NAuhav4Z (Dogs playing chess) +[2]: https://github.com/derailed/k9s +[3]: https://opensource.com/article/18/10/getting-started-minikube +[4]: https://kubernetes.io/docs/tasks/tools/install-minikube/ +[5]: https://github.com/derailed/k9s#k9s-configuration +[6]: https://opensource.com/sites/default/files/uploads/k9s_1.png (K9s screenshot) +[7]: https://opensource.com/sites/default/files/uploads/k9s_2.png (K9s screenshot) +[8]: https://opensource.com/sites/default/files/uploads/k9s_3.png (K9s screenshot) +[9]: https://opensource.com/sites/default/files/uploads/k9s_5_0.png (K9s screenshot) +[10]: https://opensource.com/sites/default/files/uploads/k9s-show-logs-opensourcedotcom.png (Using k9s to show Kubernetes pod logs) +[11]: https://opensource.com/article/19/3/getting-started-vim +[12]: https://opensource.com/sites/default/files/uploads/k9s_5.png (K9s screenshot) +[13]: https://opensource.com/sites/default/files/uploads/k9s_6.png (K9s screenshot) +[14]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/ +[15]: https://opensource.com/sites/default/files/uploads/k9s_7.png (K9s screenshot) +[16]: https://opensource.com/sites/default/files/uploads/k9s_8.png (K9s screenshot) +[17]: https://opensource.com/sites/default/files/uploads/k9s_9.png (K9s screenshot) +[18]: https://opensource.com/sites/default/files/uploads/k9s_10.png (K9s screenshot) +[19]: https://opensource.com/sites/default/files/uploads/k9s_11.png (K9s screenshot) +[20]: https://opensource.com/sites/default/files/uploads/k9s_12.png (K9s screenshot) +[21]: https://opensource.com/sites/default/files/uploads/k9s_13.png (K9s screenshot) +[22]: https://opensource.com/sites/default/files/uploads/k9s_14.png (K9s screenshot) +[23]: https://opensource.com/sites/default/files/uploads/k9s_15.png (K9s screenshot) +[24]: https://opensource.com/article/19/3/move-your-dotfiles-version-control +[25]: https://k9scli.io/topics/aliases/ +[26]: https://k9scli.io/topics/hotkeys/ +[27]: https://github.com/derailed/k9s/tree/master/plugins From 36fe6fa73d662e98210850732e6239c52fc3d555 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 7 May 2020 01:13:33 +0800 Subject: [PATCH 160/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200506=20Custom?= =?UTF-8?q?izing=20my=20open=20source=20PHP=20framework=20for=20web=20deve?= =?UTF-8?q?lopment?= MIME-Version: 1.0 Content-Type: 
text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200506 Customizing my open source PHP framework for web development.md --- ...ource PHP framework for web development.md | 85 +++++++++++++++++++ 1 file changed, 85 insertions(+) create mode 100644 sources/tech/20200506 Customizing my open source PHP framework for web development.md diff --git a/sources/tech/20200506 Customizing my open source PHP framework for web development.md b/sources/tech/20200506 Customizing my open source PHP framework for web development.md new file mode 100644 index 0000000000..00b605148d --- /dev/null +++ b/sources/tech/20200506 Customizing my open source PHP framework for web development.md @@ -0,0 +1,85 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Customizing my open source PHP framework for web development) +[#]: via: (https://opensource.com/article/20/5/codeignitor) +[#]: author: (Wee Ben Sen https://opensource.com/users/bswee14) + +Customizing my open source PHP framework for web development +====== +Codeignitor is a PHP framework that empowers companies to develop +high-performance websites with flexibility and ease. +![Business woman on laptop sitting in front of window][1] + +PHP Codeignitor is an open source framework providing business applications with the easy-to-use PHP programming language and powerful tools for coding. It also provides business intelligence, server monitoring, development, and application integration facilities. It's a relatively quiet project that you don't hear much about, but it's got a lot going for it that many developers new to it find surprising and refreshing. + +I use [Codeignitor][2] at my job working for an online tuition service provider in Singapore. We offer services that aren't common enough to be the default feature set for templates or existing back-ends, so I need something that provides good, solid, raw materials I can build upon. Initially, I was considering other platforms such as Wordpress for our website; however, I arrived at Codeignitor due to its flexibility and integration of functions needed in the tuition-matching process. + +Here are the points that sold me on Codeignitor: + + * Database integration with MySQL—A major functionality is allowing clients to browse the tutor database and add tutors like a "shopping cart" similar to an e-commerce platform. + * Client interface system—Users can log in to manage preferences and edit their particulars, modify subject taught, areas traveled, mobile number, address, etc. + * Customized administrator panel—The administrator can access the client's submission with a customized admin panel, which is integrated with a customer service feature so the administrator can follow up individually. + * Payment system—The admin panel comes with an invoice and payments gateway, which is integrated with Paypal. + * CMS editor interface—The administrator is able to edit text and images in the blog and subject pages, as well as add new pages. + + + +The project took around six months to complete and another two months of debugging work. If I'd had to build all of it from scratch or try to rework an existing framework to suit our needs, it would have taken longer, and I probably wouldn't have ended up with what I needed for the demands of our customers. 
+ +### Features and benefits + +There are many more features that draw developers to PHP Codeignitor, including error handling and code formatting, which are useful in every coding situation. It supports templates, which can be used to add functionality to an existing website or to generate new ones. There are many features available for a business that needs to use a web-based system, including the ability to use custom tags. Most can be used by even an average developer who does not have any prior experience in programming. + +The key features of Codeignitor are: + + * XML core services, + * HTTP/FTP core services + * AppData and PHP sandbox features + * XSLT and HTML templates + * Encrypted information transfer + * PCM Codeignitor server monitoring + * Application integration + * File Transfer Protocol (FTP) + * Help desk support + * Apache POI (content management infrastructure used for hosting a website) + + + +#### Compatibility + +Codeignitor is compatible with many leading software applications like PHP, MySQL, [MariaDB][3], [phpMyAdmin][4], [Apache][5], OpenBSD, XSLT, [SQLite][6], and more. A number of companies prefer to use Codeignitor products for their website requirements because they are easy to work with and integrate. If you're not comfortable creating your own website, you can find many developers and design agencies that provide custom web development services. + +#### Security + +Codeignitor also provides data security through SSL encryption. The encryption protects the data from external threats such as intruders and firewalls. The data storage facility also allows for security audits of the company's website. + +#### Other features + +A good PHP web development company uses several advanced and third-party technologies such as XML and PHP. It provides organizations with a complete platform to develop professional-looking, useful websites with a business application. Codeignitor makes it easy to use third party technology, and works with common web development software. This allows web agencies to easily create websites with their chosen modules. Most PHP developers offer support and training services for individuals, as well. + +### Using PHP framework Codeignitor + +Codeignitor allows businesses to have a complete package for PHP development that will offer the right combination of power, flexibility, and performance. So far, I am very pleased with our website and I have continuously upgraded and added new features along the way. I look forward to discovering what else I can do with our website using Codeignitor. Could it be right for you too? 
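+
+If you want to find out, a quick way to spin up a throwaway project to explore is the official Composer starter. This is only a rough sketch: it assumes you have PHP and Composer installed locally, it targets the CodeIgniter 4 line rather than the older 3.x series, and the project name is just an example:
+
+```
+# Scaffold a new CodeIgniter application into ./ci-demo
+composer create-project codeigniter4/appstarter ci-demo
+
+# Start the built-in development server (http://localhost:8080 by default)
+cd ci-demo
+php spark serve
+```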
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/codeignitor + +作者:[Wee Ben Sen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bswee14 +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-concentration-focus-windows-office.png?itok=-8E2ihcF (Woman using laptop concentrating) +[2]: https://codeigniter.com/ +[3]: http://mariadb.org/ +[4]: https://www.phpmyadmin.net/ +[5]: http://apache.org/ +[6]: http://sqlite.org/ From d771233d1eef0243d1f53c0758065738cce95480 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 7 May 2020 01:14:26 +0800 Subject: [PATCH 161/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200506=20Managi?= =?UTF-8?q?ng=20Git=20projects=20with=20submodules=20and=20subtrees?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200506 Managing Git projects with submodules and subtrees.md --- ...t projects with submodules and subtrees.md | 212 ++++++++++++++++++ 1 file changed, 212 insertions(+) create mode 100644 sources/tech/20200506 Managing Git projects with submodules and subtrees.md diff --git a/sources/tech/20200506 Managing Git projects with submodules and subtrees.md b/sources/tech/20200506 Managing Git projects with submodules and subtrees.md new file mode 100644 index 0000000000..f906f2a4e4 --- /dev/null +++ b/sources/tech/20200506 Managing Git projects with submodules and subtrees.md @@ -0,0 +1,212 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Managing Git projects with submodules and subtrees) +[#]: via: (https://opensource.com/article/20/5/git-submodules-subtrees) +[#]: author: (Manaswini Das https://opensource.com/users/manaswinidas) + +Managing Git projects with submodules and subtrees +====== +Submodules and subtrees help you manage child projects across multiple +repositories. +![Digital creative of a browser on the internet][1] + +If you are into open source development, you have probably worked with Git to manage source code. You might have come across projects with numerous dependencies and/or sub-projects. How do you manage them? + +For an open source organization, it can be tricky to achieve single-source documentation and dependency management for the community _and_ the product. The documentation and project often end up fragmented and redundant, which makes them difficult to maintain. + +### The need + +Suppose you want to use a single project as a child project inside a repository. The traditional method is just to copy the project to the parent repository. But, what if you want to use the same child project in many parent repositories? It wouldn't be feasible to copy the child project into every parent and have to make changes in all of them whenever you update it. This would create redundancy and inconsistency in the parent repositories and make it difficult to update and maintain the child project. + +### Git submodules and subtrees + +What if you could put one project within another using a single command? What if you could just add the project as a child to any number of projects and push changes on the go, whenever you want to? 
Git provides solutions for this: Git submodules and Git subtrees. These tools were created to support code-sharing development workflows on a more modular level, aspiring to bridge the gap between the Git repository's source-code management (SCM) and the sub-repos within it. + +![Cherry tree growing on a mulberry tree][2] + +Cherry tree growing on a mulberry tree + +This is a real-life scenario of the concepts this article will cover in detail. If you're already familiar with trees, here is what this model will look like: + +![Tree with subtrees][3] + +CC BY-SA opensource.com + +### What are Git submodules? + +Git provides submodules in its default package that enable Git repositories to be nested within other repositories. To be precise, the Git submodule points to a specific commit on the child repository. Here is what Git submodules look like in my [Docs-test][4] GitHub repo: + +![Git submodules screenshot][5] + +The format **[folder@commitId][6]** indicates that the repository is a submodule, and you can directly click on the folder to go to the child repository. The config file called **.gitmodules** contains all the submodule repository details. My repo's **.gitmodules** file looks like this: + +![Screenshot of .gitmodules file][7] + +You can use the following commands to use Git submodules in your repositories. + +#### Clone a repository and load submodules + +To clone a repository containing submodules: + + +``` +`$ git clone --recursive ` +``` + +If you have already cloned a repository and want to load its submodules: + + +``` +`$ git submodule update --init` +``` + +If there are nested submodules: + + +``` +`$ git submodule update --init --recursive` +``` + +#### Download submodules + +Downloading submodules sequentially can be a tedious task, so **clone** and **submodule update** will support the **\--jobs** or **-j** parameter. + +For example, to download eight submodules at once, use: + + +``` +$ git submodule update --init --recursive -j 8 +$ git clone --recursive --jobs 8 <URL to Git repo> +``` + +#### Pull submodules + +Before running or building the parent repository, you have to make sure that the child dependencies are up to date. + +To pull all changes in submodules: + + +``` +`$ git submodule update --remote` +``` + +#### Create repositories with submodules + +To add a child repository to a parent repository: + + +``` +`$ git submodule add ` +``` + +To initialize an existing Git submodule: + + +``` +`$ git submodule init` +``` + +You can also create branches and track commits in your submodules by adding **\--update** to your **submodule update** command: + + +``` +`$ git submodule update --remote` +``` + +#### Update submodule commits + +As explained above, a submodule is a link that points to a specific commit in the child repository. If you want to update the commit of the submodule, don't worry. You don't need to specify the latest commit explicitly. You can just use the general **submodule update** command: + + +``` +`$ git submodule update` +``` + +Just add and commit as you normally would to create and push the parent repository to GitHub. + +#### Delete a submodule from a parent repository + +Merely deleting a child project folder manually won't remove the child project from the parent repository. To delete a submodule named **childmodule**, use: + + +``` +`$ git rm -f childmodule` +``` + +Although Git submodules may appear easy to work with, it can be difficult for beginners to find their way around them. + +### What are Git subtrees? 
+ +Git subtrees, introduced in Git 1.7.11, allow you to insert a copy of any repository as a subdirectory of another one. It is one of several ways Git projects can inject and manage project dependencies. It stores the external dependencies in regular commits. Git subtrees provide clean integration points, so they're easier to revert. + +If you use the [subtrees tutorial provided by GitHub][8] to use subtrees, you won't see a **.gittrees** config file in your local whenever you add a subtree. This makes it difficult to recognize subtrees because subtrees look like general folders, but they are copies of the child repository. The version of Git subtree with the **.gittrees** config file is not available with the default Git package, so to get the git-subtree with **.gittrees** config file, you must download git-subtree from the [**/contrib/subtree** folder][9] in the Git source repository. + +You can clone any repository containing subtrees, just like any other general repository, but it may take longer because entire copies of the child repository reside in the parent repository. + +You can use the following commands to use Git subtrees in your repositories. + +#### Add a subtree to a parent repository + +To add a new subtree to a parent repository, you first need to **remote add** it and then run the **subtree add** command, like: + + +``` +$ git remote add remote-name <URL to Git repo> +$ git subtree add --prefix=folder/ remote-name <URL to Git repo> subtree-branchname +``` + +This merges the whole child project's commit history to the parent repository. + +#### Push and pull changes to and from the subtree + + +``` +`$ git subtree push-all` +``` + +or + + +``` +`$ git subtree pull-all` +``` + +### Which should you use? + +Every tool has pros and cons. Here are some features that may help you decide which is best for your use case. + + * Git submodules have a smaller repository size since they are just links that point to a specific commit in the child project, whereas Git subtrees house the entire child project along with its history. + * Git submodules need to be accessible in a server, but subtrees are decentralized. + * Git submodules are mostly used in component-based development, whereas Git subtrees are used in system-based development. + + + +A Git subtree isn't a direct alternative to a Git submodule. There are certain caveats that guide where each can be used. If there is an external repository you own and are likely to push code back to, use Git submodule since it is easier to push. If you have third-party code that you are unlikely to push to, use Git subtree since it is easier to pull. + +Give Git subtrees and submodules a try and let me know how it goes in the comments. 
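+
+Before you do, here is a minimal sketch that strings the submodule and subtree commands together; the repository URLs, remote name, and paths below are made up purely for illustration:
+
+```
+# Add a child repository as a submodule under libs/shared-lib
+$ git submodule add https://example.com/shared-lib.git libs/shared-lib
+$ git commit -m "Add shared-lib as a submodule"
+
+# Collaborators can then clone the parent together with its submodules
+$ git clone --recursive https://example.com/parent-project.git
+
+# Alternatively, take the subtree route and copy the child project in instead
+$ git remote add shared-lib-remote https://example.com/shared-lib.git
+$ git subtree add --prefix=libs/shared-lib shared-lib-remote master
+```
+
+Either way, the child project ends up under libs/shared-lib; the difference is whether the parent repository stores a pointer to a commit (submodule) or a full copy of the files (subtree).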
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/git-submodules-subtrees + +作者:[Manaswini Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/manaswinidas +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_web_internet_website.png?itok=g5B_Bw62 (Digital creative of a browser on the internet) +[2]: https://opensource.com/sites/default/files/uploads/640px-bialbero_di_casorzo.jpg (Cherry tree growing on a mulberry tree) +[3]: https://opensource.com/sites/default/files/subtree_0.png (Tree with subtrees) +[4]: https://github.com/manaswinidas/Docs-test/ +[5]: https://opensource.com/sites/default/files/uploads/git-submodules_github.png (Git submodules screenshot) +[6]: mailto:folder@commitId +[7]: https://opensource.com/sites/default/files/uploads/gitmodules.png (Screenshot of .gitmodules file) +[8]: https://help.github.com/en/github/using-git/about-git-subtree-merges +[9]: https://github.com/git/git/tree/master/contrib/subtree From b75fb525cba2f1e4b07042ad202a49b19c2d6384 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Thu, 7 May 2020 01:18:56 +0800 Subject: [PATCH 162/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200506=20IBM=20?= =?UTF-8?q?rolls=20Red=20Hat=20into=20edge,=20AI,=20hybrid-cloud=20expansi?= =?UTF-8?q?on?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/talk/20200506 IBM rolls Red Hat into edge, AI, hybrid-cloud expansion.md --- ...t into edge, AI, hybrid-cloud expansion.md | 78 +++++++++++++++++++ 1 file changed, 78 insertions(+) create mode 100644 sources/talk/20200506 IBM rolls Red Hat into edge, AI, hybrid-cloud expansion.md diff --git a/sources/talk/20200506 IBM rolls Red Hat into edge, AI, hybrid-cloud expansion.md b/sources/talk/20200506 IBM rolls Red Hat into edge, AI, hybrid-cloud expansion.md new file mode 100644 index 0000000000..ef7d152e55 --- /dev/null +++ b/sources/talk/20200506 IBM rolls Red Hat into edge, AI, hybrid-cloud expansion.md @@ -0,0 +1,78 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (IBM rolls Red Hat into edge, AI, hybrid-cloud expansion) +[#]: via: (https://www.networkworld.com/article/3542409/ibm-rolls-red-hat-into-edge-ai-hybrid-cloud-expansion.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +IBM rolls Red Hat into edge, AI, hybrid-cloud expansion +====== +Every company will be an AI company new IBM CEO Krishna tells Think! virtual gathering +Getty + +Deeply assimilating its Red Hat technology, IBM this week rolled out a set of new platforms and services designed to help customers manage edge-based application workloads and exploit artificial intelligence for infrastructure resiliency. + +The announcements came at IBM’s virtualized Think! 2020 event that also featured the first Big Blue keynote by [the company's new CEO Arvind Krishna][1], during which he told the online audience about challenges of COVID-19: "History will look back on this as the moment when the digital transformation of business and society suddenly accelerated,” but also that [hybrid cloud and AI][2] are the two dominant forces driving digital transformation. 
+ +[Now see how AI can boost data-center availability and efficiency][3] + +“More than 20 years ago, experts predicted every company would become an internet company. I am predicting today that every company will become an AI company, not because they can, but because they must,” he said. + +With that idea in mind the company rolled out [IBM Watson AIOps][4], an application that uses AI to automate how enterprises detect, diagnose and respond to IT anomalies in real time. Watson AIOps works by grouping log anomalies and alerts based on spatial and temporal reasoning as well as similarity to past situations, IBM said. It uses IBM’s natural language processing technology to understand the content in trouble tickets to identify and extract resolution actions automatically. + +Then it provides a pointer to where the problem is and identifies other services that might be affected.  “It does this by showing details of the problem based on data from existing tools in the environment, all in the context of the application topology, distilling multiple signals into a succinct report” and eliminating the need for multiple dashboards, IBM stated. + +AI can automate tasks like shifting traffic from one router to another, freeing up space on a drive, or restarting an application. AI systems can also be trained to self-correct, IBM stated. + +“The problem is that many businesses are consumed with fixing problems after they occur, instead of preventing them before they happen. Watson AIOps relies on AI to solve and automate how enterprises self-detect, diagnose and respond to anomalies in real time,” Krishna said. + +AIOps is built on the latest release of Red Hat OpenShift, supports Slack and Box, and can be integrated with IT-monitoring packages from Mattermost and ServiceNow, IBM stated.  + +The Kubernetes-based OpenShift Container Platform lets enterprise customers deploy and manage containers on their infrastructure of choice, be it private or public clouds, including AWS, Microsoft Azure, Google Cloud Platform, Alibaba and IBM Cloud.  It also integrates with IBM prepackaged Cloud Paks, which include a secured Kubernetes container and containerized IBM middleware designed to let customers quickly spin-up enterprise-ready containers. + +OpenShift is also the underlying package for a new version of its [edge network][5] management application called IBM Edge Application Manager. Based on the open source project [Open Horizon][6], the Edge Application Manager can use AI and analytics to help deploy and manage up to 10,000 edge nodes simultaneously by a single administrator. With the platform customers can remotely add new capabilities to a single-purpose device or automatically direct a device to use a variety of cloud-based resources depending on what resources it needs.   + +Cisco said it was working with the IBM Edge Application Manager to deploy apps and analytics models that run on a broad range of Cisco products, such as servers, its industrial portfolio of gateways, routers, switches, SD-WAN, and wireless-connectivity offerings for edge computing.  + +“As an example, IBM Edge Application Manager leverages [Cisco HyperFlex Edge][7] and Cisco IC3000 Industrial Compute Gateway servers. The HyperFlex Edge and IC3K platforms are specifically designed to support a number of edge use cases, such as optimizing traffic management, increasing manufacturing productivity, and increasing the safety of oil and gas pipelines,” Cisco [stated][8]. 
+ +In addition, Cisco said it has used the capabilities in IBM Edge Application Manager to build an “Edge in a Box proposal,” where customers can deploy remote edge applications that run entirely disconnected from public or private clouds but are also synchronized and managed remotely in controlled connectivity windows. For instance, client edge locations may need to operate in disconnected mode but have the ability to synch up for automated application updates and data exchanges, Cisco stated. + +Other edge-related announcements include: + + * IBM [Edge Ecosystem][9], a group of industry players who will target open technology developments to let customers move data and applications between private data centers, hybrid multicloud environments and the edge. The group includes Cisco, Juniper, Samsung and NVIDIA among others. IBM said a Telco Network Cloud Ecosystem will serve a similar function for their network cloud platforms. + * A preview of an upcoming service, called IBM Cloud Satellite. This will extend IBM’s public-cloud service to give customers the ability to use IBM Cloud anywhere – on-premises, in data centers or at the edge – delivered as a service that can be managed from a single pane of glass controlled through the public cloud. It lets customers run applications where it makes sense while utilizing cloud security and ops benefits, IBM stated. Satellite runs on Red Hat OpenShift. + * Telco Network Cloud Manager – a telco/service provider offering that runs  on Red Hat OpenShift to deliver intelligent automation capabilities to orchestrate virtual and container network functions in minutes. Service providers will have the ability to manage workloads on both Red Hat OpenShift and Red Hat OpenStack platforms, which will be critical as telcos increasingly look to modernize their networks for greater agility and efficiency, and to provide new services as 5G adoption expands, IBM stated. + * New capabilities for some of its Cloud Paks including extending the Cloud Pak for Data to include the ability to better automate planning, budgeting and forecasting in [hybrid-cloud][10] environments. IBM upgraded tools for business routing and data capture to the Cloud Pak for Automation as well. + + + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3542409/ibm-rolls-red-hat-into-edge-ai-hybrid-cloud-expansion.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3536654/ibm-taps-new-leaders-for-hybrid-cloud-battles-ahead.html +[2]: https://www.infoworld.com/article/3541825/ibms-new-ceo-lays-out-his-roadmap.html +[3]: https://www.networkworld.com/article/3274654/ai-boosts-data-center-availability-efficiency.html +[4]: https://www.ibm.com/watson/assets/duo/pdf/WDDE814_IBM_Watson_AIOps_Web.pdf +[5]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html +[6]: https://developer.ibm.com/blogs/open-horizon-joins-linux-foundation-grow-open-edge-computing-platform/ +[7]: https://www.cisco.com/c/en/us/products/hyperconverged-infrastructure/index.html?dtid=oblgzzz001087 +[8]: https://blogs.cisco.com/partner/cisco-and-ibm-teaming-at-the-edge +[9]: https://www.ibm.com/blogs/business-partners/join-the-edge-ecosystem/ +[10]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world From 3c1f6f4adaea0bc7680f2b2ba50c145ad80817bf Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 7 May 2020 08:41:17 +0800 Subject: [PATCH 163/178] translated --- ...200428 Upgrading Fedora 31 to Fedora 32.md | 99 ------------------- ...200428 Upgrading Fedora 31 to Fedora 32.md | 99 +++++++++++++++++++ 2 files changed, 99 insertions(+), 99 deletions(-) delete mode 100644 sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md create mode 100644 translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md diff --git a/sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md b/sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md deleted file mode 100644 index 5dd426ccfc..0000000000 --- a/sources/tech/20200428 Upgrading Fedora 31 to Fedora 32.md +++ /dev/null @@ -1,99 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Upgrading Fedora 31 to Fedora 32) -[#]: via: (https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/) -[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/) - -Upgrading Fedora 31 to Fedora 32 -====== - -![][1] - -Fedora 32 [is available now][2]. You’ll likely want to upgrade your system to get the latest features available in Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 30 to Fedora 31. - -Before upgrading, visit the [wiki page of common Fedora 32 bugs][3] to see if there’s an issue that might affect your upgrade. Although the Fedora community tries to ensure upgrades work well, there’s no way to guarantee this for every combination of hardware and software that users might have. - -### Upgrading Fedora 31 Workstation to Fedora 32 - -Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the **GNOME Software** app. Or you can choose Software from GNOME Shell. 
- -Choose the _Updates_ tab in GNOME Software and you should see a screen informing you that Fedora 32 is Now Available. - -If you don’t see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available. - -Choose _Download_ to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later. - -### Using the command line - -If you’ve upgraded from past Fedora releases, you are likely familiar with the _dnf upgrade_ plugin. This method is the recommended and supported way to upgrade from Fedora 31 to Fedora 32. Using this plugin will make your upgrade to Fedora 32 simple and easy. - -#### 1\. Update software and back up your system - -Before you do start the upgrade process, make sure you have the latest software for Fedora 31. This is particularly important if you have modular software installed; the latest versions of dnf and GNOME Software include improvements to the upgrade process for some modular streams. To update your software, use _GNOME Software_ or enter the following command in a terminal. - -``` -sudo dnf upgrade --refresh -``` - -Additionally, make sure you back up your system before proceeding. For help with taking a backup, see [the backup series][4] on the Fedora Magazine. - -#### 2\. Install the DNF plugin - -Next, open a terminal and type the following command to install the plugin: - -``` -sudo dnf install dnf-plugin-system-upgrade -``` - -#### 3\. Start the update with DNF - -Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal: - -``` -sudo dnf system-upgrade download --releasever=32 -``` - -This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the _‐‐allowerasing_ flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade. - -#### 4\. Reboot and upgrade - -Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal: - -``` -sudo dnf system-upgrade reboot -``` - -Your system will restart after this. Many releases ago, the _fedup_ tool would create a new option on the kernel selection / boot screen. With the _dnf-plugin-system-upgrade_ package, your system reboots into the current kernel installed for Fedora 31; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process. - -Now might be a good time for a coffee break! Once it finishes, your system will restart and you’ll be able to log in to your newly upgraded Fedora 32 system. - -![][5] - -### Resolving upgrade problems - -On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the [DNF system upgrade quick docs][6] for more information on troubleshooting. - -If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. 
For support with repositories not provided by Fedora, please contact the providers of the repositories. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/ - -作者:[Adam Šamalík][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/asamalik/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/31-32-816x345.png -[2]: https://fedoramagazine.org/announcing-fedora-32/ -[3]: https://fedoraproject.org/wiki/Common_F32_bugs -[4]: https://fedoramagazine.org/taking-smart-backups-duplicity/ -[5]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png -[6]: https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-upgrade/#Resolving_post-upgrade_issues diff --git a/translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md b/translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md new file mode 100644 index 0000000000..637bf41ce8 --- /dev/null +++ b/translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md @@ -0,0 +1,99 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Upgrading Fedora 31 to Fedora 32) +[#]: via: (https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/) +[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/) + +将 Fedora 31 升级到 Fedora 32 +====== + +![][1] + +Fedora 32 [已经发布][2]。你可能想升级系统以获得 Fedora 中的最新功能。Fedora Workstation 有图形化的升级方法。另外,Fedora 提供了命令行方法,用于将 Fedora 30 升级到 Fedora 31。 + +升级前,请访问 [Fedora 32 个常见 bug 的维基页面] [3],查看是否存在可能影响升级的问题。尽管 Fedora 社区试图确保升级正常进行,但是无法为用户可能使用的每种软硬件组合提供保证。 + +### 将Fedora 31 Workstation 升级到 Fedora 32 + +发布不久之后就会出现通知,告诉你有可用的升级。你可以单击通知启动 **GNOME Software**。或者,你可以从 GNOME Shell 中选择“软件”。 + +在 GNOME Software中 选择 _Updates_ 选项卡,你会看到一个页面通知你 Fedora 32 现在可用。 + +如果你在此页面看不到任何内容,请尝试使用左上方的重新加载按钮。发布后,所有系统可能都需要一段时间才能看到可用的升级。 + +选择 _Download_ 获取升级包。你可以继续做事直到下载完成。然后使用 GNOME Software 重启系统并应用升级。升级需要时间,因此你可能需要喝杯咖啡,稍后再回来。 + +### 使用命令行 + +如果你是从 Fedora 的先前版本升级的,那么你可能对 _dnf upgrade_ 插件很熟悉。此方法是从 Fedora 31 升级到 Fedora 32 的推荐和受支持的方法。使用此插件将使你轻松地升级到 Fedora 32。 + +#### 1\. 更新软件并备份系统 + +在开始升级过程之前,请确保你有 Fedora 31 的最新软件。如果你安装了模块化软件,这尤为重要。dnf 和 GNOME Software 的最新版本对某些模块化流的升级过程进行了改进。要更新软件,请使用_ GNOME Software_ 或在终端中输入以下命令。 + +``` +sudo dnf upgrade --refresh +``` + +此外,在继续操作之前,请确保备份系统。有关备份的帮助,请参阅 Fedora Magazine 上的[备份系列][4]。 + +#### 2\. 安装 DNF 插件 + +接下来,打开终端并输入以下命令安装插件: + +``` +sudo dnf install dnf-plugin-system-upgrade +``` + +#### 3\. 使用 DNF 开始更新 + +现在,你的系统已更新、已备份、并且已安装 DNF 插件,你可以在终端中使用以下命令开始升级: + +``` +sudo dnf system-upgrade download --releasever=32 +``` + +该命令将开始在本地下载计算机的所有升级,以准备升级。如果由于没有更新的软件包、损坏的依赖项或已淘汰的软件包而在升级时遇到问题,请在输入上述命令时添加 _‐allowerasing_ 标志。这将使 DNF 删除可能阻止系统升级的软件包。 + +#### 4\. 
重启并升级 + +当上一个命令完成了所有升级的下载,你的系统就可以重新启动了。要将系统引导至升级过程,请在终端中输入以下命令: + +``` +sudo dnf system-upgrade reboot +``` + +此后,系统将重启。在许多版本之前,_fedup_ 工具会在内核选择/引导页上创建一个新选项。使用 _dnf-plugin-system-upgrade_ 包,你的系统将重启进入 Fedora 31 当前安装的内核;这个是正常的。在选择内核之后,你的系统会立即开始升级过程。 + +现在可能是喝杯咖啡休息的好时机!完成后,系统将重启,你将能够登录到新升级的 Fedora 32 系统。 + +![][5] + +### 解决升级问题 + +有时,升级系统时可能会出现意外问题。如果你遇到任何问题,请访问 [DNF 系统升级文档][6],以获取有关故障排除的更多信息。 + +如果升级时遇到问题,并且系统上安装了第三方仓库,那么在升级时可能需要禁用这些仓库。对于 Fedora 不提供的仓库的支持,请联系仓库的提供者。 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/ + +作者:[Adam Šamalík][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/asamalik/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/31-32-816x345.png +[2]: https://fedoramagazine.org/announcing-fedora-32/ +[3]: https://fedoraproject.org/wiki/Common_F32_bugs +[4]: https://fedoramagazine.org/taking-smart-backups-duplicity/ +[5]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png +[6]: https://docs.fedoraproject.org/en-US/quick-docs/dnf-system-upgrade/#Resolving_post-upgrade_issues From 9f2a56309ae5a977e41ccd004f8a06c8108c6dc1 Mon Sep 17 00:00:00 2001 From: geekpi Date: Thu, 7 May 2020 08:48:05 +0800 Subject: [PATCH 164/178] translating --- sources/tech/20200428 What-s new in Fedora 32 Workstation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200428 What-s new in Fedora 32 Workstation.md b/sources/tech/20200428 What-s new in Fedora 32 Workstation.md index 402a8e63a7..7553d1bb99 100644 --- a/sources/tech/20200428 What-s new in Fedora 32 Workstation.md +++ b/sources/tech/20200428 What-s new in Fedora 32 Workstation.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 366f92cf3ad9cf9b17feb32c3c88a6a651c57713 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 7 May 2020 10:07:38 +0800 Subject: [PATCH 165/178] PRF @tinyeyeser --- ...o avoid man-in-the-middle cyber attacks.md | 52 +++++++++---------- 1 file changed, 24 insertions(+), 28 deletions(-) diff --git a/translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md b/translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md index 7ccce7c3bd..ca5b6b0aac 100644 --- a/translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md +++ b/translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md @@ -1,68 +1,64 @@ [#]: collector: "lujun9972" -[#]: translator: " " -[#]: reviewer: " " +[#]: translator: "tinyeyeser" +[#]: reviewer: "wxy" [#]: publisher: " " [#]: url: " " [#]: subject: "How to avoid man-in-the-middle cyber attacks" [#]: via: "https://opensource.com/article/20/4/mitm-attacks" [#]: author: "Jackie Lam https://opensource.com/users/beenverified" -如何避免中间人攻击 +如何避免中间人攻击(MITM) ====== -首先搞明白到底什么是中间人攻击,才能避免成为此类高科技窃听的受害者。 +> 首先搞明白到底什么是中间人攻击(MITM),才能避免成为此类高科技窃听的受害者。 -![Security monster][1] +![](https://img.linux.net.cn/data/attachment/album/202005/07/100655i7og1eqsw6o3ww81.jpg) 当你使用电脑发送数据或与某人在线通话的时候,你一定采取了某种程度的安全隐私手段。 但如果有第三方在你不知情的情况下窃听,甚至冒充某个你信任的商业伙伴窃取破坏性的信息呢?你的私人数据就这样被放在了危险分子的手中。 -这就是臭名昭著的中间人攻击。 +这就是臭名昭著的中间人攻击man-in-the-middle(MITM)。 ### 到底什么是中间人攻击? 
-黑客潜入到你与受害者或是某个设备间的通信过程中,窃取敏感信息——多数是身份信息——进而从事各种违法行为的过程,就是一次中间人攻击。Scamicide公司创始人Steve J. J. Weisman介绍说: +黑客潜入到你与受害者或是某个设备间的通信过程中,窃取敏感信息(多数是身份信息)进而从事各种违法行为的过程,就是一次中间人攻击。Scamicide 公司创始人 Steve J. J. Weisman 介绍说: -“中间人攻击也可以发生在受害者与某个合法app或网页中间。当受害者以为自己面对的是正常app或网页时,其实Ta 正在与一个仿冒的app或网页互动,将自己的敏感信息透露给不法分子。” +> “中间人攻击也可以发生在受害者与某个合法 app 或网页中间。当受害者以为自己面对的是正常 app 或网页时,其实 Ta 正在与一个仿冒的 app 或网页互动,将自己的敏感信息透露给了不法分子。” -中间人攻击诞生于1980年代,是最古老的网络攻击形式之一。但它却更为常见。Weisman解释道,发生中间人攻击的场景有很多种: +中间人攻击诞生于 1980 年代,是最古老的网络攻击形式之一。但它却更为常见。Weisman 解释道,发生中间人攻击的场景有很多种: - * **攻陷一个未有效加密的WiFi路由器**:该场景多见于人们使用公共WiFi的时候。“虽然家用路由器也很脆弱,但黑客攻击公共WiFi网络的情况更为常见。”Weisman说,“黑客的目标就是从毫无戒心的人们那里窃取在线银行账户这样的敏感信息。” - * **攻陷银行、金融顾问等机构的电子邮件账户**:“一旦黑客攻陷了这些电子邮件系统,他们就会冒充银行或此类公司给受害者发邮件”,Weisman说,”他们以紧急情况的名义索要个人信息,诸如用户名和密码。受害者很容易被诱骗交出这些信息。“ - * **发送钓鱼邮件**:窃贼们还可能冒充成与受害者有合作关系的公司,向其索要个人信息。”在多个案例中,钓鱼邮件会引导受害者访问一个伪造的网页,这个伪造的网页看起来就和受害者常常访问的合法公司网页一模一样。“Weisman说道。 - * **在合法网页中嵌入恶意代码**:攻击者还会把恶意代码——通常是JavaScript——嵌入到一个合法的网页中。”当受害者加载这个合法网页时,恶意代码首先按兵不动,直到用户输入账户登录或是信用卡信息时,恶意代码就会复制这些信息并将其发送至攻击者的服务器。“网络安全专家Nicholas McBride介绍说。 + * **攻陷一个未有效加密的 WiFi 路由器**:该场景多见于人们使用公共 WiFi 的时候。“虽然家用路由器也很脆弱,但黑客攻击公共 WiFi 网络的情况更为常见。”Weisman 说,“黑客的目标就是从毫无戒心的人们那里窃取在线银行账户这样的敏感信息。” + * **攻陷银行、金融顾问等机构的电子邮件账户**:“一旦黑客攻陷了这些电子邮件系统,他们就会冒充银行或此类公司给受害者发邮件”,Weisman 说,”他们以紧急情况的名义索要个人信息,诸如用户名和密码。受害者很容易被诱骗交出这些信息。” + * **发送钓鱼邮件**:窃贼们还可能冒充成与受害者有合作关系的公司,向其索要个人信息。“在多个案例中,钓鱼邮件会引导受害者访问一个伪造的网页,这个伪造的网页看起来就和受害者常常访问的合法公司网页一模一样。”Weisman 说道。 + * **在合法网页中嵌入恶意代码**:攻击者还会把恶意代码(通常是 JavaScript)嵌入到一个合法的网页中。“当受害者加载这个合法网页时,恶意代码首先按兵不动,直到用户输入账户登录或是信用卡信息时,恶意代码就会复制这些信息并将其发送至攻击者的服务器。”网络安全专家 Nicholas McBride 介绍说。 ### 有哪些中间人攻击的著名案例? -联想作为主流的计算机制造厂商,在2014到2015年售卖的消费级笔记本电脑中预装了一款叫做 VisualDiscovery 的软件,拦截用户的网页浏览行为。当用户的鼠标在某个产品页面经过时,这款软件就会弹出一个来自合作伙伴的类似产品的广告。 +联想作为主流的计算机制造厂商,在 2014 到 2015 年售卖的消费级笔记本电脑中预装了一款叫做 VisualDiscovery 的软件,拦截用户的网页浏览行为。当用户的鼠标在某个产品页面经过时,这款软件就会弹出一个来自合作伙伴的类似产品的广告。 -这起中间人攻击事件的关键在于:VisualDiscovery 拥有访问用户所有私人数据的权限,包括身份证号、金融交易信息、医疗信息、登录名和密码等等。所有这些访问行为都是在用户不知情和未获得授权的情况下进行的。联邦交易委员会(FTC)认定此次事件为欺诈与不公平竞争。2019年,联想同意为此支付8300万美元的集体诉讼罚款。 +这起中间人攻击事件的关键在于:VisualDiscovery 拥有访问用户所有私人数据的权限,包括身份证号、金融交易信息、医疗信息、登录名和密码等等。所有这些访问行为都是在用户不知情和未获得授权的情况下进行的。联邦交易委员会(FTC)认定此次事件为欺诈与不公平竞争。2019 年,联想同意为此支付 8300 万美元的集体诉讼罚款。 ### 我如何才能避免遭受中间人攻击? 
- * **避免使用公共WiFi:**Weisman建议,从来都不要使用公开的WiFi进行金融交易,除非你安装了可靠的VPN客户端并连接至可信任的VPN服务器。通过VPN连接,你的通信是加密的,信息也就不会失窃。 + * **避免使用公共 WiFi**:Weisman 建议,从来都不要使用公开的 WiFi 进行金融交易,除非你安装了可靠的 VPN 客户端并连接至可信任的 VPN 服务器。通过 VPN 连接,你的通信是加密的,信息也就不会失窃。 * **时刻注意:**对要求你更新密码或是提供用户名等私人信息的邮件或文本消息要时刻保持警惕。这些手段很可能被用来窃取你的身份信息。 -如果不确定收到的邮件来自于确切哪一方,你可以使用诸如电话反查或是邮件反查等工具。通过电话反查,你可以找出未知发件人的更多身份信息。通过邮件反查,你可以尝试确定谁给你发来了这条消息。 + 如果不确定收到的邮件来自于确切哪一方,你可以使用诸如电话反查或是邮件反查等工具。通过电话反查,你可以找出未知发件人的更多身份信息。通过邮件反查,你可以尝试确定谁给你发来了这条消息。 -通常来讲,如果发现某些方面确实有问题,你可以听从公司中某个你认识或是信任的人的意见。或者,你也可以去你的银行、学校或其他某个组织,当面寻求他们的帮助。总之,重要的账户信息绝对不要透露给不认识的“技术人员”。 + 通常来讲,如果发现某些方面确实有问题,你可以听从公司中某个你认识或是信任的人的意见。或者,你也可以去你的银行、学校或其他某个组织,当面寻求他们的帮助。总之,重要的账户信息绝对不要透露给不认识的“技术人员”。 - * **不要点击邮件中的链接:**如果有人给你发了一封邮件,说你需要登录某个账户,不要点击邮件中的链接。相反,要通过平常习惯的方式自行去访问,并留意是否有告警信息。如果在账户设置中没有看到告警信息,给客服打电话的时候也_不要_联系邮件中留的电话,而是站点页面中的联系人信息。 - * **安装可靠的安全软件:**如果你使用的是Windows操作系统,安装开源的杀毒软件,如[ClamAV][2]。如果使用的是其他平台,要保持你的软件安装有最新的安全补丁。 - * **认真对待告警信息:**如果你正在访问的页面以HTTPS开头,浏览器可能会出现一则告警信息。例如,站点证书的域名与你尝试访问的站点域名不相匹配。千万不要忽视此类告警信息。听从告警建议,迅速关掉页面。确认域名没有输入错误的情况下,如果情况依旧,要立刻联系站点所有者。 - * **使用广告屏蔽软件:**弹窗广告(也叫广告软件攻击)可被用于窃取个人信息,因此你还可以使用广告屏蔽类软件。对个人用户来说,中间人攻击其实是很难防范的,因为它被设计出来的时候,就是为了让受害者始终蒙在鼓里,意识不到任何异常。有一款不错的开源广告屏蔽软件叫 [uBlock origin][4]。可以同时支持Firefox和Chromium(以及所有基于Chromium的浏览器,例如Chrome、Brave、Vivaldi、Edge等),甚至还支持Safari。 + * **不要点击邮件中的链接**:如果有人给你发了一封邮件,说你需要登录某个账户,不要点击邮件中的链接。相反,要通过平常习惯的方式自行去访问,并留意是否有告警信息。如果在账户设置中没有看到告警信息,给客服打电话的时候也*不要*联系邮件中留的电话,而是联系站点页面中的联系人信息。 + * **安装可靠的安全软件**:如果你使用的是 Windows 操作系统,安装开源的杀毒软件,如 [ClamAV][2]。如果使用的是其他平台,要保持你的软件安装有最新的安全补丁。 + * **认真对待告警信息**:如果你正在访问的页面以 HTTPS 开头,浏览器可能会出现一则告警信息。例如,站点证书的域名与你尝试访问的站点域名不相匹配。千万不要忽视此类告警信息。听从告警建议,迅速关掉页面。确认域名没有输入错误的情况下,如果情况依旧,要立刻联系站点所有者。 + * **使用广告屏蔽软件**:弹窗广告(也叫广告软件攻击)可被用于窃取个人信息,因此你还可以使用广告屏蔽类软件。对个人用户来说,中间人攻击其实是很难防范的,因为它被设计出来的时候,就是为了让受害者始终蒙在鼓里,意识不到任何异常。有一款不错的开源广告屏蔽软件叫 [uBlock origin][4]。可以同时支持 Firefox 和 Chromium(以及所有基于 Chromium 的浏览器,例如 Chrome、Brave、Vivaldi、Edge 等),甚至还支持 Safari。 ### 保持警惕 -要时刻记住,你并不需要立刻就点击某些链接,你也并不需要跟随某个陌生人的建议,无论这些信息看起来有多么紧急。互联网始终都在。你大可以先离开电脑,去证实一下这些人的真实身份,看看这些”无比紧急“的页面到底是真是假。 +要时刻记住,你并不需要立刻就点击某些链接,你也并不需要听从某个陌生人的建议,无论这些信息看起来有多么紧急。互联网始终都在。你大可以先离开电脑,去证实一下这些人的真实身份,看看这些“无比紧急”的页面到底是真是假。 尽管任何人都可能遭遇中间人攻击,只要弄明白何为中间人攻击,理解中间人攻击如何发生,并采取有效的防范措施,就可以保护自己避免成为其受害者。 -* * * - -_This article was originally published on [BeenVerified.com][5] under a [CC BY-SA 2.0][6] license._ - -------------------------------------------------------------------------------- via: https://opensource.com/article/20/4/mitm-attacks @@ -70,7 +66,7 @@ via: https://opensource.com/article/20/4/mitm-attacks 作者:[Jackie Lam][a] 选题:[lujun9972][b] 译者:[tinyeyeser](https://github.com/tinyeyeser) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 From b525cbdf895c58609a8996f05a198f01ff4196e3 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 7 May 2020 10:08:10 +0800 Subject: [PATCH 166/178] PUB @tinyeyeser https://linux.cn/article-12191-1.html --- .../20200407 How to avoid man-in-the-middle cyber attacks.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200407 How to avoid man-in-the-middle cyber attacks.md (99%) diff --git a/translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md b/published/20200407 How to avoid man-in-the-middle cyber attacks.md similarity index 99% rename from translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md rename to published/20200407 How to avoid man-in-the-middle cyber attacks.md index 
ca5b6b0aac..8ae4bb8a6d 100644 --- a/translated/tech/20200407 How to avoid man-in-the-middle cyber attacks.md +++ b/published/20200407 How to avoid man-in-the-middle cyber attacks.md @@ -1,8 +1,8 @@ [#]: collector: "lujun9972" [#]: translator: "tinyeyeser" [#]: reviewer: "wxy" -[#]: publisher: " " -[#]: url: " " +[#]: publisher: "wxy" +[#]: url: "https://linux.cn/article-12191-1.html" [#]: subject: "How to avoid man-in-the-middle cyber attacks" [#]: via: "https://opensource.com/article/20/4/mitm-attacks" [#]: author: "Jackie Lam https://opensource.com/users/beenverified" From c9bea52ee1574bc76d95802cbd33a555c5aeb81c Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 7 May 2020 14:40:15 +0800 Subject: [PATCH 167/178] PRF @lxbwolf --- ...nd JPG for your online images- Use WebP.md | 83 ++++++++++--------- 1 file changed, 42 insertions(+), 41 deletions(-) diff --git a/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md b/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md index 690f4c6c39..0450e63f23 100644 --- a/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md +++ b/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md @@ -1,28 +1,30 @@ [#]: collector: (lujun9972) [#]: translator: (lxbwolf) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Drop PNG and JPG for your online images: Use WebP) [#]: via: (https://opensource.com/article/20/4/webp-image-compression) [#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas) -线上图片请抛弃 PNG 和 JPG 后缀:使用 WebP +线上图片请抛弃 PNG 和 JPG:使用 WebP ====== -了解一下这个开源的图片编辑工具来节省时间和空间。 -![Painting art on a computer screen][1] -WebP 是 2010 年 Google 开发的一种图片格式,能提供对网络图片的无损压缩和有损压缩。网络开发者们可以使用 WebP 来创建更小更丰富的图片,以此来提高网站的速度。更快的加载速度对于网站的用户体验和网站市场的效能是至关重要的。 +> 了解一下这个开源的图片编辑工具来节省时间和空间。 -为了提供领先于所有的设备和用户的图片加载能力,你网站上的图片文件大小不应该超过 500 KB。 +![](https://img.linux.net.cn/data/attachment/album/202005/07/143932l22hot7ebhbbqjmm.jpg) -WebP 无损图片通常比 PNG 图片文件小至少 25%。在相同的 SSIM(structural similarity,结构相似性)质量指标下,WebP 有损图片通常比 JPEG 图片小 25% 到 34%。 +WebP 是 2010 年 Google 开发的一种图片格式,它为网页上的图片提供了卓越的无损和有损压缩。网站开发者们可以使用 WebP 来创建尺寸更小、细节更丰富的图片,以此来提高网站的速度。更快的加载速度对于网站的用户体验和网站的营销效果是至关重要的。 -无损 WebP 也支持透明度。在有损 RGB 压缩可接受的情况下,有损 WebP 也支持透明度,PNG 文件的大小通常为 WebP 文件大小的四倍。 +为了在所有设备和用户中达到最佳加载效果,你网站上的图片文件大小不应该超过 500 KB。 -Google 报告把动图 GIF 文件转换为有损 WebP后文件大小减少了 64%,转换为无损 WebP 后文件大小减少了 19%。 +与 PNG 图片相比,WebP 无损图片通常至少要比 PNG 图片小 25%。在同等的 SSIM(结构相似度structural similarity)质量指标下,WebP 有损图片通常比 JPEG 图片小 25% 到 34%。 -WebP 文件格式是一种基于 RIFF(resource interchange file format,资源交换文件格式)的文档格式。你可以用 [hexdump][2] 看到文件的签名是 **52 49 46 46** (RIFF): +无损 WebP 也支持透明度。而在可接受有损 RGB 压缩的情况下,有损 WebP 也支持透明度,通常其大小比 PNG 文件小三倍。 + +Google 报告称,把动画 GIF 文件转换为有损 WebP 后文件大小减少了 64%,转换为无损 WebP 后文件大小减少了 19%。 + +WebP 文件格式是一种基于 RIFF(资源互换文件格式resource interchange file format)的文档格式。你可以用 [hexdump][2] 看到文件的签名是 `52 49 46 46`(RIFF): ``` @@ -33,31 +35,31 @@ $ hexdump --canonical pixel.webp 0000002e ``` -独立的 libwebp 库以 WebP 技术规范的引用实现的方式提供服务,可以从 Google 的 [Git 仓库][3] 或下载 tar 包获得。 +独立的 libwebp 库作为 WebP 技术规范的参考实现,可以从 Google 的 [Git 仓库][3] 或 tar 包中获得。 -全球在用的 80% 的 web 浏览器兼容 WebP 格式。本文撰写时,Apple 的 Safari 浏览器还不兼容。对于不兼容的情况,应变方法是在 WebP 图片旁边准备一张 JPG/PNG 图片,我们有很多种方法和 Wordpress 插件来实现。 +全球在用的 80% 的 web 浏览器兼容 WebP 格式。本文撰写时,Apple 的 Safari 浏览器还不兼容。解决这个问题的方法是将 JPG/PNG 图片与 WebP 图片一起提供,有一些方法和 Wordpress 插件可以做到这一点。 -### 为什么这很重要? +### 为什么要这样做? 
-我的部分工作是设计和维护我们组织的网站。由于网站是个市场工具并且网站速度是衡量用户体验的重要指标,我一直致力于提高网站速度,通过把图片转换为 WebP 来减少图片大小是一个有效的解决方案。 +我的部分工作是设计和维护我们组织的网站。由于网站是个营销工具,而网站的速度是衡量用户体验的重要指标,我一直致力于提高网站速度,通过把图片转换为 WebP 来减少图片大小是一个很好的解决方案。 -我使用了 **web.dev** 来检测其中一个网页,该工具是由 Lighthouse 提供服务的,遵循 Apache 2.0 证书,可以在 找到。 +我使用了 web.dev 来检测其中一个网页,该工具是由 Lighthouse 提供服务的,遵循 Apache 2.0 许可证,可以在 找到。 -根据官方描述,”LIghthouse 是一个开源的,旨在提升网页质量的自动化工具。你可以在任何网页上运行它 — 公共的或需要鉴权的。它有性能、可用性、积极的 web 应用、SEO和其他项目的审计。你可以使用命令行、作为一个 Node 模块或在 Chrome DevTools 里运行 Lighthouse。你输入一个 URL 给 Lighthouse,它对这个网页运行一系列的审计规则,之后生成这个网页的审计结果报告。从报告的失败审计条目中可以知道应该怎么优化网页。每条审计都有对应的文档解释为什么该项目是重要的,以及如何修复它。“ +据其官方描述,“LIghthouse 是一个开源的,旨在提升网页质量的自动化工具。你可以在任何公共的或需要鉴权的网页上运行它。它有性能、可用性、渐进式 web 应用、SEO 等方面的审计。你可以在 Chrome 浏览器的开发工具中运行 Lighthouse,也可以通过命令行或作为 Node 模块运行。你输入一个 URL 给 Lighthouse,它会对这个网页进行一系列的审计,然后生成这个网页的审计结果报告。从报告的失败审计条目中可以知道应该怎么优化网页。每条审计都有对应的文档解释为什么该项目是重要的,以及如何修复它。” ### 创建更小的 WebP 图片 -我测试的页面返回了三张图片。在它生成的报告中,它提供了推荐和目标格式。我选择了它报告有 650 KB 的 ”app-graphic“ 图片。通过把它转换为 WebP 格式,预计可以图片大小降到 61 KB,节省 589 KB。我在 Photoshop 中把它转换了,用默认的 WebP 设置参数保存它,它的文件大小为 44.9 KB。比预期的还要好!从下面的 Photoshop 截图中可以看出,两张图在视觉上完全一样。 +我测试的页面返回了三张图片。在它生成的报告中,它提供了推荐和目标。我选择了它报告有 650 KB 的 `app-graphic` 图片。通过把它转换为 WebP 格式,预计可以把图片大小降到 61 KB,节省 589 KB。我在 Photoshop 中把它转换了,用默认的 WebP 设置参数保存它,它的文件大小为 44.9 KB。比预期的还要好!从下面的 Photoshop 截图中可以看出,两张图在视觉质量上完全一样。 ![WebP vs JPG comparison][4] -左图:650 KB(实际大小)。右图: 589 KB(转换之后的目标大小)。 +*左图:650 KB(实际大小)。右图: 44.9 KB(转换之后的目标大小)。* 当然,也可以用开源图片编辑工具 [GIMP][5] 把图片导出为 WebP。它提供了几个质量和压缩的参数: ![GIMP dialog for exporting webp, as a webp][6] -另一张图拉近视野后: +另一张图放大后: ![WebP vs PNG comparison][7] @@ -67,55 +69,54 @@ PNG(左图)和 WebP(右图),都是从 JPG 转换而来,两图对比 你也可以用 Linux 的命令行工具把图片从 JPG/PNG 转换为 WebP: -在命令行使用 **cwebp** 把 PNG 或 JPG 图片文件转换为 WebP 格式。你可以用下面的命令把 PNG 图片文件转换为质量参数为 80 的 WebP 图片。 - +在命令行使用 `cwebp` 把 PNG 或 JPG 图片文件转换为 WebP 格式。你可以用下面的命令把 PNG 图片文件转换为质量参数为 80 的 WebP 图片。 ``` -`cwebp -q 80 image.png -o image.webp` +cwebp -q 80 image.png -o image.webp ``` -你还可以用 [Image Magick][8],这个工具可能在你的发行版本软件仓库中可以找到。转换的子命令是 **convert**,它需要的所有参数就是输入和输出文件: - +你还可以用 [Image Magick][8],这个工具可能在你的发行版本软件仓库中可以找到。转换的子命令是 `convert`,它需要的所有参数就是输入和输出文件: ``` -`convert pixel.png pixel.webp` +convert pixel.png pixel.webp ``` ### 使用编辑器把图片转换为 WebP -使用 [GIMP][9] 图片编辑器来把图片转换为 WebP。从 2.10 版本开始,它原生地支持 WebP。 +要在图片编辑器中来把图片转换为 WebP,可以使用 [GIMP][9]。从 2.10 版本开始,它原生地支持 WebP。 -如果你是 Photoshop 用户,由于 Photoshop 默认不包含 WebP,因此你需要一个转换插件。遵循 Apache License 2.0 证书发行的 WebPShop 0.2.1 是一个用户打开和保存包括动图在内的 WebP 图片的 Photoshop 模块,在 可以找到。 +如果你是 Photoshop 用户,由于 Photoshop 默认不包含 WebP 支持,因此你需要一个转换插件。遵循 Apache License 2.0 许可证发布的 WebPShop 0.2.1 是一个用于打开和保存包括动画图在内的 WebP 图片的 Photoshop 模块,在 可以找到。 -为了能正常使用它,你需要把它放进 Photoshop 插件目录下的 **bin** 文件夹: +为了能正常使用它,你需要把它放进 Photoshop 插件目录下的 `bin` 文件夹: -Windows x64—C:\Program Files\Adobe\Adobe Photoshop\Plug-ins\WebPShop.8bi +Windows x64 :`C:\Program Files\Adobe\Adobe Photoshop\Plug-ins\WebPShop.8bi` -Mac—Applications/Adobe Photoshop/Plug-ins/WebPShop.plugin +Mac:`Applications/Adobe Photoshop/Plug-ins/WebPShop.plugin` ### Wordpress 上的 WebP 很多网站是用 Wordpress 搭建的(我的网站就是)。因此,Wordpress 怎么上传 WebP 图片?本文撰写时,它还不支持。但是,当然已经有插件来满足这种需求,因此你可以在你的网站上同时准备 WebP 和 PNG/JPG 图片(为 Apple 用户)。 -在 [Marius Hosting][11] 有下面的指示: +在 [Marius Hosting][11] 有下面的[说明][10]: -”直接向 Wordpress 上传 WebP 图片会怎样?这很简单。向你的主题 functions.php 文件添加几行内容就可以了。Wordpress 默认不支持展示和上传 WebP 文件,但是我会向你陈述怎么通过几个简单的步骤来让它支持。登录进你的 Wordpress 管理员界面,进入 Appearance/Theme Editor 找到 functions.php。拷贝下面的代码粘贴到文件最后并保存。 +> “直接向 Wordpress 上传 WebP 图片会怎样?这很简单。向你的主题 `functions.php` 文件添加几行内容就可以了。Wordpress 默认不支持展示和上传 WebP 文件,但是我会向你介绍一下怎么通过几个简单的步骤来让它支持。登录进你的 
Wordpress 管理员界面,进入‘外观/主题编辑器’找到 `functions.php`。复制下面的代码粘贴到该文件最后并保存: - -``` -`//** *Enable upload for webp image files.*/ function webp_upload_mimes($existing_mimes) { $existing_mimes['webp'] = 'image/webp'; return $existing_mimes; } add_filter('mime_types', 'webp_upload_mimes');` +> ``` +//** *Enable upload for webp image files.*/ function webp_upload_mimes($existing_mimes) { $existing_mimes['webp'] = 'image/webp'; return $existing_mimes; } add_filter('mime_types', 'webp_upload_mimes'); ``` -"如果你想在 Media/Library 看缩略图预览,那么你需要把下面的代码也添加到 functions.php 文件。为了找到 functions.php 文件,进入 Appearance/Theme Editor 并搜索 functions.php,然后拷贝下面的代码粘贴到文件最后并保存。“ +> 如果你想在‘媒体/媒体库’时看到缩略图预览,那么你需要把下面的代码也添加到 `functions.php` 文件。为了找到 `functions.php` 文件,进入‘外观/主题编辑器’并搜索 `functions.php`,然后复制下面的代码粘贴到文件最后并保存: +> ``` +//** * Enable preview / thumbnail for webp image files.*/ function webp_is_displayable($result, $path) { if ($result === false) { $displayable_image_types = array( IMAGETYPE_WEBP ); $info = @getimagesize( $path ); if (empty($info)) { $result = false; } elseif (!in_array($info[2], $displayable_image_types)) { $result = false; } else { $result = true; } } return $result; } add_filter('file_is_displayable_image', 'webp_is_displayable', 10, 2); ``` -`//** * Enable preview / thumbnail for webp image files.*/ function webp_is_displayable($result, $path) { if ($result === false) { $displayable_image_types = array( IMAGETYPE_WEBP ); $info = @getimagesize( $path ); if (empty($info)) { $result = false; } elseif (!in_array($info[2], $displayable_image_types)) { $result = false; } else { $result = true; } } return $result; } add_filter('file_is_displayable_image', 'webp_is_displayable', 10, 2);` -``` + +> ” ### WebP 和未来 -WebP 是鲁棒的和最优的格式。它看起来更好,有更好的压缩率,它拥有其他大部分常见图片格式的所有特性。不必再等了,现在就使用它把。 +WebP 是一个健壮而优化的格式。它看起来更好,压缩率更高,并具有其他大部分常见图片格式的所有特性。不必再等了,现在就使用它吧。 -------------------------------------------------------------------------------- @@ -124,7 +125,7 @@ via: https://opensource.com/article/20/4/webp-image-compression 作者:[Jeff Macharyas][a] 选题:[lujun9972][b] 译者:[lxbwolf](https://github.com/lxbwolf) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -135,7 +136,7 @@ via: https://opensource.com/article/20/4/webp-image-compression [3]: https://storage.googleapis.com/downloads.webmproject.org/releases/webp/index.html [4]: https://opensource.com/sites/default/files/uploads/webp-vs-jpg-app-graphic.png (WebP vs JPG comparison) [5]: http://gimp.org -[6]: https://opensource.com/sites/default/files/webp-gimp.webp (GIMP dialog for exporting webp, as a webp) +[6]: https://img.linux.net.cn/data/attachment/album/202005/07/143538plu797s4wmhy9b1p.jpg (GIMP dialog for exporting webp, as a webp) [7]: https://opensource.com/sites/default/files/uploads/xcompare-png-left-webp-right.png (WebP vs PNG comparison) [8]: https://imagemagick.org [9]: https://en.wikipedia.org/wiki/GIMP From 20a7f216748da621361b1eb9144ac850c208df5f Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 7 May 2020 14:52:53 +0800 Subject: [PATCH 168/178] PRF --- ...nd JPG for your online images- Use WebP.md | 35 +++++++++++++++---- 1 file changed, 28 insertions(+), 7 deletions(-) diff --git a/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md b/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md index 0450e63f23..7cf2cdd803 100644 --- a/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md 
+++ b/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md @@ -99,20 +99,41 @@ Mac:`Applications/Adobe Photoshop/Plug-ins/WebPShop.plugin` 在 [Marius Hosting][11] 有下面的[说明][10]: -> “直接向 Wordpress 上传 WebP 图片会怎样?这很简单。向你的主题 `functions.php` 文件添加几行内容就可以了。Wordpress 默认不支持展示和上传 WebP 文件,但是我会向你介绍一下怎么通过几个简单的步骤来让它支持。登录进你的 Wordpress 管理员界面,进入‘外观/主题编辑器’找到 `functions.php`。复制下面的代码粘贴到该文件最后并保存: +“直接向 Wordpress 上传 WebP 图片会怎样?这很简单。向你的主题 `functions.php` 文件添加几行内容就可以了。Wordpress 默认不支持展示和上传 WebP 文件,但是我会向你介绍一下怎么通过几个简单的步骤来让它支持。登录进你的 Wordpress 管理员界面,进入‘外观/主题编辑器’找到 `functions.php`。复制下面的代码粘贴到该文件最后并保存: -> ``` -//** *Enable upload for webp image files.*/ function webp_upload_mimes($existing_mimes) { $existing_mimes['webp'] = 'image/webp'; return $existing_mimes; } add_filter('mime_types', 'webp_upload_mimes'); +``` +//** *Enable upload for webp image files.*/ +function webp_upload_mimes($existing_mimes) { + $existing_mimes['webp'] = 'image/webp'; + return $existing_mimes; +} +add_filter('mime_types', 'webp_upload_mimes'); ``` -> 如果你想在‘媒体/媒体库’时看到缩略图预览,那么你需要把下面的代码也添加到 `functions.php` 文件。为了找到 `functions.php` 文件,进入‘外观/主题编辑器’并搜索 `functions.php`,然后复制下面的代码粘贴到文件最后并保存: +如果你想在‘媒体/媒体库’时看到缩略图预览,那么你需要把下面的代码也添加到 `functions.php` 文件。为了找到 `functions.php` 文件,进入‘外观/主题编辑器’并搜索 `functions.php`,然后复制下面的代码粘贴到文件最后并保存: +``` +//** * Enable preview / thumbnail for webp image files.*/ +function webp_is_displayable($result, $path) { + if ($result === false) { + $displayable_image_types = array( IMAGETYPE_WEBP ); + $info = @getimagesize( $path ); -> ``` -//** * Enable preview / thumbnail for webp image files.*/ function webp_is_displayable($result, $path) { if ($result === false) { $displayable_image_types = array( IMAGETYPE_WEBP ); $info = @getimagesize( $path ); if (empty($info)) { $result = false; } elseif (!in_array($info[2], $displayable_image_types)) { $result = false; } else { $result = true; } } return $result; } add_filter('file_is_displayable_image', 'webp_is_displayable', 10, 2); + if (empty($info)) { + $result = false; + } elseif (!in_array($info[2], $displayable_image_types)) { + $result = false; + } else { + $result = true; + } + } + + return $result; +} +add_filter('file_is_displayable_image', 'webp_is_displayable', 10, 2); ``` -> ” +” ### WebP 和未来 From 1baf20a87c0e4d7c0534a572361511c4280ef2bd Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Thu, 7 May 2020 14:55:16 +0800 Subject: [PATCH 169/178] PUB @lxbwolf https://linux.cn/article-12193-1.html --- ...00429 Drop PNG and JPG for your online images- Use WebP.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200429 Drop PNG and JPG for your online images- Use WebP.md (99%) diff --git a/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md b/published/20200429 Drop PNG and JPG for your online images- Use WebP.md similarity index 99% rename from translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md rename to published/20200429 Drop PNG and JPG for your online images- Use WebP.md index 7cf2cdd803..7f581e778f 100644 --- a/translated/tech/20200429 Drop PNG and JPG for your online images- Use WebP.md +++ b/published/20200429 Drop PNG and JPG for your online images- Use WebP.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (lxbwolf) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12193-1.html) [#]: subject: (Drop PNG and JPG for your online images: Use WebP) [#]: via: 
(https://opensource.com/article/20/4/webp-image-compression) [#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas) From a537b46c0f00cdc990c4ebb716e1511d4060ae09 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Fri, 8 May 2020 00:57:13 +0800 Subject: [PATCH 170/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200508=20Good?= =?UTF-8?q?=20News!=20You=20Can=20Now=20Buy=20the=20De-Googled=20/e/OS=20S?= =?UTF-8?q?martphone=20from=20Fairphone?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md --- ...Googled -e-OS Smartphone from Fairphone.md | 116 ++++++++++++++++++ 1 file changed, 116 insertions(+) create mode 100644 sources/tech/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md diff --git a/sources/tech/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md b/sources/tech/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md new file mode 100644 index 0000000000..19d0bb67c2 --- /dev/null +++ b/sources/tech/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md @@ -0,0 +1,116 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Good News! You Can Now Buy the De-Googled /e/OS Smartphone from Fairphone) +[#]: via: (https://itsfoss.com/fairphone-with-e-os/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Good News! You Can Now Buy the De-Googled /e/OS Smartphone from Fairphone +====== + +Fairphone is known for its ethical (or fair) approach of making a smartphone. + +Normally, the ethical approach involves that the workers get paid well, the smartphone build materials are safer for the planet, and the phone is durable/sustainable. And, they’ve already done a good job with their [Fairphone 1][1] , [Fairphone 2][2], and [Fairphone 3][3] smartphones. + +Now, to take things up a notch, Fairphone has teamed up with [/e/OS][4] which is a de-googled Android fork, to launch a separate edition of [Fairphone 3][3] (its latest smartphone) that comes with **/e/OS** out of the box. + +In case you didn’t know about the mobile operating system, you can read our [interview with Gael Duval (Founder of /e/OS)][5] to know more about it. + +While we already have some privacy-focused smartphones like [Librem 5][6], Fairphone 3 with /e/OS is something different to its core. In this article, I’ll try highlighting the key things that you need to know before ordering a Fairphone 3 with /e/OS loaded. + +### The First Privacy Conscious & Sustainable Phone + +You may have noticed a privacy-focused smartphone manufactured in some corner of the world, like [Librem 5][7]. + +But for the most part, it looks like the Fairphone 3 is the first privacy-conscious sustainable phone to get the spotlight. + +![][8] + +The de-googled operating system /e/OS ensures that the smartphone does not rely on Google services to function among other things. Hence, /e/OS should be a great choice for Fairphone 3 for privacy-focused users. + +Also, to support /e/OS out of the box wasn’t just the decision of the manufacturer – but its community. 
+
+In their announcement, they mention:
+
+> For many, fairer technology isn’t just about the device and its components, it is also about the software that powers the product; and when Fairphone community members were asked what their preferred alternative operating system (OS) was for the next Fairphone, the Fairphone 3, they voted for /e/OS.
+
+So, it looks like users do prefer to have /e/OS on their smartphones.
+
+### Fairphone 3: Overview
+
+![][9]
+
+Before I tell you what I think about it, let me first share the technical specifications of the phone:
+
+  * Dual Nano-SIM (4G LTE/3G/2G support)
+  * **Display:** 5.65-inch LCD (IPS) with Corning Gorilla Glass 5 protection
+  * **Screen Resolution**: 2160 x 1080
+  * **RAM:** 4 GB
+  * **Chipset**: Qualcomm Snapdragon 632
+  * **Internal Storage:** 64 GB
+  * **Rear Camera:** 12 MP (IMX363 sensor)
+  * **Front Camera:** 8 MP
+  * Bluetooth 5.0
+  * WiFi 802.11a/b/g/n/ac
+  * NFC Supported
+  * USB-C
+  * Expandable Storage supported
+
+
+
+So, on paper, it sounds like a decent budget smartphone. But the pricing and availability will be an important factor, keeping in mind that it’s a one-of-a-kind smartphone and we don’t really have direct alternatives to compare it to.
+
+It is not just unique for privacy-focused users; it is also potentially the easiest phone to fix (as suggested by [iFixit’s teardown][10]).
+
+### Fairphone 3 with /e/OS: Pre-Order, Price & Availability
+
+![][11]
+
+As for its availability – the Fairphone 3 with /e/OS is available to pre-order through the [online shop of /e/OS][12] for **€479.90** across Europe. 
+
+If you are an existing Fairphone 3 user, you can also install /e/OS from the [available build here][13].
+
+You get a two-year warranty along with a 14-day return policy.
+
+[Pre-Order Fairphone 3 With /e/OS][12]
+
+### My Thoughts On Fairphone 3 with /e/OS
+
+It’s important to consider that this smartphone is targeting a particular group of consumers, so it’s quite obvious that it isn’t meant for everyone. The specifications on paper may look good – but not necessarily the best bang for the buck.
+
+Also, looking at the smartphone market right now – the specifications and the value for money matter more to most buyers than what we privacy-focused users want.
+
+But it’s definitely something impressive, and I believe it’s going to get good attention especially among privacy-aware people who don’t want their smartphone spying on them.
+
+With Fairphone 3’s launch with /e/OS, less tech-savvy people can now get an out-of-the-box privacy-focused smartphone experience.
+
+What do you think about the Fairphone 3 with /e/OS? Let me know your thoughts in the comments below.
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/fairphone-with-e-os/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Fairphone_1 +[2]: https://en.wikipedia.org/wiki/Fairphone_2 +[3]: https://shop.fairphone.com/en/?ref=header +[4]: https://e.foundation/ +[5]: https://itsfoss.com/gael-duval-interview/ +[6]: https://itsfoss.com/librem-5-available/ +[7]: https://itsfoss.com/librem-linux-phone/ +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/Fairphone-3-battery.png?ssl=1 +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2020/05/fairphone-3.png?ssl=1 +[10]: https://www.ifixit.com/Teardown/Fairphone+3+Teardown/125573 +[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2020/05/fairphone-e-os.png?ssl=1 +[12]: https://e.foundation/product/e-os-fairphone-3/ +[13]: https://doc.e.foundation/devices/FP3/ From e7bb7616b6e076d6699bc1c91f6c0ebaf4995699 Mon Sep 17 00:00:00 2001 From: DarkSun Date: Fri, 8 May 2020 00:57:46 +0800 Subject: [PATCH 171/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200508=20Fixing?= =?UTF-8?q?=20=E2=80=9CUnable=20to=20parse=20package=20file=20/var/lib/apt?= =?UTF-8?q?/lists=E2=80=9D=20Error=20in=20Ubuntu=20and=20Other=20Linux=20D?= =?UTF-8?q?istributions?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200508 Fixing -Unable to parse package file -var-lib-apt-lists- Error in Ubuntu and Other Linux Distributions.md --- ...in Ubuntu and Other Linux Distributions.md | 112 ++++++++++++++++++ 1 file changed, 112 insertions(+) create mode 100644 sources/tech/20200508 Fixing -Unable to parse package file -var-lib-apt-lists- Error in Ubuntu and Other Linux Distributions.md diff --git a/sources/tech/20200508 Fixing -Unable to parse package file -var-lib-apt-lists- Error in Ubuntu and Other Linux Distributions.md b/sources/tech/20200508 Fixing -Unable to parse package file -var-lib-apt-lists- Error in Ubuntu and Other Linux Distributions.md new file mode 100644 index 0000000000..6ef4f2b55a --- /dev/null +++ b/sources/tech/20200508 Fixing -Unable to parse package file -var-lib-apt-lists- Error in Ubuntu and Other Linux Distributions.md @@ -0,0 +1,112 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Fixing “Unable to parse package file /var/lib/apt/lists” Error in Ubuntu and Other Linux Distributions) +[#]: via: (https://itsfoss.com/unable-to-parse-package-file/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Fixing “Unable to parse package file /var/lib/apt/lists” Error in Ubuntu and Other Linux Distributions +====== + +I have discussed a number of [Ubuntu update errors][1] in the past. If you [use the command line to update Ubuntu][2], you might run into some ‘errors’. + +Some of these ‘errors’ are basically built-in features to prevent unwarranted changes to your system. I am not going into those details in this quick tutorial. + +In this quick tip, I’ll show you how to tackle the following error that you could encounter while updating your system or installing new software: + +**Reading package lists… Error! 
+E: Unable to parse package file /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease
+E: The package lists or status file could not be parsed or opened.**
+
+A similar error can be encountered in Debian:
+
+**E: Unable to parse package file /var/lib/apt/extended_states (1)**
+
+There is absolutely no need to panic even though it says ‘**The package cache file is corrupted**’. This is really easy to ‘fix’.
+
+### Handling “Unable to parse package file” error in Ubuntu and Debian-based Linux distributions
+
+![][3]
+
+Here’s what you need to do. Take a closer look at the name and path of the file that [Ubuntu][4] is complaining about.
+
+Reading package lists… Error!
+**E: Unable to parse package file /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease**
+E: The package lists or status file could not be parsed or opened.
+
+For example, in the above error, it was complaining about /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease
+
+This gives you the idea that something is not right with this file. Now all you need to do is to remove this file and regenerate the cache.
+
+```
+sudo rm <file_name_with_path>
+```
+
+So in my case, I could use this command: **sudo rm /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_bionic_InRelease** and then rebuild the cache with the sudo apt update command.
+
+#### Step by step for beginners
+
+If you are familiar with Linux commands, you may know how to delete the file using its absolute path. For novice users, let me guide you through deleting the file safely.
+
+First, you should go to the directory where the file is stored:
+
+```
+cd /var/lib/apt/lists/
+```
+
+Now delete the file that is not being parsed:
+
+```
+sudo rm archive.ubuntu.com_ubuntu_dists_bionic_InRelease
+```
+
+Now if you run the update again, the apt cache will be regenerated.
+
+```
+sudo apt update
+```
+
+#### Too many files cannot be parsed?
+
+This is fine if you have one or two files that are not being parsed while updating the system. But if the system complains about ten or twenty such files, removing them one by one is too tedious.
+
+What you can do in such a case is remove the entire cache and then generate it again:
+
+```
+sudo rm -r /var/lib/apt/lists/*
+sudo apt update
+```
+
+There is also a small scripted version of this cleanup at the end of this article.
+
+#### Explanation of how it fixed your problem
+
+The /var/lib/apt directory is where files and data related to the apt package manager are stored. The /var/lib/apt/lists directory is used for storing the information for each package resource specified in your system’s sources.list.
+
+In simpler terms, /var/lib/apt/lists stores the package information cache. When you want to install or update a program, your system checks this directory for the information on that package. If it finds the details of the package, it then goes to the remote repository and actually downloads the program or its update.
+
+When you run “sudo apt update”, it builds the cache. This is why even when you remove everything in the /var/lib/apt/lists directory, running the update will build a fresh cache.
+
+This is how it handles the issue of the file not being parsed. Your system complained about a particular package or repository whose information somehow got corrupted (either a failed download or a manual change to sources.list). Removing that file (or everything) and rebuilding the cache solves the issue.
+
+#### Still facing error?
+
+This should fix the issue for you.
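+
+If you hit this error often and would rather script the cleanup, here is a rough sketch. It assumes the error messages mention paths under /var/lib/apt/lists/, as in the examples above, and it removes only the files apt actually complains about before refreshing the cache. Review the file list printed by the first command before trusting it on an important system:
+
+```
+# Print the list files apt complains about, remove them, then rebuild the cache.
+sudo apt update 2>&1 | grep -oE '/var/lib/apt/lists/[^ ]+' | sort -u
+sudo apt update 2>&1 | grep -oE '/var/lib/apt/lists/[^ ]+' | sort -u | xargs -r sudo rm -f
+sudo apt update
+```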
But if the problem still persists or if you have some other related issue, let me know in the comment section and I’ll try to help you out. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/unable-to-parse-package-file/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/ubuntu-update-error/ +[2]: https://itsfoss.com/update-ubuntu/ +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2020/05/Unable-to-parse-package-file.png?ssl=1 +[4]: https://ubuntu.com/ From 52b62af73c729813313bbf83fce220728aaeb9ad Mon Sep 17 00:00:00 2001 From: DarkSun Date: Fri, 8 May 2020 01:02:30 +0800 Subject: [PATCH 172/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200507=20Using?= =?UTF-8?q?=20the=20systemctl=20command=20to=20manage=20systemd=20units?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200507 Using the systemctl command to manage systemd units.md --- ...stemctl command to manage systemd units.md | 618 ++++++++++++++++++ 1 file changed, 618 insertions(+) create mode 100644 sources/tech/20200507 Using the systemctl command to manage systemd units.md diff --git a/sources/tech/20200507 Using the systemctl command to manage systemd units.md b/sources/tech/20200507 Using the systemctl command to manage systemd units.md new file mode 100644 index 0000000000..e305cee36c --- /dev/null +++ b/sources/tech/20200507 Using the systemctl command to manage systemd units.md @@ -0,0 +1,618 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using the systemctl command to manage systemd units) +[#]: via: (https://opensource.com/article/20/5/systemd-units) +[#]: author: (David Both https://opensource.com/users/dboth) + +Using the systemctl command to manage systemd units +====== +Units are the basis of everything in systemd. +![woman on laptop sitting at the window][1] + +In the first two articles in this series, I explored the Linux systemd startup sequence. In the [first article][2], I looked at systemd's functions and architecture and the controversy around its role as a replacement for the old SystemV init program and startup scripts. And in the [second article][3], I examined two important systemd tools, systemctl and journalctl, and explained how to switch from one target to another and to change the default target. + +In this third article, I'll look at systemd units in more detail and how to use the systemctl command to explore and manage units. I'll also explain how to stop and disable units and how to create a new systemd mount unit to mount a new filesystem and enable it to initiate during startup. + +### Preparation + +All of the experiments in this article should be done as the root user (unless otherwise specified). Some of the commands that simply list various systemd units can be performed by non-root users, but the commands that make changes cannot. Make sure to do all of these experiments only on non-production hosts or virtual machines (VMs). + +One of these experiments requires the sysstat package, so install it before you move on. 
For Fedora and other Red Hat-based distributions you can install sysstat with: + + +``` +`dnf -y install sysstat` +``` + +The sysstat RPM installs several statistical tools that can be used for problem determination. One is [System Activity Report][4] (SAR), which records many system performance data points at regular intervals (every 10 minutes by default). Rather than run as a daemon in the background, the sysstat package installs two systemd timers. One timer runs every 10 minutes to collect data, and the other runs once a day to aggregate the daily data. In this article, I will look briefly at these timers but wait to explain how to create a timer in a future article. + +### systemd suite + +The fact is, systemd is more than just one program. It is a large suite of programs all designed to work together to manage nearly every aspect of a running Linux system. A full exposition of systemd would take a book on its own. Most of us do not need to understand all of the details about how all of systemd's components fit together, so I will focus on the programs and components that enable you to manage various Linux services and deal with log files and journals. + +### Practical structure + +The structure of systemd—outside of its executable files—is contained in its many configuration files. Although these files have different names and identifier extensions, they are all called "unit" files. Units are the basis of everything systemd. + +Unit files are ASCII plain-text files that are accessible to and can be created or modified by a sysadmin. There are a number of unit file types, and each has its own man page. Figure 1 lists some of these unit file types by their filename extensions and a short description of each. + +systemd unit | Description +---|--- +.automount | The **.automount** units are used to implement on-demand (i.e., plug and play) and mounting of filesystem units in parallel during startup. +.device | The **.device** unit files define hardware and virtual devices that are exposed to the sysadmin in the **/dev/directory**. Not all devices have unit files; typically, block devices such as hard drives, network devices, and some others have unit files. +.mount | The **.mount** unit defines a mount point on the Linux filesystem directory structure. +.scope | The **.scope** unit defines and manages a set of system processes. This unit is not configured using unit files, rather it is created programmatically. Per the **systemd.scope** man page, “The main purpose of scope units is grouping worker processes of a system service for organization and for managing resources.” +.service | The **.service** unit files define processes that are managed by systemd. These include services such as crond cups (Common Unix Printing System), iptables, multiple logical volume management (LVM) services, NetworkManager, and more. +.slice | The **.slice** unit defines a “slice,” which is a conceptual division of system resources that are related to a group of processes. You can think of all system resources as a pie and this subset of resources as a “slice” out of that pie. +.socket | The **.socket** units define interprocess communication sockets, such as network sockets. +.swap | The **.swap** units define swap devices or files. +.target | The **.target** units define groups of unit files that define startup synchronization points, runlevels, and services. Target units define the services and other units that must be active in order to start successfully. 
+.timer | The **.timer** unit defines timers that can initiate program execution at specified times. + +### systemctl + +I looked at systemd's startup functions in the [second article][3], and here I'll explore its service management functions a bit further. systemd provides the **systemctl** command that is used to start and stop services, configure them to launch (or not) at system startup, and monitor the current status of running services. + +In a terminal session as the root user, ensure that root's home directory ( **~** ) is the [PWD][5]. To begin looking at units in various ways, list all of the loaded and active systemd units. systemctl automatically pipes its [stdout][6] data stream through the **less** pager, so you don't have to: + + +``` +[root@testvm1 ~]# systemctl +UNIT                                       LOAD   ACTIVE SUB       DESCRIPTION               +proc-sys-fs-binfmt_misc.automount          loaded active running   Arbitrary Executable File> +sys-devices-pci0000:00-0000:00:01.1-ata7-host6-target6:0:0-6:0:0:0-block-sr0.device loaded a> +sys-devices-pci0000:00-0000:00:03.0-net-enp0s3.device loaded active plugged   82540EM Gigabi> +sys-devices-pci0000:00-0000:00:05.0-sound-card0.device loaded active plugged   82801AA AC'97> +sys-devices-pci0000:00-0000:00:08.0-net-enp0s8.device loaded active plugged   82540EM Gigabi> +sys-devices-pci0000:00-0000:00:0d.0-ata1-host0-target0:0:0-0:0:0:0-block-sda-sda1.device loa> +sys-devices-pci0000:00-0000:00:0d.0-ata1-host0-target0:0:0-0:0:0:0-block-sda-sda2.device loa> +<snip – removed lots of lines of data from here> + +LOAD   = Reflects whether the unit definition was properly loaded. +ACTIVE = The high-level unit activation state, i.e. generalization of SUB. +SUB    = The low-level unit activation state, values depend on unit type. + +206 loaded units listed. Pass --all to see loaded but inactive units, too. +To show all installed unit files use 'systemctl list-unit-files'. +``` + +As you scroll through the data in your terminal session, look for some specific things. The first section lists devices such as hard drives, sound cards, network interface cards, and TTY devices. Another section shows the filesystem mount points. Other sections include various services and a list of all loaded and active targets. + +The sysstat timers at the bottom of the output are used to collect and generate daily system activity summaries for SAR. SAR is a very useful problem-solving tool. (You can learn more about it in Chapter 13 of my book [_Using and Administering Linux: Volume 1, Zero to SysAdmin: Getting Started_][7].) + +Near the very bottom, three lines describe the meanings of the statuses (loaded, active, and sub). Press **q** to exit the pager. + +Use the following command (as suggested in the last line of the output above) to see all the units that are installed, whether or not they are loaded. I won't reproduce the output here, because you can scroll through it on your own. The systemctl program has an excellent tab-completion facility that makes it easy to enter complex commands without needing to memorize all the options: + + +``` +`[root@testvm1 ~]# systemctl list-unit-files` +``` + +You can see that some units are disabled. Table 1 in the man page for systemctl lists and provides short descriptions of the entries you might see in this listing. 
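+Since some of the unit files show up as disabled, it can be handy to filter the listing by state. This is an aside rather than part of the original experiment, but **list-unit-files** accepts a **--state** option on current versions of systemd, so something like the following should list only the disabled unit files:
+
+
+```
+`[root@testvm1 ~]# systemctl list-unit-files --state=disabled`
+```
+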
Use the **-t** (type) option to view just the timer units: + + +``` +[root@testvm1 ~]# systemctl list-unit-files -t timer +UNIT FILE                    STATE   +[chrony-dnssrv@.timer][8]         disabled +dnf-makecache.timer          enabled +fstrim.timer                 disabled +logrotate.timer              disabled +logwatch.timer               disabled +[mdadm-last-resort@.timer][9]     static   +mlocate-updatedb.timer       enabled +sysstat-collect.timer        enabled +sysstat-summary.timer        enabled +systemd-tmpfiles-clean.timer static   +unbound-anchor.timer         enabled +``` + +You could do the same thing with this alternative, which provides considerably more detail: + + +``` +[root@testvm1 ~]# systemctl list-timers +Thu 2020-04-16 09:06:20 EDT  3min 59s left n/a                          n/a           systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service +Thu 2020-04-16 10:02:01 EDT  59min left    Thu 2020-04-16 09:01:32 EDT  49s ago       dnf-makecache.timer          dnf-makecache.service +Thu 2020-04-16 13:00:00 EDT  3h 57min left n/a                          n/a           sysstat-collect.timer        sysstat-collect.service +Fri 2020-04-17 00:00:00 EDT  14h left      Thu 2020-04-16 12:51:37 EDT  3h 49min left mlocate-updatedb.timer       mlocate-updatedb.service +Fri 2020-04-17 00:00:00 EDT  14h left      Thu 2020-04-16 12:51:37 EDT  3h 49min left unbound-anchor.timer         unbound-anchor.service +Fri 2020-04-17 00:07:00 EDT  15h left      n/a                          n/a           sysstat-summary.timer        sysstat-summary.service + +6 timers listed. +Pass --all to see loaded but inactive timers, too. +[root@testvm1 ~]# +``` + +Although there is no option to do systemctl list-mounts, you can list the mount point unit files: + + +``` +[root@testvm1 ~]# systemctl list-unit-files -t mount +UNIT FILE                     STATE     +-.mount                       generated +boot.mount                    generated +dev-hugepages.mount           static   +dev-mqueue.mount              static   +home.mount                    generated +proc-fs-nfsd.mount            static   +proc-sys-fs-binfmt_misc.mount disabled +run-vmblock\x2dfuse.mount     disabled +sys-fs-fuse-connections.mount static   +sys-kernel-config.mount       static   +sys-kernel-debug.mount        static   +tmp.mount                     generated +usr.mount                     generated +var-lib-nfs-rpc_pipefs.mount  static   +var.mount                     generated + +15 unit files listed. +[root@testvm1 ~]# +``` + +The STATE column in this data stream is interesting and requires a bit of explanation. The "generated" states indicate that the mount unit was generated on the fly during startup using the information in **/etc/fstab**. The program that generates these mount units is **/lib/systemd/system-generators/systemd-fstab-generator,** along with other tools that generate a number of other unit types. The "static" mount units are for filesystems like **/proc** and **/sys**, and the files for these are located in the **/usr/lib/systemd/system** directory. + +Now, look at the service units. This command will show all services installed on the host, whether or not they are active: + + +``` +`[root@testvm1 ~]# systemctl --all -t service` +``` + +The bottom of this listing of service units displays 166 as the total number of loaded units on my host. Your number will probably differ. 
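+If you would rather get that count directly than scroll to the bottom of the pager, a short pipeline like the one below should work. The **--no-legend** and **--no-pager** options (available in current systemd releases) strip the column headers, the hint lines at the bottom, and the pager, so the remaining lines can simply be counted:
+
+
+```
+`[root@testvm1 ~]# systemctl list-units --all -t service --no-legend --no-pager | wc -l`
+```
+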
+ +Unit files do not have a filename extension (such as **.unit**) to help identify them, so you can generalize that most configuration files that belong to systemd are unit files of one type or another. The few remaining files are mostly **.conf** files located in **/etc/systemd**. + +Unit files are stored in the **/usr/lib/systemd** directory and its subdirectories, while the **/etc/systemd/** directory and its subdirectories contain symbolic links to the unit files necessary to the local configuration of this host. + +To explore this, make **/etc/systemd** the PWD and list its contents. Then make **/etc/systemd/system** the PWD and list its contents, and list the contents of at least a couple of the current PWD's subdirectories. + +Take a look at the **default.target** file, which determines which runlevel target the system will boot to. In the second article in this series, I explained how to change the default target from the GUI (**graphical.target**) to the command-line only (**multi-user.target**) target. The **default.target** file on my test VM is simply a symlink to **/usr/lib/systemd/system/graphical.target**. + +Take a few minutes to examine the contents of the **/etc/systemd/system/default.target** file: + + +``` +[root@testvm1 system]# cat default.target +#  SPDX-License-Identifier: LGPL-2.1+ +# +#  This file is part of systemd. +# +#  systemd is free software; you can redistribute it and/or modify it +#  under the terms of the GNU Lesser General Public License as published by +#  the Free Software Foundation; either version 2.1 of the License, or +#  (at your option) any later version. + +[Unit] +Description=Graphical Interface +Documentation=man:systemd.special(7) +Requires=multi-user.target +Wants=display-manager.service +Conflicts=rescue.service rescue.target +After=multi-user.target rescue.service rescue.target display-manager.service +AllowIsolate=yes +``` + +Note that this requires the **multi-user.target**; the **graphical.target** cannot start if the **multi-user.target** is not already up and running. It also says it "wants" the **display-manager.service** unit. A "want" does not need to be fulfilled in order for the unit to start successfully. If the "want" cannot be fulfilled, it will be ignored by systemd, and the rest of the target will start regardless. + +The subdirectories in **/etc/systemd/system** are lists of wants for various targets. Take a few minutes to explore the files and their contents in the **/etc/systemd/system/graphical.target.wants** directory. + +The **systemd.unit** man page contains a lot of good information about unit files, their structure, the sections they can be divided into, and the options that can be used. It also lists many of the unit types, all of which have their own man pages. If you want to interpret a unit file, this would be a good place to start. + +### Service units + +A Fedora installation usually installs and enables services that particular hosts do not need for normal operation. Conversely, sometimes it doesn't include services that need to be installed, enabled, and started. Services that are not needed for the Linux host to function as desired, but which are installed and possibly running, represent a security risk and should—at minimum—be stopped and disabled and—at best—should be uninstalled. + +The systemctl command is used to manage systemd units, including services, targets, mounts, and more. 
Take a closer look at the list of services to identify services that will never be used: + + +``` +[root@testvm1 ~]# systemctl --all -t service +UNIT                           LOAD      ACTIVE SUB        DESCRIPTION                             +<snip> +chronyd.service                loaded    active running    NTP client/server                       +crond.service                  loaded    active running    Command Scheduler                       +cups.service                   loaded    active running    CUPS Scheduler                           +dbus-daemon.service            loaded    active running    D-Bus System Message Bus                 +<snip> +● ip6tables.service           not-found inactive dead     ip6tables.service                   +● ipset.service               not-found inactive dead     ipset.service                       +● iptables.service            not-found inactive dead     iptables.service                     +<snip> +firewalld.service              loaded    active   running  firewalld - dynamic firewall daemon +<snip> +● ntpd.service                not-found inactive dead     ntpd.service                         +● ntpdate.service             not-found inactive dead     ntpdate.service                     +pcscd.service                  loaded    active   running  PC/SC Smart Card Daemon +``` + +I have pruned out most of the output from the command to save space. The services that show "loaded active running" are obvious. The "not-found" services are ones that systemd is aware of but are not installed on the Linux host. If you want to run those services, you must install the packages that contain them. + +Note the **pcscd.service** unit. This is the PC/SC smart-card daemon. Its function is to communicate with smart-card readers. Many Linux hosts—including VMs—have no need for this reader nor the service that is loaded and taking up memory and CPU resources. You can stop this service and disable it, so it will not restart on the next boot. First, check its status: + + +``` +[root@testvm1 ~]# systemctl status pcscd.service +● pcscd.service - PC/SC Smart Card Daemon +   Loaded: loaded (/usr/lib/systemd/system/pcscd.service; indirect; vendor preset: disabled) +   Active: active (running) since Fri 2019-05-10 11:28:42 EDT; 3 days ago +     Docs: man:pcscd(8) + Main PID: 24706 (pcscd) +    Tasks: 6 (limit: 4694) +   Memory: 1.6M +   CGroup: /system.slice/pcscd.service +           └─24706 /usr/sbin/pcscd --foreground --auto-exit + +May 10 11:28:42 testvm1 systemd[1]: Started PC/SC Smart Card Daemon. +``` + +This data illustrates the additional information systemd provides versus SystemV, which only reports whether or not the service is running. Note that specifying the **.service** unit type is optional. Now stop and disable the service, then re-check its status: + + +``` +[root@testvm1 ~]# systemctl stop pcscd ; systemctl disable pcscd +Warning: Stopping pcscd.service, but it can still be activated by: +  pcscd.socket +Removed /etc/systemd/system/sockets.target.wants/pcscd.socket. +[root@testvm1 ~]# systemctl status pcscd +● pcscd.service - PC/SC Smart Card Daemon +   Loaded: loaded (/usr/lib/systemd/system/pcscd.service; indirect; vendor preset: disabled) +   Active: failed (Result: exit-code) since Mon 2019-05-13 15:23:15 EDT; 48s ago +     Docs: man:pcscd(8) + Main PID: 24706 (code=exited, status=1/FAILURE) + +May 10 11:28:42 testvm1 systemd[1]: Started PC/SC Smart Card Daemon. +May 13 15:23:15 testvm1 systemd[1]: Stopping PC/SC Smart Card Daemon... 
+May 13 15:23:15 testvm1 systemd[1]: pcscd.service: Main process exited, code=exited, status=1/FAIL> +May 13 15:23:15 testvm1 systemd[1]: pcscd.service: Failed with result 'exit-code'. +May 13 15:23:15 testvm1 systemd[1]: Stopped PC/SC Smart Card Daemon. +``` + +The short log entry display for most services prevents having to search through various log files to locate this type of information. Check the status of the system runlevel targets—specifying the "target" unit type is required: + + +``` +[root@testvm1 ~]# systemctl status multi-user.target +● multi-user.target - Multi-User System +   Loaded: loaded (/usr/lib/systemd/system/multi-user.target; static; vendor preset: disabled) +   Active: active since Thu 2019-05-09 13:27:22 EDT; 4 days ago +     Docs: man:systemd.special(7) + +May 09 13:27:22 testvm1 systemd[1]: Reached target Multi-User System. +[root@testvm1 ~]# systemctl status graphical.target +● graphical.target - Graphical Interface +   Loaded: loaded (/usr/lib/systemd/system/graphical.target; indirect; vendor preset: disabled) +   Active: active since Thu 2019-05-09 13:27:22 EDT; 4 days ago +     Docs: man:systemd.special(7) + +May 09 13:27:22 testvm1 systemd[1]: Reached target Graphical Interface. +[root@testvm1 ~]# systemctl status default.target +● graphical.target - Graphical Interface +   Loaded: loaded (/usr/lib/systemd/system/graphical.target; indirect; vendor preset: disabled) +   Active: active since Thu 2019-05-09 13:27:22 EDT; 4 days ago +     Docs: man:systemd.special(7) + +May 09 13:27:22 testvm1 systemd[1]: Reached target Graphical Interface. +``` + +The default target is the graphical target. The status of any unit can be checked in this way. + +### Mounts the old way + +A mount unit defines all of the parameters required to mount a filesystem on a designated mount point. systemd can manage mount units with more flexibility than those using the **/etc/fstab** filesystem configuration file. Despite this, systemd still uses the **/etc/fstab** file for filesystem configuration and mounting purposes. systemd uses the **systemd-fstab-generator** tool to create transient mount units from the data in the **fstab** file. + +I will create a new filesystem and a systemd mount unit to mount it. If you have some available disk space on your test system, you can do it along with me. + +_Note that the volume group and logical volume names may be different on your test system. Be sure to use the names that are pertinent to your system._ + +You will need to create a partition or logical volume, then make an EXT4 filesystem on it. Add a label to the filesystem, **TestFS**, and create a directory for a mount point **/TestFS**. + +To try this on your own, first, verify that you have free space on the volume group. 
Here is what that looks like on my VM where I have some space available on the volume group to create a new logical volume: + + +``` +[root@testvm1 ~]# lsblk +NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT +sda             8:0    0  120G  0 disk +├─sda1          8:1    0    4G  0 part /boot +└─sda2          8:2    0  116G  0 part +  ├─VG01-root 253:0    0    5G  0 lvm  / +  ├─VG01-swap 253:1    0    8G  0 lvm  [SWAP] +  ├─VG01-usr  253:2    0   30G  0 lvm  /usr +  ├─VG01-home 253:3    0   20G  0 lvm  /home +  ├─VG01-var  253:4    0   20G  0 lvm  /var +  └─VG01-tmp  253:5    0   10G  0 lvm  /tmp +sr0            11:0    1 1024M  0 rom   +[root@testvm1 ~]# vgs +  VG   #PV #LV #SN Attr   VSize    VFree   +  VG01   1   6   0 wz--n- <116.00g <23.00g +``` + +Then create a new volume on **VG01** named **TestFS**. It does not need to be large; 1GB is fine. Then create a filesystem, add the filesystem label, and create the mount point: + + +``` +[root@testvm1 ~]# lvcreate -L 1G -n TestFS VG01 +  Logical volume "TestFS" created. +[root@testvm1 ~]# mkfs -t ext4 /dev/mapper/VG01-TestFS +mke2fs 1.45.3 (14-Jul-2019) +Creating filesystem with 262144 4k blocks and 65536 inodes +Filesystem UUID: 8718fba9-419f-4915-ab2d-8edf811b5d23 +Superblock backups stored on blocks: +        32768, 98304, 163840, 229376 + +Allocating group tables: done                             +Writing inode tables: done                             +Creating journal (8192 blocks): done +Writing superblocks and filesystem accounting information: done + +[root@testvm1 ~]# e2label /dev/mapper/VG01-TestFS TestFS +[root@testvm1 ~]# mkdir /TestFS +``` + +Now, mount the new filesystem: + + +``` +[root@testvm1 ~]# mount /TestFS/ +mount: /TestFS/: can't find in /etc/fstab. +``` + +This will not work because you do not have an entry in **/etc/fstab**. You can mount the new filesystem even without the entry in **/etc/fstab** using both the device name (as it appears in **/dev**) and the mount point. Mounting in this manner is simpler than it used to be—it used to require the filesystem type as an argument. The mount command is now smart enough to detect the filesystem type and mount it accordingly. + +Try it again: + + +``` +[root@testvm1 ~]# mount /dev/mapper/VG01-TestFS /TestFS/ +[root@testvm1 ~]# lsblk +NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT +sda               8:0    0  120G  0 disk +├─sda1            8:1    0    4G  0 part /boot +└─sda2            8:2    0  116G  0 part +  ├─VG01-root   253:0    0    5G  0 lvm  / +  ├─VG01-swap   253:1    0    8G  0 lvm  [SWAP] +  ├─VG01-usr    253:2    0   30G  0 lvm  /usr +  ├─VG01-home   253:3    0   20G  0 lvm  /home +  ├─VG01-var    253:4    0   20G  0 lvm  /var +  ├─VG01-tmp    253:5    0   10G  0 lvm  /tmp +  └─VG01-TestFS 253:6    0    1G  0 lvm  /TestFS +sr0              11:0    1 1024M  0 rom   +[root@testvm1 ~]# +``` + +Now the new filesystem is mounted in the proper location. List the mount unit files: + + +``` +`[root@testvm1 ~]# systemctl list-unit-files -t mount` +``` + +This command does not show a file for the **/TestFS** filesystem because no file exists for it. The command **systemctl status TestFS.mount** does not display any information about the new filesystem either. 
You can try it using wildcards with the **systemctl status** command: + + +``` +[root@testvm1 ~]# systemctl status *mount +● usr.mount - /usr +   Loaded: loaded (/etc/fstab; generated) +   Active: active (mounted) +    Where: /usr +     What: /dev/mapper/VG01-usr +     Docs: man:fstab(5) +           man:systemd-fstab-generator(8) + +<SNIP> +● TestFS.mount - /TestFS +   Loaded: loaded (/proc/self/mountinfo) +   Active: active (mounted) since Fri 2020-04-17 16:02:26 EDT; 1min 18s ago +    Where: /TestFS +     What: /dev/mapper/VG01-TestFS + +● run-user-0.mount - /run/user/0 +   Loaded: loaded (/proc/self/mountinfo) +   Active: active (mounted) since Thu 2020-04-16 08:52:29 EDT; 1 day 5h ago +    Where: /run/user/0 +     What: tmpfs + +● var.mount - /var +   Loaded: loaded (/etc/fstab; generated) +   Active: active (mounted) since Thu 2020-04-16 12:51:34 EDT; 1 day 1h ago +    Where: /var +     What: /dev/mapper/VG01-var +     Docs: man:fstab(5) +           man:systemd-fstab-generator(8) +    Tasks: 0 (limit: 19166) +   Memory: 212.0K +      CPU: 5ms +   CGroup: /system.slice/var.mount +``` + +This command provides some very interesting information about your system's mounts, and your new filesystem shows up. The **/var** and **/usr** filesystems are identified as being generated from **/etc/fstab**, while your new filesystem simply shows that it is loaded and provides the location of the info file in the **/proc/self/mountinfo** file. + +Next, automate this mount. First, do it the old-fashioned way by adding an entry in **/etc/fstab**. Later, I'll show you how to do it the new way, which will teach you about creating units and integrating them into the startup sequence. + +Unmount **/TestFS** and add the following line to the **/etc/fstab** file: + + +``` +`/dev/mapper/VG01-TestFS  /TestFS       ext4    defaults        1 2` +``` + +Now, mount the filesystem with the simpler **mount** command and list the mount units again: + + +``` +[root@testvm1 ~]# mount /TestFS +[root@testvm1 ~]# systemctl status *mount +<SNIP> +● TestFS.mount - /TestFS +   Loaded: loaded (/proc/self/mountinfo) +   Active: active (mounted) since Fri 2020-04-17 16:26:44 EDT; 1min 14s ago +    Where: /TestFS +     What: /dev/mapper/VG01-TestFS +<SNIP> +``` + +This did not change the information for this mount because the filesystem was manually mounted. Reboot and run the command again, and this time specify **TestFS.mount** rather than using the wildcard. The results for this mount are now consistent with it being mounted at startup: + + +``` +[root@testvm1 ~]# systemctl status TestFS.mount +● TestFS.mount - /TestFS +   Loaded: loaded (/etc/fstab; generated) +   Active: active (mounted) since Fri 2020-04-17 16:30:21 EDT; 1min 38s ago +    Where: /TestFS +     What: /dev/mapper/VG01-TestFS +     Docs: man:fstab(5) +           man:systemd-fstab-generator(8) +    Tasks: 0 (limit: 19166) +   Memory: 72.0K +      CPU: 6ms +   CGroup: /system.slice/TestFS.mount + +Apr 17 16:30:21 testvm1 systemd[1]: Mounting /TestFS... +Apr 17 16:30:21 testvm1 systemd[1]: Mounted /TestFS. +``` + +### Creating a mount unit + +Mount units may be configured either with the traditional **/etc/fstab** file or with systemd units. Fedora uses the **fstab** file as it is created during the installation. However, systemd uses the **systemd-fstab-generator** program to translate the **fstab** file into systemd units for each entry in the **fstab** file. 
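+If you are curious about what the generator produces, the transient units it creates are normally written under **/run/systemd/generator/** at boot (treat that path as an assumption, since it can vary between distributions and systemd versions), and **systemctl cat** will show the generated file for any mount that came from **fstab**:
+
+
+```
+[root@testvm1 ~]# ls /run/systemd/generator/
+[root@testvm1 ~]# systemctl cat var.mount
+```
+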
Now that you know you can use systemd **.mount** unit files for filesystem mounting, try it out by creating a mount unit for this filesystem. + +First, unmount **/TestFS**. Edit the **/etc/fstab** file and delete or comment out the **TestFS** line. Now, create a new file with the name **TestFS.mount** in the **/etc/systemd/system** directory. Edit it to contain the configuration data below. The unit file name and the name of the mount point _must_ be identical, or the mount will fail: + + +``` +# This mount unit is for the TestFS filesystem +# By David Both +# Licensed under GPL V2 +# This file should be located in the /etc/systemd/system directory + +[Unit] +Description=TestFS Mount + +[Mount] +What=/dev/mapper/VG01-TestFS +Where=/TestFS +Type=ext4 +Options=defaults + +[Install] +WantedBy=multi-user.target +``` + +The **Description** line in the **[Unit]** section is for us humans, and it provides the name that's shown when you list mount units with **systemctl -t mount**. The data in the **[Mount]** section of this file contains essentially the same data that would be found in the **fstab** file. + +Now enable the mount unit: + + +``` +[root@testvm1 etc]# systemctl enable TestFS.mount +Created symlink /etc/systemd/system/multi-user.target.wants/TestFS.mount → /etc/systemd/system/TestFS.mount. +``` + +This creates the symlink in the **/etc/systemd/system** directory, which will cause this mount unit to be mounted on all subsequent boots. The filesystem has not yet been mounted, so you must "start" it: + + +``` +`[root@testvm1 ~]# systemctl start TestFS.mount` +``` + +Verify that the filesystem has been mounted: + + +``` +[root@testvm1 ~]# systemctl status TestFS.mount +● TestFS.mount - TestFS Mount +   Loaded: loaded (/etc/systemd/system/TestFS.mount; enabled; vendor preset: disabled) +   Active: active (mounted) since Sat 2020-04-18 09:59:53 EDT; 14s ago +    Where: /TestFS +     What: /dev/mapper/VG01-TestFS +    Tasks: 0 (limit: 19166) +   Memory: 76.0K +      CPU: 3ms +   CGroup: /system.slice/TestFS.mount + +Apr 18 09:59:53 testvm1 systemd[1]: Mounting TestFS Mount... +Apr 18 09:59:53 testvm1 systemd[1]: Mounted TestFS Mount. +``` + +This experiment has been specifically about creating a unit file for a mount, but it can be applied to other types of unit files as well. The details will be different, but the concepts are the same. Yes, I know it is still easier to add a line to the **/etc/fstab** file than it is to create a mount unit. But this is a good example of how to create a unit file because systemd does not have generators for every type of unit. + +### In summary + +This article looked at systemd units in more detail and how to use the systemctl command to explore and manage units. It also showed how to stop and disable units and create a new systemd mount unit to mount a new filesystem and enable it to initiate during startup. + +In the next article in this series, I will take you through a recent problem I had during startup and show you how I circumvented it using systemd. + +### Resources + +There is a great deal of information about systemd available on the internet, but much is terse, obtuse, or even misleading. In addition to the resources mentioned in this article, the following webpages offer more detailed and reliable information about systemd startup. + + * The Fedora Project has a good, practical [guide][10] [to systemd][10]. It has pretty much everything you need to know in order to configure, manage, and maintain a Fedora computer using systemd. 
+ * The Fedora Project also has a good [cheat sheet][11] that cross-references the old SystemV commands to comparable systemd ones. + * For detailed technical information about systemd and the reasons for creating it, check out [Freedesktop.org][12]'s [description of systemd][13]. + * [Linux.com][14]'s "More systemd fun" offers more advanced systemd [information and tips][15]. + + + +There is also a series of deeply technical articles for Linux sysadmins by Lennart Poettering, the designer and primary developer of systemd. These articles were written between April 2010 and September 2011, but they are just as relevant now as they were then. Much of everything else good that has been written about systemd and its ecosystem is based on these papers. + + * [Rethinking PID 1][16] + * [systemd for Administrators, Part I][17] + * [systemd for Administrators, Part II][18] + * [systemd for Administrators, Part III][19] + * [systemd for Administrators, Part IV][20] + * [systemd for Administrators, Part V][21] + * [systemd for Administrators, Part VI][22] + * [systemd for Administrators, Part VII][23] + * [systemd for Administrators, Part VIII][24] + * [systemd for Administrators, Part IX][25] + * [systemd for Administrators, Part X][26] + * [systemd for Administrators, Part XI][27] + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/20/5/systemd-units + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/lenovo-thinkpad-laptop-window-focus.png?itok=g0xPm2kD (young woman working on a laptop) +[2]: https://opensource.com/article/20/4/systemd +[3]: https://opensource.com/article/20/4/systemd-startup +[4]: https://en.wikipedia.org/wiki/Sar_%28Unix%29 +[5]: https://en.wikipedia.org/wiki/Pwd +[6]: https://en.wikipedia.org/wiki/Standard_streams#Standard_output_(stdout) +[7]: http://www.both.org/?page_id=1183 +[8]: mailto:chrony-dnssrv@.timer +[9]: mailto:mdadm-last-resort@.timer +[10]: https://docs.fedoraproject.org/en-US/quick-docs/understanding-and-administering-systemd/index.html +[11]: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet +[12]: http://Freedesktop.org +[13]: http://www.freedesktop.org/wiki/Software/systemd +[14]: http://Linux.com +[15]: https://www.linux.com/training-tutorials/more-systemd-fun-blame-game-and-stopping-services-prejudice/ +[16]: http://0pointer.de/blog/projects/systemd.html +[17]: http://0pointer.de/blog/projects/systemd-for-admins-1.html +[18]: http://0pointer.de/blog/projects/systemd-for-admins-2.html +[19]: http://0pointer.de/blog/projects/systemd-for-admins-3.html +[20]: http://0pointer.de/blog/projects/systemd-for-admins-4.html +[21]: http://0pointer.de/blog/projects/three-levels-of-off.html +[22]: http://0pointer.de/blog/projects/changing-roots +[23]: http://0pointer.de/blog/projects/blame-game.html +[24]: http://0pointer.de/blog/projects/the-new-configuration-files.html +[25]: http://0pointer.de/blog/projects/on-etc-sysinit.html +[26]: http://0pointer.de/blog/projects/instances.html +[27]: http://0pointer.de/blog/projects/inetd.html From 866f79c53dcdaf1cc6d114f2e333c63be2ca3e0a Mon Sep 17 00:00:00 2001 From: DarkSun Date: Fri, 8 May 2020 01:04:05 +0800 Subject: 
[PATCH 173/178] =?UTF-8?q?=E9=80=89=E9=A2=98:=2020200506=20Custom?= =?UTF-8?q?izing=20my=20open=20source=20PHP=20framework=20for=20web=20deve?= =?UTF-8?q?lopment?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit sources/tech/20200506 Customizing my open source PHP framework for web development.md --- ...ource PHP framework for web development.md | 28 +++++++++---------- 1 file changed, 14 insertions(+), 14 deletions(-) diff --git a/sources/tech/20200506 Customizing my open source PHP framework for web development.md b/sources/tech/20200506 Customizing my open source PHP framework for web development.md index 00b605148d..8247a6ad49 100644 --- a/sources/tech/20200506 Customizing my open source PHP framework for web development.md +++ b/sources/tech/20200506 Customizing my open source PHP framework for web development.md @@ -4,20 +4,20 @@ [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Customizing my open source PHP framework for web development) -[#]: via: (https://opensource.com/article/20/5/codeignitor) +[#]: via: (https://opensource.com/article/20/5/codeigniter) [#]: author: (Wee Ben Sen https://opensource.com/users/bswee14) Customizing my open source PHP framework for web development ====== -Codeignitor is a PHP framework that empowers companies to develop +Codeigniter is a PHP framework that empowers companies to develop high-performance websites with flexibility and ease. ![Business woman on laptop sitting in front of window][1] -PHP Codeignitor is an open source framework providing business applications with the easy-to-use PHP programming language and powerful tools for coding. It also provides business intelligence, server monitoring, development, and application integration facilities. It's a relatively quiet project that you don't hear much about, but it's got a lot going for it that many developers new to it find surprising and refreshing. +PHP Codeigniter is an open source framework providing business applications with the easy-to-use PHP programming language and powerful tools for coding. It also provides business intelligence, server monitoring, development, and application integration facilities. It's a relatively quiet project that you don't hear much about, but it's got a lot going for it that many developers new to it find surprising and refreshing. -I use [Codeignitor][2] at my job working for an online tuition service provider in Singapore. We offer services that aren't common enough to be the default feature set for templates or existing back-ends, so I need something that provides good, solid, raw materials I can build upon. Initially, I was considering other platforms such as Wordpress for our website; however, I arrived at Codeignitor due to its flexibility and integration of functions needed in the tuition-matching process. +I use [Codeigniter][2] at my job working for an online tuition service provider in Singapore. We offer services that aren't common enough to be the default feature set for templates or existing back-ends, so I need something that provides good, solid, raw materials I can build upon. Initially, I was considering other platforms such as Wordpress for our website; however, I arrived at Codeigniter due to its flexibility and integration of functions needed in the tuition-matching process. 
-Here are the points that sold me on Codeignitor: +Here are the points that sold me on Codeigniter: * Database integration with MySQL—A major functionality is allowing clients to browse the tutor database and add tutors like a "shopping cart" similar to an e-commerce platform. * Client interface system—Users can log in to manage preferences and edit their particulars, modify subject taught, areas traveled, mobile number, address, etc. @@ -31,16 +31,16 @@ The project took around six months to complete and another two months of debuggi ### Features and benefits -There are many more features that draw developers to PHP Codeignitor, including error handling and code formatting, which are useful in every coding situation. It supports templates, which can be used to add functionality to an existing website or to generate new ones. There are many features available for a business that needs to use a web-based system, including the ability to use custom tags. Most can be used by even an average developer who does not have any prior experience in programming. +There are many more features that draw developers to PHP Codeigniter, including error handling and code formatting, which are useful in every coding situation. It supports templates, which can be used to add functionality to an existing website or to generate new ones. There are many features available for a business that needs to use a web-based system, including the ability to use custom tags. Most can be used by even an average developer who does not have any prior experience in programming. -The key features of Codeignitor are: +The key features of Codeigniter are: * XML core services, * HTTP/FTP core services * AppData and PHP sandbox features * XSLT and HTML templates * Encrypted information transfer - * PCM Codeignitor server monitoring + * PCM Codeigniter server monitoring * Application integration * File Transfer Protocol (FTP) * Help desk support @@ -50,23 +50,23 @@ The key features of Codeignitor are: #### Compatibility -Codeignitor is compatible with many leading software applications like PHP, MySQL, [MariaDB][3], [phpMyAdmin][4], [Apache][5], OpenBSD, XSLT, [SQLite][6], and more. A number of companies prefer to use Codeignitor products for their website requirements because they are easy to work with and integrate. If you're not comfortable creating your own website, you can find many developers and design agencies that provide custom web development services. +Codeigniter is compatible with many leading software applications like PHP, MySQL, [MariaDB][3], [phpMyAdmin][4], [Apache][5], OpenBSD, XSLT, [SQLite][6], and more. A number of companies prefer to use Codeigniter products for their website requirements because they are easy to work with and integrate. If you're not comfortable creating your own website, you can find many developers and design agencies that provide custom web development services. #### Security -Codeignitor also provides data security through SSL encryption. The encryption protects the data from external threats such as intruders and firewalls. The data storage facility also allows for security audits of the company's website. +Codeigniter also provides data security through SSL encryption. The encryption protects the data from external threats such as intruders and firewalls. The data storage facility also allows for security audits of the company's website. #### Other features -A good PHP web development company uses several advanced and third-party technologies such as XML and PHP. 
It provides organizations with a complete platform to develop professional-looking, useful websites with a business application. Codeignitor makes it easy to use third party technology, and works with common web development software. This allows web agencies to easily create websites with their chosen modules. Most PHP developers offer support and training services for individuals, as well. +A good PHP web development company uses several advanced and third-party technologies such as XML and PHP. It provides organizations with a complete platform to develop professional-looking, useful websites with a business application. Codeigniter makes it easy to use third party technology, and works with common web development software. This allows web agencies to easily create websites with their chosen modules. Most PHP developers offer support and training services for individuals, as well. -### Using PHP framework Codeignitor +### Using PHP framework Codeigniter -Codeignitor allows businesses to have a complete package for PHP development that will offer the right combination of power, flexibility, and performance. So far, I am very pleased with our website and I have continuously upgraded and added new features along the way. I look forward to discovering what else I can do with our website using Codeignitor. Could it be right for you too? +Codeigniter allows businesses to have a complete package for PHP development that will offer the right combination of power, flexibility, and performance. So far, I am very pleased with our website and I have continuously upgraded and added new features along the way. I look forward to discovering what else I can do with our website using Codeigniter. Could it be right for you too? -------------------------------------------------------------------------------- -via: https://opensource.com/article/20/5/codeignitor +via: https://opensource.com/article/20/5/codeigniter 作者:[Wee Ben Sen][a] 选题:[lujun9972][b] From 473726103224793847e84829d310e7fcbaf8f427 Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 8 May 2020 08:39:09 +0800 Subject: [PATCH 174/178] translating --- ...rgerfs to increase your virtual storage.md | 147 ------------------ ...rgerfs to increase your virtual storage.md | 147 ++++++++++++++++++ 2 files changed, 147 insertions(+), 147 deletions(-) delete mode 100644 sources/tech/20200501 Using mergerfs to increase your virtual storage.md create mode 100644 translated/tech/20200501 Using mergerfs to increase your virtual storage.md diff --git a/sources/tech/20200501 Using mergerfs to increase your virtual storage.md b/sources/tech/20200501 Using mergerfs to increase your virtual storage.md deleted file mode 100644 index 734af93148..0000000000 --- a/sources/tech/20200501 Using mergerfs to increase your virtual storage.md +++ /dev/null @@ -1,147 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (geekpi) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Using mergerfs to increase your virtual storage) -[#]: via: (https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/) -[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/) - -Using mergerfs to increase your virtual storage -====== - -![][1] - -What happens if you have multiple disks or partitions that you’d like to use for a media project and you don’t want to lose any of your existing data, but you’d like to have everything located or mounted under one drive. That’s where mergerfs can come to your rescue! 
- -[mergerfs][2] is a union filesystem geared towards simplifying storage and management of files across numerous commodity storage devices. - -You will need to grab the latest RPM from their github page [here][3]. The releases for Fedora have _**fc**_ and the version number in the name. For example here is the version for Fedora 31: - -[mergerfs-2.29.0-1.fc31.x86_64.rpm][4] - -### Installing and configuring mergerfs - -Install the mergerfs package that you’ve downloaded using sudo: - -``` -$ sudo dnf install mergerfs-2.29.0-1.fc31.x86_64.rpm -``` - -You will now be able to mount multiple disks as one drive. This comes in handy if you have a media server and you’d like all of your media files to show up under one location. If you upload new files to your system, you can copy them to your mergerfs directory and mergerfs will automatically copy them to which ever drive has enough free space available. - -Here is an example to make it easier to understand: - -``` -$ df -hT | grep disk -/dev/sdb1 ext4 23M 386K 21M 2% /disk1 -/dev/sdc1 ext4 44M 1.1M 40M 3% /disk2 - -$ ls -l /disk1/Videos/ -total 1 --rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv - -$ ls -l /disk2/Videos/ -total 2 --rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv --rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv -``` - -In this example there are two disks mounted as _disk1_ and _disk2_. Both drives have a _**Videos**_ directory with existing files. - -Now we’re going to mount those drives using mergerfs to make them appear as one larger drive. - -``` -$ sudo mergerfs -o defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M /disk1:/disk2 /media -``` - -The mergerfs man page is quite extensive and complex so we’ll break down the options that were specified. - - * _defaults_: This will use the default settings unless specified. - * _allow_other_: allows users besides sudo or root to see the filesystem. - * _use_ino_: Causes mergerfs to supply file/directory inodes rather than libfuse. While not a default it is recommended it be enabled so that linked files share the same inode value. - * _category.create=mfs_: Spreads files out across your drives based on available space. - * _moveonenospc=true_: If enabled, if writing fails, a scan will be done looking for the drive with the most free space. - * _minfreespace=1M_: The minimum space value used. - * _disk1_: First hard drive. - * _disk2_: Second hard drive. - * _/media_: The directory folder where the drives are mounted. - - - -Here is what it looks like: - -``` -$ df -hT | grep disk -/dev/sdb1 ext4 23M 386K 21M 2% /disk1 -/dev/sdc1 ext4 44M 1.1M 40M 3% /disk2 - -$ df -hT | grep media -1:2 fuse.mergerfs 66M 1.4M 60M 3% /media -``` - -You can see that the mergerfs mount now shows a total capacity of 66M which is the combined total of the two hard drives. - -Continuing with the example: - -There is a 30Mb video called _Baby’s second Xmas.mkv_. Let’s copy it to the _/media_ folder which is the mergerfs mount. - -``` -$ ls -lh "Baby's second Xmas.mkv" --rw-rw-r--. 1 curt curt 30M Apr 20 08:45 Baby's second Xmas.mkv -$ cp "Baby's second Xmas.mkv" /media/Videos/ -``` - -Here is the end result: - -``` -$ df -hT | grep disk -/dev/sdb1 ext4 23M 386K 21M 2% /disk1 -/dev/sdc1 ext4 44M 31M 9.8M 76% /disk2 - -$ df -hT | grep media -1:2 fuse.mergerfs 66M 31M 30M 51% /media -``` - -You can see from the disk space utilization that mergerfs automatically copied the file to disk2 because disk1 did not have enough free space. 
- -Here is a breakdown of all of the files: - -``` -$ ls -l /disk1/Videos/ -total 1 --rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv - -$ ls -l /disk2/Videos/ -total 30003 --rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv --rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv --rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv - -$ ls -l /media/Videos/ -total 30004 --rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv --rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv --rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv --rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv -``` - -When you copy files to your mergerfs mount, it will always copy the files to the hard disk that has enough free space. If none of the drives in the pool have enough free space, then you won’t be able to copy them. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/ - -作者:[Curt Warfield][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/rcurtiswarfield/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/mergerfs-816x346.png -[2]: https://github.com/trapexit/mergerfs -[3]: https://github.com/trapexit/mergerfs/releases -[4]: https://github.com/trapexit/mergerfs/releases/download/2.29.0/mergerfs-2.29.0-1.fc31.x86_64.rpm diff --git a/translated/tech/20200501 Using mergerfs to increase your virtual storage.md b/translated/tech/20200501 Using mergerfs to increase your virtual storage.md new file mode 100644 index 0000000000..936ed454d9 --- /dev/null +++ b/translated/tech/20200501 Using mergerfs to increase your virtual storage.md @@ -0,0 +1,147 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using mergerfs to increase your virtual storage) +[#]: via: (https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/) +[#]: author: (Curt Warfield https://fedoramagazine.org/author/rcurtiswarfield/) + +使用 mergefs 增加虚拟存储 +====== + +![][1] + +如果你想在一个媒体项目中用上多个磁盘或分区,而又不想丢失任何现有数据,但又想将所有文件都存放在一个驱动器下,该怎么办?这就是 mergefs 派上用场的地方! + +[mergerfs][2] 是旨在简化跨多个商业存储设备文件的存储和管理的联合文件系统。 + +你需要从[这个][3] github 页面获取最新的 RPM。Fedora 的版本名称中带有 _**fc**_ 和版本号。例如,以下是 Fedora 31 的版本: + +[mergerfs-2.29.0-1.fc31.x86_64.rpm][4] + +### 安装和配置 mergefs + +使用 sudo 安装已下载的 mergefs 软件包: + +``` +$ sudo dnf install mergerfs-2.29.0-1.fc31.x86_64.rpm +``` + +现在,你可以将多个磁盘挂载为一个驱动器。如果你有一台媒体服务器,并且希望所有媒体文件都显示在一个地方,这将很方便。如果将新文件上传到系统,那么可以将它们复制到 mergefs 目录,mergefs 会自动将它们复制具有足够可用空间的磁盘上。 + +这是使你更容易理解的例子: + +``` +$ df -hT | grep disk +/dev/sdb1 ext4 23M 386K 21M 2% /disk1 +/dev/sdc1 ext4 44M 1.1M 40M 3% /disk2 + +$ ls -l /disk1/Videos/ +total 1 +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv + +$ ls -l /disk2/Videos/ +total 2 +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv +-rw-rw-r--. 
1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv +``` + +在此例中挂载了两块磁盘,分别为 _disk1_ 和 _disk2_。两个驱动器都有一个包含文件的 _**Videos**_ 目录。 + +现在,我们将使用 mergefs 挂载这些驱动器,使它们看起来像一个更大的驱动器。 + +``` +$ sudo mergerfs -o defaults,allow_other,use_ino,category.create=mfs,moveonenospc=true,minfreespace=1M /disk1:/disk2 /media +``` + +mergefs 手册页非常广泛且复杂,因此我们将说明上面提到的选项。 + + * _defaults_:除非指定,否则将使用默认设置。 + * _allow_other_:允许 sudo 或 root 以外的用户查看文件系统。 + * _use_ino_:让 mergefs 提供文件/目录 inode 而不是 libfuse。虽然不是默认值,但建议你启用它,以便链接的文件共享相同的 inode 值。 + * _category.create=mfs_:根据可用空间在驱动器间传播文件。 + * _moveonenospc=true_:如果启用,那么如果写入失败,将进行扫描以查找具有最大可用空间的驱动器。 + * _minfreespace=1M_:最小使用空间值。 + * _disk1_:第一块硬盘。 + * _disk2_:第二块硬盘。 + * _/media_:挂载驱动器的目录。 + + + +看起来是这样的: + +``` +$ df -hT | grep disk +/dev/sdb1 ext4 23M 386K 21M 2% /disk1 +/dev/sdc1 ext4 44M 1.1M 40M 3% /disk2 + +$ df -hT | grep media +1:2 fuse.mergerfs 66M 1.4M 60M 3% /media +``` + +你可以看到现在 mergefs 挂载显示的总容量为 66M,这是两块硬盘的总容量。 + +继续示例: + +有一个叫 _Baby’s second Xmas.mkv_ 的 30M 视频。让我们将其复制到用 mergerfs 挂载的 _/media_ 文件夹中。 + +``` +$ ls -lh "Baby's second Xmas.mkv" +-rw-rw-r--. 1 curt curt 30M Apr 20 08:45 Baby's second Xmas.mkv +$ cp "Baby's second Xmas.mkv" /media/Videos/ +``` + +这是最终结果: + +``` +$ df -hT | grep disk +/dev/sdb1 ext4 23M 386K 21M 2% /disk1 +/dev/sdc1 ext4 44M 31M 9.8M 76% /disk2 + +$ df -hT | grep media +1:2 fuse.mergerfs 66M 31M 30M 51% /media +``` + +从磁盘空间利用率中可以看到,因为 disk1 没有足够的可用空间,所以 mergefs 自动将文件复制到 disk2。 + +这是所有文件详情: + +``` +$ ls -l /disk1/Videos/ +total 1 +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Our Wedding.mkv + +$ ls -l /disk2/Videos/ +total 30003 +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv +-rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv +-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv + +$ ls -l /media/Videos/ +total 30004 +-rw-r--r--. 1 curt curt 0 Mar 8 17:17 Baby's first Xmas.mkv +-rw-rw-r--. 1 curt curt 30720000 Apr 20 08:47 Baby's second Xmas.mkv +-rw-rw-r--. 1 curt curt 0 Mar 8 17:21 Halloween hijinks.mkv +-rw-r--r--. 
1 curt curt 0 Mar 8 17:17 Our Wedding.mkv +``` + +当你将文件复制到 mergefs 挂载点时,它将始终将文件复制到有足够可用空间的硬盘上。如果池中的所有驱动器都没有足够的可用空间,那么你将无法复制它们。 + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/using-mergerfs-to-increase-your-virtual-storage/ + +作者:[Curt Warfield][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/rcurtiswarfield/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2020/04/mergerfs-816x346.png +[2]: https://github.com/trapexit/mergerfs +[3]: https://github.com/trapexit/mergerfs/releases +[4]: https://github.com/trapexit/mergerfs/releases/download/2.29.0/mergerfs-2.29.0-1.fc31.x86_64.rpm From c6aea43b045b063f0e9a7d072b89a26abcd93efd Mon Sep 17 00:00:00 2001 From: geekpi Date: Fri, 8 May 2020 08:43:51 +0800 Subject: [PATCH 175/178] translating --- ...tudio To Replace Xfce With KDE Plasma Desktop Environment.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/sources/tech/20200507 Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment.md b/sources/tech/20200507 Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment.md index 586232d723..61c2007784 100644 --- a/sources/tech/20200507 Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment.md +++ b/sources/tech/20200507 Ubuntu Studio To Replace Xfce With KDE Plasma Desktop Environment.md @@ -1,5 +1,5 @@ [#]: collector: (lujun9972) -[#]: translator: ( ) +[#]: translator: (geekpi) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) From 0f1c6aa3bbbcf726d40eb12f1a14a931ffab9807 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 8 May 2020 09:40:39 +0800 Subject: [PATCH 176/178] PRF @geekpi --- ...200428 Upgrading Fedora 31 to Fedora 32.md | 36 +++++++++---------- 1 file changed, 18 insertions(+), 18 deletions(-) diff --git a/translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md b/translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md index 637bf41ce8..a5f6cc9ce0 100644 --- a/translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md +++ b/translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md @@ -1,6 +1,6 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) -[#]: reviewer: ( ) +[#]: reviewer: (wxy) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Upgrading Fedora 31 to Fedora 32) @@ -12,27 +12,27 @@ ![][1] -Fedora 32 [已经发布][2]。你可能想升级系统以获得 Fedora 中的最新功能。Fedora Workstation 有图形化的升级方法。另外,Fedora 提供了命令行方法,用于将 Fedora 30 升级到 Fedora 31。 +Fedora 32 [已经发布][2]。你可能想升级系统以获得 Fedora 中的最新功能。Fedora Workstation 有图形化的升级方法。另外,Fedora 提供了命令行方法,用于将 Fedora 31 升级到 Fedora 32。 -升级前,请访问 [Fedora 32 个常见 bug 的维基页面] [3],查看是否存在可能影响升级的问题。尽管 Fedora 社区试图确保升级正常进行,但是无法为用户可能使用的每种软硬件组合提供保证。 +升级前,请访问 [Fedora 32 个常见 bug 的维基页面][3],查看是否存在可能影响升级的问题。尽管 Fedora 社区试图确保升级正常进行,但是无法为用户可能使用的每种软硬件组合提供保证。 -### 将Fedora 31 Workstation 升级到 Fedora 32 +### 将 Fedora 31 Workstation 升级到 Fedora 32 -发布不久之后就会出现通知,告诉你有可用的升级。你可以单击通知启动 **GNOME Software**。或者,你可以从 GNOME Shell 中选择“软件”。 +在新版本发布不久之后就会出现通知,告诉你有可用的升级。你可以单击该通知启动 “GNOME 软件”。或者,你可以从 GNOME Shell 中选择“软件”。 -在 GNOME Software中 选择 _Updates_ 选项卡,你会看到一个页面通知你 Fedora 32 现在可用。 +在 “GNOME 软件”中选择更新Updates选项卡,你会看到一个页面通知你 Fedora 32 现在可用。 如果你在此页面看不到任何内容,请尝试使用左上方的重新加载按钮。发布后,所有系统可能都需要一段时间才能看到可用的升级。 -选择 _Download_ 获取升级包。你可以继续做事直到下载完成。然后使用 GNOME Software 重启系统并应用升级。升级需要时间,因此你可能需要喝杯咖啡,稍后再回来。 
+选择下载Download获取升级包。你可以继续做事直到下载完成。然后使用 “GNOME 软件”重启系统并应用升级。升级需要时间,因此你可能需要喝杯咖啡,稍后再回来。 ### 使用命令行 -如果你是从 Fedora 的先前版本升级的,那么你可能对 _dnf upgrade_ 插件很熟悉。此方法是从 Fedora 31 升级到 Fedora 32 的推荐和受支持的方法。使用此插件将使你轻松地升级到 Fedora 32。 +如果你是从 Fedora 的先前版本升级的,那么你可能对 `dnf upgrade` 插件很熟悉。这个方法是推荐和受支持的从 Fedora 31 升级到 Fedora 32 的方法。使用此插件将使你轻松地升级到 Fedora 32。 -#### 1\. 更新软件并备份系统 +#### 1、更新软件并备份系统 -在开始升级过程之前,请确保你有 Fedora 31 的最新软件。如果你安装了模块化软件,这尤为重要。dnf 和 GNOME Software 的最新版本对某些模块化流的升级过程进行了改进。要更新软件,请使用_ GNOME Software_ 或在终端中输入以下命令。 +在开始升级过程之前,请确保你有 Fedora 31 的最新软件。如果你安装了模块化软件modular software,这尤为重要。`dnf` 和 “GNOME 软件”的最新版本对某些模块化流的升级过程进行了改进。要更新软件,请使用 “GNOME 软件” 或在终端中输入以下命令。 ``` sudo dnf upgrade --refresh @@ -40,7 +40,7 @@ sudo dnf upgrade --refresh 此外,在继续操作之前,请确保备份系统。有关备份的帮助,请参阅 Fedora Magazine 上的[备份系列][4]。 -#### 2\. 安装 DNF 插件 +#### 2、安装 DNF 插件 接下来,打开终端并输入以下命令安装插件: @@ -48,7 +48,7 @@ sudo dnf upgrade --refresh sudo dnf install dnf-plugin-system-upgrade ``` -#### 3\. 使用 DNF 开始更新 +#### 3、使用 DNF 开始更新 现在,你的系统已更新、已备份、并且已安装 DNF 插件,你可以在终端中使用以下命令开始升级: @@ -56,17 +56,17 @@ sudo dnf install dnf-plugin-system-upgrade sudo dnf system-upgrade download --releasever=32 ``` -该命令将开始在本地下载计算机的所有升级,以准备升级。如果由于没有更新的软件包、损坏的依赖项或已淘汰的软件包而在升级时遇到问题,请在输入上述命令时添加 _‐allowerasing_ 标志。这将使 DNF 删除可能阻止系统升级的软件包。 +这个命令将开始在本地下载所有的升级包,为升级做准备。如果你在升级的时候因为没有更新的包、依赖关系破损或退役的包而出现问题,请在输入上述命令时添加 `--allowerasing` 标志。这将允许 DNF 移除可能阻碍系统升级的软件包。 -#### 4\. 重启并升级 +#### 4、重启并升级 -当上一个命令完成了所有升级的下载,你的系统就可以重新启动了。要将系统引导至升级过程,请在终端中输入以下命令: +当上一个命令完成了所有升级包的下载,你的系统就可以重新启动了。要将系统引导至升级过程,请在终端中输入以下命令: ``` sudo dnf system-upgrade reboot ``` -此后,系统将重启。在许多版本之前,_fedup_ 工具会在内核选择/引导页上创建一个新选项。使用 _dnf-plugin-system-upgrade_ 包,你的系统将重启进入 Fedora 31 当前安装的内核;这个是正常的。在选择内核之后,你的系统会立即开始升级过程。 +此后,系统将重启。在许多版本之前,`fedup` 工具会在内核选择/启动页上创建一个新选项。使用 `dnf-plugin-system-upgrade` 包,你的系统会重启进入 Fedora 31 当前安装的内核;这个是正常的。在选择内核之后,你的系统会立即开始升级过程。 现在可能是喝杯咖啡休息的好时机!完成后,系统将重启,你将能够登录到新升级的 Fedora 32 系统。 @@ -85,14 +85,14 @@ via: https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/ 作者:[Adam Šamalík][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://fedoramagazine.org/author/asamalik/ [b]: https://github.com/lujun9972 [1]: https://fedoramagazine.org/wp-content/uploads/2020/04/31-32-816x345.png -[2]: https://fedoramagazine.org/announcing-fedora-32/ +[2]: https://linux.cn/article-12164-1.html [3]: https://fedoraproject.org/wiki/Common_F32_bugs [4]: https://fedoramagazine.org/taking-smart-backups-duplicity/ [5]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png From ed260ec05ed019b957efea8774ede77278f690d1 Mon Sep 17 00:00:00 2001 From: Xingyu Wang Date: Fri, 8 May 2020 09:41:06 +0800 Subject: [PATCH 177/178] PUB @geekpi https://linux.cn/article-12195-1.html --- .../20200428 Upgrading Fedora 31 to Fedora 32.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) rename {translated/tech => published}/20200428 Upgrading Fedora 31 to Fedora 32.md (98%) diff --git a/translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md b/published/20200428 Upgrading Fedora 31 to Fedora 32.md similarity index 98% rename from translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md rename to published/20200428 Upgrading Fedora 31 to Fedora 32.md index a5f6cc9ce0..c2c4bc0301 100644 --- a/translated/tech/20200428 Upgrading Fedora 31 to Fedora 32.md +++ b/published/20200428 Upgrading Fedora 31 to 
Fedora 32.md @@ -1,8 +1,8 @@ [#]: collector: (lujun9972) [#]: translator: (geekpi) [#]: reviewer: (wxy) -[#]: publisher: ( ) -[#]: url: ( ) +[#]: publisher: (wxy) +[#]: url: (https://linux.cn/article-12195-1.html) [#]: subject: (Upgrading Fedora 31 to Fedora 32) [#]: via: (https://fedoramagazine.org/upgrading-fedora-31-to-fedora-32/) [#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/) From 26adc8d62d551e8bcf00e27f43dae24e412a9669 Mon Sep 17 00:00:00 2001 From: "Xingyu.Wang" Date: Fri, 8 May 2020 09:58:48 +0800 Subject: [PATCH 178/178] Rename sources/tech/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md to sources/news/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md --- ... Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename sources/{tech => news}/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md (100%) diff --git a/sources/tech/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md b/sources/news/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md similarity index 100% rename from sources/tech/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md rename to sources/news/20200508 Good News- You Can Now Buy the De-Googled -e-OS Smartphone from Fairphone.md