From 4d68f76999a3729fbd9c3303f2e4df66a6a09a00 Mon Sep 17 00:00:00 2001 From: Bright Huang Date: Mon, 12 Dec 2022 16:28:49 +0800 Subject: [PATCH] first translation --- ...ng Linus-s Law for open source security.md | 90 ------------------ ...ng Linus-s Law for open source security.md | 91 +++++++++++++++++++ 2 files changed, 91 insertions(+), 90 deletions(-) delete mode 100644 sources/talk/20210209 Understanding Linus-s Law for open source security.md create mode 100644 translated/talk/20210209 Understanding Linus-s Law for open source security.md diff --git a/sources/talk/20210209 Understanding Linus-s Law for open source security.md b/sources/talk/20210209 Understanding Linus-s Law for open source security.md deleted file mode 100644 index 258d401019..0000000000 --- a/sources/talk/20210209 Understanding Linus-s Law for open source security.md +++ /dev/null @@ -1,90 +0,0 @@ -[#]: collector: (lujun9972) -[#]: translator: (CanYellow) -[#]: reviewer: ( ) -[#]: publisher: ( ) -[#]: url: ( ) -[#]: subject: (Understanding Linus's Law for open source security) -[#]: via: (https://opensource.com/article/21/2/open-source-security) -[#]: author: (Seth Kenlon https://opensource.com/users/seth) - -Understanding Linus's Law for open source security -====== -Linus's Law is that given enough eyeballs, all bugs are shallow. How -does this apply to open source software security? -![Hand putting a Linux file folder into a drawer][1] - -In 2021, there are more reasons why people love Linux than ever before. In this series, I'll share 21 different reasons to use Linux. This article discusses Linux's influence on the security of open source software. - -An often-praised virtue of open source software is that its code can be reviewed (or "audited," as security professionals like to say) by anyone and everyone. However, if you actually ask many open source users when the last time they reviewed code was, you might get answers ranging from a blank stare to an embarrassed murmur. And besides, there are some really big open source applications out there, so it can be difficult to review every single line of code effectively. - -Extrapolating from these slightly uncomfortable truths, you have to wonder: When nobody looks at the code, does it really matter whether it's open or not? - -### Should you trust open source? - -We tend to make a trite assumption in hobbyist computing that open source is "more secure" than anything else. We don't often talk about what that means, what the basis of comparison is ("more" secure than what?), or how the conclusion has even been reached. It's a dangerous statement to make because it implies that as long as you call something _open source_, it automatically and magically inherits enhanced security. That's not what open source is about, and in fact, it's what open source security is very much against. - -You should never assume an application is secure unless you have personally audited and understood its code. Once you have done this, you can assign _ultimate trust_ to that application. Ultimate trust isn't a thing you do on a computer; it's something you do in your own mind: You trust software because you choose to believe that it is secure, at least until someone finds a way to exploit that software. - -You're the only person who can place ultimate trust in that code, so every user who wants that luxury must audit the code for themselves. Taking someone else's word for it doesn't count! 
- -So until you have audited and understood a codebase for yourself, the maximum trust level you can give to an application is a spectrum ranging from approximately, _not trustworthy at all_ to _pretty trustworthy_. There's no cheat sheet for this. It's a personal choice you must make for yourself. If you've heard from people you strongly trust that an application is secure, then you might trust that software more than you trust something for which you've gotten no trusted recommendations. - -Because you cannot audit proprietary (non-open source) code, you can never assign it _ultimate trust_. - -### Linus's Law - -The reality is, not everyone is a programmer, and not everyone who is a programmer has the time to dedicate to reviewing hundreds and hundreds of lines of code. So if you're not going to audit code yourself, then you must choose to trust (to some degree) the people who _do_ audit code. - -So exactly who does audit code, anyway? - -Linus's Law asserts that _given enough eyeballs, all bugs are shallow_, but we don't really know how many eyeballs are "enough." However, don't underestimate the number. Software is very often reviewed by more people than you might imagine. The original developer or developers obviously know the code that they've written. However, open source is often a group effort, so the longer code is open, the more software developers end up seeing it. A developer must review major portions of a project's code because they must learn a codebase to write new features for it. - -Open source packagers also get involved with many projects in order to make them available to a Linux distribution. Sometimes an application can be packaged with almost no familiarity with the code, but often a packager gets familiar with a project's code, both because they don't want to sign off on software they don't trust and because they may have to make modifications to get it to compile correctly. Bug reporters and triagers also sometimes get familiar with a codebase as they try to solve anomalies ranging from quirks to major crashes. Of course, some bug reporters inadvertently reveal code vulnerabilities not by reviewing it themselves but by bringing attention to something that obviously doesn't work as intended. Sysadmins frequently get intimately familiar with the code of an important software their users rely upon. Finally, there are security researchers who dig into code exclusively to uncover potential exploits. - -### Trust and transparency - -Some people assume that because major software is composed of hundreds of thousands of lines of code, it's basically impossible to audit. Don't be fooled by how much code it takes to make an application run. You don't actually have to read millions of lines. Code is highly structured, and exploitable flaws are rarely just a single line hidden among the millions of lines; there are usually whole functions involved. - -There are exceptions, of course. Sometimes a serious vulnerability is enabled with just one system call or by linking to one flawed library. Luckily, those kinds of errors are relatively easy to notice, thanks to the active role of security researchers and vulnerability databases. - -Some people point to bug trackers, such as the [Common Vulnerabilities and Exposures (CVE)][2] website, and deduce that it's actually as plain as day that open source isn't secure. After all, hundreds of security risks are filed against lots of open source projects, out in the open for everyone to see. Don't let that fool you, though. 
Just because you don't get to see the flaws in closed software doesn't mean those flaws don't exist. In fact, we know that they do because exploits are filed against them, too. The difference is that _all_ exploits against open source applications are available for developers (and users) to see so those flaws can be mitigated. That's part of the system that boosts trust in open source, and it's wholly missing from proprietary software. - -There may never be "enough" eyeballs on any code, but the stronger and more diverse the community around the code, the better chance there is to uncover and fix weaknesses. - -### Trust and people - -In open source, the probability that many developers, each working on the same project, have noticed something _not secure_ but have all remained equally silent about that flaw is considered to be low because humans rarely mutually agree to conspire in this way. We've seen how disjointed human behavior can be recently with COVID-19 mitigation: - - * We've all identified a flaw (a virus). - * We know how to prevent it from spreading (stay home). - * Yet the virus continues to spread because one or more people deviate from the mitigation plan. - - - -The same is true for bugs in software. If there's a flaw, someone noticing it will bring it to light (provided, of course, that someone sees it). - -However, with proprietary software, there can be a high probability that many developers working on a project may notice something not secure but remain equally silent because the proprietary model relies on paychecks. If a developer speaks out against a flaw, then that developer may at best hurt the software's reputation, thereby decreasing sales, or at worst, may be fired from their job. Developers being paid to work on software in secret do not tend to talk about its flaws. If you've ever worked as a developer, you've probably signed an NDA, and you've been lectured on the importance of trade secrets, and so on. Proprietary software encourages, and more often enforces, silence even in the face of serious flaws. - -### Trust and software - -Don't trust software you haven't audited. - -If you must trust software you haven't audited, then choose to trust code that's exposed to many developers who independently are likely to speak up about a vulnerability. - -Open source isn't inherently more secure than proprietary software, but the systems in place to fix it are far better planned, implemented, and staffed. 
- 
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/21/2/open-source-security
-
-作者:[Seth Kenlon][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/seth
-[b]: https://github.com/lujun9972
-[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
-[2]: https://cve.mitre.org
diff --git a/translated/talk/20210209 Understanding Linus-s Law for open source security.md b/translated/talk/20210209 Understanding Linus-s Law for open source security.md
new file mode 100644
index 0000000000..70ce78b0dd
--- /dev/null
+++ b/translated/talk/20210209 Understanding Linus-s Law for open source security.md
@@ -0,0 +1,91 @@
+[#]: collector: (lujun9972)
+[#]: translator: (CanYellow)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Understanding Linus's Law for open source security)
+[#]: via: (https://opensource.com/article/21/2/open-source-security)
+[#]: author: (Seth Kenlon https://opensource.com/users/seth)
+
+理解开源安全中的李纳斯定律
+======
+李纳斯定律 (Linus's Law) 即“只要有足够多的眼睛关注,任何漏洞都无处隐藏” (given enough eyeballs, all bugs are shallow)。那么李纳斯定律是如何应用于开源软件安全的呢?
+
+![Hand putting a Linux file folder into a drawer][1]
+
+2021 年,有更多的原因使人们比以往更钟情 Linux。在本系列中,我将分享 21 个使用 Linux 的不同原因,这篇文章则讨论 Linux 对开源软件安全的影响。
+
+开源软件的一个常被赞扬的优点是它的代码能够被任何人检查(安全专家通常称之为“代码审计”)。然而,如果你真的去问很多开源软件用户他们上一次检查代码是什么时候,你大概只能收获他们茫然的眼神或者尴尬的低语。此外,对于一些相当大型的开源应用,想要高效地审查每一行代码也是困难的。
+
+根据上述这些稍显令人不安的事实,我们不得不思考:如果没有人查看这些代码,它是开源还是闭源真的有关系吗?
+
+
+### 你应该相信开源吗?
+
+计算机爱好者往往不假思索地认定开源软件比其他软件“更安全”。我们通常不会讨论这意味着什么:比较的基础是什么(比什么“更”安全?),或者这一结论是如何得出的。这是一个危险的说法,因为它暗示只要你把某样东西称之为“开源”,它就自动如魔法般地继承了更高的安全性。开源并不是这个意思,事实上,这恰恰是开源安全所极力反对的。
+
+除非你已经亲自审计并理解了软件代码,否则就不应该假定一个应用程序是安全的。一旦你做到了这一点,就可以给予它_最高信任_。_最高信任_不是发生在计算机上的事情,而是发生在你自己头脑中的事情:你信任一款软件,是因为你选择相信它是安全的,至少在有人找到利用它的方法之前是这样。
+
+使用者本人是唯一可以对这些代码给予_最高信任_的人,因此任何想要获得这样待遇的用户都必须亲自审查代码。相信其他人的话是不管用的!
+
+因此,在你亲自审计并理解某个代码库之前,你能给予一款应用程序的最高信任程度,只能处于从_完全不可信_到_相当可信_之间的某个位置。这并没有现成的对照表可查,而是一个你必须亲自做出的个人选择。如果有你非常信任的人告诉你某款应用程序是安全的,相比那些没有获得任何可信推荐的软件,你可能会更信任它。
+
+因为你无法审计专有(闭源)软件的代码,所以你永远无法给予它_最高信任_。
+
+### 李纳斯定律
+
+现实是,并不是每个人都是程序员,也不是每个程序员都有时间去审查成百上千行代码。因此,如果你不打算亲自审计代码,就只能选择(在一定程度上)相信那些_确实_在审计代码的人。
+
+那么,究竟有哪些人在审计代码呢?
+
+李纳斯定律声称_只要有足够的眼睛关注,任何漏洞都无处隐藏_,然而我们并不知道多少双眼睛才算“足够”。请不要低估这一数量,软件往往经过了远超你想象数量的人员审查。最初的开发者显然清楚自己写下的代码,不过开源软件往往是团队成果,代码开源的时间越长,看过它的开发人员就越多。开发人员还必须阅读项目代码的主要部分,因为要为项目编写新功能,就必须先学习它的代码库。
+
+同时,为了使开源软件能够在 Linux 发行版上可用,负责打包的开发人员也会参与到许多项目中。有时一个应用程序可能会在打包人员不熟悉其代码的情况下被打包,但更多时候,打包人员是熟悉所打包项目的代码的。这不仅是因为他们不想为自己不信任的软件签名背书,还因为他们可能不得不修改代码才能让程序正确编译。缺陷报告者和分类人员有时也会熟悉代码库,因为他们要排查的异常小到奇怪的小毛病,大到严重的崩溃。当然,有些缺陷报告者并不是亲自审查代码,而是通过指出明显不符合预期的行为,无意间揭示了代码中的漏洞。系统管理员通常也会对用户所依赖的重要软件的代码了如指掌。最后,还有安全研究人员,他们专门深入代码内部以揭露潜在的漏洞。
+
+### 信任与透明
+
+很多人先入为主地认为,大型软件基本不可能审计,因为它由数十万行代码组成。不要被运行一个应用程序所需的代码量吓倒,你其实不必阅读数百万行代码。代码是高度结构化的,可被利用的缺陷很少是藏在数百万行代码之中的孤立一行,通常会牵涉整个函数。
+
+当然,也有例外。有时,仅仅一个系统调用,或者链接了一个有缺陷的库,就足以造成严重的漏洞。幸运的是,多亏安全研究人员和漏洞数据库所扮演的积极角色,这类错误相对容易被发现。
+
+一些人会指着缺陷追踪系统,比如 [通用漏洞披露 (CVE)][2] 网站,推断开源软件显然是不安全的:毕竟,大量开源项目被公开提交了成百上千的安全风险,人人都能看到。但不要被这些数字欺骗了。只是因为你看不到闭源软件中的缺陷,并不意味着这些缺陷不存在。事实上,我们知道它们确实存在,因为针对闭源软件的漏洞利用同样在不断被提交。区别在于,针对开源应用的_所有_漏洞利用都是开发者(以及用户)可见的,因此这些缺陷能够得到缓解。这正是提升开源软件信任度的机制的一部分,而专有软件完全缺少这一点。
+
+对于任何代码而言,可能永远没有“足够的眼睛”来发现漏洞,但是围绕代码的社区越壮大、越多样化,就越有机会发现和修复代码中的缺陷。
+
+### 信任与人
+
+在开源社区中,参与同一项目的众多开发者都注意到了某个_不安全_的缺陷、却全都保持沉默的可能性被认为是很低的,因为人们很少会一致同意以这种方式合谋。我们最近已经在应对 COVID-19 的过程中看到,人类的行为可以多么难以协调一致:
+
+ * 我们都识别出了漏洞(病毒)。
+ * 我们知晓如何避免它传播(待在家里)。
+ * 然而病毒还是在持续传播,因为总会有一个或者多个人偏离缓解疫情的计划。
+
+软件中的缺陷也一样:如果存在缺陷,注意到它的人总会把它公之于众(当然,前提是确实有人看到了它)。
+
+然而就专有软件而言,参与项目的众多开发者即使注意到不安全的缺陷也全都保持沉默的可能性却很高,因为专有模式依赖的是薪水。如果开发者把缺陷说出来,往好了说,他会损害该软件的声誉,从而降低销量;往坏了说,他可能因此被解雇。拿着薪水秘密开发软件的开发者,往往不会谈论软件的缺陷。如果你当过开发者,你可能签署过 NDA(译者注:Non-Disclosure Agreement,保密协议),也被反复告诫过商业机密的重要性等等。即使面对严重的缺陷,专有软件也鼓励保持沉默,更多时候甚至是强制要求沉默。
+
+
+### 信任与软件
+
+不要信任未经你审计的软件。
+
+如果你必须信任未经自己审计的软件,那就选择信任那些已经展现在众多开发者面前的代码,因为这些开发者各自独立,更有可能把漏洞说出来。
+
+开源软件并非天生就比专有软件更安全,但用于修复它的机制在规划、实施和人员配备上都要好得多。
+
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/21/2/open-source-security
+
+作者:[Seth Kenlon][a]
+选题:[lujun9972][b]
+译者:[CanYellow](https://github.com/CanYellow)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/seth
+[b]: https://github.com/lujun9972
+[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/yearbook-haff-rx-linux-file-lead_0.png?itok=-i0NNfDC (Hand putting a Linux file folder into a drawer)
+[2]: https://cve.mitre.org