Mirror of https://github.com/LCTT/TranslateProject.git, synced 2024-12-26 21:30:55 +08:00

Translated by qhwdw

This commit is contained in: parent c4706ba603, commit 96dcd6f510

@@ -1,100 +0,0 @@
Translating by qhwdw

What’s behind the Intel design flaw forcing numerous patches?
============================================================

### There's obviously a big problem, but we don't know exactly what.

![](https://cdn.arstechnica.net/wp-content/uploads/2015/06/intel-48-core-larrabee-probably-640x427.jpg)

Both Windows and Linux are receiving significant security updates that can, in the worst case, cause performance to drop by half, to defend against a problem that as yet hasn't been fully disclosed.

Patches to the Linux kernel have been trickling in over the past few weeks. Microsoft has been [testing the Windows updates in the Insider program since November][3], and it is expected to put the alterations into mainstream Windows builds on Patch Tuesday next week. Microsoft's Azure has scheduled maintenance next week, and Amazon's AWS is scheduled for maintenance on Friday—presumably related.

Since the Linux patches [first came to light][4], a clearer picture of what seems to be wrong has emerged. While Linux and Windows differ in many regards, the basic elements of how these two operating systems—and indeed, every other x86 operating system, such as FreeBSD and [macOS][5]—handle system memory are the same, because these parts of the operating system are so tightly coupled to the capabilities of the processor.

### Keeping track of addresses

Every byte of memory in a system is implicitly numbered, those numbers being each byte's address. The very earliest operating systems operated using physical memory addresses, but physical memory addresses are inconvenient for lots of reasons. For example, there are often gaps in the addresses, and (particularly on 32-bit systems) physical addresses can be awkward to manipulate, requiring 36-bit numbers or even larger ones.

Accordingly, modern operating systems all depend on a broad concept called virtual memory. Virtual memory systems allow both programs and the kernels themselves to operate in a simple, clean, uniform environment. Instead of the physical addresses with their gaps and other oddities, every program, and the kernel itself, uses virtual addresses to access memory. These virtual addresses are contiguous—no need to worry about gaps—and sized conveniently to make them easy to manipulate. 32-bit programs see only 32-bit addresses, even if the physical addresses require 36 bits or more.

While this virtual addressing is transparent to almost every piece of software, the processor does ultimately need to know which physical memory a virtual address refers to. There's a mapping from virtual addresses to physical addresses, and that's stored in a large data structure called a page table. Operating systems build the page table, using a layout determined by the processor, and the processor and operating system in conjunction use the page table whenever they need to convert between virtual and physical addresses.

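To make the mechanism concrete, here is a toy page-table walk in C. It is a sketch only: real x86-64 hardware walks four (or five) levels of tables whose physical base comes from the CR3 register, while this version uses two levels and ordinary arrays just to show the shape of the virtual-to-physical mapping.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                 /* 4 KiB pages */
#define ENTRIES    512                /* entries per table level */
#define PRESENT    1ULL               /* low bit: this mapping exists */

typedef uint64_t pte_t;

/* Translate one virtual address with a two-level table walk.
 * Returns 1 and fills *phys on success, 0 if the page is unmapped. */
static int translate(pte_t *top, uint64_t virt, uint64_t *phys)
{
    uint64_t hi  = (virt >> (PAGE_SHIFT + 9)) & (ENTRIES - 1);
    uint64_t lo  = (virt >> PAGE_SHIFT) & (ENTRIES - 1);
    uint64_t off = virt & ((1ULL << PAGE_SHIFT) - 1);

    if (!(top[hi] & PRESENT))
        return 0;                     /* a gap in the address space */
    pte_t *next = (pte_t *)(uintptr_t)(top[hi] & ~PRESENT);
    if (!(next[lo] & PRESENT))
        return 0;
    *phys = (next[lo] & ~((1ULL << PAGE_SHIFT) - 1)) | off;
    return 1;
}

int main(void)
{
    static pte_t top[ENTRIES], second[ENTRIES];
    uint64_t phys;

    /* The operating system's job: map virtual page 3 to physical frame 7. */
    second[3] = (7ULL << PAGE_SHIFT) | PRESENT;
    top[0]    = (uint64_t)(uintptr_t)second | PRESENT;

    if (translate(top, (3ULL << PAGE_SHIFT) | 0x123, &phys))
        printf("virtual 0x3123 -> physical 0x%llx\n",
               (unsigned long long)phys);
    return 0;
}
```
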
This whole mapping process is so important and fundamental to modern operating systems and processors that the processor has a dedicated cache—the translation lookaside buffer, or TLB—that stores a certain number of virtual-to-physical mappings so that it can avoid using the full page table every time.

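Extending the sketch above, a TLB can be modeled as a small cache consulted before the full walk. The direct-mapped layout and the 64-slot size here are illustrative only; real TLBs are set-associative hardware structures with a few hundred or more entries.

```c
/* Appended to the page-walk sketch above: a tiny direct-mapped "TLB"
 * consulted before doing the full table walk. */
#define TLB_SLOTS 64

struct tlb_entry { uint64_t vpage, pframe; int valid; };
static struct tlb_entry tlb[TLB_SLOTS];

static int tlb_translate(pte_t *top, uint64_t virt, uint64_t *phys)
{
    uint64_t vpage = virt >> PAGE_SHIFT;
    struct tlb_entry *e = &tlb[vpage % TLB_SLOTS];

    if (e->valid && e->vpage == vpage) {     /* hit: no walk needed */
        *phys = (e->pframe << PAGE_SHIFT) | (virt & ((1ULL << PAGE_SHIFT) - 1));
        return 1;
    }
    if (!translate(top, virt, phys))         /* miss: do the full walk */
        return 0;
    *e = (struct tlb_entry){ vpage, *phys >> PAGE_SHIFT, 1 };  /* cache it */
    return 1;
}
```
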
The use of virtual memory gives us a number of useful features beyond the simplicity of addressing. Chief among these is that each individual program is given its own set of virtual addresses, with its own set of virtual-to-physical mappings. This is the fundamental technique used to provide "protected memory"; one program cannot corrupt or tamper with the memory of another program, because the other program's memory simply isn't part of the first program's mapping.

But this use of an individual mapping per process, and hence extra page tables, puts pressure on the TLB cache. The TLB isn't very big—typically a few hundred mappings in total—and the more page tables a system uses, the less likely it is that the TLB will include any particular virtual-to-physical translation.

### Half and half

To make the best use of the TLB, every mainstream operating system splits the range of virtual addresses into two. One half of the addresses is used for each program; the other half is used for the kernel. When switching between processes, only half the page table entries change—the ones belonging to the program. The kernel half is common to every program (because there's only one kernel), and so it can use the same page table mapping for every process. This helps the TLB enormously; while it still has to discard mappings belonging to the process' half of memory addresses, it can keep the mappings for the kernel's half.

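In concrete terms, on x86-64 Linux the split falls at the boundary of the canonical upper half of the address space; the exact kernel layout varies with configuration, and the specific example addresses below are merely typical. A sketch:

```c
#include <stdint.h>
#include <stdio.h>

/* Start of the canonical upper half on x86-64; kernel mappings
 * traditionally live above this line, user mappings below it. */
#define KERNEL_HALF_START 0xffff800000000000ULL

static const char *which_half(uint64_t va)
{
    return va >= KERNEL_HALF_START ? "kernel" : "user";
}

int main(void)
{
    uint64_t user_va   = 0x00007f0012345678ULL;  /* a typical user pointer */
    uint64_t kernel_va = 0xffffffff81000000ULL;  /* typical kernel text base */

    printf("%#llx is in the %s half\n", (unsigned long long)user_va,
           which_half(user_va));
    printf("%#llx is in the %s half\n", (unsigned long long)kernel_va,
           which_half(kernel_va));
    return 0;
}
```
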
This design isn't completely set in stone. Work was done on Linux to make it possible to give a 32-bit process the entire range of addresses, with no sharing between the kernel's page table and that of each program. While this gave the programs more address space, it carried a performance cost, because the TLB had to reload the kernel's page table entries every time kernel code needed to run. Accordingly, this approach was never widely used on x86 systems.

One downside of the decision to split the virtual address space between the kernel and each program is that the memory protection is weakened. If the kernel had its own set of page tables and virtual addresses, it would be afforded the same protection as different programs have from one another; the kernel's memory would be simply invisible. But with the split addressing, user programs and the kernel use the same address range, and, in principle, a user program would be able to read and write kernel memory.

To prevent this obviously undesirable situation, the processor and virtual addressing system have a concept of "rings" or "modes." x86 processors have lots of rings, but for this issue, only two are relevant: "user" (ring 3) and "supervisor" (ring 0). When running regular user programs, the processor is put into user mode, ring 3. When running kernel code, the processor is in ring 0, supervisor mode, also known as kernel mode.

These rings are used to protect the kernel memory from user programs. The page tables aren't just mapping from virtual to physical addresses; they also contain metadata about those addresses, including information about which rings can access an address. The kernel's page table entries are all marked as only being accessible to ring 0; the program's entries are marked as being accessible from any ring. If an attempt is made to access ring 0 memory while in ring 3, the processor blocks the access and generates an exception. The result of this is that user programs, running in ring 3, should not be able to learn anything about the kernel and its ring 0 memory.

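The bit positions below are the real ones from the x86 page-table entry format (bit 0 present, bit 1 writable, bit 2 user/supervisor); the check itself is performed by hardware on every access, and this C function just spells the logic out.

```c
#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT (1ULL << 0)   /* bit 0: mapping exists */
#define PTE_WRITE   (1ULL << 1)   /* bit 1: writable */
#define PTE_USER    (1ULL << 2)   /* bit 2: accessible from ring 3 */

/* ring is 3 for user code, 0 for kernel code. */
static int access_allowed(uint64_t pte, int ring, int is_write)
{
    if (!(pte & PTE_PRESENT))
        return 0;                 /* fault: no mapping at all */
    if (ring == 3 && !(pte & PTE_USER))
        return 0;                 /* fault: supervisor-only page */
    if (is_write && !(pte & PTE_WRITE))
        return 0;                 /* fault: page is read-only */
    return 1;
}

int main(void)
{
    uint64_t kernel_pte = PTE_PRESENT | PTE_WRITE;            /* U/S bit clear */
    uint64_t user_pte   = PTE_PRESENT | PTE_WRITE | PTE_USER;

    printf("ring 3 reading a kernel page: %s\n",
           access_allowed(kernel_pte, 3, 0) ? "allowed" : "blocked");
    printf("ring 3 reading its own page:  %s\n",
           access_allowed(user_pte, 3, 0) ? "allowed" : "blocked");
    return 0;
}
```
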
At least, that's the theory. The spate of patches and updates shows that somewhere this has broken down. This is where the big mystery lies.

### Moving between rings

Here's what we do know. Every modern processor performs a certain amount of speculative execution. For example, given some instructions that add two numbers and then store the result in memory, a processor might speculatively do the addition before ascertaining whether the destination in memory is actually accessible and writeable. In the common case, where the location _is_ writeable, the processor saves some time, because it does the arithmetic in parallel with figuring out the destination in memory. If it discovers that the location isn't accessible—for example, a program trying to write to an address that has no mapping and no physical location at all—then it will generate an exception, and the speculative execution is wasted.

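The scenario reads, in code, something like the following. The speculation is invisible at the C level, so the comments narrate what the hardware may plausibly be doing; none of this changes the program's visible behavior.

```c
#include <stdlib.h>

void add_and_store(int *dest, int a, int b)
{
    /* The processor can compute a + b speculatively, in parallel with
     * the page-table/TLB work needed to decide whether *dest is a
     * valid, writable location. */
    int sum = a + b;

    /* If dest is mapped and writable, the speculative result is simply
     * confirmed and retired: time saved. If dest is unmapped, the
     * processor raises a fault and discards the speculative result:
     * time wasted, but no architectural harm done. */
    *dest = sum;
}

int main(void)
{
    int *ok = malloc(sizeof *ok);
    if (ok)
        add_and_store(ok, 2, 3);   /* the common, fast case */
    free(ok);
    return 0;
}
```
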
Intel processors, specifically—[though not AMD ones][6]—allow speculative execution of ring 3 code that writes to ring 0 memory. The processors _do_ properly block the write, but the speculative execution minutely disturbs the processor state, because certain data will be loaded into cache and the TLB in order to ascertain whether the write should be allowed. This in turn means that some operations will be a few cycles quicker, or a few cycles slower, depending on whether their data is still in cache or not. As well as this, Intel's processors have special features, such as the Software Guard Extensions (SGX) introduced with Skylake processors, that slightly change how attempts to access memory are handled. Again, the processor does still protect ring 0 memory from ring 3 programs, but again, its caches and other internal state are changed, creating measurable differences.

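Those cache-induced cycle differences are exactly what an attacker can measure. The sketch below times the same load twice, once with the target cache line present and once after flushing it; it assumes an x86-64 machine with GCC/Clang intrinsics, and it is only the measurement primitive, not an attack.

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtscp, _mm_clflush, _mm_mfence */

static uint64_t time_load(volatile uint8_t *p)
{
    unsigned int aux;
    uint64_t start = __rdtscp(&aux);   /* timestamp before the load */
    (void)*p;                          /* the load being timed */
    uint64_t end = __rdtscp(&aux);     /* timestamp after the load */
    return end - start;
}

int main(void)
{
    static uint8_t target[64];

    (void)target[0];                   /* warm up: bring the line into cache */
    uint64_t hot = time_load(target);

    _mm_clflush((void *)target);       /* evict the line from all caches */
    _mm_mfence();
    uint64_t cold = time_load(target);

    /* Cold loads typically take many more cycles than hot ones; that
     * cycle gap is the "measurable difference" described in the text. */
    printf("cached: %llu cycles, flushed: %llu cycles\n",
           (unsigned long long)hot, (unsigned long long)cold);
    return 0;
}
```
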
What we don't know yet is just how much kernel memory information can be leaked to user programs or how easily that leaking can occur. And which Intel processors are affected? Again, it's not entirely clear, but indications are that every Intel chip with speculative execution (which is all the mainstream processors introduced since the Pentium Pro, from 1995) can leak information this way.

The first wind of this problem came from researchers at [Graz Technical University in Austria][7]. The information leakage they discovered was enough to undermine kernel mode Address Space Layout Randomization (kernel ASLR, or KASLR). ASLR is something of a last-ditch effort to prevent the exploitation of [buffer overflows][8]. With ASLR, programs and their data are placed at random memory addresses, which makes it a little harder for attackers to exploit security flaws. KASLR applies that same randomization to the kernel so that the kernel's data (including page tables) and code are randomly located.

The Graz researchers developed [KAISER][9], a set of Linux kernel patches to defend against the problem.

If the problem were just that it enabled the derandomization of ASLR, this probably wouldn't be a huge disaster. ASLR is a nice protection, but it's known to be imperfect. It's meant to be a hurdle for attackers, not an impenetrable barrier. The industry reaction—a fairly major change to both Windows and Linux, developed with some secrecy—suggests that it's not just ASLR that's defeated and that a more general ability to leak information from the kernel has been developed. Indeed, researchers have [started to tweet][10] that they're able to leak and read arbitrary kernel data. Another possibility is that the flaw can be used to escape out of a virtual machine and compromise a hypervisor.

The solution that both the Windows and Linux developers have picked is substantially the same, and derived from that KAISER work: the kernel page table entries are no longer shared with each process. In Linux, this is called Kernel Page Table Isolation (KPTI).

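Conceptually, KPTI gives each process two page-table roots and switches between them on every kernel entry and exit. This sketch models that idea with plain assignments standing in for CR3 reloads; it is a simplified illustration of the scheme, not the kernel's actual code, and the field names and example values are invented.

```c
#include <stdint.h>

struct mm {
    uint64_t user_cr3;    /* user mappings plus a tiny kernel trampoline */
    uint64_t kernel_cr3;  /* user mappings plus the full kernel */
};

static uint64_t current_cr3;  /* stands in for the CR3 register */

static void enter_kernel(struct mm *mm)
{
    current_cr3 = mm->kernel_cr3;  /* on real hardware, a mov to %cr3,
                                      which also flushes non-global TLB
                                      entries */
}

static void return_to_user(struct mm *mm)
{
    current_cr3 = mm->user_cr3;    /* kernel mappings vanish again */
}

int main(void)
{
    struct mm proc = { .user_cr3 = 0x1000, .kernel_cr3 = 0x2000 };

    enter_kernel(&proc);           /* e.g., on a system call */
    return_to_user(&proc);         /* back to the almost-empty user view */
    return current_cr3 == 0x1000 ? 0 : 1;
}
```
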
With the patches, the memory address space is still split in two; it's just that the kernel half is almost empty. It's not quite empty, because a few kernel pieces need to be mapped permanently, whether the processor is running in ring 3 _or_ ring 0, but it's close to empty. This means that even if a malicious user program tries to probe kernel memory and leak information, it will fail—there's simply nothing to leak. The real kernel page tables are only used when the kernel itself is running.

This undermines the very reason for the split address space in the first place. The TLB now needs to clear out any entries related to the real kernel page tables every time it switches to a user program, putting an end to the performance saving that splitting enabled.

The impact of this will vary depending on the workload. Every time a program makes a call into the kernel—to read from disk, to send data to the network, to open a file, and so on—that call will be a little more expensive, since it will force the TLB to be flushed and the real kernel page table to be loaded. Programs that don't use the kernel much might see a hit of perhaps 2-3 percent—there's still some overhead because the kernel always has to run occasionally, to handle things like multitasking.

But workloads that call into the kernel a ton will see a much greater performance drop-off. In a benchmark, a program that does virtually nothing _other_ than call into the kernel saw [its performance drop by about 50 percent][11]; in other words, each call into the kernel took twice as long with the patch as it did without. Benchmarks that use Linux's loopback networking also see a big hit, such as [17 percent][12] in this Postgres benchmark. Real database workloads using real networking should see a lower impact, because with real networks, the overhead of calling into the kernel tends to be dominated by the overhead of using the actual network.

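A rough, unscientific version of such a "do nothing but call the kernel" benchmark can be written in a few lines; the absolute numbers depend on the CPU, the kernel version, and whether the patches are enabled, so treat the output as illustrative only.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    enum { N = 1000000 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)
        syscall(SYS_getpid);           /* forces a real kernel entry,
                                          bypassing any libc caching */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.1f ns per getpid syscall\n", ns / N);
    return 0;
}
```

Running this on the same machine with and without the patches (on Linux, toggled at boot with the kernel's page-table-isolation options) would show the per-syscall cost difference the article describes.
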
While Intel systems are the ones known to have the defect, they may not be the only ones affected. Some platforms, such as SPARC and IBM's S390, are immune to the problem, as their processor memory management doesn't need the split address space and shared kernel page tables; operating systems on those platforms have always isolated their kernel page tables from user mode ones. But others, such as ARM, may not be so lucky; [comparable patches for ARM Linux][13] are under development.

[PETER BRIGHT][14] is Technology Editor at Ars. He covers Microsoft, programming and software development, Web technology and browsers, and security. He is based in Brooklyn, NY.

--------------------------------------------------------------------------------

via: https://arstechnica.com/gadgets/2018/01/whats-behind-the-intel-design-flaw-forcing-numerous-patches/

Author: [PETER BRIGHT][a]

Translator: [译者ID](https://github.com/译者ID)

Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).

[a]:https://arstechnica.com/author/peter-bright/
[1]:https://arstechnica.com/author/peter-bright/
[2]:https://arstechnica.com/gadgets/2018/01/whats-behind-the-intel-design-flaw-forcing-numerous-patches/?comments=1
[3]:https://twitter.com/aionescu/status/930412525111296000
[4]:https://lwn.net/SubscriberLink/741878/eb6c9d3913d7cb2b/
[5]:https://twitter.com/aionescu/status/948609809540046849
[6]:https://lkml.org/lkml/2017/12/27/2
[7]:https://gruss.cc/files/kaiser.pdf
[8]:https://arstechnica.com/information-technology/2015/08/how-security-flaws-work-the-buffer-overflow/
[9]:https://github.com/IAIK/KAISER
[10]:https://twitter.com/brainsmoke/status/948561799875502080
[11]:https://twitter.com/grsecurity/status/947257569906757638
[12]:https://www.postgresql.org/message-id/20180102222354.qikjmf7dvnjgbkxe@alap3.anarazel.de
[13]:https://lwn.net/Articles/740393/
[14]:https://arstechnica.com/author/peter-bright
[15]:https://arstechnica.com/author/peter-bright