Translated by qhwdw

Internet protocols are changing
============================================================
![](https://blog.apnic.net/wp-content/uploads/2017/12/evolution-555x202.png)
When the Internet started to become widely used in the 1990s, most traffic used just a few protocols: IPv4 routed packets, TCP turned those packets into connections, SSL (later TLS) encrypted those connections, DNS named hosts to connect to, and HTTP was often the application protocol using it all.
For many years, there were negligible changes to these core Internet protocols; HTTP added a few new headers and methods, TLS slowly went through minor revisions, TCP adapted congestion control, and DNS introduced features like DNSSEC. The protocols themselves looked about the same on the wire for a very long time (excepting IPv6, which already gets its fair amount of attention in the network operator community).
As a result, network operators, vendors, and policymakers that want to understand (and sometimes, control) the Internet have adopted a number of practices based upon these protocols' wire footprint — whether intended to debug issues, improve quality of service, or impose policy.
Now, significant changes to the core Internet protocols are underway. While they are intended to be compatible with the Internet at large (since they won't get adoption otherwise), they might be disruptive to those who have taken liberties with undocumented aspects of protocols or made an assumption that things won't change.
#### Why we need to change the Internet
There are a number of factors driving these changes.
First, the limits of the core Internet protocols have become apparent, especially regarding performance. Because of structural problems in the application and transport protocols, the network was not being used as efficiently as it could be, degrading end-user-perceived performance (in particular, latency).
This translates into a strong motivation to evolve or replace those protocols because there is a [large body of experience showing the impact of even small performance gains][14].
Second, the ability to evolve Internet protocols — at any layer — has become more difficult over time, largely thanks to the unintended uses by networks discussed above. For example, HTTP proxies that tried to compress responses made it more difficult to deploy new compression techniques; TCP optimization in middleboxes made it more difficult to deploy improvements to TCP.
Finally, [we are in the midst of a shift towards more use of encryption on the Internet][15], first spurred by Edward Snowden's revelations in 2015. That's really a separate discussion, but it is relevant here in that encryption is one of the best tools we have to ensure that protocols can evolve.
Let's have a look at what's happened, what's coming next, how it might impact networks, and how networks impact protocol design.
#### HTTP/2
[HTTP/2][16] (based on Google's SPDY) was the first notable change — standardized in 2015, it multiplexes multiple requests onto one TCP connection, so that requests no longer need to queue on the client or block one another. It is now widely deployed, and supported by all major browsers and web servers.
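The effect of multiplexing can be sketched with a toy interleaver — a hypothetical illustration of the stream model, not the real HTTP/2 framing:

```python
# Toy illustration of HTTP/2-style multiplexing: frames from several
# streams are interleaved on one connection instead of whole responses
# queueing one after another. (Not the real HTTP/2 frame layout.)

def interleave(streams):
    """streams: dict of stream_id -> list of payload chunks.
    Yields (stream_id, chunk) tuples round-robin, so no single
    response monopolizes the connection."""
    pending = {sid: list(chunks) for sid, chunks in streams.items()}
    while pending:
        for sid in sorted(pending):
            chunk = pending[sid].pop(0)
            yield (sid, chunk)
        pending = {sid: c for sid, c in pending.items() if c}

# Two responses share the connection; neither waits for the other.
frames = list(interleave({1: ["a1", "a2"], 3: ["b1", "b2", "b3"]}))
```

With HTTP/1.1, stream 3 would only start after stream 1 finished; here its chunks flow alongside.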
From a network's viewpoint, HTTP/2 made a few notable changes. First, it's a binary protocol, so any device that assumes it's HTTP/1.1 is going to break.
That breakage was one of the primary reasons for another big change in HTTP/2; it effectively requires encryption. This gives it a better chance of avoiding interference from intermediaries that assume it's HTTP/1.1, or do more subtle things like strip headers or block new protocol extensions — both things that had been seen by some of the engineers working on the protocol, causing significant support problems for them.
[HTTP/2 also requires TLS 1.2 to be used when it is encrypted][17], and [blacklists][18] cipher suites that were judged to be insecure — with the effect of only allowing ephemeral keys. See the TLS 1.3 section for potential impacts here.
Finally, HTTP/2 allows more than one host's requests to be [coalesced onto a connection][19], to improve performance by reducing the number of connections (and thereby, congestion control contexts) used for a page load.
For example, you could have a connection for `www.example.com`, but also use it for requests for `images.example.com`. [Future protocol extensions might also allow additional hosts to be added to the connection][20], even if they weren't listed in the original TLS certificate used for it. As a result, the assumption that the traffic on a connection is limited to the purpose it was initiated for isn't going to hold.
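The basic reuse rule can be sketched roughly like this (simplified: real coalescing also requires the hosts to resolve to the same server address, which is omitted, and `can_coalesce` is a hypothetical helper name):

```python
# Sketch of HTTP/2 connection coalescing: a client may reuse an open
# connection for another host if the TLS certificate presented on that
# connection covers the new host too. fnmatch-style wildcard matching
# is a simplification of real certificate name matching.

from fnmatch import fnmatch

def can_coalesce(cert_names, host):
    """cert_names: DNS names (possibly wildcards) from the certificate's
    subjectAltName. Returns True if `host` may reuse the connection."""
    return any(fnmatch(host, pattern) for pattern in cert_names)

# Certificate presented on the connection to www.example.com:
cert = ["www.example.com", "*.example.com"]
```

So a request for `images.example.com` rides the existing connection, while `example.org` needs a new one.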
Despite these changes, it's worth noting that HTTP/2 doesn't appear to suffer from significant interoperability problems or interference from networks.
#### TLS 1.3
[TLS 1.3][21] is just going through the final processes of standardization and is already supported by some implementations.
Don't be fooled by its incremental name; this is effectively a new version of TLS, with a much-revamped handshake that allows application data to flow from the start (often called 0RTT). The new design relies upon ephemeral key exchange, thereby ruling out static keys.
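The property at stake can be illustrated with a bare-bones Diffie-Hellman exchange — toy parameters for illustration only, nothing like a production implementation:

```python
import secrets

# Toy Diffie-Hellman exchange to illustrate ephemeral keys: both sides
# generate a brand-new private value per connection, so a recording of
# the traffic cannot be decrypted later even if a server's long-term
# key leaks. The modulus is a small Mersenne prime, far too small for
# real use; TLS 1.3 uses groups like X25519 instead.

P = 2**61 - 1          # toy prime modulus
G = 2                  # generator

def ephemeral_keypair():
    priv = secrets.randbelow(P - 2) + 2    # fresh randomness per session
    return priv, pow(G, priv, P)

a_priv, a_pub = ephemeral_keypair()        # "client"
b_priv, b_pub = ephemeral_keypair()        # "server"
shared_a = pow(b_pub, a_priv, P)           # client's view of the secret
shared_b = pow(a_pub, b_priv, P)           # server's view of the secret
```

Both sides derive the same secret, yet neither private value is ever stored long-term — which is exactly what defeats the decrypt-with-the-server's-static-key technique described below.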
This has caused concern from some network operators and vendors — in particular those who need visibility into whats happening inside those connections.
For example, consider the datacentre for a bank that has regulatory requirements for visibility. By sniffing traffic in the network and decrypting it with the static keys of their servers, they can log legitimate traffic and identify harmful traffic, whether it be attackers from the outside or employees trying to leak data from the inside.
TLS 1.3 doesn't support that particular technique for intercepting traffic, since it's also [a form of attack that ephemeral keys protect against][22]. However, since they have regulatory requirements to both use modern encryption protocols and to monitor their networks, this puts those network operators in an awkward spot.
There's been much debate about whether regulations require static keys, whether alternative approaches could be just as effective, and whether weakening security for the entire Internet for the benefit of relatively few networks is the right solution. Indeed, it's still possible to decrypt traffic in TLS 1.3, but you need access to the ephemeral keys to do so, and by design, they aren't long-lived.
At this point it doesn't look like TLS 1.3 will change to accommodate these networks, but there are rumblings about creating another protocol that allows a third party to observe what's going on — and perhaps more — for these use cases. Whether that gets traction remains to be seen.
#### QUIC
During work on HTTP/2, it became evident that TCP has similar inefficiencies. Because TCP is an in-order delivery protocol, the loss of one packet can prevent those in the buffers behind it from being delivered to the application. For a multiplexed protocol, this can make a big difference in performance.
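Head-of-line blocking can be simulated in a few lines — a toy in-order reassembly buffer, not real TCP:

```python
# Toy model of TCP head-of-line blocking: segments must be delivered
# to the application in order, so one missing segment strands
# everything that arrived after it.

def deliverable(received, next_expected):
    """received: set of segment numbers that have arrived.
    Returns the list of segments that can be handed to the application."""
    out = []
    while next_expected in received:
        out.append(next_expected)
        next_expected += 1
    return out

# Segments 2..5 arrived but segment 1 was lost: nothing is delivered,
# even if another multiplexed stream's data lives entirely in 2 and 3.
stalled = deliverable({2, 3, 4, 5}, next_expected=1)
```

For a multiplexed protocol like HTTP/2, every stream on the connection stalls behind that one lost segment.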
[QUIC][23] is an attempt to address that by effectively rebuilding TCP semantics (along with some of HTTP/2's stream model) on top of UDP. Like HTTP/2, it started as a Google effort and is now in the IETF, with an initial use case of HTTP-over-UDP and a goal of becoming a standard in late 2018. However, since Google has already deployed QUIC in the Chrome browser and on its sites, it already accounts for more than 7% of Internet traffic.
Read [Your questions answered about QUIC][24]
Besides the shift from TCP to UDP for such a sizable amount of traffic (and all of the adjustments in networks that might imply), both Google QUIC (gQUIC) and IETF QUIC (iQUIC) require encryption to operate at all; there is no unencrypted QUIC.
iQUIC uses TLS 1.3 to establish keys for a session and then uses them to encrypt each packet. However, since it's UDP-based, a lot of the session information and metadata that's exposed in TCP gets encrypted in QUIC.
In fact, iQUIC's current [short header][25] — used for all packets except the handshake — only exposes a packet number, an optional connection identifier, and a byte of state for things like the encryption key rotation schedule and the packet type (which might end up encrypted as well).
Everything else is encrypted — including ACKs, to raise the bar for [traffic analysis][26] attacks.
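A parser for a header shaped like the one just described might look as follows — the field layout here is an illustrative simplification (the draft was still in flux, and the flag bit is hypothetical):

```python
import struct

# Sketch of parsing a QUIC-style short header: one state byte, an
# optional connection ID, and a packet number; everything after that
# is encrypted payload, opaque to on-path observers. The exact layout
# and flag semantics are illustrative, not the normative draft format.

CONN_ID_PRESENT = 0x40        # hypothetical flag bit for this sketch

def parse_short_header(data):
    flags = data[0]
    offset = 1
    conn_id = None
    if flags & CONN_ID_PRESENT:
        conn_id = data[offset:offset + 8]
        offset += 8
    (pkt_num,) = struct.unpack_from(">I", data, offset)
    offset += 4
    return {"flags": flags, "conn_id": conn_id,
            "packet_number": pkt_num, "payload": data[offset:]}

pkt = bytes([CONN_ID_PRESENT]) + b"\x11" * 8 + struct.pack(">I", 7) + b"ciphertext"
hdr = parse_short_header(pkt)
```

Note how little an observer gets: a few bytes of framing, then opaque ciphertext.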
However, this means that passively estimating RTT and packet loss by observing connections is no longer possible; there isn't enough information. This lack of observability has caused a significant amount of concern by some in the operator community, who say that passive measurements like this are critical for debugging and understanding their networks.
One proposal to meet this need is the [Spin Bit][27] — a bit in the header that flips once a round trip, so that observers can estimate RTT. Since it's decoupled from the application's state, it doesn't appear to leak any information about the endpoints, beyond a rough estimate of location on the network.
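What a passive observer would do with the spin bit can be sketched as follows — a simplified model that ignores reordering and loss, which real measurement has to handle:

```python
# Toy passive RTT estimation from a spin bit: an on-path observer
# timestamps each transition of the bit; the gap between consecutive
# transitions approximates one round-trip time.

def rtt_samples(observations):
    """observations: list of (timestamp_ms, spin_bit) for packets seen
    in one direction. Returns estimated RTT samples in milliseconds."""
    samples, last_edge, last_bit = [], None, None
    for ts, bit in observations:
        if last_bit is not None and bit != last_bit:   # bit flipped
            if last_edge is not None:
                samples.append(ts - last_edge)         # edge-to-edge gap
            last_edge = ts
        last_bit = bit
    return samples

# Packets observed at 0,10,50,60,100,150 ms; the bit flips at 50, 100, 150.
rtts = rtt_samples([(0, 0), (10, 0), (50, 1), (60, 1), (100, 0), (150, 1)])
```

The observer learns the round-trip time and nothing else about the connection's contents.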
#### DOH
The newest change on the horizon is DOH — [DNS over HTTP][28]. A [significant amount of research has shown that networks commonly use DNS as a means of imposing policy][29] (whether on behalf of the network operator or a greater authority).
Circumventing this kind of control with encryption has been [discussed for a while][30], but it has a disadvantage (at least from some standpoints) — it is possible to discriminate it from other traffic; for example, by using its port number to block access.
DOH addresses that by piggybacking DNS traffic onto an existing HTTP connection, thereby removing any discriminators. A network that wishes to block access to that DNS resolver can only do so by blocking access to the website as well.
For example, if Google were to deploy its [public DNS service over DOH][31] on `www.google.com` and a user configures their browser to use it, a network that wants (or is required) to stop it would have to effectively block all of Google (thanks to how they host their services).
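The piggybacking is mechanically simple; a sketch of building a DOH GET request (the resolver hostname is made up, and the `?dns=` parameter name follows the then-current draft, so treat the details as illustrative):

```python
import base64
import struct

# Sketch of a DNS-over-HTTPS (DOH) GET request: an ordinary binary DNS
# query is base64url-encoded and carried as a parameter on an ordinary
# HTTPS URL, so it looks like any other web traffic on the wire.

def dns_query(name, qtype=1):           # qtype 1 = A record
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # ID 0, RD set
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

def doh_url(resolver, name):
    q = base64.urlsafe_b64encode(dns_query(name)).rstrip(b"=")
    return "https://%s/dns-query?dns=%s" % (resolver, q.decode())

url = doh_url("dns.example.net", "example.com")   # hypothetical resolver
```

From the network's perspective this is just an HTTPS request to `dns.example.net`; there is no port 53 traffic to block.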
DOH has just started its work, but there's already a fair amount of interest in it, and some rumblings of deployment. How the networks (and governments) that use DNS to impose policy will react remains to be seen.
Read [IETF 100, Singapore: DNS over HTTP (DOH!)][1]
#### Ossification and grease
To return to motivations, one theme throughout this work is how protocol designers are increasingly encountering problems where networks make assumptions about traffic.
For example, TLS 1.3 has had a number of last-minute issues with middleboxes that assume it's an older version of the protocol. gQUIC blacklists several networks that throttle UDP traffic, because they think that it's harmful or low-priority traffic.
When a protocol can't evolve because deployments freeze its extensibility points, we say it has _ossified_. TCP itself is a severe example of ossification; so many middleboxes do so many things to TCP — whether it's blocking packets with TCP options that aren't recognized, or optimizing congestion control.
It's necessary to prevent ossification, to ensure that protocols can evolve to meet the needs of the Internet in the future; otherwise, it would be a tragedy of the commons where the actions of some individual networks — although well-intended — would affect the health of the Internet overall.
There are many ways to prevent ossification; if the data in question is encrypted, it cannot be accessed by any party but those that hold the keys, preventing interference. If an extension point is unencrypted but commonly used in a way that would break applications visibly (for example, HTTP headers), it's less likely to be interfered with.
Where protocol designers can't use encryption and an extension point isn't used often, artificially exercising the extension point can help; we call this _greasing_ it.
For example, QUIC encourages endpoints to use a range of decoy values in its [version negotiation][32], to avoid implementations assuming that it will never change (as was often encountered in TLS implementations, leading to significant problems).
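Generating such a decoy can be sketched in a couple of lines — QUIC reserves versions matching the `0x?a?a?a?a` pattern (the low nibble of every byte is `0xa`) for exactly this kind of exercise, per the transport draft:

```python
import secrets

# Sketch of "greasing" QUIC version negotiation: alongside its real
# version, an endpoint offers random reserved decoy versions, so peers
# (and middleboxes) can never hard-code the version list. Reserved
# versions follow the 0x?a?a?a?a pattern from the QUIC transport draft.

def grease_version():
    v = secrets.randbits(32)
    return (v & 0xF0F0F0F0) | 0x0A0A0A0A   # force every low nibble to 0xa

offered_versions = [0x00000001, grease_version()]   # one real, one decoy
```

An implementation that chokes on the decoy is caught early, before it can ossify the negotiation mechanism.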
#### The network and the user
Beyond the desire to avoid ossification, these changes also reflect the evolving relationship between networks and their users. While for a long time people assumed that networks were always benevolent — or at least disinterested — parties, this is no longer the case, thanks not only to [pervasive monitoring][33] but also attacks like [Firesheep][34].
As a result, there is growing tension between the needs of Internet users overall and those of the networks who want to have access to some amount of the data flowing over them. Particularly affected will be networks that want to impose policy upon those users; for example, enterprise networks.
In some cases, they might be able to meet their goals by installing software (or a CA certificate, or a browser extension) on their users' machines. However, this isn't as easy in cases where the network doesn't own or have access to the computer; for example, BYOD has become common, and IoT devices seldom have the appropriate control interfaces.
As a result, a lot of discussion surrounding protocol development in the IETF is touching on the sometimes competing needs of enterprises and other leaf networks and the good of the Internet overall.
#### Get involved
For the Internet to work well in the long run, it needs to provide value to end users, avoid ossification, and allow networks to operate. The changes taking place now need to meet all three goals, but we need more input from network operators.
If these changes affect your network — or won't — please leave comments below, or better yet, get involved in the [IETF][35] by attending a meeting, joining a mailing list, or providing feedback on a draft.
Thanks to Martin Thomson and Brian Trammell for their review.
_Mark Nottingham is a member of the Internet Architecture Board and co-chairs the IETF's HTTP and QUIC Working Groups._
--------------------------------------------------------------------------------
via: https://blog.apnic.net/2017/12/12/internet-protocols-changing/
Author: [Mark Nottingham][a]
This article was translated by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/).
[a]:https://blog.apnic.net/author/mark-nottingham/
[1]:https://blog.apnic.net/2017/11/17/ietf-100-singapore-dns-http-doh/
[2]:https://blog.apnic.net/author/mark-nottingham/
[3]:https://blog.apnic.net/category/tech-matters/
[4]:https://blog.apnic.net/tag/dns/
[5]:https://blog.apnic.net/tag/doh/
[6]:https://blog.apnic.net/tag/guest-post/
[7]:https://blog.apnic.net/tag/http/
[8]:https://blog.apnic.net/tag/ietf/
[9]:https://blog.apnic.net/tag/quic/
[10]:https://blog.apnic.net/tag/tls/
[11]:https://blog.apnic.net/tag/protocol/
[12]:https://blog.apnic.net/2017/12/12/internet-protocols-changing/#comments
[13]:https://blog.apnic.net/
[14]:https://www.smashingmagazine.com/2015/09/why-performance-matters-the-perception-of-time/
[15]:https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46197.pdf
[16]:https://http2.github.io/
[17]:http://httpwg.org/specs/rfc7540.html#TLSUsage
[18]:http://httpwg.org/specs/rfc7540.html#BadCipherSuites
[19]:http://httpwg.org/specs/rfc7540.html#reuse
[20]:https://tools.ietf.org/html/draft-bishop-httpbis-http2-additional-certs
[21]:https://datatracker.ietf.org/doc/draft-ietf-tls-tls13/
[22]:https://en.wikipedia.org/wiki/Forward_secrecy
[23]:https://quicwg.github.io/
[24]:https://blog.apnic.net/2016/08/30/questions-answered-quic/
[25]:https://quicwg.github.io/base-drafts/draft-ietf-quic-transport.html#short-header
[26]:https://www.mjkranch.com/docs/CODASPY17_Kranch_Reed_IdentifyingHTTPSNetflix.pdf
[27]:https://tools.ietf.org/html/draft-trammell-quic-spin
[28]:https://datatracker.ietf.org/wg/doh/about/
[29]:https://datatracker.ietf.org/meeting/99/materials/slides-99-maprg-fingerprint-based-detection-of-dns-hijacks-using-ripe-atlas/
[30]:https://datatracker.ietf.org/wg/dprive/about/
[31]:https://developers.google.com/speed/public-dns/
[32]:https://quicwg.github.io/base-drafts/draft-ietf-quic-transport.html#rfc.section.3.7
[33]:https://tools.ietf.org/html/rfc7258
[34]:http://codebutler.com/firesheep
[35]:https://www.ietf.org/
