Translating by qhwdw

Splicing the Cloud Native Stack, One Floor at a Time
======

At Packet, our value (automated infrastructure) is super fundamental. As such, we spend an enormous amount of time looking up at the players and trends in all the ecosystems above us - as well as the very few below!

It’s easy to get confused, or simply lose track, when swimming deep in the oceans of any ecosystem. I know this for a fact because when I started at Packet last year, my English degree from Bryn Mawr didn’t quite come with a Kubernetes certification. :)

Due to its super-fast evolution and massive impact, the cloud native ecosystem defies precedent. It seems that every time you blink, entirely new technologies (not to mention all of the associated logos) have become relevant...or at least interesting. Like many others, I’ve relied on the CNCF’s ubiquitous “[Cloud Native Landscape][1]” as a touchstone as I got to know the space. However, if there is one element that defines ecosystems, it is the people that contribute to and steer them.

That’s why, when we were walking back to the office one cold December afternoon, we hit upon a creative way to explain “cloud native” to an investor, whose eyes were obviously glazing over as we talked about the nuances that distinguished Cilium from Aporeto, and why everything from CoreDNS and Spiffe to Digital Rebar and Fission was interesting in its own right.

Looking up at our narrow 13-story office building in the shadow of the new World Trade Center, we hit on an idea that took us down an artistic rabbit hole: why not draw it?

![][2]

And thus began our journey to splice the Cloud Native Stack, one floor at a time. Let’s walk through it together and we can give you the “guaranteed to be outdated tomorrow” lowdown.

[[View a High Resolution JPG][3]] or email us to request a copy.

### Starting at the Very Bottom

As we started to put pen to paper, we knew we wanted to shine a light on parts of the stack that we interact with on a daily basis, but that are largely invisible to users further up: hardware. And like any good secret lab investing in the next great (usually proprietary) thing, we thought the basement was the perfect spot.

From well-established giants of the space like Intel, AMD and Huawei (rumor has it they employ nearly 80,000 engineers!), to more niche players like Mellanox, the hardware ecosystem is on fire. In fact, we may be entering a Golden Age of hardware, as billions of dollars are poured into upstarts hacking on new offloads, GPUs, and custom co-processors.

The famous software trailblazer Alan Kay said over 25 years ago: “People who are really serious about software should make their own hardware.” Good call, Alan!

### The Cloud is About Capital

As our CEO Zac Smith has told me many times: it’s all about the money. And not just about making it, but spending it! In the cloud, it takes billions of dollars of capital to make computers show up in data centers so that developers can consume them with software. In other words:

![][4]

We thought the best place for “The Bank” (i.e., the lenders and investors that make this cloud fly) was the ground floor. So we transformed our lobby into the Banker’s Cafe, complete with a wheel of fortune for all of us out there playing the startup game.

![][5]

### The Ping and Power

If money is the grease, then the engines that consume much of the fuel are the datacenter providers and the networks that connect them. We call them “power” and “ping”.

From top-of-mind names like Equinix and edge upstarts like Vapor.io, to the “pipes” that Verizon, Crown Castle and others literally put in the ground (or on the ocean floor), this is a part of the stack that we all rely upon but rarely see in person.

Since we spend a lot of time looking at datacenters and connectivity, one thing to note is that this space is changing quite rapidly, especially as 5G arrives in earnest and certain workloads start to depend on less centralized infrastructure.

The edge is coming, y’all! :-)

![][6]

### Hey, It's Infrastructure!

Sitting on top of “ping” and “power” is the floor we lovingly call “processors”. This is where our magic happens - we turn the innovation and physical investments from down below into something at the end of an API.

Since this is a NYC building, we kept the cloud providers here fairly NYC-centric. That’s why you see Sammy the Shark (of Digital Ocean lineage) and a nod to Google over in the “meet me” room.

As you’ll see, this scene is pretty physical. Racking and stacking, as it were. While we love our facilities manager in EWR1 (Michael Pedrazzini), we are working hard to remove as much of this manual labor as possible. PhDs in cabling are hard to come by, after all.

![][7]

### Provisioning

One floor up, layered on top of infrastructure, is provisioning. This is one of our favorite spots, which years ago we might have called “config management.” But now it’s all about immutable infrastructure and automation from the start: Terraform, Ansible, Quay.io and the like. You can tell that software is working its way down the stack, eh?

Kelsey Hightower noted recently that “it’s an exciting time to be in boring infrastructure.” I don’t think he meant the physical part (although we think it’s pretty dope), but as software continues to hack on all layers of the stack, you can guarantee a wild ride.
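
The “immutable infrastructure” mindset can be sketched in a few lines. This is a toy model only - it is not any real tool’s API, and every name in it is made up:

```python
from dataclasses import dataclass

# Toy model of immutable provisioning: servers are frozen values, so
# "changing" one means building a replacement, never mutating in place.
@dataclass(frozen=True)
class Server:
    image: str
    size: str

def reconcile(current: Server, desired: Server) -> Server:
    # With immutable infrastructure there is no patching: if anything
    # differs from the desired spec, the old server is torn down and a
    # fresh one is created from scratch ("replace, don't repair").
    return current if current == desired else desired

old = Server(image="ubuntu-16.04", size="t1.small")
new = Server(image="ubuntu-18.04", size="t1.small")
print(reconcile(old, new).image)  # ubuntu-18.04
```

Real tools like Terraform do far more, of course, but “describe the desired state and replace whatever drifts” is the heart of it.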

![][8]

### Operating Systems

With provisioning in place, we move to the operating system layer. This is where we get to start poking fun at some of our favorite folks as well: note Brian Redbeard’s above-average yoga pose. :)

Packet offers eleven major operating systems for our clients to choose from, including some that you see in this illustration: Ubuntu, CoreOS, FreeBSD, Suse, and various Red Hat offerings. More and more, we see folks putting their opinion on this layer: from custom kernels and golden images of their favorite distros for immutable deploys, to projects like NixOS and LinuxKit.

![][9]

### Run Time

We had to have fun with this, so we placed the runtime in the gym, with a championship match between CoreOS-sponsored rkt and Docker’s containerd. Either way the CNCF wins!

We felt the fast-evolving storage ecosystem deserved some lockers. What’s fun about the storage aspect is the number of new players trying to conquer the challenging issue of persistence, as well as performance and flexibility. As they say: storage is just plain hard.

![][10]

### Orchestration

The orchestration layer has been all about Kubernetes this past year, so we took one of its most famous evangelists (Kelsey Hightower) and featured him in this rather odd meetup scene. We have some major Nomad fans on our team, and there is just no way to consider the cloud native space without the impact of Docker and its toolset.

While workload orchestration applications are fairly high up our stack, we see all kinds of evidence that these powerful tools are starting to look way down the stack to help users take advantage of GPUs and other specialty hardware. Stay tuned - we’re in the early days of the container revolution!
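
At its core, an orchestrator is matching workloads to machines that can satisfy their resource needs. Here is a toy sketch of that idea (real schedulers like Kubernetes and Nomad are vastly more sophisticated, and every name below is hypothetical):

```python
# Toy bin-packing scheduler: place a workload on the first node that
# has enough CPUs and GPUs, mirroring how orchestrators increasingly
# account for specialty hardware down the stack.
NODES = [
    {"name": "node-a", "cpus": 4, "gpus": 0},
    {"name": "node-b", "cpus": 8, "gpus": 2},
]

def schedule(workload, nodes):
    # Return the first node that fits the request, or None if the
    # cluster cannot satisfy it.
    for node in nodes:
        if node["cpus"] >= workload["cpus"] and node["gpus"] >= workload["gpus"]:
            return node["name"]
    return None

print(schedule({"cpus": 2, "gpus": 1}, NODES))  # node-b: only it has GPUs
print(schedule({"cpus": 2, "gpus": 0}, NODES))  # node-a fits fine
```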

![][11]

### Platforms

This is one of our favorite layers of the stack, because there is so much craft in how each platform helps users accomplish what they really want to do (which, by the way, isn’t run containers but run applications!). From Rancher and Kontena, to Tectonic and Redshift, to totally different approaches like Cycle.io and Flynn.io - we’re always thrilled to see how each of these projects serves users differently.

The main takeaway: these platforms are helping to translate all of the various, fast-moving parts of the cloud native ecosystem to users. It’s great watching what they each come up with!

![][12]

### Security

When it comes to security, it’s been a busy year! We tried to represent some of the more famous attacks and illustrate how various tools are trying to help protect us as workloads become highly distributed and portable (while at the same time, attackers become ever more resourceful).

We see a strong movement towards trustless environments (see Aporeto) and low-level security (Cilium), as well as tried-and-true approaches at the network level like Tigera. No matter your approach, it’s good to remember: This is definitely not fine. :0

![][13]

### Apps

How to represent the huge, vast, limitless ecosystem of applications? In this case, it was easy: stay close to NYC and pick our favorites. ;) From the Postgres “elephant in the room” and the Timescale clock, to the sneaky ScyllaDB trash and the chillin’ Travis dude - we had fun putting this slice together.

One thing that surprised us: how few people noticed the guy taking a photocopy of his rear end. I guess it’s just not that common to have a photocopy machine anymore?!

![][14]

### Observability

As our workloads start moving all over the place, and the scale gets gigantic, there is nothing quite as comforting as a really good Grafana dashboard, or that handy Datadog agent. As complexity increases, the “SRE” generation is starting to rely ever more on alerting and other intelligence events to help us make sense of what’s going on, and work towards increasingly self-healing infrastructure and applications.

It will be interesting to see what kind of logos make their way into this floor over the coming months and years...maybe some AI, blockchain, ML powered dashboards? :-)

![][15]

### Traffic Management

People tend to think that the internet “just works” but in reality, we’re kind of surprised it works at all. I mean, a loose connection of disparate networks at massive scale - you have to be joking!?

One reason it all sticks together is traffic management, DNS and the like. More and more, these players are helping to make the internet both faster and safer, as well as more resilient. We’re especially excited to see upstarts like Fly.io and NS1 competing against well established players, and watching the entire ecosystem improve as a result. Keep rockin’ it y’all!
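
One of the simplest tricks in this space is weighted DNS answers, which let operators steer a slice of users toward a different endpoint (for canaries, failover, or load spreading). A toy sketch with made-up records - real platforms add health checks, geo-routing, and much more:

```python
import random

# Toy weighted DNS resolution: each lookup returns one answer, chosen
# in proportion to its weight, so ~90% of lookups below hit the
# primary endpoint and ~10% the canary.
RECORDS = [
    {"ip": "192.0.2.10", "weight": 9},  # primary
    {"ip": "192.0.2.20", "weight": 1},  # canary
]

def resolve(records, rng=random.random):
    total = sum(r["weight"] for r in records)
    point = rng() * total  # pick a point on the [0, total) weight line
    for r in records:
        point -= r["weight"]
        if point < 0:
            return r["ip"]
    return records[-1]["ip"]  # guard against floating-point edge cases

print(resolve(RECORDS, rng=lambda: 0.0))   # 192.0.2.10 (primary)
print(resolve(RECORDS, rng=lambda: 0.95))  # 192.0.2.20 (canary)
```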

![][16]

### Users

What good is a technology stack if you don’t have fantastic users? Granted, they sit on top of a massive stack of innovation, but in the cloud native world they do more than just consume: they create and contribute. From massive contributions like Kubernetes to more incremental (but equally important) aspects, what we’re all a part of is really quite special.

Many of the users lounging on our rooftop deck, like Ticketmaster and the New York Times, are not mere upstarts: these are organizations that have embraced a new way of deploying and managing their applications, and their own users are reaping the rewards.

![][17]

### Last but not Least, the Adult Supervision!

In previous ecosystems, foundations have played a more passive “behind the scenes” role. Not the CNCF! Their goal of building a robust cloud native ecosystem has been supercharged by the incredible popularity of the movement - and they’ve not only caught up but led the way.

From rock-solid governance and a thoughtful group of projects, to outreach like the CNCF Landscape, CNCF Cross Cloud CI, Kubernetes Certification, and Speakers Bureau - the CNCF is way more than “just” the ever-popular KubeCon + CloudNativeCon.

--------------------------------------------------------------------------------

via: https://www.packet.net/blog/splicing-the-cloud-native-stack/

Author: [Zoe Allen][a]
Topic selection: [lujun9972](https://github.com/lujun9972)
Translator: [qhwdw](https://github.com/qhwdw)
Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally translated and compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/).

[a]:https://www.packet.net/about/zoe-allen/
[1]:https://landscape.cncf.io/landscape=cloud
[2]:https://assets.packet.net/media/images/PIFg-30.vesey.street.ny.jpg
[3]:https://www.dropbox.com/s/ujxk3mw6qyhmway/Packet_Cloud_Native_Building_Stack.jpg?dl=0
[4]:https://assets.packet.net/media/images/3vVx-there.is.no.cloud.jpg
[5]:https://assets.packet.net/media/images/X0b9-the.bank.jpg
[6]:https://assets.packet.net/media/images/2Etm-ping.and.power.jpg
[7]:https://assets.packet.net/media/images/C800-infrastructure.jpg
[8]:https://assets.packet.net/media/images/0V4O-provisioning.jpg
[9]:https://assets.packet.net/media/images/eMYp-operating.system.jpg
[10]:https://assets.packet.net/media/images/9BII-run.time.jpg
[11]:https://assets.packet.net/media/images/njak-orchestration.jpg
[12]:https://assets.packet.net/media/images/1QUS-platforms.jpg
[13]:https://assets.packet.net/media/images/TeS9-security.jpg
[14]:https://assets.packet.net/media/images/SFgF-apps.jpg
[15]:https://assets.packet.net/media/images/SXoj-observability.jpg
[16]:https://assets.packet.net/media/images/tKhf-traffic.management.jpg
[17]:https://assets.packet.net/media/images/7cpe-users.jpg