[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Fargate Illusion)
[#]: via: (https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html)
[#]: author: (Lee Briggs https://leebriggs.co.uk/)

The Fargate Illusion
======

I’ve been building a Kubernetes based platform at $work now for almost a year, and I’ve become a bit of a Kubernetes apologist. It’s true, I think the technology is fantastic. I am, however, under no illusions about how difficult it is to operate and maintain. I read posts like [this][1] one earlier in the year and found myself nodding along to certain aspects of the opinion. If I was in a smaller company, with 10 to 15 engineers, I’d be horrified if someone suggested managing and maintaining a fleet of Kubernetes clusters. The operational overhead is just too high.

Despite my love for all things Kubernetes at this point, I do remain curious about the notion that “serverless” computing will kill the ops engineer. The main source of intrigue here is the desire to stay gainfully employed in the future - if we aren’t going to need ops engineers in our glorious future, I’d like to see what all the fuss is about. I’ve done some experimentation in Lambda and Google Cloud Functions and been impressed by what I saw, but I still firmly believe that serverless solutions only solve a percentage of the problem.

I’ve had my eye on [AWS Fargate][2] for some time now, and it’s something that developers at $work have gleefully pointed at as “serverless computing” - mainly because with Fargate, you can run your Docker container without having to manage the underlying nodes. I wanted to see what that actually meant, so I set about trying to get an app running on Fargate from scratch.
I defined the success criteria here as something close-ish to a “production ready” application, so I wanted to have the following:

  * A running container on Fargate
  * With configuration pushed down in the form of environment variables
  * “Secrets” should not be in plaintext
  * Behind a load balancer
  * TLS enabled with a valid SSL certificate

I approached this whole task with an infrastructure as code mentality, and instead of following the default AWS console wizards, I used Terraform to define the infrastructure. It’s very possible this overcomplicated things, but I wanted to make sure any deployment was repeatable and discoverable to anyone else wanting to follow along.

All of the above criteria are generally achievable with a Kubernetes based platform using a few external add-ons and plugins, so I’m admittedly approaching this whole task with a comparative mentality - because I’m comparing it with my common workflow. My main goal was to see how easy this was with Fargate, especially when compared with Kubernetes. I was pretty surprised by the outcome.

### AWS has overhead

I had a clean AWS account and was determined to go from zero to a deployed webapp. Like any other infrastructure in AWS, I had to get the baseline infrastructure working - so I first had to define a VPC. I wanted to follow best practices, so I carved the VPC up into subnets across availability zones, with a public and a private subnet in each. It occurred to me at this point that as long as this need was always there, I’d probably be able to find a job of some description. The notion that AWS is operationally “free” is something that has irked me for quite some time now. Many people in the developer community take for granted how much work and effort there is in setting up and defining a well designed AWS account and infrastructure.
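To give a flavour of what that baseline infrastructure means in practice, here’s a hedged sketch of the kind of Terraform involved - the resource names and CIDR ranges here are illustrative, not necessarily the ones I actually used:

```
data "aws_availability_zones" "available" {}

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
}

# One public and one private subnet per availability zone
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = "${aws_vpc.main.id}"
  cidr_block              = "${cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)}"
  availability_zone       = "${data.aws_availability_zones.available.names[count.index]}"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = "${aws_vpc.main.id}"
  cidr_block        = "${cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 10)}"
  availability_zone = "${data.aws_availability_zones.available.names[count.index]}"
}
```

And that’s before the internet gateway, NAT gateways, and route tables that make the private subnets actually useful.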
This is _before_ we even start talking about a multi-account architecture - I’m still in a single account here and I’m already having to define infrastructure and traditional network items. It’s also worth remembering here that I’ve done this quite a few times now, so I _knew_ exactly what to do. I could have used the default VPC in my account, and the pre-provided subnets, which I expect many people who are getting started might do.

This took me about half an hour to get running, but I couldn’t help but think here that even if I want to run Lambda functions, I still need some kind of connectivity and networking. Defining NAT gateways and routing in a VPC doesn’t feel very serverless at all, but it has to be done to get things moving.

### Run my damn container

Once I had the base infrastructure up and running, I now wanted to get my Docker container running. I started examining the Fargate docs and browsed through the [Getting Started][3] docs, and something immediately popped out at me:

> ![][4]

Hold on a minute, there’s at least THREE steps here just to get my container up and running? This isn’t quite how this whole thing was sold to me, but let’s get started.

#### Task Definitions

A task definition defines the actual container you want to run. The problem I ran into immediately here is that this thing is insanely complicated. Lots of the options here are very straightforward, like specifying the Docker image and memory limits, but I also had to define a networking model and a variety of other options that I wasn’t really familiar with. Really? If I had come into this process with absolutely no AWS knowledge, I’d be incredibly overwhelmed at this stage. A full list of the [parameters][5] can be found on the AWS page, and the list is long. I knew my container needed to have some environment variables, and it needed to expose a port. So I defined that first, with the help of a fantastic [terraform module][6] which really made this easier.
If I didn’t have this, I’d be hand writing JSON to define my container definition.

First, I defined some environment variables:

```
container_environment_variables = [
  {
    name  = "USER"
    value = "${var.user}"
  },
  {
    name  = "PASSWORD"
    value = "${var.password}"
  }
]
```

Then I compiled the task definition using the module I mentioned above:

```
module "container_definition_app" {
  source  = "cloudposse/ecs-container-definition/aws"
  version = "v0.7.0"

  container_name  = "${var.name}"
  container_image = "${var.image}"

  container_cpu                = "${var.ecs_task_cpu}"
  container_memory             = "${var.ecs_task_memory}"
  container_memory_reservation = "${var.container_memory_reservation}"

  port_mappings = [
    {
      containerPort = "${var.app_port}"
      hostPort      = "${var.app_port}"
      protocol      = "tcp"
    },
  ]

  environment = "${local.container_environment_variables}"
}
```

I was pretty confused at this point - I need to define a lot of configuration here to get this running and I’ve barely even started, but it made a little sense - anything running a Docker container needs to have _some_ idea of the configuration values of that container. I’ve [previously written][7] about the problems with Kubernetes and configuration management, and the same problem seemed to be rearing its ugly head again here.

Next, I defined the task definition from the module above (which thankfully abstracted the required JSON away from me - if I had to hand write JSON at this point I’d have probably given up).

I realised immediately I was missing something as I was defining the module parameters. I need an IAM role as well!
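For reference, here’s roughly where all of this ends up - the task definition resource that consumes both the module output and that IAM role. This is a sketch from memory, so treat the attribute wiring (in particular the module’s `json` output) as an assumption rather than gospel:

```
resource "aws_ecs_task_definition" "app" {
  family                   = "${var.name}"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "${var.ecs_task_cpu}"
  memory                   = "${var.ecs_task_memory}"
  execution_role_arn       = "${aws_iam_role.ecs_task_execution.arn}"
  container_definitions    = "${module.container_definition_app.json}"
}
```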
Okay, let me define that:

```
resource "aws_iam_role" "ecs_task_execution" {
  name = "${var.name}-ecs_task_execution"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
EOF
}
```

![][16]

--------------------------------------------------------------------------------

via: https://leebriggs.co.uk/blog/2019/04/13/the-fargate-illusion.html

Author: [Lee Briggs][a] Selected by: [lujun9972][b] Translator: [译者ID](https://github.com/译者ID) Proofreader: [校对者ID](https://github.com/校对者ID)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and is proudly presented by [Linux中国](https://linux.cn/)

[a]: https://leebriggs.co.uk/
[b]: https://github.com/lujun9972
[1]: https://matthias-endler.de/2019/maybe-you-dont-need-kubernetes/
[2]: https://aws.amazon.com/fargate/
[3]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted.html
[4]: https://imgur.com/FpU0lds
[5]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
[6]: https://github.com/cloudposse/terraform-aws-ecs-container-definition
[7]: https://leebriggs.co.uk/blog/2018/05/08/kubernetes-config-mgmt.html
[8]: https://github.com/kubernetes-incubator/external-dns
[9]: https://github.com/jetstack/cert-manager
[10]: https://github.com/terraform-aws-modules/terraform-aws-ecs
[11]: https://kubernetes.io/docs/concepts/configuration/secret/
[12]: https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html
[13]: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html
[14]: https://twitter.com/briggsl/status/1116870900719030272
[15]: https://cloud.google.com/run/
[16]: https://imgur.com/QfFg225