Practical DevOps

Getting a Persistent Address for an ECS Fargate Container

Should be easy, right? Well, it’s not. Here’s how I did it.

Niclas Gustafsson · Published in ITNEXT · Aug 6, 2020



So we had an EC2 instance that had outlived itself: the keys were lost (read about how we recovered the contents here) and it needed to be replaced.

Instead of setting up a whole new EC2 server for it, I thought, hey, why not just make a small Docker container and deploy it on Fargate? It should be simple enough, right?

Well, it turns out that there were actually two complications, one of which I will talk about here; the other is a topic for another post (hint: it will involve Bytesafe using a locked-down/frozen registry).

The challenge at hand, at first glance, seemed trivial.


I just needed to retain the same public address for the ECS Fargate container running our Statsd application. As one might guess, this address was used throughout our environment, in the configuration of plenty of services and applications, and having to keep changing that configuration would be a nightmare.

And using an internal IP (which would totally make sense for a service like this) was unfortunately not possible, as the service needed to be reachable from multiple VPCs without the possibility of VPC peering, due to overlapping IP ranges.

Now, before you all scream “Use AWS ECS Service Discovery”, I would just like to point out that that service does not currently work for public IPs, only internal ones. And if you made it this far, please go ahead and add your vote for adding this ECS feature on the link below 😃

ECS Service Discovery does not currently publish public IPs, only internal ones, unfortunately…

But until the good folks at AWS add this feature, keep reading.

So what we will do is have the container update Route53 with its public IP address every time it starts. This way we can use the DNS name for the service in our applications and services.


I’ve seen some different variants of this solution while looking around, but the closest I got was to deploy a Lambda function triggered by state changes in the ECS life cycle. Like this one or this one. For me such solutions felt a bit... bloated for my use case. I wanted to have my solution, well, contained… in the container. And not start using (and paying for) more services. (The cost of using an ALB/NLB for a small ECS Fargate task can easily surpass the cost of the task itself.)

And since I’m kind of old-school, I’m going to use Bash and CLI tools to get this done. 😁

I’ll start out with my container, which is based on the official Statsd image. From there we will modify the Dockerfile a bit to install the AWS CLI and the jq tool, as well as add custom scripts that prepare and execute an UPSERT to Route53.

We will also make use of ECS environment variables to know which DNS record to modify. This way we can run the same container with different public IPs: one for our test environment, one for production, and so on.

Dockerfile

We will create a wrapper script and use that instead of our existing ENTRYPOINT configuration.

The unchanged Dockerfile to start up statsd looked something along the lines of:
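```dockerfile
# Reconstructed sketch; the base image tag, config file name and
# paths are illustrative and may differ in your setup
FROM statsd/statsd:latest

# Our environment-specific statsd configuration
COPY config.js /usr/src/app/config.js

ENTRYPOINT [ "node", "stats.js", "config.js" ]
```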

We are going to swap out the ENTRYPOINT above for a wrapper script that performs our Route53 magic and then starts the node application. And while we are at it, let’s add our dependencies: awscli and jq.
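```dockerfile
# Sketch of the modified Dockerfile (script names and paths are illustrative)
FROM statsd/statsd:latest

COPY config.js /usr/src/app/config.js

# Add our dependencies: jq for JSON parsing, plus curl/unzip to fetch
# the AWS CLI v2 installer (see the version note below)
RUN apt-get update \
    && apt-get install -y --no-install-recommends jq curl unzip ca-certificates \
    && curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip \
    && unzip -q /tmp/awscliv2.zip -d /tmp \
    && /tmp/aws/install \
    && rm -rf /tmp/awscliv2.zip /tmp/aws

# The wrapper and the Route53 update script
COPY entrypoint.sh update-route53.sh /usr/src/app/
RUN chmod +x /usr/src/app/entrypoint.sh /usr/src/app/update-route53.sh

# Swap the original ENTRYPOINT for our wrapper
ENTRYPOINT [ "/usr/src/app/entrypoint.sh" ]
```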

‼️ Be mindful of which version of the awscli you are using. I just spent way too much time troubleshooting why my container didn’t behave as expected. My first try used apt-get to install it, which gave me an older version that did not fail outright but rather left out some elements in the API responses.

The entrypoint.sh is trivial:
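```bash
#!/bin/bash
set -e

# Update the Route53 record with this task's public IP...
/usr/src/app/update-route53.sh

# ...then hand over to statsd (mirrors the original ENTRYPOINT;
# the config file name is illustrative)
exec node stats.js config.js
```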

update-route53.sh is where our magic happens:

  1. First we call the Task Metadata Endpoint to get our current TaskARN and Cluster ID.
  2. Using those, we call the ECS API to get our attached network interface (ENI).
  3. And lastly we call the EC2 API to get the details for the ENI, i.e. our public IP address.

Once we know our public IP, we construct an UPSERT record and send it to the Route53 API, using the settings provided in the ECS task definition.

Here is the full update-route53.sh:
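```bash
#!/bin/bash
set -euo pipefail

# Reconstructed sketch following the three steps above.
# ROUTE53_ZONE_ID and ROUTE53_RECORD_NAME are illustrative variable
# names, provided via the ECS task definition (see below).
: "${ROUTE53_ZONE_ID:?ROUTE53_ZONE_ID must be set}"
: "${ROUTE53_RECORD_NAME:?ROUTE53_RECORD_NAME must be set}"

# 1. Call the Task Metadata Endpoint to get our TaskARN and cluster
#    (v4 endpoint, with a fallback to v3)
METADATA_URI="${ECS_CONTAINER_METADATA_URI_V4:-${ECS_CONTAINER_METADATA_URI}}"
TASK_METADATA=$(curl -s "${METADATA_URI}/task")
TASK_ARN=$(echo "${TASK_METADATA}" | jq -r '.TaskARN')
CLUSTER=$(echo "${TASK_METADATA}" | jq -r '.Cluster')

# The region is the fourth field of the task ARN
REGION=$(echo "${TASK_ARN}" | cut -d: -f4)

# 2. Call the ECS API to find the ENI attached to this task
ENI_ID=$(aws ecs describe-tasks --region "${REGION}" \
    --cluster "${CLUSTER}" --tasks "${TASK_ARN}" \
    | jq -r '.tasks[0].attachments[0].details[] | select(.name=="networkInterfaceId") | .value')

# 3. Call the EC2 API to get the public IP of that ENI
PUBLIC_IP=$(aws ec2 describe-network-interfaces --region "${REGION}" \
    --network-interface-ids "${ENI_ID}" \
    | jq -r '.NetworkInterfaces[0].Association.PublicIp')

echo "Updating ${ROUTE53_RECORD_NAME} in zone ${ROUTE53_ZONE_ID} to ${PUBLIC_IP}"

# Construct the UPSERT change batch and send it to Route53
cat > /tmp/route53-upsert.json <<EOF
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "${ROUTE53_RECORD_NAME}",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [ { "Value": "${PUBLIC_IP}" } ]
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets \
    --hosted-zone-id "${ROUTE53_ZONE_ID}" \
    --change-batch file:///tmp/route53-upsert.json
```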

ECS Task Configuration

Now, to pull off the above, we need some additional access rights: permission to read the task information from the ECS and EC2 APIs, and permission to update Route53.

You might want to make the below a bit stricter depending on your setup and security requirements.

💡 Don’t forget to change the Resource below to the Route53 ZoneID before you add it to your ECS Task Role
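A policy along these lines should do (attach it to the ECS Task Role; YOUR_ZONE_ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:DescribeTasks",
        "ec2:DescribeNetworkInterfaces"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/YOUR_ZONE_ID"
    }
  ]
}
```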

The last step is to configure the ECS environment variables that tell the script which Route53 zone to modify and which DNS record to create or update.
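In the task definition, that could look something like this (the variable names match the illustrative script above; the values are placeholders):

```json
"environment": [
  { "name": "ROUTE53_ZONE_ID", "value": "Z0123456789EXAMPLE" },
  { "name": "ROUTE53_RECORD_NAME", "value": "statsd.example.com" }
]
```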

That’s it! Now you can restart the ECS task, and once the new container starts up it will automatically update the Route53 DNS entry with its public IP.
