For a long time we didn't limit the amount of memory a build could use on Codeship. That left open the possibility of a bad build eating up all of a test server's memory.
A few weeks ago that bad build happened, using up so much memory that it degraded performance and eventually killed the test server. Even though we measure the memory usage of the whole test server, we didn't have the data to figure out exactly which build caused the trouble.
Figure: Combined maximum and minimum memory usage of Amazon EC2 instances.
How to Avoid Builds Eating Up All of Your Memory
We couldn't risk this problem happening again and threatening other builds on that test server. We didn't have enough data about memory limits at that point, so we had to take an educated guess. My first assumption was the most conservative one: each test server has 60GB of memory and we run 22 builds on each instance, so each build can get a maximum of 2.5GB of memory (22 × 2.5GB = 55GB, leaving the rest for the host itself). As LXC manages memory with cgroups, it is simple to set a memory limit for each container.
We set lxc.cgroup.memory.limit_in_bytes in the container's config:
lxc.cgroup.memory.limit_in_bytes = 2560M
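To double-check that the limit is actually in place, the effective value can be read back from the container's cgroup on the LXC host. Here is a minimal Ruby sketch, with a placeholder container name:
# Sketch: read the effective memory limit from the cgroup on the LXC host.
limit_path = "/sys/fs/cgroup/memory/lxc/name_of_running_container/memory.limit_in_bytes"
limit_mb = File.read(limit_path).to_i / (1024 * 1024)
puts "memory limit: #{limit_mb} MB"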
This worked: the bad build was no longer a threat.
We got a few support requests from people asking about their builds being stuck. While 2.5GB is enough for 95% of the builds, the other 5% were hitting the limit. We needed to increase the memory limit to ensure that all builds run fine. After two incremental increases, The Codeship is sailing smoothly with a 10GB memory limit for each build.
Monitoring In The Future
We wanted to have more data to improve the memory limit in the future, so we started measuring the maximum memory usage of each build. We export it to Librato Metrics and save it as metadata of the build in our database. This allows us to inspect the memory usage easily. We plan to show the memory usage to our users in the future.
LXC tracks the memory usage of each container and exposes it, along with many other values, in the cgroup. Right before we shut down the build, we read the maximum memory usage from the cgroup on the LXC host:
/sys/fs/cgroup/memory/lxc/name_of_running_container/memory.max_usage_in_bytes
and send the data to Librato Metrics:
Metriks.timer('build.memory_usage').update(metrics[:max_memory])
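For context, here is a minimal sketch of how metrics[:max_memory] could be populated and persisted; the build record and its max_memory_usage column are hypothetical and only for illustration:
# Sketch: read the container's peak memory usage right before the build shuts down.
usage_path = "/sys/fs/cgroup/memory/lxc/name_of_running_container/memory.max_usage_in_bytes"
metrics = { max_memory: File.read(usage_path).to_i }
# Report to Librato Metrics via Metriks and keep the value as build metadata.
Metriks.timer('build.memory_usage').update(metrics[:max_memory])
build.update(max_memory_usage: metrics[:max_memory]) # hypothetical Build record and column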
Building a Metrics Infrastructure
It is important to back your actions with data; data is your best argument. If you don't have the data you need to solve a problem, it is important that you can easily add more metrics to your infrastructure. Sit down with your coworkers and think about good metrics for your product. Sometimes they are not easy to spot at first glance. We often talk about our metrics.
We discuss new metrics, but we also talk about removing redundant ones. It is very exciting to talk about data and what your coworkers conclude from it. We are able to add new metrics to our infrastructure in minutes. By building this kind of metrics infrastructure you can handle new challenges by quickly adding measurements that help you decide on the next steps. This infrastructure can be the difference between life and death for your service, so make sure you have it in place.
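As an illustration of how little code a new metric takes with Metriks, here is a small sketch; the metric names and the run_build call are made up:
require 'metriks'
# Hypothetical metrics, purely for illustration.
Metriks.counter('builds.started').increment
Metriks.timer('builds.duration').time do
  run_build # placeholder for the actual build step
end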
Do you monitor? Which tools and services do you use for it? Let us know in the comments!