This topic may seem trivial to some of the engineers who deliver Java applications as Docker containers, but… if you have never cared about it, you'd better start. Otherwise, a lot of the memory in your containers will sit unused.

As usual, the pet project for this investigation can be found here.

Application for testing

First, I will create a simple application exposing a controller for triggering a load inside the application to make the heap grow. For that, I use my favorite FakeLoad library.

@Slf4j
@RestController
public class LoadController {

    @Autowired
    FakeLoadExecutor fakeLoadExecutor;

    @PostMapping("/load/{s}/{m}")
    public void load(@PathVariable int s, @PathVariable int m) {
        FakeLoad fakeLoad = FakeLoads.create()
            .lasting(s, TimeUnit.SECONDS)
            .withMemory(m, MemoryUnit.MB);

        log.info("Loading for {} s with {} Mb mem load.", s, m);
        fakeLoadExecutor.execute(fakeLoad);
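        // Hint the JVM to reclaim FakeLoad's simulated allocations right away,
        // so the heap graph drops back to its baseline after the load ends.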
        System.gc();
    }
}

And FakeLoadExecutor is just a bean:

    @Bean
    FakeLoadExecutor fakeLoadExecutor() {
        return FakeLoadExecutors.newDefaultExecutor();
    }
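
Both snippets assume the FakeLoad library is on the classpath. For reference, the Gradle dependency should look roughly like this (coordinates quoted from memory, so please verify the exact artifact and current version on Maven Central):

implementation 'com.martensigwart:fakeload:0.8.0'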

By the way, for checking JVM runtime properties I want to use the metrics endpoint from Spring Actuator, so let's enable it in application.properties:

management.endpoints.web.exposure.include=metrics
management.endpoint.metrics.enabled=true
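
This, of course, assumes the Actuator starter is already among the project's dependencies; with Gradle it is a one-liner:

implementation 'org.springframework.boot:spring-boot-starter-actuator'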

Now we are almost ready to go, but let's first check how the application behaves on my local machine. To trace the memory consumption, I am using VisualVM.

And a load of 200 MB of memory consumption for 10 seconds:

curl -X POST http://127.0.0.1:9000/load/10/200

produces this shape of heap memory (figure: heap-consumption-shape).

Building image

Having an application to test, it's time to build an image and test it inside a container. Although Spring provides plugins for building images out of the box (both for Maven and Gradle), I have created a plain Dockerfile for simplicity:

FROM openjdk:11-slim
VOLUME /tmp
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["sh", "-c", "java ${JAVA_OPTS} -jar /app.jar"]

Notice the JAVA_OPTS environment variable usage: the shell form of ENTRYPOINT runs the command through sh -c, which is what lets the variable be expanded at container start. We will need it soon.
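
A side note: if you prefer the exec form of ENTRYPOINT (no sh -c), the JVM also honors the JAVA_TOOL_OPTIONS environment variable on its own, so a variant of the Dockerfile could look like this (a sketch, not what I use below):

FROM openjdk:11-slim
VOLUME /tmp
ARG JAR_FILE=build/libs/*.jar
COPY ${JAR_FILE} app.jar
# Exec form: no shell involved; the JVM reads JAVA_TOOL_OPTIONS from the
# environment by itself and logs "Picked up JAVA_TOOL_OPTIONS: ..." at start.
ENTRYPOINT ["java", "-jar", "/app.jar"]

I will stick with JAVA_OPTS for the rest of this article.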

Building image:

gradle clean bootJar
docker build -t com.relaximus/mem-test .

Testing

Let’s just run with default settings and do some load:

docker run -p 9000:9000 -e JAVA_OPTS=-Dserver.port=9000 com.relaximus/mem-test

First, let's check how much memory we are using and what the maximum is for the heap:

curl http://localhost:9000/actuator/metrics/jvm.memory.used?tag=area:heap
{ ...
 "measurements": [
  {
   "statistic": "VALUE",
   "value": 17670664
  }
 ],
  ...
}
curl http://localhost:9000/actuator/metrics/jvm.memory.max?tag=area:heap
{ ...
  "measurements": [
    {
      "statistic": "VALUE",
      "value": 517996542
    }
  ],
  ...
}
docker stats

(figure: first-docker-stats, the docker stats output)

Ok, what do we see from all of that? Our container has about 2 GB of memory available, but the heap of the Java application running inside can use only about a quarter of it: 517 MB at maximum, with about 17 MB currently in use. That quarter is no accident: on JDK 11 the default value of -XX:MaxRAMPercentage is 25, so the heap is capped at 25% of the RAM the JVM sees. And that's bad news: since this container exists only for our Java application, we would like the application to be able to consume all the resources inside the container. If, for instance, I run a load of 600 MB of memory, the application should fail with an OutOfMemoryError, which is not nice considering that the container has almost 2 GB of memory available. Let's try:

curl -X POST http://127.0.0.1:9000/load/20/600
2021-08-15 19:38:33.379  WARN 8 --- [pool-1-thread-6] c.m.fakeload.SimulationInfrastructure    : Memory Simulator died: java.lang.OutOfMemoryError: Java heap space, starting new one...
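
As expected, the memory simulator dies with an OutOfMemoryError. To confirm where the quarter-sized cap comes from, we can ask the JVM inside the container for its ergonomic defaults (replace <container-id> with the ID that docker ps shows):

docker exec <container-id> java -XX:+PrintFlagsFinal -version | grep -iE 'MaxHeapSize|MaxRAMPercentage'

On JDK 11 this reports MaxRAMPercentage = 25, exactly the default quarter we have just observed.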

Simple solution

That’s actually not “rocket science”, just to configure the heap size for the application, we can do it via the JAVA_OPTS environment variable, which I noticed before. The problem here actually, is that it’s really easy to forget about that tiny but crucial moment when rolling out a new containerized application. Usually, all sorts of guides don’t bother too much about mentioning that, but the problem happens on prod with a real load, and fixing that is usually not pleasant at all.

Running the container with the right settings (you can use any option of your choice; percentages, for example, as shown in the note at the end of this section):

docker run -p 9000:9000 -e JAVA_OPTS="-Dserver.port=9000 -Xmx1500m"  com.relaximus/mem-test
curl http://localhost:9000/actuator/metrics/jvm.memory.max?tag=area:heap
{
  ...
  "measurements": [
    {
      "statistic": "VALUE",
      "value": 1572863998
    }
  ],
  ...
}

As we can see, this time I have 1.5 GB of memory available for the heap. Let's retry the load that failed in the previous attempt:

curl -X POST http://127.0.0.1:9000/load/20/600

and from a parallel terminal, during these 20 seconds, let's check the consumption endpoint:

curl http://localhost:9000/actuator/metrics/jvm.memory.used?tag=area:heap
{ ...
 "measurements": [
  {
   "statistic": "VALUE",
   "value": 646384216
  }
 ],
  ...
}

This time there are no exceptions, just evidence of the 600 MB being consumed.
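
By the way, if you don't want to hard-code an absolute -Xmx value, JDK 10+ can size the heap as a percentage of the memory available to the container. For example, to give the heap 75% of it:

docker run -p 9000:9000 -e JAVA_OPTS="-Dserver.port=9000 -XX:MaxRAMPercentage=75.0" com.relaximus/mem-test

With this approach the heap cap follows the container's memory limit automatically whenever you resize the container.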

Conclusion

Ok, I am not sure whether anyone has learned something new from this article. But at least it can serve as a good reminder to all of you who deliver containerized applications to check whether you are actually wasting memory. Good luck!