
I have a Java Spring Boot application with some open-source schedulers in it. The main workflow of the application is: whenever we get a request, we create a task scheduled for some configured time in the future (let's say 12 hours from the time of launch) and store the details of the task as a blob in a table, say task_table. There is a poller in the application that starts at app startup and keeps polling task_table to check whether any tasks are due for execution and should be picked up. Once a task is picked up, the application takes at most 15 seconds to complete it.
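
For context, the task-creation path looks roughly like this (a simplified sketch; the table name, column names, and the JdbcTemplate wiring here are illustrative, not my exact code):

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

import java.time.Duration;
import java.time.Instant;

@Service
public class TaskSchedulingService {

    private final JdbcTemplate jdbcTemplate;

    public TaskSchedulingService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /** Persist an incoming request as a task that becomes due 12 hours from now. */
    public void scheduleTask(byte[] taskBlob) {
        Instant dueAt = Instant.now().plus(Duration.ofHours(12));
        // Assumed schema: payload (bytea), due_at (timestamp), status (text).
        jdbcTemplate.update(
                "INSERT INTO task_table (payload, due_at, status) VALUES (?, ?, 'PENDING')",
                taskBlob, java.sql.Timestamp.from(dueAt));
    }
}
```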

The poller has separate thread pools configured. There is a master agent that checks the table and distributes the picked-up tasks to the slave threads. I am using PostgreSQL as my database.
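
The poller/worker split is roughly along these lines (again a simplified sketch: the pool size, column names, and a plain @Scheduled method standing in for the open-source scheduler I actually use are all assumptions):

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Requires @EnableScheduling on a configuration class for @Scheduled to fire.
@Component
public class TaskPoller {

    private final JdbcTemplate jdbcTemplate;
    private final ExecutorService workers = Executors.newFixedThreadPool(8);

    public TaskPoller(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /** Master thread: runs periodically and hands due tasks to the worker pool. */
    @Scheduled(fixedDelay = 5000)
    public void pollDueTasks() {
        List<Map<String, Object>> due = jdbcTemplate.queryForList(
                "SELECT id, payload FROM task_table "
                        + "WHERE status = 'PENDING' AND due_at <= now() "
                        + "LIMIT 50");
        for (Map<String, Object> row : due) {
            long id = ((Number) row.get("id")).longValue();
            byte[] payload = (byte[]) row.get("payload");
            workers.submit(() -> execute(id, payload));
        }
    }

    private void execute(long id, byte[] payload) {
        // ... up to ~15 seconds of work per task, then mark the row as done ...
        jdbcTemplate.update("UPDATE task_table SET status = 'DONE' WHERE id = ?", id);
    }
}
```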

I have containerized my app as a Docker container and deployed it on AWS EC2 instances. The main problem I'm seeing: when I run the app in a single container, tasks complete in the 15 seconds mentioned above. But when I scale to 2, 3, 4 containers, etc., I see a steady increase in task execution time: with 2 containers it takes 20 seconds, with 3 containers 45 seconds, and so on.

I have checked all the connection pool settings on the DB side and the IOPS values for read/write latency. Everything seems fine, but I can't figure out why tasks take longer as the containers scale out, because there is nothing shared across the containers other than the DB.
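
Since the DB is the only shared piece, one thing I can still check is whether the per-container pollers block each other on task_table, e.g. by snapshotting waiting sessions from the standard pg_stat_activity view while the pollers run. A sketch of that check (the JdbcTemplate wiring is again illustrative):

```java
import org.springframework.jdbc.core.JdbcTemplate;

import java.util.List;
import java.util.Map;

public class LockWaitProbe {

    private final JdbcTemplate jdbcTemplate;

    public LockWaitProbe(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    /** Sessions currently waiting on a lock, with the query each one is running. */
    public List<Map<String, Object>> waitingSessions() {
        return jdbcTemplate.queryForList(
                "SELECT pid, wait_event_type, wait_event, state, query "
                        + "FROM pg_stat_activity "
                        + "WHERE wait_event_type = 'Lock'");
    }
}
```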

Can someone help me with some inferences about the above problem?

I have already tried changing the EC2 instance types, the RDS storage size, the connection pool settings, the IOPS values, etc.

  • What is your workload? Is your primary bottleneck waiting for network connections, or is it CPU/Memory/Disk intensive work? Is the given resource already maxed when you only have one container running? I'm also not sure if I'm understanding correctly, but it sounds like you're running multiple containers on the same EC2 instance? Commented Apr 27, 2023 at 20:18
  • I tried checking the CPU/memory percentages while running. The app is memory hungry, so I increased the container memory to 5 GB and assigned 4 vCPUs to my service. I'm using the c5.xlarge EC2 instance class, and only one container is running per instance. Commented Apr 28, 2023 at 4:20
  • If a single container consumes all of the memory on your instance, spawning more will likely cause the system to start swapping, which will slow things down a lot. It sounds like what you're really looking for is to scale the amount of resources available to the server. AWS has many products that let you do automatic scaling, but EC2 instances are just virtual machines and can be a bit difficult to scale. You could perhaps take a look here to find an appropriate product :) docs.aws.amazon.com/whitepapers/latest/docker-on-aws/… Commented Apr 28, 2023 at 9:49
  • Currently yes, I'm using AWS ECS container auto scaling and scaling containers horizontally based on CPU usage. So you are suggesting using a different type of auto-scaling mechanism, so that all the resources will be available when a new container comes up? Commented Apr 28, 2023 at 14:32
  • Also, I tried deploying 1 container per instance; even then it didn't help much. Commented Apr 28, 2023 at 14:40
