
I'm setting up a container with the following Dockerfile:

# Start with project/baseline (image with mongo / nodejs / sailsjs)
FROM project/baseline

# Create the folder that will contain all the sources
RUN mkdir -p /var/project

# Load the configuration file and the deployment script
ADD init.sh /var/project/init.sh

# src contains a list of folders, each one being a sails app
ADD src/ /var/project/

# Compile the sources / run the services / run mongodb
CMD /var/project/init.sh

The init.sh script is called when the container runs. It should start a couple of web apps and mongodb:

#!/bin/bash
PROJECT_PATH=/var/project

# Start mongodb
function start_mongo {
  mongod --fork --logpath /var/log/mongodb.log  # attempt to have mongo run as a daemon
}

# Start services
function start {
  for service in "$PROJECT_PATH"/*/; do
    cd "$service"
    npm start  # runs "sails lift" on each service
  done
}

# start mongodb
start_mongo

# start web applications defined in /var/project
start

Basically, there are a couple of nodejs (sailsjs) applications in /var/project.
When I run the container, I get the following message:

$ sudo docker run -t -i projects/test
about to fork child process, waiting until server is ready for connections.
forked process: 10

and then it remains stuck.

How can mongo and the sails processes be started so that the container remains in a running state?

UPDATE

I now use this supervisord.conf file:

[supervisord]
nodaemon=true   ; keep supervisord in the foreground so the container stays up

[program:mongodb]
command=/usr/bin/mongod

[program:process1]
command=/bin/bash -c "cd /var/project/service1 && node app.js"

[program:process2]
command=/bin/bash -c "cd /var/project/service2 && node app.js"

It is invoked from the Dockerfile like this:

# run the applications (mongodb + project related services)
CMD ["/usr/bin/supervisord"]

As my services depend on mongo starting correctly, supervisord does not wait long enough and the services fail to start. Any idea how to solve that?
By the way, is it actually good practice to run mongo in the same container?
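For what it's worth, supervisord can at least order the startup and delay declaring a program "started"; a sketch of those options applied to the config above (this only orders the launches, it does not wait for mongo to actually accept connections):

```ini
[supervisord]
nodaemon=true

[program:mongodb]
command=/usr/bin/mongod
priority=1        ; lower priority values are started first
startsecs=5       ; only count mongod as started after 5s of uptime

[program:process1]
command=/bin/bash -c "cd /var/project/service1 && node app.js"
priority=10
startretries=10   ; retry if node exits because mongo is not ready yet
```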

UPDATE 2

I went back to a service.sh script that is called when the container runs. I know this is not clean (but I'll say it's temporary until I fix the problem I have with supervisor), but I'm doing the following:

  • run nohup mongod &
  • wait 60 sec
  • run my node (forever) processes

The thing is, the container exits right after the forever processes are run... how can it be kept active?
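For reference, the underlying rule can be sketched with plain bash: the container stops when its PID 1 exits, so the CMD script has to block instead of returning (sleep stands in for mongod and the node processes here):

```shell
#!/bin/bash
# Sketch: a CMD script that backgrounds its children and then returns
# lets PID 1 exit, and the container stops with it. "wait" blocks
# until every background child has finished, keeping PID 1 alive.
sleep 2 &   # stand-in for "nohup mongod &"
sleep 1 &   # stand-in for a node process
wait        # block here instead of letting the script (and container) end
echo "all children have exited"
```

Note that forever daemonizes its processes, so in the real script there is no foreground child left to wait on; running the last process in the foreground (or using a supervisor) avoids that.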

1 Answer


If you want to cleanly start multiple services inside a container, one approach is to use a process supervisor of some sort. One option is documented here, in the official Docker documentation.

I've done something similar using runit. You can see my base runit image here, and a multi-service application image using that here.
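As a rough sketch of the runit approach (the path below is an assumption, not taken from the linked images): each service gets a directory containing an executable run script, and runsvdir supervises them all, restarting any process that dies. Processes must stay in the foreground, so no --fork:

```shell
#!/bin/sh
# Hypothetical /etc/service/mongodb/run script for runit.
# Note: no --fork -- runit expects the process to stay in the foreground.
exec mongod --logpath /var/log/mongodb.log --logappend
```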


4 Comments

I'm trying with supervisor (very good tips though!) but I have a new problem. As I need mongodb to be started before my other processes can start, it seems supervisor does not wait long enough before retrying to run the node processes.
You could wrap your other services with a shell script that waits for mongodb to be available before starting the service. So supervisor starts your wrapper script, your wrapper script waits for mongodb and, when it's available, your wrapper script starts the additional service.
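A sketch of such a wrapper, built around a generic retry helper (the mongo probe in the final comment assumes the mongo shell client is on the PATH):

```shell
#!/bin/bash
# Hypothetical wrapper: supervisord runs this instead of "node app.js" directly.
# wait_for CMD...: retry CMD once a second until it succeeds, up to $RETRIES tries.
RETRIES=${RETRIES:-30}

wait_for() {
  local tries=0
  until "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$RETRIES" ]; then
      echo "gave up waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# Example usage (assumed paths and probe):
#   wait_for mongo --eval "db.adminCommand('ping')" \
#     && cd /var/project/service1 && exec node app.js
```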
Any tips on how to check whether mongo is up AND waiting for connections?
I don't know anything about mongo, actually, but presumably you could write something to perform a query against the database. I see that the official mongodb docs have a section on monitoring that might provide some fodder for thought.
