
Is there a way to delay container startup to support dependant services with a longer startup time #374

Closed
dancrumb opened this issue Aug 4, 2014 · 316 comments

Comments

@dancrumb

dancrumb commented Aug 4, 2014

I have a MySQL container that takes a little time to start up as it needs to import data.

I have an Alfresco container that depends upon the MySQL container.

At the moment, when I use fig, the Alfresco service inside the Alfresco container fails when it attempts to connect to the MySQL container... ostensibly because the MySQL service is not yet listening.

Is there a way to handle this kind of issue in Fig?

@d11wtq

d11wtq commented Aug 4, 2014

At work we wrap our dependent services in a script that checks if the link is up yet. I know one of my colleagues would be interested in this too! Personally I feel it's a container-level concern to wait for services to be available, but I may be wrong :)
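For illustration, a minimal version of such a wrapper might look like the following (a sketch, not the actual script described above; "db" and 3306 are placeholder values):

#!/bin/sh
# Sketch: block until the linked service accepts TCP connections,
# then exec the real command passed to the container.
until nc -z db 3306; do
  echo "waiting for db:3306..."
  sleep 1
done
exec "$@"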

@nubs

nubs commented Aug 4, 2014

We do the same thing with wrapping. You can see an example here: https://github.com/dominionenterprises/tol-api-php/blob/master/tests/provisioning/set-env.sh

@bfirsh

bfirsh commented Aug 4, 2014

It'd be handy to have an entrypoint script that loops over all of the links and waits until they're working before starting the command passed to it.

This should be built into Docker itself, but such a solution is a way off. A container shouldn't be considered started until the link it exposes has opened.
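A rough sketch of such an entrypoint, assuming the *_TCP_ADDR / *_TCP_PORT environment variables that Docker links inject (this is not part of fig or Docker, just an illustration):

#!/bin/sh
# Sketch: derive host:port pairs from the link environment variables,
# wait for each to accept TCP connections, then run the given command.
for addr_var in $(env | grep '_TCP_ADDR=' | cut -d= -f1); do
  port_var="$(echo "$addr_var" | sed 's/_TCP_ADDR$/_TCP_PORT/')"
  host="$(printenv "$addr_var")"
  port="$(printenv "$port_var")"
  until nc -z "$host" "$port"; do
    echo "waiting for $host:$port..."
    sleep 1
  done
done
exec "$@"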

@dancrumb
Author

dancrumb commented Aug 4, 2014

@bfirsh that's more than I was imagining, but would be excellent.

A container shouldn't be considered started until the link it exposes has opened.

I think that's exactly what people need.

For now, I'll be using a variation on https://github.com/aanand/docker-wait

@silarsis

silarsis commented Aug 4, 2014

Yeah, I'd be interested in something like this - meant to post about it earlier.

The smallest-impact pattern I can think of that would fix this use case for us would be the following:

Add "wait" as a new key in fig.yml, with similar value semantics as link. Docker would treat this as a pre-requisite and wait until this container has exited prior to carrying on.

So, my fig.yml would look something like:

db:
  image: tutum/mysql:5.6

initdb:
  build: /path/to/db
  link:
    - db:db
  command: /usr/local/bin/init_db

app:
  link:
    - db:db
  wait:
    - initdb

On running app, it will start up all the link containers, then run the wait container and only progress to the actual app container once the wait container (initdb) has exited. initdb would run a script that waits for the database to be available, then runs any initialisations/migrations/whatever, then exits.

That's my thoughts, anyway.
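A sketch of what that /usr/local/bin/init_db script might contain (the mysql invocations and credentials here are illustrative, not from the comment above):

#!/bin/sh
# Sketch: wait for the linked db to answer, run initialisations/migrations,
# then exit so containers "wait"-ing on initdb can start.
until mysql -h db -u root -e 'SELECT 1' >/dev/null 2>&1; do
  echo "waiting for mysql..."
  sleep 1
done
mysql -h db -u root < /schema.sql   # illustrative migration step
exit 0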

@dnephin

dnephin commented Aug 5, 2014

(revised, see below)

@dsyer

dsyer commented Aug 14, 2014

+1 here too. It's not very appealing to have to do this in the commands themselves.

@jcalazan

+1 as well. Just ran into this issue. Great tool btw, makes my life so much easier!

@arruda

arruda commented Aug 16, 2014

+1 would be great to have this.

@prologic

+1 also. Recently ran into the same set of problems.

@chymian

chymian commented Aug 19, 2014

+1 also. Any statement from the Docker guys?

@codeitagile

I am writing wrapper scripts as entrypoints to synchronise at the moment. I'm not sure having a mechanism in fig is wise if you have other targets for your containers that perform orchestration a different way. It seems very application-specific to me, and as such the responsibility of the containers doing the work.

@prologic

After some thought and experimentation I do kind of agree with this.

As such, an application I'm building basically has a synchronous
waitfor(host, port) function that lets me wait for services the application
depends on (either detected via the environment or configured
explicitly via CLI options).

cheers
James
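A shell equivalent of that waitfor(host, port) helper might look like this (a sketch; the actual implementation lives in the application mentioned above):

# Sketch: synchronous waitfor helper.
waitfor() {
  host="$1"; port="$2"
  until nc -z "$host" "$port"; do
    sleep 1
  done
}

# e.g. waitfor "${DB_HOST:-db}" "${DB_PORT:-5432}" before starting the app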


@shuron

shuron commented Aug 31, 2014

Yes, some basic "depends on" is needed here...
so if you have 20 containers, you just want to run fig up and have everything start in the correct order...
However, it should also have a timeout option or other failure-catching mechanisms.

@ahknight

Another +1 here. I have Postgres taking longer than Django to start so the DB isn't there for the migration command without hackery.

@dnephin

dnephin commented Oct 23, 2014

@ahknight interesting, why is migration running during run?

Don't you want to actually run migrate during the build phase? That way you can start up fresh images much faster.

@ahknight

There's a larger startup script for the application in question, alas. For now, we're doing non-DB work first, using nc -w 1 in a loop to wait for the DB, then doing DB actions. It works, but it makes me feel dirty(er).
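That loop is presumably something like the following (a sketch of the nc -w 1 pattern described; the host and port are placeholders):

# Sketch: poll the database with a 1-second connect timeout until it answers.
until nc -w 1 -z db 5432; do
  sleep 1
done
# ...then run migrations and the rest of the startup script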

@dnephin

dnephin commented Oct 23, 2014

I've had a lot of success doing this work during the fig build phase. I have one example of this with a django project (still a work in progress though): https://github.com/dnephin/readthedocs.org/blob/fig-demo/dockerfiles/database/Dockerfile#L21

No need to poll for startup. Although I've done something similar with mysql, where I did have to poll for startup because the mysqld init script wasn't doing it already. This postgres init script seems to be much better.
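For the MySQL case, the polling during the build step can be as simple as the following (a sketch; the exact mysqld invocation and credentials depend on the image):

# Sketch: start mysqld, wait until it answers, load fixtures, shut it down.
mysqld_safe &
until mysqladmin ping --silent; do
  sleep 1
done
mysql < /fixtures/schema.sql   # illustrative fixture/schema load
mysqladmin shutdown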

@arruda

arruda commented Oct 24, 2014

Here is what I was thinking:

Using the idea of moby/moby#7445, could we implement a "wait_for_health_check" attribute in fig?
So it would be a fig issue, not a Docker one?

Is there any way of making fig check the TCP status on the linked containers? If so, then I think this is the way to go. =)

@docteurklein

@dnephin can you explain a bit more what you're doing in Dockerfiles to help with this?
Isn't the build phase unable to influence the runtime?

@dnephin

dnephin commented Nov 10, 2014

@docteurklein I can. I fixed the link from above (https://github.com/dnephin/readthedocs.org/blob/fig-demo/dockerfiles/database/Dockerfile#L21)

The idea is that you do all the slower "setup" operations during the build, so you don't have to wait for anything during container startup. In the case of a database or search index, you would:

  1. start the service
  2. create the users, databases, tables, and fixture data
  3. shutdown the service

all as a single build step. Later, when you fig up the database container, it's ready to go basically immediately, and you also get to take advantage of the Docker build cache for these slower operations.
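As a concrete sketch of those three steps for Postgres, all run as one build step (invocations and paths are illustrative, not dnephin's actual Dockerfile):

pg_ctl -D "$PGDATA" -w start                 # 1. start the service
createuser app && createdb -O app app_db     # 2. create the users and databases...
psql -d app_db -f /fixtures/schema.sql       #    ...tables and fixture data
pg_ctl -D "$PGDATA" -w -m fast stop          # 3. shut down the service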

@docteurklein

nice! thanks :)

@arruda

arruda commented Nov 11, 2014

@dnephin nice, hadn't thought of that.

@oskarhane

+1 This is definitely needed.
An ugly time delay hack would be enough in most cases, but a real solution would be welcome.

@dnephin

dnephin commented Dec 5, 2014

Could you give an example of why/when it's needed?

@dacort

dacort commented Dec 5, 2014

In the use case I have, I have an Elasticsearch server and then an application server that's connecting to Elasticsearch. Elasticsearch takes a few seconds to spin up, so I can't simply do a fig up -d because the application server will fail immediately when connecting to the Elasticsearch server.

@ddossot

ddossot commented Dec 5, 2014

Say one container starts MySQL and the other starts an app that needs MySQL and it turns out the other app starts faster. We have transient fig up failures because of that.

@oskarhane

crane has a way around this by letting you create groups that can be started individually. So you can start the MySQL group, wait 5 secs and then start the other stuff that depends on it.
It works on a small scale, but it's not a real solution.
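The fig equivalent of that workaround is simply bringing the groups up by hand (a sketch; the service names are placeholders, and the fixed sleep has the weakness noted in the next comment):

# Sketch: start the database first, wait a fixed interval, then the rest.
fig up -d db
sleep 5
fig up -d app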

@arruda

arruda commented Dec 6, 2014

@oskarhane not sure this "wait 5 secs" helps; in some cases it might need to wait longer (or you just can't be sure it won't go over the 5 secs)... it isn't very safe to rely on a fixed waiting time.
Also you would have to do this waiting and loading of the other group manually, and that's kind of lame, fig should do that for you =/

@Silex

Silex commented Mar 8, 2017

@vladikoff: more info about version 3 at #4305

Basically, it won't be supported; you have to make your containers fault-tolerant instead of relying on docker-compose.

@shin-

shin- commented Mar 21, 2017

I believe this can be closed now.

@slava-nikulin

slava-nikulin commented May 11, 2017

Unfortunately, condition is not supported anymore in v3. Here is a workaround that I've found:

website:
    depends_on:
      - 'postgres'
    build: .
    ports:
      - '3000'
    volumes:
      - '.:/news_app'
      - 'bundle_data:/bundle'
    entrypoint: ./wait-for-postgres.sh postgres 5432

postgres:
    image: 'postgres:9.6.2'
    ports:
      - '5432'

wait-for-postgres.sh:

#!/bin/sh

postgres_host=$1
postgres_port=$2
shift 2
cmd="$@"

# wait for the postgres docker to be running
while ! pg_isready -h $postgres_host -p $postgres_port -q -U postgres; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"

# run the command
exec $cmd

@riuvshin

@slava-nikulin custom entrypoints are a common practice; it is almost the only (Docker-native) way you can define and check all the conditions you need before starting your app in a container.

@mbdas

mbdas commented May 11, 2017 via email

@patrickml

patrickml commented Jun 22, 2017

I was able to do something like this.

start.sh:

#!/bin/sh
set -eu

docker volume create --name=gql-sync
echo "Building docker containers"
docker-compose build
echo "Running tests inside docker container"
docker-compose up -d pubsub
docker-compose up -d mongo
docker-compose up -d botms
docker-compose up -d events
docker-compose up -d identity
docker-compose up -d importer
docker-compose run status
docker-compose run testing

exit $?

status.sh:

#!/bin/sh

set -eu

echo "Attempting to connect to bots"
until nc -zv botms 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to events"
until nc -zv events 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to identity"
until nc -zv identity 3000; do
    printf '.'
    sleep 5
done
echo "Attempting to connect to importer"
until nc -zv importer 8080; do
    printf '.'
    sleep 5
done
echo "Was able to connect to all"

exit 0

In my docker-compose file:

  status:
    image: yikaus/alpine-bash
    volumes:
      - "./internals/scripts:/scripts"
    command: "sh /scripts/status.sh"
    depends_on:
      - "mongo"
      - "importer"
      - "events"
      - "identity"
      - "botms"

@usamaB

usamaB commented Oct 30, 2017

I have a similar problem, but a bit different. I have to wait for MongoDB to start and initialize a replica set.
I'm doing all of the procedure in Docker, i.e. creating and authenticating the replica set. But I have another Python script in which I have to connect to the primary node of the replica set, and I'm getting an error there.

docker-compose.txt
Dockerfile.txt
In the Python script I'm trying to do something like this:

for x in range(1, 4):
    client = MongoClient(host='node' + str(x), port=27017, username='admin', password='password')
    if client.is_primary:
        print('the client.address is: ' + str(client.address))
        print(dbName)
        print(collectionName)
        break

I'm having difficulty doing so, does anyone have any idea?
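One way to block until a primary has been elected before running the Python script is to poll from a shell (a sketch; node1 and the admin/password credentials are taken from the snippet above, and it assumes the mongo shell is available):

#!/bin/sh
# Sketch: wait until the replica set reports a primary, then continue.
until mongo --quiet --host node1 -u admin -p password --authenticationDatabase admin \
      --eval 'db.isMaster().primary' | grep -q ':'; do
  echo "waiting for a replica set primary..."
  sleep 2
done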

@chaicode88

@patrickml If I don't use docker-compose, how would I do it with a Dockerfile?
I need cqlsh to execute my build_all.cql, but cqlsh is not ready... I have to wait about 60 seconds for it to be ready.

cat Dockerfile

FROM store/datastax/dse-server:5.1.8

USER root

RUN apt-get update
RUN apt-get install -y vim

ADD db-scripts-2.1.33.2-RFT-01.tar /docker/cms/
COPY entrypoint.sh /entrypoint.sh

WORKDIR /docker/cms/db-scripts-2.1.33.2/
RUN cqlsh -f build_all.cql

USER dse

=============

Step 8/9 : RUN cqlsh -f build_all.cql
---> Running in 08c8a854ebf4
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
The command '/bin/sh -c cqlsh -f build_all.cql' returned a non-zero code: 1
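One way around this is to start the server inside the same build step, wait for cqlsh to connect, load the schema, then stop it again. A hedged sketch (the dse cassandra invocation, user and timings are assumptions about the DSE image, untested):

# Sketch: replace the single `RUN cqlsh -f build_all.cql` with one combined step.
RUN dse cassandra \
 && until cqlsh -e 'DESCRIBE KEYSPACES' >/dev/null 2>&1; do echo "waiting for cqlsh..."; sleep 5; done \
 && cqlsh -f build_all.cql \
 && nodetool drain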

vijayvepa pushed a commit to vijayvepa/AnsibleExamples that referenced this issue Dec 20, 2018
rpdelaney added a commit to CMS-Enterprise/easi-app that referenced this issue Apr 9, 2020
A race condition between these two containers was causing the database
to sometimes get cleaned after migrations had been run. Rather than hack
together scripts to track this state I'm simply removing the clean
service from the docker-compose configuration.

See also:
  * https://docs.docker.com/compose/startup-order/
  * docker/compose#374
abitrolly added a commit to abitrolly/cheat.sh that referenced this issue Jul 29, 2020
Because `docker-compose` is not capable of checking open ports
docker/compose#374
@henroFall

Requires= var-lib-libvirt.mount var-lib-libvirt-images-ram.mount

@Lunar2kPS

In case anyone comes back to this years later, read this wonderful page from the Docker Compose docs for updated info!

@skorokithakis

There are also health checks built in now, which solve the problem better.
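For example, a script outside Compose can poll a container's health status before continuing (a sketch; "db" is a placeholder container name and assumes its image or compose file defines a healthcheck):

#!/bin/sh
# Sketch: wait until the db container's healthcheck reports healthy.
until [ "$(docker inspect --format '{{.State.Health.Status}}' db)" = "healthy" ]; do
  echo "waiting for db to become healthy..."
  sleep 2
done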
