data-model-importer API fails to run via docker compose #19
Comments
As we don't want to maintain some sort of dependency ordering in the docker-compose or maintain hard-coded delays, let's bring the local setup closer to the deployment, where OpenShift transparently restarts services when they fail to come up. Just add a restart policy to the docker-compose:
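A minimal sketch of what such a restart policy could look like in the Compose file (the service name, image, and Compose version are assumptions for illustration; the actual `docker-compose.yml` may differ):

```yaml
# Compose v2 syntax. `restart: on-failure` makes Docker restart the
# container whenever its entrypoint exits non-zero, which mimics
# OpenShift transparently restarting a failing pod.
version: '2'
services:
  data-model-importer:
    image: fabric8-analytics/data-model-importer  # assumed image name
    restart: on-failure
    depends_on:
      - gremlin-http
```

Note that `depends_on` only controls start order, not readiness, so the restart policy is still what actually papers over gremlin-http being slow to come up.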
Please open a PR for this. Thanks!
@fridex It seems to me that the container isn't dying but the API inside the running container is retrying a few times:
Do you think
OK, so what is the issue here? If it successfully retries after some time, things should work as expected, right?
The restart policy is on the Docker layer. If the container does not exit, it will not affect the current behaviour in any way.
@fridex It is the retry loop in the entrypoint script [1]. From the logs [2] I can observe that data-model-importer attempts to connect to gremlin-http a few times, after which there are no more errors. [1] https://github.com/fabric8-analytics/fabric8-analytics-data-model/blob/master/scripts/entrypoint.sh#L4
And yes, the readiness probe also succeeds after waiting for a while.
OK, so I would say we can close this as not-a-bug.
Yes, functionally this works. However, it would be good to avoid the errors generated during the first few attempts. From the data-integrity perspective, what happens if an analysis is invoked while the data-model-importer API is not yet up (due to the failure discussed in this issue)? I understand that all the workers will run and data will be put into S3 (minio), after which the data is put into the graph. But this last step will fail because the data-model-importer API is not up yet. Will the ingestion into data-model-importer be attempted again from the same analysis after a while?
You can see such errors for other services as well.
No, the ingestion will not be re-scheduled. Note that this is a setup for local development; it is your responsibility to have the system up when you want to test something. Adding delays in the data-importer service will not solve your issue - analyses can still be run and the results won't be synced using data-importer anyway. Moreover, having the error message there will help you know when the whole system is up. Anyway, there is a plan to remove the data-importer service and instead do syncs using a Selinon task after each analysis. There is no point in spending time on this.
In that case we can close this issue. |
Start the services using Docker Compose as below:
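The exact command was not captured here; a typical invocation for local development (assuming a `docker-compose.yml` in the repository root) would be:

```shell
# Build and start all services defined in docker-compose.yml in the
# foreground, so logs from every service are interleaved and visible.
docker-compose up --build
```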
Failure output:
The issue is that data-model-importer depends on gremlin-http being up. By the time gremlin-http starts, the importer has already failed trying to connect.
We could use some sort of delay mechanism that holds back the start of data-model-importer until gremlin-http is up.
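A minimal sketch of such a delay mechanism, as a shell helper the importer's entrypoint could call before starting the API (hypothetical: the function name, endpoint URL, and retry/delay defaults are assumptions for illustration and would need to match the real gremlin-http service):

```shell
#!/bin/sh
# Poll a URL until it responds, giving up after a fixed number of
# attempts. Intended to gate the importer's startup on gremlin-http.
wait_for() {
  url="$1"
  retries="${2:-30}"   # default: 30 attempts
  delay="${3:-2}"      # default: 2 seconds between attempts
  i=1
  while [ "$i" -le "$retries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0         # service answered
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1             # service never came up
}

# Hypothetical usage in an entrypoint:
#   wait_for "http://gremlin-http:8182" 30 2 || exit 1
```

This is essentially what the existing entrypoint retry loop does; the restart policy discussed above achieves the same effect one layer up, without extra scripting.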
Related thread and SO post: