From 2a82ed142adc7640c506bd82dc11e5bbd730d28a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Sebasti=C3=A1n=20Ram=C3=ADrez?=
Date: Fri, 1 Oct 2021 22:51:24 +0200
Subject: [PATCH] =?UTF-8?q?=F0=9F=93=9D=20Add=20Kubernetes=20warning,=20wh?=
 =?UTF-8?q?en=20to=20use=20this=20image?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 78 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 77 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index e5c5f0f..bcbbede 100644
--- a/README.md
+++ b/README.md
@@ -56,13 +56,89 @@
 This image was created to be an alternative to [**tiangolo/uwsgi-nginx**](https://github.com/tiangolo/uwsgi-nginx-docker).
 
 And to be the base of [**tiangolo/meinheld-gunicorn-flask**](https://github.com/tiangolo/meinheld-gunicorn-flask-docker).
 
+## 🚨 WARNING: You Probably Don't Need this Docker Image
+
+You are probably using **Kubernetes** or similar tools. In that case, you probably **don't need this image** (or any other **similar base image**). You are probably better off **building a Docker image from scratch**.
+
+---
+
+If you have a cluster of machines with **Kubernetes**, Docker Swarm Mode, Nomad, or another similar complex system to manage distributed containers on multiple machines, then you will probably want to **handle replication** at the **cluster level** instead of using a **process manager** in each container that starts multiple **worker processes**, which is what this Docker image does.
+
+In those cases (e.g. using Kubernetes) you would probably want to build a **Docker image from scratch**, installing your dependencies, and running **a single process**, instead of using this image.
+
+For example, using [Gunicorn](https://gunicorn.org/) you could have a file `app/gunicorn_conf.py` with:
+
+```Python
+# Gunicorn config variables
+loglevel = "info"
+errorlog = "-"  # stderr
+accesslog = "-"  # stdout
+worker_tmp_dir = "/dev/shm"
+graceful_timeout = 120
+timeout = 120
+keepalive = 5
+threads = 3
+```
+
+And then you could have a `Dockerfile` with:
+
+```Dockerfile
+FROM python:3.9
+
+WORKDIR /code
+
+COPY ./requirements.txt /code/requirements.txt
+
+RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
+
+COPY ./app /code/app
+
+CMD ["gunicorn", "--config", "app/gunicorn_conf.py", "--bind", "0.0.0.0:80", "app.main:app"]
+```
+
+You can read more about these ideas in the [FastAPI documentation about: FastAPI in Containers - Docker](https://fastapi.tiangolo.com/deployment/docker/#replication-number-of-processes), as the same ideas apply to other web applications in containers.
+
+## When to Use this Docker Image
+
+### A Simple App
+
+You could want a process manager running multiple worker processes in the container if your application is **simple enough** that you don't need (at least not yet) to fine-tune the number of processes, if you can just use an automated default, and if you are running it on a **single server**, not a cluster.
+
+### Docker Compose
+
+You could be deploying to a **single server** (not a cluster) with **Docker Compose**, so you wouldn't have an easy way to manage replication of containers while preserving the shared network and **load balancing**.
+
+Then you could want to have **a single container** with a **process manager** starting **several worker processes** inside, as this Docker image does.
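+
+In both of the cases above, the "automated default" for the number of worker processes usually means deriving it from the CPU cores available. As a rough illustration of that idea (a hypothetical sketch, not necessarily the exact logic this image ships with), a Gunicorn config file could compute it like this:
+
+```Python
+# gunicorn_conf.py -- hypothetical sketch of an "automated default" for
+# the number of worker processes, based on the CPU cores available.
+import multiprocessing
+import os
+
+# Allow an explicit override through an environment variable,
+# otherwise default to 2 worker processes per CPU core.
+default_workers = multiprocessing.cpu_count() * 2
+workers = int(os.getenv("WEB_CONCURRENCY", default_workers))
+
+bind = "0.0.0.0:80"
+loglevel = "info"
+```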
+
+### Prometheus and Other Reasons
+
+You could also have **other reasons** that would make it easier to have a **single container** with **multiple processes** instead of having **multiple containers** with **a single process** in each of them.
+
+For example (depending on your setup) you could have some tool like a Prometheus exporter in the same container that should have access to **each of the requests** that come in.
+
+In that case, if you had **multiple containers**, by default, when Prometheus came to **read the metrics**, it would get the ones for **a single container each time** (for the container that handled that particular request), instead of getting the **accumulated metrics** for all the replicated containers.
+
+Then it could be simpler to have **one container** with **multiple processes**, and a local tool (e.g. a Prometheus exporter) on that same container collecting Prometheus metrics for all the internal processes and exposing those metrics on that single container. A sketch of such a metrics endpoint is shown at the end of this README.
+
+---
+
+Read more about it all in the [FastAPI documentation about: FastAPI in Containers - Docker](https://fastapi.tiangolo.com/deployment/docker/), as the same concepts apply to other web applications in containers.
+
 ## How to use
 
-* You don't need to clone the GitHub repo. You can use this image as a base image for other images, using this in your `Dockerfile`:
+You don't have to clone this repo.
+
+You can use this image as a base image for other images.
+
+Assuming you have a file `requirements.txt`, you could have a `Dockerfile` like this:
 
 ```Dockerfile
 FROM tiangolo/meinheld-gunicorn:python3.9
 
+COPY ./requirements.txt /app/requirements.txt
+
+RUN pip install --no-cache-dir --upgrade -r /app/requirements.txt
+
 COPY ./app /app
 ```
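+
+The image will then start Gunicorn worker processes running the application in the copied `./app` directory. As a hypothetical sketch of what that directory could contain (assuming, for illustration, a WSGI callable named `app` in `app/main.py`, mirroring the `app.main:app` layout used in the Gunicorn example above; a Flask `app` object works the same way):
+
+```Python
+# app/main.py -- hypothetical, minimal WSGI application, used only to
+# illustrate the layout assumed by the examples in this README.
+
+def app(environ, start_response):
+    # Answer every request with a short plain-text body.
+    body = b"Hello World, from a worker process!"
+    start_response("200 OK", [
+        ("Content-Type", "text/plain"),
+        ("Content-Length", str(len(body))),
+    ])
+    return [body]
+```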
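+
+And if you go the route described above in **Prometheus and Other Reasons**, a metrics endpoint that aggregates the metrics of all the worker processes in the container could look like the following sketch. It assumes the [prometheus_client](https://github.com/prometheus/client_python) library in multiprocess mode, which requires the `PROMETHEUS_MULTIPROC_DIR` environment variable to point to a writable directory shared by the workers:
+
+```Python
+# app/metrics.py -- hypothetical sketch: expose one aggregated set of
+# Prometheus metrics for all the worker processes in this container.
+from prometheus_client import CONTENT_TYPE_LATEST, CollectorRegistry, generate_latest, multiprocess
+
+
+def metrics_app(environ, start_response):
+    # Collect the metrics files written by every worker process and
+    # serve them as a single scrape target for Prometheus.
+    registry = CollectorRegistry()
+    multiprocess.MultiProcessCollector(registry)
+    data = generate_latest(registry)
+    start_response("200 OK", [
+        ("Content-Type", CONTENT_TYPE_LATEST),
+        ("Content-Length", str(len(data))),
+    ])
+    return [data]
+```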