How to deploy a Django stack project with Docker

2020-06-17 07:00:49, compiled by 易采站长站

ENTRYPOINT ["/bin/bash", "entrypoint.sh"]

Other files for the web container:

compose/web/entrypoint.sh — the web container's startup script; it runs initialization and health-check logic.
compose/web/gunicorn.conf — the gunicorn configuration file.
compose/web/gunicorn.sh — the gunicorn startup script.
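The article does not reproduce these scripts. As a rough illustration only, an entrypoint.sh invoked by the ENTRYPOINT above typically waits for its dependencies and then runs one-time Django setup; everything below (the `db` host name, the port, and the manage.py steps) is an assumption, not the article's actual script:

```shell
#!/bin/bash
# entrypoint.sh -- hypothetical sketch; the real script is not shown in the article.
# Assumes the compose service name "db" and the default MySQL port.
set -e

# Block until the db container accepts TCP connections.
until nc -z db 3306; do
  echo "waiting for mysql..."
  sleep 1
done

# One-time Django initialization.
python manage.py migrate --noinput
python manage.py collectstatic --noinput

# Hand control to the container's CMD (e.g. "sh gunicorn.sh").
exec "$@"
```

This pattern only makes sense inside the compose network, where `db` resolves to the MySQL container.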

The Celery Dockerfile:


FROM python:3.6
ENV PYTHONUNBUFFERED 1

RUN mkdir /code
WORKDIR /code

COPY ./requirements.txt /code/
COPY ./compose/celery/*.sh /code/
RUN pip install --no-cache-dir -r requirements.txt \
    && rm -rf requirements.txt && sh init_env.sh

COPY . /code/
COPY ./compose/celery/*.sh /code/
RUN sed -i 's/\r//' celery.sh \
    && chmod +x celery.sh \
    && sed -i 's/\r//' celery-beat.sh \
    && chmod +x celery-beat.sh
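The two `sed -i 's/\r//'` calls strip Windows-style carriage returns, which otherwise break scripts checked out with CRLF line endings. A small standalone demonstration (the file name demo.sh is just for illustration):

```shell
# Simulate a script saved with Windows (CRLF) line endings.
printf '#!/bin/sh\r\necho ok\r\n' > demo.sh

# Same fix as in the Dockerfile: delete the carriage returns in place.
sed -i 's/\r//' demo.sh

# The script now behaves normally under /bin/sh.
sh demo.sh   # prints "ok"
```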

Other files for the Celery containers:

compose/celery/celery.sh — the Celery worker startup script.
compose/celery/celery-beat.sh — the celery-beat startup script.
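These scripts are not reproduced in the article either; a minimal sketch of celery.sh might look like the following, where `proj` is a placeholder for the project's actual Celery app module (an assumption, since the article never names it):

```shell
#!/bin/sh
# celery.sh -- hypothetical sketch; "proj" stands in for the real
# Django package that defines the Celery application.
exec celery -A proj worker -l info --logfile=/var/logs/celery.log
```

celery-beat.sh would be the analogous one-liner with `beat` in place of `worker`.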

Writing the Compose startup configuration

The docker-compose configuration is as follows:


version: '2'

services:
  redis:
    image: redis
    ports:
      - "6379:6379"

  db:
    restart: always
    image: mysql:5.7.19
    # command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    volumes:
      - ./compose/mysql/:/etc/mysql/conf.d
      - ./db:/var/lib/mysql
    # for test
    ports:
      - "127.0.0.1:3307:3306"  # (HOST:CONTAINER)
    env_file:
      - .env

  web:
    # restart: always
    build:
      context: .
      dockerfile: ./compose/web/Dockerfile
    command: sh gunicorn.sh  # ["/bin/bash", "gunicorn.sh"]
    ports:
      - "8080:8002"  # (HOST:CONTAINER)
    volumes:
      - ./logs:/var/logs/
      - ./collect_static:/code/collect_static
      - ./static:/code/static
      - ./templates:/code/templates
      - ./uploads:/code/uploads
    env_file: .env
    depends_on:
      - redis
      - db

  nginx:
    restart: always
    image: nginx:1.13.0
    volumes:
      - ./compose/nginx:/etc/nginx/conf.d/
      - ./staticfiles:/code/staticfiles
      - ./logs:/var/log/nginx
    ports:
      - "80:80"  # (HOST:CONTAINER)
    depends_on:
      - web

  celery:
    build:
      context: .
      dockerfile: ./compose/celery/Dockerfile
    command: sh celery.sh
    volumes:
      - ./logs:/var/logs/
      - ./uploads:/code/uploads
    depends_on:
      - redis
      - db
    env_file: .env

  celery-beat:
    build:
      context: .
      dockerfile: ./compose/celery/Dockerfile
    command: sh celery-beat.sh
    volumes:
      - ./logs:/var/logs/
    depends_on:
      - redis
      - db
    env_file: .env

The Celery worker and beat share the same Dockerfile/image here; following the one-process-per-container principle, two containers are started to run the worker and beat processes separately.
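With the configuration above in place, the whole stack can be built and started with the standard docker-compose commands:

```shell
# Build the web/celery images and start every service in the background.
docker-compose build
docker-compose up -d

# Follow the logs of one service, e.g. the Django/gunicorn container.
docker-compose logs -f web
```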