Control Panel Local Setup
This guide walks you through setting up CueMeet locally for development and testing purposes.
Prerequisites
Before you begin, make sure you have the following installed:
- Git
- Docker and Docker Compose
- Node.js (v14 or later) - Optional for local development without Docker
- Python 3.10 or higher - Optional for local development without Docker
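A quick way to confirm the prerequisites are installed is to check each tool from a terminal:
# Verify the prerequisite tooling is on the PATH
git --version
docker --version
docker compose version
node --version     # only needed for local development without Docker
python3 --version  # only needed for local development without Docker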
 
Installation Steps
- Clone the repository
 
git clone https://github.com/CueMeet/Meeting-Bots-Control-Panel.git
cd Meeting-Bots-Control-Panel
Project Folder Structure
.
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── api-backend
│   ├── Dockerfile
│   ├── Makefile
│   ├── README.md
│   ├── nest-cli.json
│   ├── package.json
│   ├── src
│   ├── test
│   ├── tsconfig.build.json
│   ├── tsconfig.json
│   └── yarn.lock
├── assets
│   ├── banner.png
│   └── cuemeet-logo.png
├── docker-compose.yml
├── pg-db
│   └── init-multiple-databases.sql
├── protos
│   └── worker_backend.transcript_management
└── worker-backend
    ├── Dockerfile
    ├── Makefile
    ├── README.md
    ├── api
    ├── manage.py
    ├── nltk
    ├── poetry.lock
    ├── pyproject.toml
    └── worker_backend
- Configure Environment Variables
 
Backend API .env configuration:
# Backend API Configuration
# Application
PORT=4000
NODE_ENV=development
CORS_ALLOWED_ORIGINS=*
# Database
DB_HOST=pg-db
DB_PORT=5432
DB_USERNAME=meetingbots_user
DB_PASSWORD=cuecard-meting-bots-secret
DB_DATABASE=meetingbots_db_backend_api
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
# AWS (Must be filled in from AWS setup steps)
AWS_ACCESS_KEY=  # Your AWS Access Key from AWS setup
AWS_SECRET_KEY=  # Your AWS Secret Key from AWS setup
## S3
AWS_BUCKET_REGION=  # Your S3 bucket region
AWS_MEETING_BOT_BUCKET_NAME=  # Your S3 bucket name
## ECS (Must match the AWS configurations)
AWS_ECS_CLUSTER_NAME=  # Your AWS ECS Cluster Name
AWS_SECURITY_GROUP=  # Your AWS Security Group ID
AWS_VPS_SUBNET=  # Your AWS Subnet ID
ECS_TASK_DEFINITION_GOOGLE=  # Task Definition for Google Meet bots
ECS_CONTAINER_NAME_GOOGLE=  # Container Name for Google Meet bots
ECS_TASK_DEFINITION_ZOOM=  # Task Definition for Zoom bots
ECS_CONTAINER_NAME_ZOOM=  # Container Name for Zoom bots
ECS_TASK_DEFINITION_TEAMS=  # Task Definition for Microsoft Teams bots
ECS_CONTAINER_NAME_TEAMS=  # Container Name for Microsoft Teams bots
# Meeting Bot
MEETING_BOT_RETRY_COUNT=2
# Worker Backend gRPC URL
WORKER_BACKEND_GRPC_URL=worker-grpc:5500
⚠️ Important: The AWS-related environment variables must be obtained from the AWS Setup Guide. Complete the AWS setup first and copy the relevant values into this file.
Worker API .env configuration:
# Worker API Configuration
DJANGO_SETTINGS_MODULE=worker_backend.settings
DJANGO_SECRET_KEY=8b1336ae5f72ec7e949e787054976962a85fb1ca935da5ca59ba0448eae178b1336ae5f7204
DEBUG=True
STATIC_URL=/static/
ALLOWED_HOSTS=*
CORS_ALLOWED_ORIGINS=*
## PG Database
DB_USERNAME=meetingbots_user
DB_PASSWORD=cuecard-meting-bots-secret
DB_NAME=meetingbots_db_worker
DB_HOST=pg-db
DB_PORT=5432
# Redis Configuration
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_DB=2
# AWS Configuration
AWS_ACCESS_KEY_ID= # Your AWS Access Key from AWS setup
AWS_SECRET_ACCESS_KEY= # Your AWS Secret Key from AWS setup
## AWS S3
AWS_REGION= # Your S3 bucket region
AWS_STORAGE_BUCKET_NAME= # Your S3 bucket name
_SIGNED_URL_EXPIRY_TIME=60
## HIGHLIGHT
HIGHLIGHT_PROJECT_ID=""
HIGHLIGHT_ENVIRONMENT_NAME=""
## ASSEMBLY AI
ASSEMBLY_AI_API_KEY="" # https://www.assemblyai.com API KEY 
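Each of the two blocks above needs to be saved as a .env file where its service can read it. This guide does not spell out the exact paths; assuming the conventional layout of one .env per service directory, they would be created as follows (adjust if the Dockerfiles or compose file expect different locations). It is also a good idea to replace the sample DJANGO_SECRET_KEY with a freshly generated value:
# Assumed .env locations: one per service directory
touch api-backend/.env      # paste the Backend API configuration here
touch worker-backend/.env   # paste the Worker API configuration here
# Optional: generate a fresh DJANGO_SECRET_KEY instead of reusing the sample value
python3 -c "import secrets; print(secrets.token_hex(38))"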
- Docker Compose Configuration
 
docker-compose.yml
services:
  backend-api:
    container_name: backend_rest
    build:
      context: ./api-backend
      dockerfile: Dockerfile
    ports:
      - "4000:4000"
    depends_on:
      - pg-db
      - redis
  worker-api:
    container_name: worker_rest
    build:
      context: ./worker-backend
      dockerfile: Dockerfile
    command: bash -c "python manage.py migrate && gunicorn worker_backend.wsgi:application --workers 4 --bind 0.0.0.0:8000"
    ports:
      - "8000:8000"
    depends_on:
      - pg-db
      - redis
      - backend-api
  worker-grpc:
    container_name: grpc-server
    restart: always
    image: cuemeet:worker-backend
    command: bash -c "echo 'Starting gRPC server...' && python manage.py grpcrunaioserver 0.0.0.0:5500 --max-workers 2 --verbosity 3"
    # ports: ## Uncomment this if you want to expose grpc port
    #   - "5500:5500"
    depends_on:
      - pg-db
      - redis
      - backend-api
      - worker-api
  redis:
    image: redis:alpine
    container_name: redis
    restart: always
    # ports: ## Uncomment this if you want to expose redis port
    #   - "6379:6379"
  pg-db:
    image: postgres:16
    container_name: postgres_db
    restart: always
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./pg-db/init-multiple-databases.sql:/docker-entrypoint-initdb.d/init-multiple-databases.sql
    # ports: ## Uncomment this if you want to expose postgres port
    #   - "5432:5432"
    env_file:
      - ./pg-db/.db.env
  celery_worker:
    container_name: celery_worker
    build:
      context: ./worker-backend
      dockerfile: Dockerfile
    command: celery -A worker_backend worker --loglevel=info --concurrency=4
    depends_on:
      - redis
      - pg-db
      - worker-api
  flower:
    container_name: flower
    build:
      context: ./worker-backend
      dockerfile: Dockerfile
    command: celery -A worker_backend flower --port=5555
    ports:
      - "5556:5555"
    depends_on:
      - redis
      - worker-grpc
  documentation:
    container_name: documentation
    build:
      context: ./documentation
      dockerfile: Dockerfile
    ports:
      - "6000:3000"
volumes:
  postgres_data:
    driver: local
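The pg-db service loads its credentials from ./pg-db/.db.env, which the compose file references but this guide does not show. A minimal sketch of that file, assuming the database user and password match the application .env files above (verify the variable names against the official postgres image documentation and the init-multiple-databases.sql script):
# Create ./pg-db/.db.env with the Postgres credentials (sketch; adjust to match the repository)
cat > pg-db/.db.env <<'EOF'
POSTGRES_USER=meetingbots_user
POSTGRES_PASSWORD=cuecard-meting-bots-secret
EOF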
- Start the Services
 
# Build and start all services
docker compose up -d
# Check service status
docker compose ps
# Stop all services
docker compose down
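If you change a Dockerfile or the application dependencies after the first run, rebuild the images when bringing the stack up:
# Rebuild images and recreate containers
docker compose up -d --build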
The services will be available at:
- Backend API: http://localhost:4000
- Worker API: http://localhost:8000
- Worker gRPC: worker-grpc:5500 (internal only; reachable on the Docker Compose network)
- PostgreSQL: pg-db:5432 (internal only; reachable on the Docker Compose network)
- Redis: redis:6379 (internal only; reachable on the Docker Compose network)
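To confirm the two HTTP services are actually listening, tail their logs and probe the published ports (the exact routes each service exposes are not covered here, so any HTTP response is enough to confirm the container is up):
# Follow the application logs while the stack starts
docker compose logs -f backend-api worker-api
# Probe the published HTTP ports; any response code confirms the service is listening
curl -I http://localhost:4000
curl -I http://localhost:8000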
Database Migrations
For the Backend API (Nest.js):
- Automatic migrations are configured
 
For the Worker API (Django):
# Run migrations
python manage.py migrate
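When running through Docker Compose, the worker-api service already applies this migration as part of its startup command; it can also be run manually inside the running container:
# Apply Django migrations inside the worker container
docker compose exec worker-api python manage.py migrate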
Troubleshooting
Common Issues
Container Startup Issues
- Ensure all host ports published by docker-compose.yml (4000, 8000, 5556, 6000) are available
- Check that the Docker daemon is running
- Verify environment variables are set correctly
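If a port conflict is suspected, one way to check is with lsof (available on macOS and most Linux distributions; no output means the port is free):
# Check whether anything is already listening on the published ports
lsof -i :4000
lsof -i :8000
lsof -i :5556
lsof -i :6000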
 
Database Connection Issues
- Ensure the PostgreSQL container is running: docker compose ps
- Check logs: docker compose logs pg-db
- Verify database credentials in both .env files
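If the container is running but connections still fail, a direct check from inside the database container can confirm the credentials and that the init script created both databases:
# Connect as the application user and list all databases (credentials from the .env files above)
docker compose exec pg-db psql -U meetingbots_user -d meetingbots_db_backend_api -c '\l'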
 
Service Dependencies
- The Backend API must be running for the Worker API to function properly
- Check the service logs:
docker compose logs backend-api
docker compose logs worker-api
Next Steps
Once you have your local setup working:
- Proceed to AWS Setup for production deployment
- Explore the API Documentation
- Configure your Bot Settings