CDeX
Event-driven competitive programming & assignment platform built as a NestJS (Nx) monorepo: gRPC for internal RPC, Kafka for event streaming, Redis for caching/pub-sub, Postgres + MongoDB for persistence, MinIO for artifact storage, Judge0 for code execution, and realtime via WebSockets.
Status: In-progress
CDeX — Architecture & Dev Guide
Overview
CDeX is an event-driven competitive programming and assignment platform implemented as a NestJS monorepo (Nx). It emphasizes service isolation, horizontal scalability, low-latency realtime features (scoreboard/feeds), reliable async processing via Kafka, and reproducible local development with Docker Compose.
Core tech: NestJS (Nx) · gRPC (proto-first) · Kafka · Redis · PostgreSQL · MongoDB · MinIO · WebSockets (Socket.IO) · Judge0 · Docker · OAuth2 (Google via auth-edge)
Core features
- Fully isolated microservices: user, contest, submission, scoring, problem, assignment, realtime, orchestrator
- Real-time scoreboard, standings, problem feeds
- External problem-provider import (Codeforces, etc.)
- Admin tools for professors (create assignments, problems, contests)
- Multi-version problem support
- Analytics & event logging via Kafka
Architecture (ASCII diagram)
[Client (Browser / Mobile)]
  └─ REST/GraphQL ─► [API Gateway / auth-edge (OAuth2 Google)]
       ├─ gRPC ─► user-svc (Postgres)
       ├─ gRPC ─► contest-svc (MongoDB)
       ├─ gRPC ─► problem-svc (MongoDB + MinIO)
       └─ gRPC ─► submission-svc (Postgres)

[submission-svc] ── produce submission.created ──► [Kafka]
       │
       └─► orchestrator / judge-proxy ──► Judge0 API
              │
              └─ produce submission.result ──► [Kafka]

[Kafka] ──► scoring-svc (Postgres) ──► updates submission, leaderboard
   └─► analytics-svc (consumers)

[realtime-svc (SocketIO)] ◄─ Redis (pub/sub / cache) ◄─ scoring-svc / api-gateway

[MinIO] stores code + testcases + artifacts
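For context on the MinIO leg above, a minimal sketch of how submission-svc could persist source code and return the reference that later flows through Kafka. The minio Node client is assumed; the bucket name, object key layout, and the helper name putSubmissionCode are illustrative, not the repo's actual contract.

// libs/storage/src/artifact-store.ts (illustrative module, not an actual repo path)
import { Client } from 'minio';

const [endPoint, port] = (process.env.MINIO_ENDPOINT ?? 'minio:9000').split(':');

const minio = new Client({
  endPoint,
  port: Number(port ?? 9000),
  useSSL: false,
  accessKey: process.env.MINIO_ACCESS_KEY ?? 'minioadmin',
  secretKey: process.env.MINIO_SECRET_KEY ?? 'minioadmin',
});

// Store raw source code and return the reference that is persisted in Postgres
// and carried in the Kafka event.
export async function putSubmissionCode(submissionId: string, source: string): Promise<string> {
  const bucket = 'submissions';
  if (!(await minio.bucketExists(bucket))) {
    await minio.makeBucket(bucket, 'us-east-1');
  }
  const objectName = `${submissionId}/source.txt`;
  await minio.putObject(bucket, objectName, Buffer.from(source, 'utf-8'));
  return `s3://${bucket}/${objectName}`;
}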
Microservice descriptions
- api-gateway / auth-edge — REST edge, OAuth2 (Google), JWT issuance, basic rate limits, REST→gRPC translation. Returns an immediate HTTP 202 Accepted for long-running work.
- user-svc — Postgres: users, roles, profiles, OAuth links, gRPC identity lookups.
- contest-svc — MongoDB: contest metadata, schedules, rules, participant lists.
- problem-svc — MongoDB + MinIO: problem statements, multi-version testcases, tags, sanitized import pipeline (Codeforces importer).
- submission-svc — Postgres: accepts submissions, stores code reference (MinIO), emits submission.created events to Kafka.
- orchestrator / judge-proxy — Consumes submission events, batches/orchestrates judge jobs, posts to Judge0, publishes results.
- scoring-svc — Consumes judge results, computes scoring/penalties, updates Postgres, emits scoreboard updates.
- realtime-svc — Socket.IO (or a custom socket service); subscribes to scoreboard updates via Redis/Kafka and pushes them to connected clients (see the sketch after this list).
- integrations-svc — Scheduled importers for external providers (Codeforces), normalizes metadata into problem-svc.
- analytics-svc — Kafka consumer; aggregates metrics, funnels events to dashboards or warehouses.
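To make the realtime-svc description concrete, here is a hedged sketch of the Redis pub/sub → Socket.IO bridge, using the redis v4 and socket.io clients. The channel name leaderboard.update, the room naming, and the port are assumptions for illustration only.

// apps/realtime-svc/src/main.ts (illustrative)
import { createServer } from 'http';
import { Server } from 'socket.io';
import { createClient } from 'redis';

async function bootstrap() {
  const httpServer = createServer();
  const io = new Server(httpServer, { cors: { origin: '*' } });

  // Dedicated subscriber connection: a Redis client in subscribe mode cannot issue other commands.
  const sub = createClient({
    url: `redis://${process.env.REDIS_HOST ?? 'redis'}:${process.env.REDIS_PORT ?? 6379}`,
  });
  await sub.connect();

  // scoring-svc publishes scoreboard changes on this channel (channel name assumed).
  await sub.subscribe('leaderboard.update', (payload) => {
    const update = JSON.parse(payload);
    // Fan out only to clients watching that contest's room.
    io.to(`contest:${update.contestId}`).emit('leaderboard.update', update);
  });

  io.on('connection', (socket) => {
    // Clients join a contest room to receive its standings.
    socket.on('watch-contest', (contestId: string) => socket.join(`contest:${contestId}`));
  });

  httpServer.listen(3005);
}

bootstrap();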
API Gateway flow (REST → gRPC)
- Client calls REST endpoint on api-gateway with OAuth JWT.
- api-gateway validates JWT (auth-edge) and maps to internal user id.
- REST request translated into gRPC proto call (proto-first stubs).
- gRPC service responds; api-gateway converts to JSON and returns.
- For long-running tasks (submit), return HTTP 202 + location/status endpoint; actual processing proceeds via Kafka events.
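A minimal sketch of the submit path through the gateway, assuming a gRPC stub generated from the protos and a ClientsModule registration under the token SUBMISSION_PACKAGE; all names here are illustrative, not the repo's actual ones.

// apps/api-gateway/src/submissions.controller.ts (illustrative)
import { Body, Controller, HttpCode, Inject, OnModuleInit, Post, Req } from '@nestjs/common';
import { ClientGrpc } from '@nestjs/microservices';
import { firstValueFrom, Observable } from 'rxjs';

// Shape of the generated gRPC stub (assumed).
interface SubmissionService {
  createSubmission(req: { userId: string; problemId: string; codeRef: string }): Observable<{ submissionId: string }>;
}

@Controller('submissions')
export class SubmissionsController implements OnModuleInit {
  private submissionSvc!: SubmissionService;

  constructor(@Inject('SUBMISSION_PACKAGE') private readonly client: ClientGrpc) {}

  onModuleInit() {
    // Resolve the typed stub from the gRPC client registered in the module.
    this.submissionSvc = this.client.getService<SubmissionService>('SubmissionService');
  }

  @Post()
  @HttpCode(202) // long-running work: accept now, actual judging proceeds via Kafka events
  async submit(@Body() body: { problemId: string; codeRef: string }, @Req() req: any) {
    // auth-edge has already validated the JWT and attached the internal user id.
    const { submissionId } = await firstValueFrom(
      this.submissionSvc.createSubmission({
        userId: req.user.id,
        problemId: body.problemId,
        codeRef: body.codeRef,
      }),
    );
    // The client polls this status endpoint (or listens on the realtime socket).
    return { submissionId, statusUrl: `/submissions/${submissionId}/status` };
  }
}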
Event flows (submission lifecycle)
1) submission.created
- Origin: submission-svc produces to topic 'submissions.created'
2) orchestrator / judge-proxy consumes
- Builds job payload, posts to Judge0 API
3) Judge0 executes -> judge result returned
- judge-proxy publishes 'submissions.result' with details
4) scoring-svc consumes 'submissions.result'
- Computes score, updates DB, publishes 'leaderboard.update'
5) realtime-svc listens -> pushes via WebSocket to clients
6) analytics-svc consumes all events for dashboards/logging
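A hedged sketch of steps 2–3 of this lifecycle, assuming kafkajs for the consumer/producer, Node 18+'s global fetch, and Judge0's POST /submissions endpoint with wait=true for simplicity (a real orchestrator would batch jobs and use tokens/callbacks instead). Topic names follow the list above; the group id and payload fields are illustrative.

// apps/orchestrator/src/judge-proxy.ts (illustrative)
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ brokers: (process.env.KAFKA_BROKERS ?? 'kafka:9092').split(',') });
const consumer = kafka.consumer({ groupId: 'judge-proxy' });
const producer = kafka.producer();

async function run() {
  await Promise.all([consumer.connect(), producer.connect()]);
  await consumer.subscribe({ topic: 'submissions.created' });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const submission = JSON.parse(message.value!.toString());

      // Post the job to Judge0 and wait for the verdict.
      const res = await fetch(`${process.env.JUDGE0_URL}/submissions?base64_encoded=false&wait=true`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          source_code: submission.sourceCode,
          language_id: submission.languageId,
          stdin: submission.stdin,
        }),
      });
      const verdict = await res.json();

      // Publish the result for scoring-svc and analytics-svc to consume.
      await producer.send({
        topic: 'submissions.result',
        messages: [
          { key: submission.submissionId, value: JSON.stringify({ submissionId: submission.submissionId, verdict }) },
        ],
      });
    },
  });
}

run().catch((err) => {
  console.error(err);
  process.exit(1);
});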
Local Development Setup (Docker Compose)
Run core infra locally. Example docker-compose (excerpt):
version: '3.8'
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - pgdata:/var/lib/postgresql/data
  mongo:
    image: mongo:6
    volumes:
      - mongodata:/data/db
  redis:
    image: redis:7-alpine
  kafka:
    image: confluentinc/cp-kafka:7.5.0
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    volumes:
      - miniodata:/data
  judge0:
    image: judge0/judge0:latest
    environment:
      REDIS_HOST: redis
      POSTGRES_HOST: postgres
Spin up:
docker-compose up -d
npm run dev:all # starts all services in watch mode
Proto-first design
All internal RPC is proto-first. Steps:
- Define service interface in libs/protos/src/foo.proto.
- Generate TypeScript stubs: nx build protos.
- Implement service in apps/foo-svc/src/.
- Version protos carefully (breaking changes require a new major version).
Example:
syntax = "proto3";
package cdex.user;
service UserService {
  rpc GetUser(GetUserRequest) returns (GetUserResponse);
}
message GetUserRequest { string user_id = 1; }
message GetUserResponse { string id = 1; string name = 2; }
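A matching server-side sketch for user-svc, wiring the proto above to a Nest handler with @GrpcMethod. The in-memory map stands in for the Postgres repository; in the real service this would be a database lookup.

// apps/user-svc/src/user.controller.ts (illustrative)
import { Controller } from '@nestjs/common';
import { GrpcMethod } from '@nestjs/microservices';

interface GetUserRequest { userId: string }   // user_id arrives camelCased by the default proto loader
interface GetUserResponse { id: string; name: string }

@Controller()
export class UserController {
  // Stand-in for the Postgres-backed repository.
  private readonly users = new Map<string, GetUserResponse>([['1', { id: '1', name: 'Ada' }]]);

  // Maps to cdex.user.UserService/GetUser from the proto above.
  @GrpcMethod('UserService', 'GetUser')
  getUser(req: GetUserRequest): GetUserResponse {
    return this.users.get(req.userId) ?? { id: req.userId, name: '' };
  }
}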
Contribution guide
- Fork the repo, create a branch: feat/<what-you-are-building>.
- Add/modify protos in /libs/protos — proto-first: update the package, bump the version when breaking.
- Implement service changes in the isolated service folder; keep migrations and env changes limited to that service.
- Unit tests: nx test <project>. Integration: run the docker-compose infra, then nx test <project> --integration.
- Open a PR: include a description, event contract changes, migration steps, and a rollback plan.
- For infra changes (topics, schemas), update infra docs and add a migration script.
Environment variable schema (example)
# COMMON
NODE_ENV=development
LOG_LEVEL=info
# Postgres (user, submission, scoring)
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=cdex
POSTGRES_USER=postgres
POSTGRES_PASSWORD=secret
# MongoDB (contest, problem)
MONGO_URI=mongodb://mongo:27017/cdex
# Redis
REDIS_HOST=redis
REDIS_PORT=6379
# Kafka
KAFKA_BROKERS=kafka:9092
# MinIO
MINIO_ENDPOINT=minio:9000
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin
# Judge0
JUDGE0_URL=http://judge0:3001
JUDGE0_API_KEY=
# Auth
OAUTH_GOOGLE_CLIENT_ID=
OAUTH_GOOGLE_CLIENT_SECRET=
JWT_SECRET=supersecret
# Service-specific
API_GATEWAY_PORT=8080
SUBMISSION_SVC_PORT=50051
SCORING_SVC_PORT=50052
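These variables can be validated at service boot so misconfiguration fails fast. A minimal sketch assuming @nestjs/config with a Joi schema; the subset of keys shown is illustrative.

// apps/<service>/src/app.module.ts (illustrative excerpt)
import { Module } from '@nestjs/common';
import { ConfigModule } from '@nestjs/config';
import * as Joi from 'joi';

@Module({
  imports: [
    ConfigModule.forRoot({
      isGlobal: true,
      // Reject boot on missing or invalid configuration instead of failing at first use.
      validationSchema: Joi.object({
        NODE_ENV: Joi.string().valid('development', 'test', 'production').default('development'),
        POSTGRES_HOST: Joi.string().required(),
        POSTGRES_PORT: Joi.number().default(5432),
        KAFKA_BROKERS: Joi.string().required(),
        REDIS_HOST: Joi.string().required(),
        MINIO_ENDPOINT: Joi.string().required(),
        JUDGE0_URL: Joi.string().uri().required(),
        JWT_SECRET: Joi.string().min(16).required(),
      }),
    }),
  ],
})
export class AppModule {}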
