
Tech Stack Reference

Complete reference of tools and technologies used in my projects: Python, Django, DRF, Celery, PostgreSQL, Redis, OpenAI, LangChain, Docker, Telegram Bot API, and more.

TL;DR

This is a complete reference of the tools, libraries, frameworks, and services I use across client projects. Every item listed here is something I have used in production — not a list of things I have read about. The stack is deliberately focused on Python and Django because specialisation delivers better results than being a generalist across 10 languages.

Backend: Python, Django & Supporting Libraries

The backend stack is where I spend 80% of development time. Every component is chosen for reliability, ecosystem support, and developer productivity.

Python 3.11-3.12: The core language for everything I build. Python's readability, massive ecosystem, and dominance in AI/ML make it the optimal choice for the types of projects I deliver — web applications, APIs, chatbots, and automation tools. I use type hints throughout for code clarity and early error detection.
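
As a small illustration of that type-hint style (the function and names here are hypothetical, not from a specific project):

```python
from decimal import Decimal

def apply_discount(price: Decimal, percent: int) -> Decimal:
    """Return the price after a percentage discount; hints catch misuse early."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (Decimal(100 - percent) / Decimal(100))
```

A type checker such as mypy flags a call like `apply_discount(19.99, 10)` (float where Decimal is expected) before the code ever runs.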

Django 5.x: The web framework for all full-stack and API projects. Django's "batteries included" philosophy means authentication, admin panel, ORM, migrations, form handling, and security features are built in. This saves 30-40% of development time compared to assembling the same functionality from micro-frameworks. I use Django for SaaS platforms, web applications, API backends, and admin systems.

Django REST Framework (DRF): The de facto standard for building REST APIs with Django. Serialisers, viewsets, authentication, pagination, filtering, and throttling — all handled by DRF. Used in every project that exposes an API, which is essentially every project.

Celery + Redis: Asynchronous task processing. Celery handles background jobs — sending emails, processing webhooks, generating reports, running scheduled tasks. Redis serves as both the message broker for Celery and the application cache. This combination handles thousands of background tasks per hour reliably.

Supporting libraries:

  • django-environ: Environment variable management for secure configuration
  • django-cors-headers: CORS handling for API projects with frontend separation
  • django-filter: Declarative filtering for DRF endpoints
  • whitenoise: Static file serving for simpler deployments (when Nginx is not used)
  • Pillow: Image processing for uploads, thumbnails, and media handling
  • requests / httpx: HTTP clients for external API integrations (httpx adds async support)

Databases: PostgreSQL & Redis

Data storage is chosen based on access patterns, consistency requirements, and operational simplicity.

PostgreSQL 16: The primary database for all projects. PostgreSQL offers the best combination of reliability, feature set, and Django integration. Key features I use regularly:

  • JSONB fields: Store semi-structured data (user preferences, integration configs, flexible schemas) without sacrificing query performance. Django's JSONField maps directly to PostgreSQL JSONB with full index support.
  • Full-text search: Built-in search capabilities using tsvector and tsquery. For most applications, PostgreSQL full-text search eliminates the need for Elasticsearch. Django's SearchVector and SearchQuery make it accessible without raw SQL.
  • Array fields: Store lists of values (tags, categories, permissions) as native arrays with containment and overlap queries.
  • Connection pooling: PgBouncer for applications with high connection counts. Django's persistent connections for simpler setups.
  • Backup strategy: pg_dump for daily logical backups. WAL archiving for point-in-time recovery on critical systems. Backups stored on DigitalOcean Spaces or Backblaze B2 with 30-day retention.

Redis 7: Used for three distinct purposes across projects:

  • Cache: Django cache backend for expensive query results, rendered template fragments, and API response caching. TTL-based expiration keeps cache fresh without manual invalidation.
  • Message broker: Celery task queue backend. Handles task distribution to workers reliably with support for task priorities and scheduled execution.
  • Session storage: Faster than database-backed sessions for high-traffic applications. Session data expires automatically based on TTL configuration.
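
In Django settings, those three roles map to a handful of lines (a sketch; hosts and logical database numbers are placeholders):

```python
# settings.py sketch: one Redis instance, separate logical databases per role
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "TIMEOUT": 300,  # default cache TTL in seconds
    }
}

# sessions stored in the cache instead of the database
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "default"

# Celery broker on its own logical database
CELERY_BROKER_URL = "redis://127.0.0.1:6379/0"
```

Separate logical databases keep a cache flush from wiping the Celery queue.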

AI & Machine Learning: OpenAI, LangChain & Vector Databases

AI capabilities are integrated into client projects as targeted features — chatbots, document analysis, content generation, and classification — not as standalone ML systems.

OpenAI API (GPT-4o, GPT-4o-mini): The primary AI provider for most projects. GPT-4o for tasks requiring high accuracy — customer support chatbots, contract analysis, content generation. GPT-4o-mini for high-volume, lower-complexity tasks — classification, summarisation, intent detection. Pricing is usage-based (pay per token), so costs scale with request volume rather than a fixed licence fee.
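
That model split can be expressed as a small routing helper (the task names are hypothetical examples, not a fixed taxonomy):

```python
# hypothetical routing helper: accuracy-critical task types get the larger model
HIGH_ACCURACY_TASKS = {"support_chat", "contract_analysis", "content_generation"}

def pick_model(task: str) -> str:
    """Route accuracy-critical tasks to GPT-4o, everything else to GPT-4o-mini."""
    return "gpt-4o" if task in HIGH_ACCURACY_TASKS else "gpt-4o-mini"

# with the openai package installed, the call itself would look like:
# client = openai.OpenAI()
# client.chat.completions.create(model=pick_model("classification"), messages=[...])
```

Centralising the choice in one function makes it easy to audit (and re-price) which workloads run on which tier.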

Anthropic Claude: Used for tasks requiring longer context windows (200K tokens), more nuanced analysis, or when clients prefer an alternative to OpenAI. Particularly strong for document analysis and structured data extraction from long documents.

LangChain: Framework for building AI-powered applications with chains, agents, and retrieval-augmented generation (RAG). I use LangChain for projects that require multi-step reasoning, tool use (AI calling external APIs), or combining multiple AI models. For simpler chatbots, direct OpenAI API calls without LangChain are more maintainable.

Vector databases:

  • Pinecone: Managed vector database for RAG chatbots. Stores document embeddings and performs similarity search for relevant context retrieval. Used for knowledge base chatbots where the AI needs to answer from company-specific documentation.
  • pgvector: PostgreSQL extension for vector similarity search. When the dataset is under 1 million vectors, pgvector eliminates the need for a separate vector database. I prefer this for smaller projects to reduce infrastructure complexity.
  • ChromaDB: Lightweight, open-source vector database used for development and prototyping. Sometimes used in production for self-hosted deployments where clients want to avoid external services.

Embeddings: OpenAI text-embedding-3-small for most projects (cost-effective, good quality). text-embedding-3-large for projects where retrieval accuracy is critical. Embeddings are generated during document ingestion and stored in the vector database for fast similarity search at query time.
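
What the vector store does at query time is nearest-neighbour search over those embeddings; at toy scale the core operation looks like this in pure Python (names illustrative):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query: list[float], docs: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return the ids of the k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
    return ranked[:k]
```

pgvector and Pinecone perform the same computation with approximate-nearest-neighbour indexes so it stays fast at millions of vectors.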

DevOps: Docker, Nginx & CI/CD

The DevOps stack prioritises simplicity and reliability over cutting-edge tooling. Every component has years of production use and extensive documentation.

Docker & Docker Compose: All projects are containerised. Docker ensures the same environment in development and production — no "works on my machine" issues. Docker Compose orchestrates multi-container applications (web, worker, database, cache, proxy) with a single configuration file. I do not use Kubernetes unless the project specifically requires horizontal auto-scaling.
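
A compose file for such a stack typically looks like this sketch (the `config` module path, image tags, and `.env` file are placeholders):

```yaml
# sketch of a typical web + worker + db + cache stack
services:
  web:
    build: .
    command: gunicorn config.wsgi --bind 0.0.0.0:8000
    env_file: .env
    depends_on: [db, redis]
  worker:
    build: .
    command: celery -A config worker --loglevel=info
    env_file: .env
    depends_on: [db, redis]
  db:
    image: postgres:16
    env_file: .env
    volumes: [pgdata:/var/lib/postgresql/data]
  redis:
    image: redis:7
volumes:
  pgdata:
```

The web and worker containers share one image, so a single build step deploys both.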

Nginx: Reverse proxy, SSL termination, static file serving, and basic security layer. Nginx sits in front of every production application. It handles tasks that application servers should not — SSL handshakes, serving static assets, rate limiting, request buffering for slow clients. Configuration is version-controlled alongside the application code.

GitHub Actions: CI/CD pipeline for automated testing and deployment. A typical pipeline: run linting (flake8/ruff), run test suite (pytest), build Docker image, push to container registry, deploy to production server via SSH. Pipeline runs on every push to the main branch. Average pipeline duration: 3-5 minutes.

Server provisioning: Initial server setup is semi-automated with shell scripts that install Docker, configure firewall (UFW), set up SSH key authentication, and install monitoring tools. Full provisioning from bare Ubuntu server to running application: 30-45 minutes.

Hosting providers:

  • Hetzner: Best price-to-performance for European projects. CX21 (2 vCPU, 4GB RAM) at EUR 5.39/month handles most applications. Data centres in Germany and Finland (GDPR-compliant).
  • DigitalOcean: Better managed services (databases, object storage, load balancers). Slightly more expensive but lower operational overhead. Data centres in Amsterdam and Frankfurt.
  • Fly.io: For globally distributed applications that need low latency worldwide. Edge deployment with automatic scaling.

Messaging: Telegram Bot API & WhatsApp Business API

Messaging integrations are a core part of many projects — chatbots, notification systems, community management tools, and customer support channels.

Telegram Bot API: The most developer-friendly messaging API available. Free, no per-message fees, no approval process, rich feature set. I use the python-telegram-bot library (v20+) for async bot development. Key capabilities I implement regularly:

  • Inline keyboards and callback queries for interactive conversations
  • Webhook mode for production (more efficient than polling)
  • Telegram Payments API integration with Stripe for in-chat purchases
  • Group and channel management (moderation bots, content publishing)
  • File handling — sending and receiving documents, images, videos up to 2GB
  • Custom web apps (Telegram Mini Apps) for complex UIs within the Telegram interface
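
Under the hood an inline keyboard is plain JSON on the sendMessage call; a minimal sketch of building that payload (python-telegram-bot wraps the same structure in InlineKeyboardMarkup):

```python
import json

def inline_keyboard_payload(chat_id: int, text: str,
                            buttons: list[list[tuple[str, str]]]) -> str:
    """Build a sendMessage payload with an inline keyboard.

    buttons is a list of rows, each row a list of (label, callback_data) pairs.
    """
    markup = {
        "inline_keyboard": [
            [{"text": label, "callback_data": data} for label, data in row]
            for row in buttons
        ]
    }
    return json.dumps({"chat_id": chat_id, "text": text, "reply_markup": markup})
```

When the user taps a button, Telegram delivers the `callback_data` back to the webhook as a callback query.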

WhatsApp Business API: For businesses whose customers prefer WhatsApp. Requires a Meta Business account and API access via a Business Solution Provider. Key differences from Telegram:

  • Conversation-based pricing: EUR 0.05-0.15 per 24-hour conversation window (cost varies by country)
  • Template messages: pre-approved message templates required for outbound communication outside the 24-hour window
  • Rich messages: text, images, documents, buttons, lists, and product catalogs
  • Integration via Twilio, MessageBird, or direct Cloud API

Shared architecture: Both Telegram and WhatsApp bots in my projects share the same backend architecture — a Django application with webhook endpoints, a conversation state machine, integration with the AI layer for intelligent responses, and a unified admin interface for managing conversations across channels. Adding a new messaging channel to an existing bot requires only the channel-specific adapter; the business logic, AI integration, and admin interface are shared.
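
That adapter split can be sketched as follows (class and method names are illustrative, not the actual codebase):

```python
from abc import ABC, abstractmethod

class ChannelAdapter(ABC):
    """Channel-specific delivery; business logic stays channel-agnostic."""

    @abstractmethod
    def send_text(self, chat_id: str, text: str) -> str: ...

class TelegramAdapter(ChannelAdapter):
    def send_text(self, chat_id: str, text: str) -> str:
        # a real adapter would POST to the Telegram Bot API here
        return f"telegram:{chat_id} <- {text}"

class WhatsAppAdapter(ChannelAdapter):
    def send_text(self, chat_id: str, text: str) -> str:
        # a real adapter would call the WhatsApp Cloud API here
        return f"whatsapp:{chat_id} <- {text}"

def handle_incoming(adapter: ChannelAdapter, chat_id: str, message: str) -> str:
    """Shared logic: the AI layer generates the reply; the adapter delivers it."""
    reply = f"You said: {message}"  # stand-in for the AI-generated response
    return adapter.send_text(chat_id, reply)
```

Supporting a new channel means writing one more `ChannelAdapter` subclass; `handle_incoming` and everything behind it is untouched.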

Frequently Asked Questions

Why Python and Django instead of Node.js or Go?

Specialisation. I have delivered 15+ production projects with Python and Django. Deep expertise in one stack produces better results than surface-level knowledge of many. Django specifically saves 30-40% development time for web applications because authentication, admin, ORM, and security are built in. Python dominates the AI/ML ecosystem, so AI integrations are first-class — no bridging between languages.

Do you use any frontend frameworks?

For most projects, I use Django templates with HTMX for interactive elements — this covers 90% of use cases without the complexity of a JavaScript framework. For projects that specifically need a SPA (single-page application) or have an existing React/Vue frontend, I build the Django backend as a pure API and the frontend team handles the client side.

How do you keep dependencies updated and secure?

pip-audit runs in the CI pipeline on every push to catch known vulnerabilities. Dependabot (GitHub) creates automated PRs for dependency updates. I review and merge updates weekly for minor versions and evaluate major version upgrades monthly. Pinned dependencies in requirements.txt ensure reproducible builds. Security patches are applied within 24-48 hours of disclosure.

Questions About the Tech Stack?

If you are evaluating whether this stack is right for your project, I am happy to discuss alternatives and trade-offs.

Ask a Technical Question

or message directly: Telegram · Email