// hi, i'm
John Itopa ISAH
DevOps & Cloud Engineer
DevOps & Cloud Engineer with hands-on experience designing and deploying production-grade infrastructure on Kubernetes. I build end-to-end systems — from containerised microservices and CI/CD pipelines to secret management, TLS automation and observability — with a strong focus on reliability, scalability and operational simplicity.
Projects
Things I've built
Portfolio CMS — Cloud-Native Portfolio Platform
A production-grade microservices portfolio platform designed, built and deployed from scratch on a bare-metal Kubernetes cluster.
- Architecture: 4 fully containerised services — a public-facing Next.js 14 portfolio (user-ui), a private Next.js 14 CMS (admin-ui), a Node.js/Express REST API, and a PostgreSQL 16 database — all communicating over an internal Kubernetes network.
- Infrastructure & DevOps: Multi-stage CI/CD pipelines via GitHub Actions automatically build, lint, typecheck and push Docker images to GHCR on every release. Secrets are managed at runtime via the Infisical Kubernetes operator, eliminating hardcoded credentials. TLS certificates are auto-provisioned via cert-manager and Let's Encrypt. Ingress is handled by Nginx with routing rules for 3 subdomains.
- Email & Notifications: Professional custom-domain email at [email protected] via Zoho Mail Lite with full SPF, DKIM and DMARC configuration. Contact form submissions trigger 3 parallel notifications — an owner alert email, an automated sender acknowledgement, and an SMS via Twilio — all fire-and-forget using Promise.allSettled so no notification failure can block a user's form submission.
- Security: JWT-based admin authentication, bcrypt password hashing, rate limiting on public endpoints, runtime secret injection, and a dedicated password reset flow with time-limited single-use tokens.
- DNS: Fully managed on Cloudflare with records for A, MX, CNAME, SPF, DKIM and DMARC across the apex domain and subdomains.
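The fire-and-forget notification fan-out can be sketched as follows; the three sender functions here are hypothetical stand-ins for the real email and Twilio calls:

```typescript
// Hypothetical notification fan-out mirroring the pattern described above.
// sendOwnerAlert / sendAcknowledgement / sendSms stand in for the real
// email and Twilio integrations (names are assumptions, not the actual API).
type NotifyResult = { channel: string; delivered: boolean };

async function sendOwnerAlert(): Promise<void> { /* email the site owner */ }
async function sendAcknowledgement(): Promise<void> { /* email the sender */ }
async function sendSms(): Promise<void> { throw new Error("Twilio unavailable"); }

async function notifyAll(): Promise<NotifyResult[]> {
  const channels: Array<[string, Promise<void>]> = [
    ["owner-email", sendOwnerAlert()],
    ["sender-ack", sendAcknowledgement()],
    ["sms", sendSms()],
  ];
  // Promise.allSettled never rejects, so a failing channel cannot
  // block the form submission or the other notifications.
  const settled = await Promise.allSettled(channels.map(([, p]) => p));
  return settled.map((result, i) => ({
    channel: channels[i][0],
    delivered: result.status === "fulfilled",
  }));
}
```

Because every promise settles independently, the HTTP handler can respond to the user immediately and log any delivery failures afterwards.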
ShopNow — E-Commerce Microservices Platform
A production-grade e-commerce platform built as a cloud-native microservices architecture — from zero to Kubernetes in production. The platform consists of three services: a Django REST API backend, a Next.js customer storefront, and a Next.js admin dashboard — all containerised with Docker and orchestrated on Kubernetes.
Key highlights:
- Full e-commerce flow — product browsing, Redis-backed basket, Stripe payments, order management, and email activation
- Zero secrets in git — Infisical operator syncs encrypted secrets directly into the Kubernetes cluster
- GitOps deployment — ArgoCD watches the Helm chart in git and auto-deploys on every push
- Automatic HTTPS — cert-manager provisions and renews Let's Encrypt TLS certificates
- CI/CD pipelines — GitHub Actions tests, builds, scans with Trivy, and pushes Docker images to Docker Hub
- Data safety — PostgreSQL StatefulSet with a PVC using the Retain reclaim policy survives pod crashes and helm uninstalls
- Next.js API proxy — server-side rewrites replace NEXT_PUBLIC_API_URL so the same Docker image works on any host without rebuilding
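The rewrite-based proxy idea can be sketched like this; the env var name and backend service URL below are assumptions, not the project's actual values:

```typescript
// Minimal sketch of the server-side proxy described above. In a real
// project this object would be the default export of next.config.(m)js.
// The browser calls relative /api/... paths; Next.js rewrites them to the
// backend at request time, so no URL is baked into the Docker image.
const nextConfig = {
  async rewrites() {
    // API_INTERNAL_URL is an assumed env var, e.g. the in-cluster service
    // DNS name. Unlike NEXT_PUBLIC_* variables (frozen at build time),
    // it is read at runtime, so the same image works on any host.
    const target = process.env.API_INTERNAL_URL ?? "http://backend:8000";
    return [{ source: "/api/:path*", destination: `${target}/api/:path*` }];
  },
};
```

The design choice here is that only the server-side rewrite layer knows the backend address, so swapping environments is a config change rather than a rebuild.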
k8s-mcp-assistant — Kubernetes MCP Server
A read-only Kubernetes Model Context Protocol (MCP) server that gives AI assistants like Claude full cluster inspection capabilities through natural language. Built entirely in Python, the server exposes 23 MCP tools covering every major Kubernetes read operation — pods, deployments, statefulsets, daemonsets, replicasets, jobs, cronjobs, services, events, nodes, namespaces, cluster version and all API resources.
Key design decisions:
- Read-only by default — no mutations ever possible
- Namespace-restricted or cluster-wide access via environment variable configuration
- Kubeconfig loaded once at startup (cached) rather than per request, eliminating repeated I/O overhead
- Consistent JSON response envelope {ok, data} or {ok, error_type, message} across all tools
- Full Pydantic response models for every resource type
- 85 unit tests using unittest.mock — no live cluster required to run the test suite
The server was developed and tested against both a local minikube cluster and an AWS EKS cluster. During development it was used to diagnose real cluster issues — including an ImagePullBackOff caused by nodes in private subnets without a NAT Gateway or ECR VPC endpoints, and a CNI IP exhaustion failure on the aws-node DaemonSet.
Integrated with Claude Desktop via the MCP protocol, allowing Claude to inspect any Kubernetes cluster in plain English — listing pods, reading logs, describing nodes, checking deployment rollout status, and diagnosing failures — all without leaving the chat interface.
Portfolio MCS — Observability & Monitoring Stack
Production-grade observability stack running in a dedicated monitoring namespace on a 3-node Minikube cluster, providing full-stack visibility into the Portfolio MCS application from infrastructure to business metrics.
Prometheus serves as the time-series data store with a 5Gi PVC, 30-day retention, and 15-second scrape intervals across 5 target groups: the Node.js API (×2 replicas on port 4000), the node-exporter DaemonSet (×3 nodes), kube-state-metrics, postgres-exporter, and Prometheus itself — all targets consistently UP.
Grafana exposes 6 purpose-built dashboards at grafana.johnisah.com: System Overview (at-a-glance health for all services), API Performance (request rates, P95/P99 latency, error rates per route), Database Health (cache hit ratio, active connections, query duration, dead tuples), Kubernetes Infrastructure (per-node CPU/memory, pod restarts, PVC utilisation), Business Metrics (contact form submissions, auth events, content activity), and Alerts & SLO (error budget burn rate, availability SLO tracking).
Alertmanager routes alert emails to [email protected] via Zoho SMTP with 5 configured rules: APIDown, HighErrorRate (≥5% threshold), DBCacheHitLow, NodeHighMemory, and PodCrashLooping.
The Node.js API exposes 14 custom business metrics via prom-client through a dedicated metrics middleware, tracking HTTP request totals and duration histograms, active connections, database query counts, and contact form submission rates — all accessible at /metrics on port 4000.
All secrets (Grafana admin credentials, SMTP password, PostgreSQL DSN for the monitoring user) are managed zero-trust via the Infisical operator using InfisicalSecret CRDs — nothing stored in git. Kubernetes NetworkPolicies enforce that only Prometheus pods in the monitoring namespace can reach port 4000 on the API pods.
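A rule like HighErrorRate typically looks like the following Prometheus rule fragment; the metric and label names here are assumptions (the actual names exposed by the prom-client middleware may differ):

```yaml
# Hypothetical shape of the HighErrorRate (>=5%) rule described above.
groups:
  - name: api-alerts
    rules:
      - alert: HighErrorRate
        # Ratio of 5xx responses to all responses over the last 5 minutes.
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "API 5xx error rate above 5% for 5 minutes"
```

Using rate() over a window rather than an instantaneous counter keeps the alert from flapping on single failed requests.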
Skills
Technologies I work with
CI/CD
Cloud
Cloud Native Computing
Infrastructure as Code
Experience
Where I've worked
DevOps Engineer Intern
Quandela
Managed and orchestrated Kubernetes workloads with Helm, configuring deployments, services and Ingress, and set up monitoring with Prometheus and Grafana to improve application availability, reliability and scalability.
Designed and implemented CI/CD pipelines with GitLab CI and Argo CD, automating integration, testing, Docker image builds and continuous deployment of containerised applications following a GitOps approach.
Provisioned and managed cloud infrastructure on AWS and OVH with Terraform and Ansible, covering network, compute and security resources to ensure environment consistency, reproducibility and deployment automation.
Software Engineer
Dataville Research LLC
Designed and developed a full-stack application with an Angular frontend and a Flask (Python) backend exposing REST APIs, deployed on Google Cloud Platform (Compute Engine) with Nginx, DNS and HTTPS (SSL) configuration, and integrated a PostgreSQL database containerised with Docker and Docker Compose.
Developed web interfaces with Angular and collaborated in an Agile environment (Jira), contributing to continuous integration (CI/CD), debugging, incident resolution and continuous application improvement.
Contact
Let's work together
Open to DevOps roles, freelance infrastructure projects, and collaboration. I typically respond within 24 hours.
[email protected]
GitHub
github.com/johnitopaisah
linkedin.com/in/johnitopaisah