About the Role
We're looking for an AI/ML Engineer who bridges the gap between research and production. At Juju-Tech, AI isn't a side project — it's a core service we deliver to enterprise clients. You'll design and build end-to-end ML systems: from data pipelines and model training to deployment, monitoring, and ongoing improvement.
You'll work directly with clients to understand their data and business goals, then architect AI solutions that actually work in the real world — not just in Jupyter notebooks.
What You'll Do
- Design, train, and deploy machine learning models for diverse client use cases (NLP, computer vision, forecasting, anomaly detection)
- Fine-tune and deploy large language models (LLMs) for client-specific applications
- Build retrieval-augmented generation (RAG) systems and AI-powered product features
- Engineer robust data pipelines using Airflow, Spark, or dbt to feed ML systems
- Implement MLOps infrastructure: model registry, experiment tracking, automated retraining
- Monitor production models for drift, performance degradation, and fairness
- Collaborate with software engineers to integrate AI into applications via clean APIs
- Communicate complex model behavior and limitations clearly to non-technical stakeholders
- Stay current with state-of-the-art research and evaluate applicability to client problems
- Contribute to Juju-Tech's internal AI platform and shared tooling
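To give a flavor of the RAG work above, here is a minimal sketch of the retrieval step using hand-made toy embeddings and cosine similarity (the documents, vectors, and function names are illustrative only; a production system would use a learned embedding model and a vector database such as Pinecone):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy corpus with hand-made 3-d "embeddings" (purely illustrative).
corpus = [
    {"text": "invoice processing guide", "vec": [0.9, 0.1, 0.0]},
    {"text": "vacation policy",          "vec": [0.0, 0.2, 0.9]},
    {"text": "billing FAQ",              "vec": [0.8, 0.3, 0.1]},
]

print(retrieve([1.0, 0.2, 0.0], corpus))  # billing-related docs rank first
```

The retrieved passages would then be stuffed into an LLM prompt to ground its answer — that is the "augmented generation" half of RAG.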
What We're Looking For
- 4+ years of experience in machine learning engineering or applied AI research
- Deep proficiency in Python and ML frameworks (PyTorch, TensorFlow, JAX)
- Hands-on experience deploying models to production (REST APIs, batch pipelines, streaming)
- Strong understanding of NLP fundamentals and transformer architecture
- Experience with LLM fine-tuning (LoRA, RLHF, instruction tuning) or prompt engineering
- Familiarity with MLOps tools: MLflow, Weights & Biases, Kubeflow, SageMaker
- Solid software engineering skills — you write clean, tested, maintainable code
- Experience with cloud infrastructure for ML workloads (GPU instances, spot fleets)
- Strong statistical foundations and ability to design rigorous evaluations
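On the "rigorous evaluations" point, one standard pattern is reporting a bootstrap confidence interval around a point metric rather than the bare number. A quick stdlib-only sketch (the outcomes and parameters are made up for illustration):

```python
import random

def bootstrap_accuracy_ci(correct, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for accuracy over a list of 0/1 outcomes."""
    rng = random.Random(seed)
    n = len(correct)
    stats = []
    for _ in range(n_boot):
        # Resample the eval set with replacement and recompute accuracy.
        sample = [correct[rng.randrange(n)] for _ in range(n)]
        stats.append(sum(sample) / n)
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# 80 correct, 20 wrong -> point accuracy of 0.80
outcomes = [1] * 80 + [0] * 20
lo, hi = bootstrap_accuracy_ci(outcomes)
print(lo, hi)  # an interval bracketing the 0.80 point estimate
```

The same resampling trick extends to F1, AUC, or any metric computed from per-example outcomes, which is why it shows up so often in model evaluation work.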
Nice to Have
- Published research or contributions to major ML conferences (NeurIPS, ICML, ICLR)
- Experience with multimodal models (vision-language, audio-language)
- Knowledge of vector databases (Pinecone, Weaviate, pgvector) and semantic search
- Experience with distributed training frameworks (DeepSpeed, FSDP, Megatron-LM)
- Background in a specific domain (healthcare, finance, manufacturing)
- AWS Machine Learning Specialty certification
Our AI/ML Stack
- PyTorch, Hugging Face, LangChain
- MLflow, Weights & Biases, Apache Airflow
- Kubeflow, Pinecone, AWS SageMaker
- FastAPI, Spark, dbt
Interview Process
- Step 1: Recruiter screen (30 min)
- Step 2: Technical screen — ML fundamentals and system design (60 min)
- Step 3: Take-home ML challenge — build a small end-to-end system (3–4 hours, paid)
- Step 4: Deep-dive panel — model design, MLOps, and production tradeoffs (90 min)
- Step 5: Values and culture interview with leadership (30 min)
- Step 6: Offer