ML/AI Engineer

Karachi, Pakistan

About Us

Autonomous Technologies is a Canadian consultancy building AI-powered data solutions for startups and agencies in North America and Europe. Our team in Karachi works on everything from real-time prediction systems for sports analytics to GenAI applications for enterprise clients. We’re engineers first — we build, we ship, we iterate.

The Role

We’re looking for someone who can take ML from Jupyter notebook experiments to production systems that run reliably at scale. This isn’t a research role — it’s an applied engineering role where models need to work in the real world with messy data, latency constraints, and actual users.

You’ll work across a range of ML and AI applications: from classical ML for prediction and ranking to GenAI / LLM applications with RAG, fine-tuning, and agent orchestration.

What You’ll Do

  • Design, train, and deploy machine learning models for real-world applications (prediction, ranking, classification, NLP)
  • Build and maintain ML pipelines: feature engineering, training, evaluation, deployment, and monitoring
  • Develop GenAI / LLM applications using RAG architectures, prompt engineering, and API integration (OpenAI, Claude, open-source models)
  • Work with data engineers to design feature stores and ensure data quality for ML workloads
  • Monitor model performance in production: track drift and accuracy degradation, and trigger retraining when needed
  • Collaborate with international product teams to translate business problems into ML solutions
  • Write clean, documented, testable Python code — not just notebook spaghetti

What We’re Looking For

  • 2+ years building ML systems in production (not just training models in notebooks)
  • Strong Python: scikit-learn, PyTorch or TensorFlow, Pandas, NumPy. Familiarity with FastAPI or Flask for model serving.
  • Solid understanding of ML fundamentals: bias-variance, cross-validation, feature selection, model evaluation metrics
  • Experience with at least one cloud ML service: GCP Vertex AI, AWS SageMaker, or Azure ML
  • GenAI exposure: LangChain, LlamaIndex, vector databases (Pinecone, Weaviate, ChromaDB), embedding models, prompt engineering
  • Familiarity with experiment tracking: MLflow, Weights & Biases, or similar
  • Clear English communication for asynchronous work with international teams

Must Have

  • Bachelor's degree or higher
  • 2+ years building ML systems in production
  • Strong Python
  • Solid understanding of ML fundamentals
  • Experience with at least one cloud ML service
  • GenAI exposure: LangChain, LlamaIndex, vector databases
  • Familiarity with experiment tracking

Nice to Have

  • Experience deploying models with Docker + cloud infrastructure (GKE, ECS, Lambda)
  • Familiarity with MLOps: CI/CD for ML, automated retraining, model registries
  • Contributions to open-source ML projects or active Kaggle participation
  • Experience with real-time prediction systems (sub-100ms inference)

Why Autonomous

  • Market-competitive pay
  • Diverse applied ML work: sports predictions, marketing optimization, GenAI applications, data enrichment
  • Work directly with international clients and product teams — not through 3 layers of project managers
  • Technically-led team that does real code reviews and architecture discussions