ML Infrastructure Engineer, Forward-Deployed
Imagine a future where anyone can train and run large-scale AI workloads instantly, without worrying about infrastructure bottlenecks.
At Verda, we’re building a fully featured European cloud computing platform designed for high-performance AI workloads. Our mission is to make powerful compute accessible, scalable, and efficient for the teams building the future of AI.
We’re ambitious, curious, and pragmatic builders. We operate with low hierarchy, high ownership, and a strong bias for action. We’ve already achieved a lot, but we’re just getting started.
Now it’s your chance to join the ride. Join Verda while it’s still being built, not once it’s finished!
Your responsibilities
In this role, you will work closely with strategic GPU customers, embedding directly with their teams to help get training and inference workloads running efficiently on Verda. You will collaborate with ML engineers and researchers to troubleshoot, optimize, and guide them in getting the most out of our infrastructure.
At the same time, you will contribute to building and improving our internal ML platform on Kubernetes, including job scheduling, workflow orchestration, and training infrastructure. You will also help evolve our inference stack, working on model packaging, serving frameworks, and performance optimization.
A key part of your role will be translating customer needs into scalable platform features, helping prioritize what we build to serve the broadest set of users. You will work closely with infrastructure and engineering teams to continuously improve performance, reliability, and developer experience across our platform.
Your key competencies
Strong ML engineering background with hands-on experience training, fine-tuning, or optimizing models at scale
Proficiency with PyTorch (JAX is a plus)
Experience with software or infrastructure engineering, including CI/CD or GitOps workflows
Strong programming skills in Python (additional languages such as Rust are a plus)
Comfortable working in Linux environments, including debugging GPU performance issues (CUDA, drivers, networking, filesystems)
Experience working directly with customers or stakeholders, with the ability to guide, collaborate, and challenge when needed
Ability and willingness to travel to customer sites when needed
Nice to have
Experience with Kubernetes (operators, CRDs, job scheduling, GPU scheduling)
Familiarity with systems such as Kueue, Flyte, Ray, or Slurm
Experience deploying inference workloads using vLLM, SGLang, TensorRT-LLM, or Triton
Knowledge of GPU networking and performance tuning (e.g., InfiniBand, NVLink, NCCL)
Research background (PhD or equivalent)
Experience in forward-deployed, solutions engineering, or consulting roles
Why Verda
Cash and equity compensation, along with various fringe benefits
Profitable operations with rapid, sustained growth
31 nationalities, with 6 different ones on the management team
An opportunity to work at the intersection of infrastructure and cutting-edge AI workloads, collaborating directly with leading ML teams
Practicalities
Location: Helsinki (hybrid) or remote in Europe
Employment type: Full-time and permanent
What's next
We’re building fast, and this role needs the right person behind it. There’s no artificial deadline, but when we find the person we’re looking for, we move.
If this sounds like your next move, apply now.
Please submit your application through our Careers page. We don’t accept applications sent by email.
- Department: Research & Development
- Role: Machine Learning Engineer
- Locations: Helsinki
- Remote status: Hybrid
About Verda
Verda (formerly DataCrunch) is a technology company building the next generation of cloud infrastructure for AI – compute that's instant, on-demand and at scale. Headquartered in Helsinki, the company operates globally across Europe, the US and Asia. Verda employs over 100 people from nearly 30 nationalities and has raised over $200M in total funding from investors including Lifeline Ventures, byFounders, J12 Ventures, Skaala, Varma and Tesi, alongside leading financial institutions.