Data Center Deployment Engineer - GPU & HPC
Build what others maintain. Data Center Deployment Engineer — racking, cabling, and commissioning frontier GPU clusters across greenfield AI datacenters in Europe and beyond.
Imagine a future where everyone has instant, low-cost access to intelligence. We're building a fully featured European AI cloud, with everything needed to train, experiment with, and deploy AI models. And our GPUs run on 100% renewable energy.
We're ambitious, curious, and gutsy doers. Hierarchy is low across the company and morale is high in our teams. We've already achieved a lot, yet we're only getting started. Now it's your chance to join the ride. We offer more than just a job: a career-defining opportunity to be part of building something big!
Join Verda while it’s still being built - not once it’s finished.
Why Verda
Cash + equity compensation along with fringe benefits such as healthcare, lunch, and wellbeing perks.
Profitable operations with rapid, sustained growth.
31 nationalities across the company, including 6 on the management team.
An opportunity to make a clear impact and work alongside world-class engineers, researchers, and partners across the global AI ecosystem.
About the role
Most infrastructure engineers spend their careers maintaining what someone else built.
This role is different. As a Data Center Deployment Engineer, you'll be standing up AI datacenters from scratch. New countries, new sites, greenfield every time. We need people who want to be first through the door.
You fly in before the hardware arrives, prep the space, take delivery, rack and stack, cable up, bring the InfiniBand fabric online, commission the GPU clusters, and you don't leave until everything's running. Then you document it, train the next person, and do it again somewhere else.
In two years, you'll have personally led deployments of water-cooled clusters of 1,000+ GPUs across multiple countries. The kind of CV line that makes other engineers stop mid-scroll.
What the work actually looks like
Good week: hardware lands on time, pre-routed cables drop straight in, cluster comes up clean on the first validation pass. You're on a flight home by Friday.
Bad week: delivery arrives with three servers DOA, the IB switch is running firmware from 2022, and you're debugging fabric errors at 2am in a datacenter in Osaka while the go-live clock is ticking. No one's coming to save you. You own it, you fix it, you write the postmortem.
If that second scenario makes you lean forward rather than lean back, keep reading.
Upcoming deployments
We have active deployments kicking off mid-June across Europe and the US, with the UK following in Q3. These aren't hypotheticals. The hardware is already on order.
What we need from you
You've personally racked, cabled, and commissioned servers. Not managed people who do it.
You've touched InfiniBand in production. You know what a degraded link looks like, you've run ibdiagnet, and you've fixed it.
GPU server deployments at scale — DGX, HGX, or comparable hyperscale hardware
Comfortable with high-density power and cooling environments
Willing to be on a plane with a few days' notice and stay on-site for 2+ weeks at a time
Nice to have:
Hands-on with liquid cooling — in-rack or in-row CDUs
NVIDIA reference architecture familiarity
What you get
A small team, no micromanagement, full ownership of your work
The work itself: frontier AI infrastructure, built by you, from zero
One honest note
This role involves significant travel. Multiple multi-week trips per year, increasing as we open more sites. Go-lives don't respect timezones. There's no playbook for every situation. If you want a stable routine and a permanent desk, there are great jobs out there for that — this isn't one of them. If you want the kind of work you'll still be talking about in ten years, we'd like to talk now.
To apply, answer the questions below. No cover letter needed — we'll read every response.
Department: Data Center Operations
About Verda
Verda (formerly DataCrunch) is a technology company building the next generation of cloud infrastructure for AI – compute that's instant, on-demand and at scale. Headquartered in Helsinki, the company operates globally across Europe, the US and Asia. Verda employs over 100 people from nearly 30 nationalities and has raised over $200M in total funding from investors including Lifeline Ventures, byFounders, J12 Ventures, Skaala, Varma and Tesi, alongside leading financial institutions.