
From Cloud to Metal: Our Infrastructure Evolution

We ditched our VPS setup and moved to bare metal servers. Not because we're some massive operation, but because the VPS bills were getting annoying and we wanted more control. Here's what actually happened.

Why We Made the Switch

When we started BitwareLabs, we did the sensible thing: grabbed a VPS from a well-known provider. Easy setup, predictable costs, and we could focus on building stuff instead of managing servers.

But as our projects got some traction and we started running more experiments, a few things became irritating:

  • Bills kept growing: Every time we needed more resources, the monthly cost jumped significantly
  • Performance was inconsistent: Some days things ran fast, other days not so much - classic noisy-neighbor problems on shared infrastructure
  • We wanted our data local: Running AI research means we prefer keeping everything on our own hardware
  • Limited flexibility: We couldn't just throw in a GPU or add weird hardware configurations when we wanted to experiment

The Decision: Going Bare Metal

So we looked at our options. Keep scaling up on VPS and watch the bills grow, or bite the bullet and get our own hardware. The math wasn't super complex - after a certain point, owning servers becomes cheaper than renting them, especially when you want decent specs.

Plus, there's something appealing about having full control over your stack. No surprise maintenance windows, no wondering why performance randomly tanked, and no worrying about where your data actually lives.

The Migration Process

Moving from VPS to bare metal was pretty straightforward, but it did take some planning. Here's roughly how it went:

Migration Steps

  • Ordered the hardware: Found a decent datacenter, picked out some servers that wouldn't bankrupt us
  • Set everything up: Installed operating systems, got networking working, made sure we could actually access the boxes
  • Moved services gradually: Started with non-critical stuff, then moved the important services once we were confident
  • Killed the old VPS: Once everything was running smoothly, cancelled the cloud subscriptions

The whole process took about a month, mostly because we were being cautious and didn't want to break anything important. The actual technical migration was pretty boring - just moving Docker containers and databases from one set of servers to another.
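The per-service container move above can be sketched as a dry-run helper that just emits the commands to run. The service and host names are placeholders, not our real inventory:

```python
# Dry-run sketch of moving one Docker-based service between hosts.
# Names like "vps-01" and "metal-01" are made up for illustration.
def migration_commands(service: str, old_host: str, new_host: str) -> list[str]:
    """Return the shell commands to move one service's image to a new host."""
    image = f"{service}:latest"
    tarball = f"/tmp/{service}.tar"
    return [
        f"docker save -o {tarball} {image}",                       # export image on the old host
        f"scp {tarball} {new_host}:{tarball}",                     # copy it over
        f"ssh {new_host} docker load -i {tarball}",                # import on the new host
        f"ssh {new_host} docker run -d --name {service} {image}",  # start it up
    ]

for cmd in migration_commands("blog", "vps-01", "metal-01"):
    print(cmd)
```

Databases got the equivalent treatment with a dump-and-restore instead of an image export, plus a verification pass before flipping DNS.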

What We're Running Now

Hardware

We kept the hardware choices simple. Nothing exotic, just solid servers that can handle our workloads without breaking the bank:

Our Server Setup

  • Main application server: AMD EPYC with plenty of RAM and fast NVMe storage
  • Database server: Similar setup but optimized for database workloads
  • AI compute box: Threadripper with a couple of RTX 4090s for when we need GPU power

Nothing too fancy - just reliable hardware that does the job.

Software

The software stack is pretty standard:

  • Debian 12 - reliable, boring, works
  • Docker - makes moving stuff around easy
  • Basic monitoring - so we know when things break
  • Regular backups - because hardware eventually fails

Nothing revolutionary here. We just wanted something that works without constant babysitting.
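"Regular backups" in practice also means pruning old ones so the disks don't fill up. A minimal rotation sketch, assuming gzipped SQL dumps collected in one directory - the path and retention window are arbitrary:

```python
# Minimal backup-rotation sketch; directory layout and retention are illustrative.
from datetime import datetime, timedelta
from pathlib import Path

def prune_backups(backup_dir: Path, keep_days: int = 14) -> list[Path]:
    """Delete dump files older than keep_days; return the paths that were removed."""
    cutoff = datetime.now() - timedelta(days=keep_days)
    removed = []
    for dump in sorted(backup_dir.glob("*.sql.gz")):
        if datetime.fromtimestamp(dump.stat().st_mtime) < cutoff:
            dump.unlink()  # hardware fails eventually; stale dumps just waste space
            removed.append(dump)
    return removed
```

The other half of the lesson: actually test a restore now and then, because an unverified backup is a hope, not a backup.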

Things That Changed

You're Responsible for Everything

With VPS, if something breaks, you complain to support. With your own hardware, if something breaks, you fix it or the service stays broken. This means you need to actually understand your infrastructure.
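With no provider dashboard watching the boxes for you, even a trivial reachability probe pays for itself. A minimal sketch of the kind of check a cron job might run - host and port are placeholders:

```python
# Tiny "is the service reachable" probe; hosts/ports are illustrative.
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False  # refused, unreachable, or timed out
```

It won't tell you *why* something is down, but it tells you *that* it's down before your users do.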

No More Magic Scaling

Can't just click a button to double your RAM anymore. If you need more resources, you either buy more hardware or optimize what you have. For us, this isn't really a problem since our workloads are pretty predictable.

Upfront Costs

Buying servers means paying for hardware upfront instead of spreading the cost over time. This is actually cheaper in the long run, but it does require having some cash available.

How It Worked Out

Overall, pretty good. Monthly costs are lower, performance is more consistent, and we have full control over everything. The servers just sit there and work, which is exactly what we wanted.

Things We Like About It

  • Lower monthly bills: Hardware pays for itself pretty quickly
  • Predictable performance: No more wondering why things are slow today
  • Complete control: We can configure anything however we want
  • Data sovereignty: Our data lives on our hardware in a datacenter we chose
  • Learning experience: You learn a lot when you're responsible for the whole stack

Things We Learned

A few months in, here's what we figured out:

  • Have backups: Hardware fails eventually, so make sure you can restore everything
  • Keep it simple: The more complex your setup, the more things can break
  • Document stuff: When something goes wrong, you'll be glad you wrote down how to fix it
  • Know your limits: If you're not comfortable with server administration, this might not be for you

Should You Do This?

Depends on your situation. Makes sense if:

  • Your VPS bills are getting annoying
  • You have steady, predictable workloads
  • Someone on your team is comfortable with server stuff
  • You care about data privacy and control

Probably skip it if:

  • Your traffic is super variable
  • You need to scale globally fast
  • Nobody on your team wants to deal with servers
  • You prefer paying someone else to worry about infrastructure

What's Next?

For now, we're just running the servers and focusing on our actual projects. The infrastructure is working fine, so we're not planning any major changes.

We might add more storage or throw in another GPU if we need it, but the current setup handles everything we're doing without issues.

Bottom Line

VPS is great when you're starting out, but at some point owning your hardware makes more sense - both financially and for control. It's not rocket science, just requires being comfortable with basic server administration.


Have questions about our infrastructure or considering a similar migration? Feel free to reach out. We're always happy to share experiences and learn from others in the community.