A few months ago, as we brought Packet’s bare metal cloud service out of beta and the feedback was pouring in, we asked ourselves: what do users want from us that we don’t have? What would help more developers bring workload over to shiny, brand new Packet? What did we need to put together in the next 90 days that would set us up for widespread adoption?
At the time we had two server configurations available: a general purpose app server and a high I/O server (our Type 1 and Type 3 machines). We had socialized plans for a storage-heavy server as well as a virtualization-focused box, and both were waiting in the wings. We were already working on a multi-tenant block storage service and finishing up our elastic addressing product.
So we asked users, friends, and some respected critics what they thought. Much to our surprise, nearly everyone asked us to hurry up and expand our lineup to include not something fancier or more powerful - but something smaller and (importantly) cheaper! They loved our Type 1 server, but it was overkill for running a test cluster, building some images, powering a load balancer, acting as a Chef master or developing a PoC.
In short: they were asking us to replace their collection of Digital Ocean droplets and small EC2 instances with some cost-effective bare metal. We needed a small server, stat!
Much like IKEA’s famous approach to furniture design, we decided to set the price for this server first and work backwards: $0.05 per hour, or about $33 per month. The requirements otherwise were simple: it should be x86 based, 100% single tenant, and built in a production grade manner with highly available power and networking, and it would have to punch above its weight in a broad range of use cases. We didn’t want a cheap server with outdated specs and ticking time-bomb components. We needed a Packet-worthy server at a fraction of the cost of our Type 1 configuration. Enter the Type 0.
It was super fun to design this server, aiming for maximum density, awesome power efficiency and excellent performance with the smallest footprint. Here’s what we ended up with for each node:
- Intel Atom “Avoton” C2550 processor (4 cores @ 2.4GHz)
- 8GB of DDR3 RAM
- 1 x 80GB Enterprise SSD
- 1Gbps Network Port in an HA configuration
Not bad, right?! This processor isn’t going to set your world on fire, but the specs line up with Digital Ocean’s $80/mo droplet and - by golly - there is no hypervisor on it. No co-tenancy, no variance, no hypervisor - and all the benefits of Packet’s high performing network and automation (think elastic addressing, cloud-config, Terraform integration - you get the idea).
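As one small taste of the cloud-config support mentioned above, here’s the sort of user data you could hand a server at provision time. This is a minimal generic sketch - the packages and commands are just examples, not anything Packet-specific:

```yaml
#cloud-config
# Minimal example user data: refresh the package index,
# install nginx, and start it on first boot.
package_update: true
packages:
  - nginx
runcmd:
  - service nginx start
```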
$0.05 per hour sounds great - but how does it stack up against the competition? Here is a sampling against some of the major virtualized cloud options:
| | Packet | Digital Ocean | AWS | Profit Bricks |
|---|---|---|---|---|
| Catchy Name | Type 0 | $80/mo droplet | c3.xlarge | Virtual Data Center |
| RAM | 8 GB | 8 GB | 7.5 GB | 8 GB |
| Storage | 80 GB SSD | 80 GB SSD | 2 x 40 GB SSD | 80 GB Block |
Okay, interesting! So it looks like our little Type 0 is a great buy - but how does it perform?
The first thing you’ll notice when powering up a Packet Type 0 server is the consistency -- this is not a VM with a shared CPU or disk subsystem, so steady performance is something you can count on. Still, to grab some quick benchmarks, I ran ServerBear tests on our Type 0 and on Digital Ocean’s 4 core / 8 GB instance.
Here’s what I got:
On a UnixBench run, the Type 0 hit 1,746. This is less than the screaming 6,603 of Packet’s Type 1 server, but well above the 1,054 of a Digital Ocean 4 core / 8 GB VM that costs over twice as much. The main difference is the clock speed of the Avoton: with a respectable 2.4GHz, it performs well for threaded workloads.
Disk IO is also very good -- with 50,000 read IOPS and 26,000 write IOPS on the Intel 80GB SSD. Again, this compares nicely to the $0.11/hr Digital Ocean VM that came in at 33,911 read IOPS and 5,800 write IOPS in the same FIO test.
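To put those numbers side by side, here’s a quick back-of-the-envelope comparison using the figures quoted above. “Score per dollar” is just the UnixBench result divided by the hourly price - crude, but it makes the value gap obvious:

```python
# Price/performance comparison using the benchmark figures quoted in this post.
servers = {
    "Packet Type 0":    {"usd_per_hr": 0.05, "unixbench": 1746,
                         "read_iops": 50000, "write_iops": 26000},
    "DO 4-core / 8 GB": {"usd_per_hr": 0.11, "unixbench": 1054,
                         "read_iops": 33911, "write_iops": 5800},
}

for name, s in servers.items():
    score_per_dollar = s["unixbench"] / s["usd_per_hr"]
    print(f"{name}: {score_per_dollar:,.0f} UnixBench points per $/hr, "
          f"{s['read_iops']:,} read / {s['write_iops']:,} write IOPS")
```

Running that shows the Type 0 delivering roughly 3.5x the UnixBench points per hourly dollar of the comparable droplet.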
Of course, direct comparisons are very difficult, especially against multi-tenant virtualization solutions. With virtualization, the effect of other activity against the CPU and disk subsystems (presumably from other tenants) seems to vary the test results dramatically between runs. That’s the case in the real world, not just in testing, so it has some real value as you evaluate options.
The most interesting question, in my opinion, is what will people want to do with a $0.05/hour bare metal server? I asked a few of our beta users to help me out - can’t wait to hear what else you guys come up with!
Orchestrate all the things! - The low cost Type 0 works great as the controller node for all your container orchestration needs. You’ve gotta put Rancher and Kubernetes somewhere to manage your high performance Type 1 or Type 3 compute pool!
PoC playground - Want to test out CoreOS with that new containerized application you’ve been working on, but not so interested in spending $1,200 / mo on dedicated gear to play with it? Spin up and down a cluster of Type 0’s and feel the love.
WordPress Website + EasyEngine - Developing that brand new WooCommerce site and need a place to dev and launch with a bit more power than a Digital Ocean droplet? Leverage EasyEngine or ServerPilot to get going.
Private OwnCloud - Don’t want to hand over your world to Dropbox? Spin up a Type 0 and run your own install with OwnCloud! Need more space? Attach it to our upcoming block storage service.
Dedicated Buildkite Server Farm - Want to control your own build cycle destiny without breaking the bank? Drop Buildkite on one or more Type 0’s and bask in the glory of speedy, redundant builds!
## Ready, Set, Deploy!
With the beta sticker removed, you can currently deploy Type 0’s with Debian and Ubuntu (CentOS and CoreOS coming in a week or so). We can’t wait to hear what you do with it, so make sure to drop us a line or shout out your feedback on Twitter.
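If you’d rather script that first deploy than click through a portal, here’s a rough sketch of what a provisioning request could look like. To be clear, the plan slug (`baremetal_0`), OS slug, and field names below are illustrative assumptions about the API, not official documentation - check the API docs for the real shapes:

```python
import json

# Hypothetical device description for provisioning a Type 0.
# Field names and slugs here are assumptions for illustration only.
device = {
    "hostname": "type0-test-01",
    "plan": "baremetal_0",               # assumed Type 0 plan slug
    "operating_system": "ubuntu_14_04",  # assumed OS slug
    "billing_cycle": "hourly",
}

payload = json.dumps(device)
print(payload)
# A real client would POST this payload (with an auth token header)
# to the devices endpoint for your project.
```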