
Today, after years of hard work, we're ecstatic to announce the launch of Unikraft Cloud, a radically new cloud platform based on years of research and providing orders-of-magnitude better scalability and efficiency (think running millions of strongly isolated instances on a few servers instead of an entire datacenter). In support of this, we're also super proud to announce our $6M seed raise with Heavybit as lead investor, with the participation of Vercel Ventures, Mango Capital, Firestreak Ventures, Fly VC and First Momentum Ventures, along with a host of world-class angels.
With that in place, it might be worth saying a few words about where we started. It may seem counterintuitive for a startup to say that our journey began close to 10 years ago: our path has been anything but typical, to say the least (and I'm not even sure it's one I'd recommend!).
We come from a research and open source background, back then looking into how to make packet processing fast in software, at 10-40Gb/s (this used to be blazingly fast!); this work was a precursor to the now well-established Intel DPDK framework. From there, being the optimization geeks we are, we moved towards doing packet processing efficiently within a virtual machine, essentially creating an operating system whose only purpose was to do packet processing, and wrapping a virtual machine around it. For those familiar with the Xen hypervisor, we built the beginnings of this on mini-os, a very simple reference OS implementation, onto which we grafted the Click modular router software out of MIT, something we ended up calling ClickOS. Long story short, this sort of work eventually became known as Network Function Virtualization.
As we started getting into virtualization, we got to thinking about cloud infrastructure and how it was built: no one could argue, even back then, against the power of the cloud and its amazing functionality, but was it built efficiently? Did its components scale well then, and do they scale well now, without having to throw obscene amounts of money, hardware, and electricity at the problem?
The answer for us back then, and still today, is a resounding no, and the relatively recent explosion in AI agents and AI-generated workloads is putting severe strain on legacy cloud infra, exposing just how poorly it actually scales (again, without throwing silly amounts of money at the problem).
But I'm getting ahead of myself; getting back to the story of our beginnings, our first port of call when it came to building a radically more efficient and scalable cloud infra platform was the images themselves: GB-sized images were being deployed to run applications that needed only MBs to run. No wonder virtual machines (VMs) had (still have?) a reputation for being chunky and resource-hungry.
Debunking the myth that VMs are, by definition, heavyweight was one of our initial missions; we went as far as publishing a paper at SOSP, the top systems conference in the world, cheekily titled "My VM is Lighter (and Safer) than your Container", to highlight that VMs need not be heavyweight, and that containers are not safe for production deployments in the cloud (that seems obvious today, but it was far from it back then). In that work we were getting VMs to cold start in as little as 2 milliseconds, roughly comparable to fork/exec on Linux.
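For a rough sense of that baseline, here is a minimal sketch (not the paper's actual benchmark) that times a fork() plus exec() of a trivial binary on Linux; the choice of /bin/true and the measurement setup are illustrative assumptions on our part, and absolute numbers will vary with kernel and hardware.

```c
/* Illustrative sketch: time fork() + exec() of a trivial binary,
 * as a point of comparison for ~2 ms VM cold starts. Not the
 * paper's benchmark; numbers depend on kernel, hardware, and the
 * binary being exec'd. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: exec a trivial program (/bin/true, assumed present). */
        execl("/bin/true", "true", (char *)NULL);
        _exit(127); /* only reached if exec fails */
    }
    waitpid(pid, NULL, 0); /* wait for the child to finish */

    clock_gettime(CLOCK_MONOTONIC, &end);

    double ms = (end.tv_sec - start.tv_sec) * 1e3 +
                (end.tv_nsec - start.tv_nsec) / 1e6;
    printf("fork+exec(/bin/true): %.3f ms\n", ms);
    return 0;
}
```

On a typical Linux box this lands in the low single-digit milliseconds, which is what made the comparison with 2 ms VM cold starts so striking.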