This includes technologies like hyper-efficient manufacturing, which uses computer vision for automated error detection to cut down on waste, and electric self-driving vehicles for passengers and freight which are safer and less carbon-intensive.
But there is a catch. Today, these technologies are bottlenecked by legacy data and storage architectures. This means that models can take hours (even days) to run. Moreover, the datacenters these large models depend on could consume ~7% of the world’s energy by 2030, up from 1–2% today¹. We urgently need to find ways to make the AI infrastructure stack far more efficient.
This is why we couldn’t be more excited to announce that we are leading WEKA’s $135 million Series D round. We’re excited not only for the opportunity to partner with an exceptional team, but also because now, more than ever, we need faster and more efficient models to build a more sustainable world.
Legacy data and storage architecture is not up to scratch for the most important workloads.
Compute, networking and storage are the three foundational building blocks underpinning every enterprise data center. In the past 20 years, performance bottlenecks were associated primarily with compute and networking, so that’s where much of the innovation has focused. However, next-generation workloads like AI, ML and HPC can only move as fast as their weakest link. Today, these workloads are also powered by costly GPUs, which are underutilized up to 70% of the time, leaving data scientists waiting hours or even days for a new training model to run. As great leaps continue to be made in accelerated compute and networking, great leaps also need to be made in storage.
From a sustainability perspective, this is a huge problem. Not only do underutilized GPUs consume enormous amounts of energy while they sit idle, but stalled AI and HPC deployments are also slowing the pace of critical research and business innovation.