Power density is exploding
Perhaps the most immediate physical challenge is the sheer amount of electrical power required to train models. Vayner notes that just a few years ago, a standard data center rack capacity was roughly 5 kilowatts (kW). By 2022, discussions had shifted to 50 kW per rack, and today, densities are reaching 130 kW per rack, with future projections hitting as high as 600 kW. This exponential growth is driven by the shift toward high-performance GPU clusters, such as NVIDIA’s H100s, which are essential for training large models.
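A rough back-of-the-envelope sketch makes these rack figures plausible. The numbers below are illustrative assumptions, not from the interview: an H100 SXM module is rated at roughly 700 W, a typical GPU server carries eight of them, and the server overhead figure is a guess covering CPUs, memory, NICs, fans, and power-supply losses.

```python
# Illustrative rack power estimate (assumed figures, not from the article).
# An NVIDIA H100 SXM module is rated at roughly 700 W; a common HGX/DGX-style
# server carries 8 of them plus CPUs, memory, NICs, and fans.

GPU_WATTS = 700          # approximate H100 SXM power rating
GPUS_PER_SERVER = 8      # common 8-GPU server configuration
SERVER_OVERHEAD_W = 3500 # assumed: CPUs, memory, NICs, fans, PSU losses

server_watts = GPU_WATTS * GPUS_PER_SERVER + SERVER_OVERHEAD_W  # ~9 kW

for servers_per_rack in (1, 4, 8, 12):
    rack_kw = server_watts * servers_per_rack / 1000
    print(f"{servers_per_rack:2d} servers/rack -> ~{rack_kw:.0f} kW")
```

Even a dozen such servers in one rack lands near the 130 kW figure quoted above, which is why legacy 5 kW rack designs cannot simply be repurposed for AI training.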
The shift from training to inference
While training models requires massive, centralized compute power with high “East-West” interconnectivity, the actual use of those models, inference, requires a distributed approach. Vayner compares this evolution to the traditional Content Delivery Network (CDN) model. Just as CDNs were built to distribute video and static content closer to users to reduce latency, networks must now distribute compute power to handle real-time AI interactions.
For applications like voice assistants or future real-time video generation, latency is critical. This is creating a new role for CDNs, transforming them from content distributors into platforms enabling real-time, distributed AI inference.
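A quick sketch shows why distance alone sets a floor on latency (my own illustrative numbers, not from the source): light in optical fiber propagates at roughly two thirds of c, or about 200 km per millisecond one way, before any routing or queuing delay is added.

```python
# Speed-of-light latency floor for inference placement (illustrative,
# not from the article). Light in fiber travels at roughly 2/3 of c,
# i.e. about 200 km per millisecond one way.

FIBER_KM_PER_MS = 200  # approximate one-way propagation speed in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time from propagation delay alone."""
    return 2 * distance_km / FIBER_KM_PER_MS

# Hypothetical site distances from the end user:
for site, km in [("metro edge", 50), ("regional DC", 500), ("remote region", 2500)]:
    print(f"{site:13s} {km:5d} km -> >= {min_rtt_ms(km):4.1f} ms RTT")
```

For a conversational voice loop that needs to feel instantaneous, tens of milliseconds of pure propagation delay is budget you can never claw back, which is the physical argument for pushing inference toward the edge.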
The definition of “edge” is changing
Historically, the “edge” was defined by geography: placing servers in Tier 2 or Tier 3 cities to be closer to the user. However, power is becoming a bigger constraint than connectivity. Because high-end GPUs consume so much energy and generate so much heat (requiring liquid cooling), putting them in traditional “edge” locations, like office building closets, is becoming impossible. Consequently, the “edge” is now defined by where sufficient power and cooling can be secured, rather than just physical proximity.
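To see why air cooling runs out of headroom at these densities, here is an illustrative physics sketch (not from the source): the airflow needed to remove a rack’s heat load follows from Q = ṁ·cp·ΔT for air, with an assumed 15 °C inlet-to-exhaust temperature rise.

```python
# Illustrative airflow estimate (assumptions mine, not from the article):
# volume of air needed to carry away a rack's heat load, from Q = m_dot*cp*dT.

AIR_DENSITY = 1.2  # kg/m^3 at room conditions
AIR_CP = 1005      # J/(kg*K), specific heat of air
DELTA_T = 15.0     # K, assumed inlet-to-exhaust temperature rise

def airflow_m3_per_s(heat_kw: float) -> float:
    """Volumetric airflow needed to remove heat_kw of heat with air."""
    return heat_kw * 1000 / (AIR_DENSITY * AIR_CP * DELTA_T)

for rack_kw in (5, 50, 130):
    m3s = airflow_m3_per_s(rack_kw)
    cfm = m3s * 2118.88  # convert m^3/s to cubic feet per minute
    print(f"{rack_kw:3d} kW rack -> {m3s:5.2f} m^3/s (~{cfm:,.0f} CFM)")
```

Moving roughly 15,000 CFM through a single 130 kW rack is far beyond what an office closet can supply, which is why high-density deployments shift to liquid cooling and to sites selected for their power and cooling capacity.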
Enterprise adoption and time-to-market
Enterprises are moving beyond public SaaS experiments toward building private AI solutions to protect their data security. However, building proprietary infrastructure from scratch is risky because of the speed of hardware innovation. Vayner points out that if a company spends a year building a data center, its GPUs may be obsolete by the time it launches. Consequently, enterprises are increasingly turning to turnkey solutions that offer managed infrastructure and orchestration, allowing them to focus on business value rather than hardware maintenance.
As Vayner concludes, while the market is currently hyped, AI workloads will eventually become a commodity workload integrated into everyday life, much like standard CPU-based applications are today.
