Meta builds a 1700W superchip and custom MTIA chips while ditching Nvidia, AMD, Intel, and ARM for inference

zeeforce

  • Meta’s 1700W superchip delivers 30 PFLOPs and 512GB of HBM memory
  • MTIA 450 and 500 prioritize inference over pre-training workloads
  • Future MTIA generations will support GenAI inference and ranking workloads

Meta is advancing its AI infrastructure with a portfolio of custom MTIA chips designed specifically for inference workloads across its apps.

The company is developing a 1700W superchip that delivers 30 PFLOPs of compute and carries 512GB of HBM, integrated into the same MTIA infrastructure to handle inference at scale.
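To put those headline figures in perspective, a quick back-of-the-envelope calculation converts them into efficiency metrics. This is purely illustrative arithmetic on the numbers reported above; the article does not specify the numeric precision (e.g. FP8 vs FP16) behind the 30 PFLOPs figure, so the ratios below should be read as rough bounds, not official Meta specs.

```python
# Illustrative arithmetic on the reported superchip figures.
# Inputs come from the article; everything derived is an estimate.
POWER_W = 1700            # reported board power (watts)
COMPUTE_FLOPS = 30e15     # 30 PFLOPs (precision unspecified)
HBM_BYTES = 512e9         # 512 GB of HBM

# Compute efficiency: FLOPs delivered per watt.
flops_per_watt = COMPUTE_FLOPS / POWER_W          # ~1.76e13 FLOP/s per W
tflops_per_watt = flops_per_watt / 1e12           # ~17.6 TFLOPs/W

# Memory-to-compute ratio: bytes of HBM capacity per FLOP/s of compute.
bytes_per_flop = HBM_BYTES / COMPUTE_FLOPS        # ~1.7e-5 B per FLOP/s

print(f"{tflops_per_watt:.1f} TFLOPs/W, {bytes_per_flop:.2e} B per FLOP/s")
```

At roughly 17.6 TFLOPs per watt, the chip would sit in the same broad efficiency band as current-generation datacenter accelerators, though direct comparisons are unreliable without knowing the precision and sustained (vs peak) nature of the 30 PFLOPs claim.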

