Huawei's new CloudMatrix 384 AI cluster is reportedly a hit with China's big tech players, and the pitch is that it goes toe-to-toe with NVIDIA's flagship hardware. Surprising? Maybe not: with NVIDIA's grip on the Chinese market reportedly slipping, Huawei is pushing full steam ahead on AI.
According to the report, ten major clients have already signed up for the new system, though nobody is naming names; they're said to be big, long-standing Huawei customers. We've covered the CloudMatrix 384 before, and the headline claim stands: it may genuinely rival NVIDIA's top-tier GB200 NVL72, which would be a meaningful step toward China cutting its reliance on outside tech.
The original report includes a chart comparing the two systems' headline numbers.
On to the specs: the CloudMatrix 384 packs 384 Ascend 910C chips in an all-to-all topology, meaning every chip can talk directly to every other chip in the cluster. That is more than five times the accelerator count of NVIDIA's GB200 NVL72 (72 GPUs), and the system is rated at 300 PetaFLOPS of BF16 compute, roughly double the NVL72. The catch is power: it reportedly draws about 3.9 times as much, so this is brute force rather than efficiency.
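To pull those claims together, here's a quick back-of-envelope sketch in Python using only the figures quoted above; the per-chip and per-FLOP values it derives are simple arithmetic on those reported numbers, not official specs.

```python
# Back-of-envelope comparison using only the figures quoted above.
# Absolute values are as reported; the derived ratios are simple arithmetic.

cloudmatrix_chips = 384          # Ascend 910C accelerators in one CloudMatrix 384
nvl72_chips = 72                 # GB200 GPUs in one NVL72 rack

cloudmatrix_bf16_pflops = 300    # quoted dense BF16 compute
compute_ratio = 2.0              # "roughly double" the GB200 NVL72
power_ratio = 3.9                # reported power draw relative to the NVL72

chip_ratio = cloudmatrix_chips / nvl72_chips                    # ~5.3x the silicon
per_chip_pflops = cloudmatrix_bf16_pflops / cloudmatrix_chips   # ~0.78 PFLOPS per 910C
power_per_flop_penalty = power_ratio / compute_ratio            # ~1.95x more power per FLOP

print(f"Chip count ratio:       {chip_ratio:.1f}x")
print(f"BF16 per Ascend 910C:   {per_chip_pflops:.2f} PFLOPS")
print(f"Power-per-FLOP penalty: {power_per_flop_penalty:.2f}x vs. GB200 NVL72")
```

Read that way, Huawei gets there with more silicon and more power rather than faster chips: by these numbers each Ascend 910C delivers well under half of what a single GB200 GPU does, and the system burns nearly twice the energy per unit of compute.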
The price is the real shocker: roughly $8 million per system, about triple what NVIDIA's GB200 NVL72 goes for. So this clearly isn't a budget play; the pitch is self-sufficiency, a domestically built alternative to Western hardware. That's the lowdown, as far as the reports go.
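For a rough sense of the economics, here's a similar sketch of cost per unit of compute; note that the NVL72's price and BF16 figure below are back-calculated from the "triple the price" and "double the compute" claims above, so treat the result as illustrative rather than a quoted comparison.

```python
# Rough cost-per-compute comparison derived from the figures above.
# The NVL72 price and BF16 numbers are back-calculated from the "triple the
# price" and "double the compute" claims, so this is illustrative only.

cloudmatrix_price_usd = 8_000_000
cloudmatrix_bf16_pflops = 300

nvl72_price_usd = cloudmatrix_price_usd / 3       # implied: ~$2.7M
nvl72_bf16_pflops = cloudmatrix_bf16_pflops / 2   # implied: ~150 PFLOPS

cm_cost_per_pflop = cloudmatrix_price_usd / cloudmatrix_bf16_pflops
nv_cost_per_pflop = nvl72_price_usd / nvl72_bf16_pflops

print(f"CloudMatrix 384: ${cm_cost_per_pflop:,.0f} per BF16 PFLOPS")     # ~$26,700
print(f"GB200 NVL72:     ${nv_cost_per_pflop:,.0f} per BF16 PFLOPS")     # ~$17,800
print(f"Cost premium:    {cm_cost_per_pflop / nv_cost_per_pflop:.1f}x")  # ~1.5x
```

Even taken at face value, the CloudMatrix works out to roughly 50% more per BF16 PetaFLOPS before the electricity bill enters the picture, which is why the sales pitch is self-sufficiency rather than savings.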