On March 18, 2024, NVIDIA introduced the Blackwell GPU. Boasting 208 billion transistors, NVIDIA’s latest creation runs trillion-parameter AI models up to 30X faster than its Hopper-generation predecessors, while consuming far less power.
This development prompts one key question: Has NVIDIA transcended Moore’s Law?
Moore’s Law predicts that the number of transistors on a chip doubles approximately every two years at minimal added cost, steadily lowering the cost per transistor. Traditionally, this doubling has been achieved by miniaturising transistors.
From 2012 to 2021 (Exhibit 1), growth in transistor counts and computational power generally aligned with Moore’s Law, doubling roughly every two years. Yet NVIDIA’s Blackwell GPU has shattered expectations: NVIDIA cites a more-than-thousandfold increase in AI computational performance in just eight years, vastly outpacing the steady growth Moore’s Law predicts.
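To put that gap in perspective, the short Python sketch below is a rough back-of-the-envelope comparison, assuming a clean two-year doubling period and using the approximate 1,000X figure cited above rather than any official benchmark:

```python
# Back-of-the-envelope comparison (illustrative figures only): what Moore's Law
# alone would predict over eight years versus the ~1,000x AI compute gain cited.

def moores_law_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Expected growth factor if capability doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

years = 8
predicted = moores_law_factor(years)   # 2^(8/2) = 16x under classic Moore's Law
observed = 1_000                       # approximate gain cited over the same period

print(f"Moore's Law over {years} years: ~{predicted:.0f}x")
print(f"Cited AI compute gain:          ~{observed}x")
print(f"Ratio (observed / predicted):   ~{observed / predicted:.0f}x")
```

Under these assumptions, pure transistor doubling would deliver roughly a 16X gain over eight years, about sixty times less than the cited figure, which is why such a leap cannot come from miniaturisation alone.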
NVIDIA’s Blackwell platform marks a significant leap in AI technology, particularly in handling generative AI at unprecedented scale. Its B200 and GB200 models deliver remarkable efficiency and power, sharply reducing the resources and time needed to train large-scale AI models such as GPT-4. Compared with competitors such as Intel’s Ponte Vecchio and AMD’s MI300 series, Blackwell stands out for superior performance and efficiency, underscoring NVIDIA’s leadership in meeting the demands of advanced AI processing.
Developing chipsets is a complex undertaking, constrained by both technological and raw-material limits. Overcoming these constraints requires sustained innovation across four critical areas:
In light of NVIDIA’s advancements, leaders must proactively engage in several strategic activities:
In summary, as the tech landscape continues to evolve rapidly, driven by advancements like NVIDIA’s Blackwell, organisations must remain agile, innovative, and forward-thinking to thrive.