The second half of 2025 brought an unusual global shortage of RAM and SSDs. It is not merely a cyclical downturn or a transition to new technologies, but a prolonged crisis expected to span multiple years, and it stems largely from the explosive growth of AI infrastructure demand.
Every AI server requires tens to hundreds of gigabytes of memory, far more than an average web server or PC. And what do we get as a consequence? Chip manufacturers are prioritizing HBM (High Bandwidth Memory) and LPDDR (Low-Power Double Data Rate) for AI while cutting the output of traditional DRAM. The situation has already earned a nickname: the RAMpocalypse.
Does it just sound dramatic, or is it something we need to address right now? Read on for the answer.
What exactly happened?
2023 was not so long ago, yet back then SSDs were cheap. What changed the picture was the AI data center boom, which had transformed the market by late 2024. Modern AI systems, most visibly large language models (LLMs), are trained on massive datasets; they process enough data to grasp context and reply like a human. LLMs are typically deployed on GPU clusters, interconnected networks of computing nodes, each equipped with one or more GPUs. Each cluster needs terabytes of memory, and AI projects now operate thousands of such nodes to handle complex tasks at high speed. Running LLMs therefore takes plenty of DRAM and flash storage. On top of that, companies are moving away from training-heavy LLM workloads toward inference-focused deployments, and because inference requires frequent access to large model files, demand for nearline storage has grown as well.
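To see why “terabytes per cluster” is plausible, here is a rough back-of-the-envelope sketch in Python. The model dimensions (a 70B-parameter model served in FP16 with a sizeable KV cache) are our own illustrative assumptions, not figures from any specific deployment:

```python
# Back-of-the-envelope estimate of the memory needed to serve one large model.
# All numbers below are illustrative assumptions, not vendor data.

def model_weights_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Model weights in GB, assuming FP16/BF16 storage (2 bytes per parameter)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context_len: int, batch: int, bytes_per_value: int = 2) -> float:
    """KV cache size: 2 tensors (K and V) per layer, per head, per cached token."""
    return (2 * layers * kv_heads * head_dim
            * context_len * batch * bytes_per_value) / 1e9

weights = model_weights_gb(70)                      # a 70B model: ~140 GB
cache = kv_cache_gb(layers=80, kv_heads=8, head_dim=128,
                    context_len=32_768, batch=32)   # 32 concurrent long requests
print(f"weights ~ {weights:.0f} GB, KV cache ~ {cache:.0f} GB per replica")
# That is roughly 140 GB + 344 GB for a single replica; run several replicas
# per cluster and the total quickly reaches terabytes.
```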
The Stargate project
The project aims to secure continued US leadership in the field of artificial intelligence. It’s believed that the project will “reindustrialize” the United States, creating the strategic capability to protect the national security of America and its allies. Among the primary investors are SoftBank, OpenAI, and Oracle. Companies such as Microsoft, NVIDIA, and Arm will also participate, apparently as technology partners.
What is the project about? First of all, several data centers for artificial intelligence are expected to be built in the United States. The companies’ joint venture will begin with a large data center in Texas and eventually expand to other states. The creators promise that this will create “hundreds of thousands” of jobs and “ensure American leadership in the field of AI.” Stargate is expected to take five to six years to complete.
Separately, OpenAI signed supply agreements with Samsung and SK Hynix for up to 900,000 DRAM wafers per month, nearly 40% of the world’s entire production (which would imply global output of roughly 2.25 million wafers per month). That leaves the traditional PC and server markets struggling to compete for what remains. Even mechanical hard drives, which may seem unrelated to AI, are seeing prices climb 10–15%.
More factories, more memory?
In theory, yes. But in reality, financial and structural constraints complicate rapid expansion.
While higher prices signal the need for capacity expansion, leading DRAM manufacturers are choosing not to scale traditional DRAM production. Instead, they are channeling investment into high-bandwidth memory (HBM), which carries higher margins but consumes more wafer capacity per bit.
Demand is coming from every direction: AI developers, hyperscale cloud providers, and enterprise data centers are all competing for the same memory supply. Expanding production is neither quick nor cheap. Building a new fab requires enormous capital, typically between $10 billion and more than $20 billion, and such projects usually take 4–5 years to complete. Even then, results are uncertain, and shortages of skilled engineers complicate matters further.
As a result, overall standard DRAM output remains limited. Manufacturers are prioritizing AI and hyperscale data-center customers. Under these conditions, memory shortages and elevated prices are likely to persist through at least 2027.
Predictions for the future
- First of all, AI’s voracious appetite will keep driving RAM prices up, and shortages and high price tags may persist for a while. Amid the ongoing memory market crisis, companies such as Apple, Lenovo, and Dell are expected to raise prices on their devices in the first half of 2026. Gaming PCs and laptops will see price increases too. As hardware costs rise, some organizations turn to proxies to offload data scraping and AI pipelines from memory-constrained systems (see the sketch after this list).
- A gradual resolution of the crisis is likely to begin only after 2027–2028, when new capacity or a new memory technology disrupts the cycle. Until then, expect SSDs, DRAM, and HDDs to remain expensive, scarce, and increasingly reserved for the AI elite.
- AI demand is unlikely to decrease significantly as large models, data centers, and computing services continue to grow. This means that memory as a resource will remain a “hot” topic in the coming years.
- Innovations in memory architecture (such as unified CPU/GPU memory or new types of RAM) could ease the situation in the long run, but they will not resolve acute shortages overnight.
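On the proxy point above, here is a minimal sketch of routing a scraping request through a proxy with Python’s requests library. The proxy URL, credentials, and target address are placeholders, not a real endpoint:

```python
import requests

# Placeholder endpoint and credentials: substitute your provider's values.
PROXY_URL = "http://user:password@proxy.example.com:8080"

proxies = {
    "http": PROXY_URL,
    "https": PROXY_URL,
}

# Route the request through the proxy so the scraping workload runs
# off-box instead of taxing a memory-constrained local system.
response = requests.get("https://example.com/data", proxies=proxies, timeout=30)
response.raise_for_status()
print(response.status_code, len(response.content), "bytes fetched")
```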

