MemVerge in the News

Cadence Collaborates with MemVerge to Increase Resiliency and Cost-Optimization of Long-Running High-Memory EDA Jobs on AWS Spot Instances

In a move that promises significant cost savings and enhanced efficiency for design engineers, the Cadence and MemVerge collaboration addresses the risk that Spot pre-emptions pose to long-running, high-memory EDA jobs by implementing a transparent, low-overhead incremental checkpoint/restore solution that makes these jobs resilient (hot restart) to Spot pre-emptions, without needing to change the underlying EDA application.
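
The announcement does not include implementation detail, but the general pattern is easy to illustrate. The Python sketch below simply watches the EC2 instance metadata service for a Spot interruption notice and calls a checkpoint hook; the checkpoint_job() function, the polling interval, and the IMDSv1-style metadata access are assumptions for illustration only, not the Cadence/MemVerge mechanism, which checkpoints incrementally and transparently with no application changes.

```python
# Minimal sketch (not the Cadence/MemVerge product): watch the EC2 instance
# metadata service for a Spot interruption notice and trigger a checkpoint.
# Assumes IMDSv1 is enabled; IMDSv2 would additionally need a session token.
# The checkpoint_job() hook is hypothetical.
import time
import urllib.request

SPOT_ACTION_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def spot_interruption_pending() -> bool:
    """Return True if AWS has scheduled this Spot instance for interruption."""
    try:
        with urllib.request.urlopen(SPOT_ACTION_URL, timeout=2) as resp:
            return resp.status == 200  # response body names the action and time
    except OSError:
        return False  # 404 or unreachable endpoint: no interruption scheduled

def checkpoint_job() -> None:
    """Hypothetical hook: snapshot the running job's state to shared storage
    so it can hot-restart on a replacement instance."""
    print("checkpointing job state...")

if __name__ == "__main__":
    while True:
        if spot_interruption_pending():
            checkpoint_job()  # AWS gives roughly a two-minute warning
            break
        time.sleep(5)
```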

read more

MemCon 2024 Key Takeaways

Memory Tiering is hard. Like, really hard. It's one of the biggest gaps in the market in terms of solutions, but MemVerge and Kove are each taking a different approach to solving it.

read more

What role does CXL play in AI? Depends on who you ask

At GTC, MemVerge, Micron and Supermicro demonstrated how CXL can increase large language model GPU utilization without the addition of more processing units. CXL does this by expanding the GPU memory pool to increase high-bandwidth memory usage at a lower cost than scaling out the infrastructure with more GPUs or more HBM. The tradeoff, however, is performance.

read more

MemVerge uses CXL to drive Nvidia GPU utilization higher

CXL v3.0 can pool external memory and share it with processors via CXL switches and coherent caches. MemVerge’s software virtually combines a CPU’s memory with a CXL-accessed external memory tier and is being demonstrated at Nvidia’s GTC event. Micron and MemVerge report they are “boosting the performance of large language models (LLMs) by offloading from GPU HBM to CXL memory.”
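
For readers wondering what "offloading from GPU HBM to CXL memory" looks like in practice: CXL-attached expansion memory is typically exposed to the host as additional far memory, so cold data can be moved out of HBM into that tier and pulled back when needed. The PyTorch sketch below only illustrates that offload pattern under those assumptions; it is not MemVerge's software, which manages the tiering transparently.

```python
# Illustrative sketch only, not MemVerge's implementation: move a cold LLM
# KV-cache block out of GPU HBM into host memory (which, on a server with
# CXL expansion, can include a CXL-attached tier), then bring it back when
# it is needed again.
import torch

def offload_block(block: torch.Tensor) -> torch.Tensor:
    """Copy a KV-cache block from GPU HBM to pinned host memory
    (pinning speeds up the eventual copy back to the GPU)."""
    return block.to("cpu").pin_memory()

def fetch_block(block: torch.Tensor, device: str = "cuda") -> torch.Tensor:
    """Restore an offloaded block into HBM before it is reused."""
    return block.to(device, non_blocking=True)

if __name__ == "__main__" and torch.cuda.is_available():
    kv_block = torch.randn(8, 1024, 128, device="cuda", dtype=torch.float16)
    host_copy = offload_block(kv_block)  # frees HBM for active layers
    del kv_block
    torch.cuda.empty_cache()
    kv_block = fetch_block(host_copy)    # pulled back on demand
```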

read more

The promise of CXL still hangs in the balance

“… servers operate as one of three pillars — compute — with the other two being networking and storage. AI is changing this,” Fan said. “While the three [pillars] will continue to exist … a new center of gravity has emerged where AI workloads take place.”

read more

Introducing Memory Fabric Forum (formerly known as CXL Forum)

The focus of the Memory Fabric Forum in 2024 is to engage the IT community. We're looking to IT pros to deploy PoCs this year, but survey data shows their awareness of CXL is very low. In response, we're rolling out new digital channels to reach a community that will soon appreciate the availability of abundant, composable, fabric-attached memory for their memory-hungry apps.

read more

Rest In Pieces: Servers and CXL

…talk inevitably turns to increasing bandwidth and capacity for CPUs, GPUs, and FPGAs and lowering the overall cost of main memory that gives the systems of the world a place to think. And that leads straight to the CXL protocol, of course.

read more