MemVerge Enables Tencent Cloud to Accelerate Data Warehousing
With MemVerge, One of the World’s Leading Cloud Providers Benefits from a Fully-managed, High-performance Petabyte-level Cloud Data Warehousing Solution
MILPITAS, Calif. — Aug. 7, 2019 — Today at the 2019 Flash Memory Summit, MemVerge, the inventor of Memory-Converged Infrastructure (MCI), announced that customer Tencent Holdings Limited is accelerating its data warehousing with the help of MemVerge’s breakthrough technology. Alongside Intel, MemVerge is working closely with Tencent to apply persistent memory technology to accelerate the Tencent Sparkling Data Warehouse Suite, a high-performance, petabyte-level solution that delivers an easy-to-use, enterprise-grade distributed cloud data warehouse.
“The memory-speed processing power of MemVerge’s Memory-Converged Infrastructure has proven its unique value in accelerating our OLAP services, combined with the fact that we can store data on the same system,” said Long Wang, vice president of Tencent Cloud and general manager of Big Data and AI Services. “MemVerge’s technology has huge potential to be a compelling part of the foundation for our next-generation cloud-based data warehouse, helping us provide our services to customers more effectively well into the future.”
Tencent is an internet-based technology and cultural enterprise headquartered in Shenzhen, China. Founded in 1998, Tencent’s mission is to “improve the quality of life through Internet value-added services.” The company pursues this mission by delivering integrated internet solutions to billions of netizens, guided by its “user oriented” business philosophy.
Tencent Cloud runs an elastic cloud-native Spark application built on the Tencent Cloud Sparkling Data Warehouse Suite service, and Intel’s non-volatile memory technology provides end users with speed and flexibility. MemVerge’s Distributed Memory Objects (DMO) software layer builds a high-performance Spark solution on a non-volatile, memory-based converged infrastructure platform. In Tencent’s network topology, the converged memory platform hosts the Spark data I/O. It is deployed on its own host cluster, separate from the Spark compute cluster, and the two exchange data over a high-speed general-purpose Ethernet network. MemVerge DMO technology increases performance, makes the Spark cluster more elastic, and enables Sparkling to decommission nodes gracefully by decoupling shuffle data onto the MemVerge system, with the goal of accelerating data warehousing processes.
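The shuffle decoupling described above can be sketched with standard Spark configuration, assuming the DMO layer is exposed to each Spark node as a mounted filesystem (the mount path below is illustrative, not Tencent’s actual setting):

```properties
# spark-defaults.conf — illustrative sketch, not Tencent's production config
# Redirect Spark's shuffle/spill scratch space to a DMO-backed mount
# (hypothetical path), so shuffle data lives off the compute nodes.
spark.local.dir                  /mnt/dmo/spark-shuffle
# Keep shuffle files servable after an executor exits.
spark.shuffle.service.enabled    true
# With shuffle data off-node, executors can be released and nodes
# decommissioned without losing shuffle output.
spark.dynamicAllocation.enabled  true
```

All three properties are standard Apache Spark settings; only the DMO mount path is an assumption made for illustration.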
“Performance, ease of use and seamless integration are all part of what the distributed memory objects software layer from MemVerge can provide,” said Charles Fan, MemVerge CEO and founder. “Our collaboration with Tencent is proving successful because our goals to overcome crippling barriers and eliminate troublesome data bottlenecks are well aligned.”
MemVerge’s first-of-its-kind MCI system offers a new architecture for the enterprise, delivering both higher-capacity computing memory and faster storage at the same time. MemVerge offers a long-lasting system design that provides greater data center reliability and tackles memory, storage and I/O bottlenecks to help massive, complex applications operate smoothly, at memory speed.
MemVerge, the inventor of Memory-Converged Infrastructure (MCI), is the first to eliminate all boundaries between memory and storage to power the world’s most demanding data-centric enterprise workloads. Leveraging Intel® Optane™ DC persistent memory and architected to integrate seamlessly with existing applications, the MemVerge MCI system offers 10X the memory size and 10X the data I/O speed compared to current state-of-the-art computing and storage solutions. Its unique distributed memory objects (DMO) technology provides a logical convergence layer that harnesses Intel’s new memory-storage medium to let data-intensive workloads such as AI, machine learning (ML), big data analytics, IoT and data warehousing run flawlessly at memory speed with guaranteed data consistency across multiple systems. Offering large-scale memory and sub-microsecond response time, MemVerge solves a massive problem in the era of machine-generated data, namely how to process and derive insights from the enormous amount and variety of data in real time, handling small and large files with equal ease. Enterprises using MemVerge no longer contend with failed or painfully slow jobs due to performance bottlenecks, system crashes or worn out flash drives—they can now train AI models faster, analyze bigger states, complete more queries in less time and run complex workloads more predictably with fewer resources. Based in San Jose and backed by Gaorong Capital, Jerusalem Venture Partners, LDV Partners, Lightspeed Venture Partners and Northern Light Venture Capital, MemVerge is used for AI and data science workloads by leading innovators globally including LinkedIn, Tencent Cloud and JD.com.