You have heard of Hadoop, you have heard of the Google File System (GFS) and MapReduce, but have you heard of the Cosmos system and how Microsoft depends on it for our big data needs? Cosmos is our blend of herbs and spices that keeps some of the world’s most popular services running, scalable and performing optimally. Storing and processing petabytes of data is a daunting task – how do you build your own solution rather than settle for a canned, short-term fix made for the masses?
Only a handful of companies operate at this scale, possess world-class engineering capability, and have a major customer focus.
Who are the people driving this groundbreaking technology? Ed Harris, Solom Heddaya and Jingren Zhou are the Cosmos brain trust. I recently had the opportunity to sit down with Jingren and Ed to learn about their work, inspiration and the secret ingredient that makes a developer eligible for this exclusive group.
On the Cosmos team, what makes a successful developer?
Harris: A Cosmos developer is someone who revels in data-driven software engineering. When you are running at 20,000-machine scale, failures and retries are omnipresent. To succeed, you must be able to visualize the entire system in your head at the macro level, but also drill into specific events and metrics to find out what really happened at the micro level.
When and how did Cosmos originate?
Harris: Cosmos originated as a distributed file system back in ’05. We knew we needed to store petabytes of data to be a viable search engine. A few months later, the Dryad project merged in to provide an execution model for data analysis.
Zhou: In 2007, we started developing SCOPE, a SQL-like scripting language, as we continued to improve job-scheduling efficiency based on the Dryad framework. Today, job scheduling is integrated into the SCOPE system.
How is the Cosmos System different from Hadoop or GFS (Google File System)?
Harris: At the file-system level, we are currently similar to GFS/HDFS. However, we are in the process of redesigning the storage layer to allow orders of magnitude more scalability, using some of the techniques pioneered in MSR’s Flat Datacenter Storage prototype. SCOPE is a significant step up from MapReduce because of its declarative model and its built-in query optimizer. SCOPE brings the power of a parallel database to ordinary mortals via a language that looks very similar to SQL.
Zhou: SCOPE combines the benefits of traditional parallel databases and MapReduce execution engines to allow easy programmability while delivering massive scalability and high performance through advanced optimization. Similar to parallel databases, the system has a SQL-like declarative scripting language with no explicit parallelism, while being amenable to efficient parallel execution on large clusters. An optimizer is responsible for converting scripts into efficient execution plans for the distributed computation engine. A physical execution plan consists of a directed acyclic graph of vertices. Execution of the plan is orchestrated by a job manager that schedules execution on available machines and provides fault tolerance and recovery, much like MapReduce systems. SCOPE is used daily for a variety of data analysis and data mining applications over tens of thousands of machines at Microsoft, powering Bing and other online services.
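To make the declarative model concrete, here is a sketch of what a SCOPE script can look like – counting how often each query appears in a search log and keeping only the popular ones. The file names, the LogExtractor, and the threshold are illustrative assumptions, not details from this interview:

```
// Extract the query column from a raw log using a user-supplied extractor.
// "search.log", LogExtractor, and the 1000 threshold are illustrative.
e  = EXTRACT query
     FROM "search.log"
     USING LogExtractor;

// Count occurrences of each query -- no explicit parallelism anywhere.
s1 = SELECT query, COUNT(*) AS count
     FROM e
     GROUP BY query;

// Keep only frequent queries.
s2 = SELECT query, count
     FROM s1
     WHERE count > 1000;

OUTPUT s2 TO "qcount.result";
```

As described above, the author writes no parallel code at all: the optimizer compiles a script like this into a directed acyclic graph of vertices (extract, partial aggregation, repartition, final aggregation, filter, output), and the job manager schedules those vertices across the cluster with fault tolerance and recovery.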
In terms of scale, can you give us an idea of what Cosmos sees on a regular basis?
Cosmos writes 15PB of data per day. Contrast that with the most recently released Library of Congress figure of 5TB per month. That means Cosmos adds more data per second than the Library of Congress does per day.
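That comparison is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes decimal units (1 PB = 10^15 bytes, 1 TB = 10^12 bytes) and a 30-day month:

```python
# Back-of-the-envelope check of the scale claim above.
# Assumptions: decimal units and a 30-day month.
PB = 10**15
TB = 10**12
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

# Cosmos: 15 PB written per day, expressed per second.
cosmos_bytes_per_second = 15 * PB / SECONDS_PER_DAY  # ~173.6 GB/s

# Library of Congress: 5 TB added per month, expressed per day.
loc_bytes_per_day = 5 * TB / 30  # ~166.7 GB/day

print(f"Cosmos per second:          {cosmos_bytes_per_second / 1e9:.1f} GB")
print(f"Library of Congress per day: {loc_bytes_per_day / 1e9:.1f} GB")
print(cosmos_bytes_per_second > loc_bytes_per_day)  # True
```

So one second of Cosmos writes (~174 GB) indeed exceeds a full day of Library of Congress growth (~167 GB), though only by a few percent under these assumptions.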