Facebook's Open Compute Project

Discussion in 'DSL & Info Tech News' started by JamesCooper, Apr 9, 2011.


  1. JamesCooper

    JamesCooper Member




    Building Efficient Data Centers with the Open Compute Project

    by Jonathan Heiliger on Thursday, April 7, 2011 at 10:45am


    A small team of Facebook engineers spent the past two years tackling a big challenge: how to scale our computing infrastructure in the most efficient and economical way possible.

    Working out of an electronics lab in the basement of our Palo Alto, California headquarters, the team designed our first data center from the ground up; a few months later we started building it in Prineville, Oregon. The project, which started out with three people, resulted in us building our own custom-designed servers, power supplies, server racks, and battery backup systems.

    Because we started with a clean slate, we had total control over every part of the system, from the software to the servers to the data center. This meant we could:
    • Use a 480-volt electrical distribution system to reduce energy loss.
    • Remove anything in our servers that didn’t contribute to efficiency.
    • Reuse hot aisle air in winter both to heat the offices and to warm the outside air flowing into the data center.
    • Eliminate the need for a central uninterruptible power supply.
    The result is that our Prineville data center uses 38 percent less energy to do the same work as Facebook’s existing facilities, while costing 24 percent less.


    Releasing Open Hardware

    Inspired by the model of open source software, we want to share the innovations in our data center for the entire industry to use and improve upon. Today we’re also announcing the formation of the Open Compute Project, an industry-wide initiative to share specifications and best practices for creating the most energy efficient and economical data centers.

    As a first step, we are publishing specifications and mechanical designs for the hardware used in our data center, including motherboards, power supplies, server chassis, server racks, and battery cabinets. In addition, we’re sharing our data center electrical and mechanical construction specifications. This technology enabled the Prineville data center to achieve an initial power usage effectiveness (PUE) ratio of 1.07, compared with an average of 1.5 for our existing facilities.
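
    For readers who haven't met PUE before, here is a minimal sketch of the arithmetic behind those ratios. The 1.07 and 1.5 figures come from the post above; the 1,000 kW IT load is an assumed number used purely for illustration:

        # Power usage effectiveness (PUE) = total facility power / IT equipment power.
        # The PUE values below are from the post; the IT load figure is an assumption for illustration.
        def pue(total_facility_kw, it_equipment_kw):
            return total_facility_kw / it_equipment_kw

        it_load_kw = 1000.0                        # assumed IT load
        prineville_total_kw = it_load_kw * 1.07    # PUE 1.07 -> ~7% overhead (cooling, power conversion, ...)
        legacy_total_kw = it_load_kw * 1.50        # PUE 1.50 -> ~50% overhead

        print(round(pue(prineville_total_kw, it_load_kw), 2))   # 1.07
        print(round(pue(legacy_total_kw, it_load_kw), 2))       # 1.5
        # For the same 1,000 kW of IT load, facility overhead drops from ~500 kW to ~70 kW.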

    Everyone has full access to these specifications, which are available at http://opencompute.org/. We want you to tell us where we didn’t get it right and suggest how we could improve. And opening the technology means the community will make advances that we wouldn’t have discovered if we had kept it secret.

    Having efficient software and servers means we can support more people on Facebook and offer them new and real-time social experiences -- such as the ability to see comments appear the instant they are written or see friends of friends appear dynamically as you search.

    In developing the Open Compute Project we rethought our previous assumptions about infrastructure and efficiency in order to generate a better outcome. The result: energy-efficient technology saves money, both on capital and operational costs.

    Starting the Dialogue

    The ultimate goal of the Open Compute Project, however, is to spark a collaborative dialogue. We’re already talking with our peers about how we can work together on Open Compute Project technology. We want to recruit others to be part of this collaboration -- and we invite you to join us in this mission to collectively develop the most efficient computing infrastructure possible.

    To get a behind-the-scenes look at the birth of the project, watch this video:


    Jonathan is Vice President of Technical Operations at Facebook.
     
  2. JamesCooper

    JamesCooper Member




    Facebook’s Open Compute: The Data Center is the New Server and the Rise of the Taiwanese Tigers


    Today Facebook took the great step of openly talking about their server and datacenter designs at a level of detail where they can actually be replicated by others. Another reason why I call it “great”? Well, it’s interesting that the sourcing and design of these was done by Facebook together with Taiwanese component makers. Nothing new for many of us working in the industry, but it’s something that’s often not discussed in the press when talking about US server companies.
    If you take a look at the Facebook Open Compute server page and listen to the video with Frank Frankovsky, you’ll hear a few company names mentioned. Many of them might not be familiar to you. Frank is the Director of Hardware Design and Supply Chain at Facebook, and he used to be at Dell DCS (the Data Center Solutions group), where he was the first technologist. One last piece of trivia: he was also the technologist who covered Joyent. We were lucky enough to buy servers from him and Steve six years ago, and we went out for sushi when he was down here interviewing.

    So who made the boxes?
    • The chassis is made by MiTAC-SYNNEX, based here in the US, where they’re just called Synnex. MiTAC-SYNNEX also owns Tyan (who made a lot of Sun stuff) and Magellan GPS as part of their 40+ companies and brands.
    • The power supply is made by Delta Electronics. Delta is where Ben Jai went after spending over 7 years at Google. Ben was the first hardware engineer at Google and was responsible for a number of Google’s server designs.
    • The motherboard is made by Quanta.
    Synnex, Quanta, and Delta were already the source suppliers, and they were simply able to iterate on a design faster and move it into production, because this is actually what they do. These aren’t little companies either: they each earn US$20-60 billion per year in revenue.

    Greater China is also the largest producer and consumer of rare earth metals. Think about it.

    Another big Taiwanese company worth mentioning is Inventec. They’re actually the “biggest server ODM and one of the top 4 Notebook makers worldwide.” What is an ODM, you ask? According to Wikipedia, “an original design manufacturer (ODM) is a company which designs and manufactures a product which is specified and eventually branded by another firm for sale.”

    The phenomenon is this.

    Taiwanese companies were OEMs of components. Then they became ODMs of components (Delta, for example, has around 90% or more of the worldwide power supply market share) and OEMs for full systems (e.g. servers).

    What’s happened over the last few years?

    They’ve become ODMs for servers, storage, and networking -- the old “boxes” -- and they’re able to collaborate with companies like us, Facebook, Google, Amazon, et al.

    Why have they moved up the chain?

    Because the new box is the datacenter (it used to be the PC; now it’s the DC), the walls of the datacenter are the chassis, and the PC-style servers are just components in that box, no different from a power supply or a motherboard.

    Source: http://joyeur.com/2011/04/07/facebo...-server-and-the-rise-of-the-taiwanese-tigers/
     
  3. klyster

    klyster Member




    Love it...
     
