Open Compute Wants to Overhaul Data Center Racks

The Open Rack design, which was unveiled today at the Open Compute Summit in San Antonio, also offers an innovative approach to power management. The design features busbars supplying 12-volt power to servers, eliminating the need for individual power supplies for each server. Open Rack also offers standard interfaces for mechanical and electrical components.

via Open Compute Wants to Overhaul Data Center Racks » Data Center Knowledge.

Coolest jobs in tech (literally): running a South Pole data center

That mission demands a level of reliability that many less remote data centers cannot provide. Raytheon Polar Services held the National Science Foundation’s Antarctic programs support contract until April. As Dennis Gitt, a former director of IT and communications services for the company puts it, a failure anywhere in the Antarctic systems could lose data from events in space that may not be seen again for millennia.

via Coolest jobs in tech (literally): running a South Pole data center.

With a maximum population of 150 at the base during the Austral summer, South Pole IT professionals-in-residence are limited to a select few. And they don’t get to stay long—most of the WIPAC IT team only stays for a few months in the summer, during which they have to complete all planned IT infrastructure projects.

U.S. tells court MegaUpload users are out of luck

Carpathia wants the court to help pay the costs of preserving MegaUpload’s data, which it claims is more than $500,000 and growing, or protect Carpathia from civil claims, should it decide to delete the information. Carpathia has said that in most cases where a customer can no longer pay for service, the servers are wiped and used elsewhere. Should that happen in this case, potentially millions of former MegaUpload users around the world would lose data — though how much content was legally obtained is unclear.

via U.S. tells court MegaUpload users are out of luck | Media Maverick – CNET News.

Carpathia Hosting seems to be a truly innocent victim here. Somehow I predict the US taxpayer will end up footing the bill for all of this, and Carpathia Hosting can start learning the art of cost-plus billing.

Mystery Men Forge Servers For Giants of Internet

Hyve Solutions was created to serve the world’s “large-scale internet companies” — companies increasingly interested in buying servers designed specifically for their sweeping online operations. Because their internet services are backed by such an enormous number of servers, these companies are looking to keep the cost and the power consumption of each system to a minimum. They want something a little different from the off-the-shelf machines purchased by the average business. “What we saw was a migration from traditional servers to more custom-built servers,” says Hyve senior vice president and general manager Steve Ichinaga. “The trend began several years ago with Google, and most recently, Facebook was added to the ranks of companies who want this kind of solution.”

via Mystery Men Forge Servers For Giants of Internet | Wired Enterprise | Wired.com.

Hyve is a place where internet giants can go if they want Open Compute servers. But even before Hyve was created, Synnex was working for the big internet names. It has long provided custom machines for Rackspace — the San Antonio, Texas company that offers infrastructure services across the net at a scale rivaled only by Amazon.

Super-secret Google builds servers in the dark

Google is one of many big-name Web outfits that lease data center space from Equinix—a company whose massive computing facilities serve as hubs for the world’s biggest Internet providers. All the big Web names set up shop in these data centers, so that they too can plug into the hub. The irony is that they must also share space with their biggest rivals, and this may cause some unease with companies that see their hardware as a competitive advantage best hidden from others.

via Super-secret Google builds servers in the dark.

Google declined to comment on Sharp’s little anecdote. But the tale is not surprising. Google designs its own servers and its own networking gear, and though it still leases space in third-party data centers such as the Equinix facility, it’s now designing and building its own data centers as well. These designs are meant to improve the performance of the company’s Web services but also save power and money. More so than any other outfit, Google views its data center work as an important advantage over competitors.

Estimate: Amazon Cloud Backed by 450,000 Servers

How many servers does it take to power Amazon’s huge cloud computing operation? Like many large Internet companies, Amazon doesn’t disclose details of its infrastructure, including how many servers it uses. But a researcher estimates that Amazon Web Services is using at least 454,400 servers in seven data center hubs around the globe.

Huan Liu, a research manager at Accenture Technology Labs, analyzed Amazon’s EC2 compute service using internal and external IP addresses, which he extrapolated to come up with estimates for the number of racks in each data center location. Liu then applied an assumption of 64 blade servers per rack – four 10U chassis, each holding 16 blades – to arrive at the estimate.
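Liu's arithmetic is easy to reproduce. A minimal sketch, with the rack count back-derived from his published total (his original post has the actual per-region breakdown):

```python
# Back-of-the-envelope reproduction of Liu's estimate.
# The rack count is derived from his published total, not his raw IP data.
SERVERS_PER_RACK = 64    # Liu's assumption: 64 blade servers per rack
estimated_racks = 7_100  # implied: 454,400 servers / 64 servers per rack

total_servers = estimated_racks * SERVERS_PER_RACK
print(f"{total_servers:,} servers")  # 454,400 servers
```

The multiplier is the load-bearing assumption: double the blades per rack and the whole estimate doubles with it.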

via Estimate: Amazon Cloud Backed by 450,000 Servers » Data Center Knowledge.

Sweet Mother of God.

How Web giants store big—and we mean big—data

The Great Disk Drive in the Sky: How Web giants store big—and we mean big—data.

The need for this kind of perpetually scalable, durable storage has driven the giants of the Web—Google, Amazon, Facebook, Microsoft, and others—to adopt a different sort of storage solution: distributed file systems based on object-based storage. These systems were at least in part inspired by other distributed and clustered filesystems such as Red Hat’s Global File System and IBM’s General Parallel Filesystem.

And one more blurb…

Google wanted to turn large numbers of cheap servers and hard drives into a reliable data store for hundreds of terabytes of data that could manage itself around failures and errors. And it needed to be designed for Google’s way of gathering and reading data, allowing multiple applications to append data to the system simultaneously in large volumes and to access it at high speeds.

ARM Discloses Technical Details Of The Next Version Of The…

“The current growth trajectory of data centers, driven by the viral explosion of social media and cloud computing, will continue to accelerate. The ability to handle this data increase with energy-efficient solutions is vital,” said Vinay Ravuri, vice president and general manager of AppliedMicro’s Processor Business Unit. “The ARM 64-bit architecture provides the right balance of performance, efficiency and cost to scale to meet these growing demands and we are very excited to be a leading partner in implementing solutions based on the ARMv8 architecture.”

via ARM Discloses Technical Details Of The Next Version Of The… – ARM.

Managed DNS Advanced Feature: Active Failover

Datacenter and/or server failures are no fun for anyone, especially those responsible for website operations. If you’ve protected yourself by using Active Failover — an advanced feature available for DynECT Managed DNS users — your site will remain live and accessible without any of your visitors knowing the difference.
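The core of any active-failover scheme is simple: health-check the primary endpoint, and hand out the standby's address when the check fails. A minimal sketch of that selection logic — the addresses are placeholders (TEST-NET), and DynECT of course runs its checks from its own monitoring network and updates the DNS answer for you:

```python
import socket

# Placeholder endpoints for illustration only (RFC 5737 TEST-NET addresses).
PRIMARY = ("192.0.2.10", 80)
STANDBY = ("192.0.2.20", 80)

def healthy(addr, timeout=2.0):
    """TCP connect check: True if the endpoint accepts a connection."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def answer():
    """The address a DNS query should be answered with right now."""
    return PRIMARY[0] if healthy(PRIMARY) else STANDBY[0]
```

In practice the record's TTL matters as much as the check: the switch only takes effect once resolvers' cached answers expire, which is why failover-backed records are published with short TTLs.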

via Managed DNS Advanced Feature:Active Failover – Dyn.