Janitor Monkey detects AWS instances, EBS volumes, EBS volume snapshots, and auto-scaling groups. Each of these resource types has its own rules for identifying unused resources. Janitor Monkey determines whether a resource is a cleanup candidate by applying a set of rules to it; if any rule flags the resource, Janitor Monkey marks it and schedules a time to clean it up. For example, an EBS volume is marked as a cleanup candidate if it has not been attached to any instance for 30 days.
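A minimal sketch of what that rule-based marking loop could look like, in Python rather than Janitor Monkey's actual Java code; the Resource and rule classes are hypothetical stand-ins, and only the 30-day unattached-volume rule comes from the description above:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Resource:
    """Hypothetical stand-in for a tracked AWS resource."""
    resource_id: str
    resource_type: str                     # e.g. "EBS_VOLUME"
    last_attached: datetime | None = None
    marked: bool = False
    cleanup_time: datetime | None = None

class UnattachedVolumeRule:
    """Flags an EBS volume that has not been attached for threshold_days."""
    def __init__(self, threshold_days: int = 30, retention_days: int = 7):
        self.threshold = timedelta(days=threshold_days)
        self.retention = timedelta(days=retention_days)

    def is_cleanup_candidate(self, resource: Resource) -> bool:
        if resource.resource_type != "EBS_VOLUME":
            return False
        if resource.last_attached is None:
            return True
        return datetime.utcnow() - resource.last_attached > self.threshold

def mark_resources(resources, rules):
    """Apply every rule to every resource; a single match marks the resource."""
    for resource in resources:
        for rule in rules:
            if rule.is_cleanup_candidate(resource):
                resource.marked = True
                resource.cleanup_time = datetime.utcnow() + rule.retention
                break                      # one matching rule is enough
    return [r for r in resources if r.marked]
```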
Tag Archives: data centers
Building Amazon cloud apps that span the world is now much easier
Despite being in “the cloud,” Amazon Web Services has always required developers to know what they’re doing. Customers still have to manage a lot of the infrastructure even though they’re not monitoring physical servers and storage. But that doesn’t mean everything has to be a hassle, so Amazon’s move to simplify something as important as disaster recovery is a big step in the right direction.
via Building Amazon cloud apps that span the world is now much easier | Ars Technica.
Alcatel-Lucent Has a Top-Secret SDN Startup!
Overlay networks as proposed by companies such as Nicira Networks Inc., now owned by VMware Inc. (NYSE: VMW), are “an important step, but what if you had a data center that had to serve 10,000 customers, and every customer had a complex topology? That’s the real world, and that’s not easy,” Alwan says.
Plexxi’s SDN Really Flattens the Data Center
It’s all run by a controller that’s centralized but also includes a federated piece distributed to each switch. The setup is similar to the way OpenFlow gets deployed, but the inner workings are very different (and no, OpenFlow itself isn’t supported yet). Plexxi uses algorithms and a global view of the network to decide how to configure the network.
In other words, rather than programming route tables, the controller looks at the needs of the workloads and calculates how the network ought to be used. Some of this can even happen automatically.
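A rough illustration of that workload-first idea (not Plexxi's actual controller, whose internals aren't public): the controller starts from a global view of which workloads talk to which, and derives how much capacity each switch-to-switch link should get, rather than exposing route tables to the operator. All names and traffic figures below are invented:

```python
from collections import defaultdict

# Invented inputs: where each workload runs and how much traffic (Gbps)
# flows between workload pairs. A real controller would learn these.
placement = {"web": "switch-1", "app": "switch-2", "db": "switch-3"}
affinity = {("web", "app"): 8.0, ("app", "db"): 12.0, ("web", "db"): 1.0}

def desired_link_capacity(placement, affinity):
    """Aggregate workload affinities into desired capacity per switch pair."""
    demand = defaultdict(float)
    for (a, b), gbps in affinity.items():
        sa, sb = placement[a], placement[b]
        if sa != sb:            # traffic within one switch never crosses the fabric
            demand[tuple(sorted((sa, sb)))] += gbps
    return dict(demand)

print(desired_link_capacity(placement, affinity))
# {('switch-1', 'switch-2'): 8.0, ('switch-2', 'switch-3'): 12.0, ('switch-1', 'switch-3'): 1.0}
```

The controller would then translate that demand view into per-switch configuration, which is roughly what “looking at the needs of the workloads” means in practice.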
zBox4 Construction
NYC Data Centers Struggle to Recover After Sandy
The fight now is to keep those generators fueled while pumps clear the basement areas, allowing the standard backup generators to begin operating. It’s also unclear whether the critical elements of infrastructure (power and communications) will both be up and running in time to restore services.
Below is a list of some of the data centers and services in the area, and how they’re faring:
Phoenix NAP’s Response to Kasim Reed Shows Its Unreliability as a Data Center
Instead of blowing off the letter as patently contradicted by section 230, Phoenix NAP took the entire Lipstick Alley web site offline without any notice. In response to a strong protest, Phoenix NAP acknowledged that its failure to give notice was a mistake in process, but it had no sympathy for Lipstick Alley’s legal rights; Phoenix NAP told me that it takes claims of defamation seriously and, without regard to the merits of the dispute, its customers must “resolve the issue with the complaining party.”
via Phoenix NAP’s Response to Kasim Reed Shows Its Unreliability as a Data Center (CL&P Blog).
Tier 1 Carriers Tackle Telco SDN
That approach tallies with the new network vision laid out by Michel’s colleague, Axel Clauberg, DT’s vice president of IP Architecture and Design, earlier this year. That vision sees SDN protocols being deployed in data centers and access networks but not in telecom operator core networks. (See DT Unveils New Network Vision.)
How Google Cools Its Armada of Servers
Here’s how the airflow works: The temperature in the data center is maintained at 80 degrees, somewhat warmer than in most data centers. That 80-degree air enters the server inlet and passes across the components, becoming warmer as it removes the heat. Fans in the rear of the chassis guide the air into an enclosed hot aisle, which reaches 120 degrees as hot air enters from rows of racks on either side. As the hot air rises to the top of the chamber, it passes through the cooling coil, is cooled to room temperature, and is then exhausted through the top of the enclosure. The flexible piping connects to the cooling coil at the top of the hot aisle, descends through an opening in the floor, and runs under the raised floor.
via How Google Cools Its Armada of Servers » Data Center Knowledge.
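Those figures are enough for a back-of-the-envelope airflow estimate. Assuming the quoted temperatures are Fahrenheit (an 80 F inlet and a 120 F hot aisle give a 40-degree rise) and assuming a hypothetical 300 W server, the standard sea-level sensible-heat approximation Q[BTU/hr] ≈ 1.08 × CFM × ΔT[°F] gives the airflow each machine needs:

```python
# Back-of-the-envelope airflow estimate from the temperatures quoted above.
# The 300 W server power is an assumption, not a figure from the article.
WATTS_TO_BTU_PER_HR = 3.412

def required_cfm(server_watts: float, inlet_f: float, exhaust_f: float) -> float:
    """Cubic feet per minute of air needed to carry server_watts of heat away."""
    delta_t = exhaust_f - inlet_f
    return server_watts * WATTS_TO_BTU_PER_HR / (1.08 * delta_t)

print(round(required_cfm(300, 80, 120), 1))   # ~23.7 CFM for a 300 W server
```

The hot-aisle containment is what makes the arithmetic favorable: the larger the rise between inlet and exhaust, the less air the fans have to move per watt of heat removed.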
Google’s custom servers also have a bare-bones look and feel, with components exposed for easy access as they slide in and out of racks. This makes it simple for admins to replace components, and it also avoids the cost of cosmetic trappings common to OEM servers.
Is a Wireless Data Center Possible?
In a new paper, a team of researchers from Cornell and Microsoft concluded that a data-center operator could replace hundreds of feet of cable with 60-GHz wireless connections, assuming that the servers themselves are redesigned into prism-shaped containers arranged in cylindrical racks, with the blade servers handling both intra- and inter-rack connections wirelessly.
via Is a Wireless Data Center Possible?.
Although many 60-GHz technologies are under consideration (IEEE 802.15.3c and 802.11ad, WiGig, and others), the authors picked a Georgia Tech design with bandwidth between 4 and 15 Gbps and an effective range of 10 meters or less. Beam-steering wasn’t used because of the latencies involved in reinstating a dropped connection, although both time and frequency multiplexing were. (Because the team couldn’t actually build the design, they chose Terabeam/HXI 60-GHz transceivers for a conservative estimate.)
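Using only the numbers quoted above (a range of 10 meters or less), a toy geometry check shows how rack placement determines which pairs can talk in a single hop; the 8 x 8 grid and 2.5 m spacing below are invented for illustration and are not from the paper:

```python
from itertools import combinations
from math import dist

RANGE_M = 10.0      # effective 60-GHz range quoted above
SPACING_M = 2.5     # assumed center-to-center rack spacing (not from the paper)

# Hypothetical 8 x 8 grid of rack positions on the data center floor.
racks = [(x * SPACING_M, y * SPACING_M) for x in range(8) for y in range(8)]

direct = sum(1 for a, b in combinations(racks, 2) if dist(a, b) <= RANGE_M)
total = len(racks) * (len(racks) - 1) // 2
print(f"{direct} of {total} rack pairs are reachable in a single 60-GHz hop")
# Pairs beyond the range would need multi-hop forwarding through intermediate racks.
```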