Computer cluster

A computer cluster is a group of linked computers, working together closely thus in many respects forming a single computer. The components of a cluster are commonly, but not always, connected to each other through fast local area networks. Clusters are usually deployed to improve performance and availability over that of a single computer, while typically being much more cost-effective than single computers of comparable speed or availability.[1]

via Computer cluster – Wikipedia, the free encyclopedia.

And now I will put together clusters for high availability and load balancing. This is where VMs come in handy. I think I prefer loosely coupled clusters: clusters whose individual nodes are on separate power grids.

It’s always nice to have a succinct definition handy to stay focused.

Texas Advanced Computing Center

When completed, Stampede will comprise several thousand Dell “Zeus” servers with each server having dual 8-core processors from the forthcoming Intel® Xeon® Processor E5 Family (formerly codenamed “Sandy Bridge-EP”) and each server with 32 gigabytes of memory. This production system will offer almost 2 petaflops of peak performance, which is double the current top system in XD, and the real performance of scientific applications will see an even greater performance boost due to the newer processor and interconnect technologies. The cluster will also include a new innovative capability: Intel® Many Integrated Core (MIC) co-processors codenamed “Knights Corner,” providing an additional 8 petaflops of performance. Intel MIC co-processors are designed to process highly parallel workloads and provide the benefits of using the most popular x86 instruction set. This will greatly simplify the task of porting and optimizing applications on Stampede to utilize the performance of both the Intel Xeon processors and Intel MIC co-processors.

via Texas Advanced Computing Center.

$1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud

The cluster, announced publicly this week, was created for an unnamed “Top 5 Pharma” customer, and ran for about seven hours at the end of July at a peak cost of $1,279 per hour, including the fees to Amazon and Cycle Computing. The details are impressive: 3,809 compute instances, each with eight cores and 7GB of RAM, for a total of 30,472 cores, 26.7TB of RAM and 2PB (petabytes) of disk space. Security was ensured with HTTPS, SSH and 256-bit AES encryption, and the cluster ran across data centers in three Amazon regions in the United States and Europe. The cluster was dubbed “Nekomata.”

via $1,279-per-hour, 30,000-core cluster built on Amazon EC2 cloud.
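The numbers in that quote invite a quick back-of-envelope check. The sketch below only uses figures from the article itself; since the run lasted "about seven hours" at a "peak" rate, treat the results as rough upper-bound estimates rather than the actual bill.

```python
# Figures quoted in the article; "about seven hours" and "peak cost"
# make everything below an approximate upper bound.
cores = 30_472
hours = 7
peak_rate = 1_279  # USD per hour, Amazon + Cycle Computing fees combined

core_hours = cores * hours
total_cost = peak_rate * hours
cost_per_core_hour = total_cost / core_hours

print(f"core-hours:        {core_hours:,}")
print(f"total cost (peak): ${total_cost:,}")
print(f"per core-hour:     ${cost_per_core_hour:.4f}")
```

Call it roughly nine thousand dollars for a seven-hour, 30,000-core run: around four cents per core-hour, which is the number that made this announcement noteworthy.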

The MySQL Cluster API Developer Guide

This guide provides information for developers wishing to develop applications against MySQL Cluster. Application interfaces covered include the low-level C++-language NDB API for the MySQL NDBCLUSTER storage engine, the C-language MGM API for communicating with and controlling MySQL Cluster management servers, and the MySQL Cluster Connector for Java, which is a collection of Java APIs introduced in MySQL Cluster NDB 7.1 for writing applications against MySQL Cluster, including JDBC, JPA, and ClusterJ.

via MySQL :: The MySQL Cluster API Developer Guide.

Adding MySQL Cluster Data Nodes Online

The redistribution for NDBCLUSTER tables already existing before the new data nodes were added is not automatic, but can be accomplished using simple SQL statements in mysql or another MySQL client application. However, all data and indexes added to tables created after a new node group has been added are distributed automatically among all cluster data nodes, including those added as part of the new node group.

via MySQL :: MySQL Cluster :: 5.12.1 Adding MySQL Cluster Data Nodes Online: General Issues.
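The distinction in that quote is easy to miss, so here is a toy Python illustration of it (my own sketch, not NDB's actual hashing): rows are placed by hashing their key across the node groups that exist at insert time, so pre-existing rows stay on the old groups until an explicit reorganization — the manual step the documentation describes (in MySQL Cluster that is an `ALTER ONLINE TABLE ... REORGANIZE PARTITION` statement) — while rows inserted afterwards spread across all groups automatically.

```python
# Toy illustration (not the NDB implementation): a row lands on the node
# group chosen by hashing its key over the groups known at insert time.

def place(key, node_groups):
    """Pick a node group for a row by hashing its key."""
    return key % node_groups  # stand-in for NDB's internal hash

old_groups = 2
rows = {k: place(k, old_groups) for k in range(8)}   # inserted with 2 groups

new_groups = 3                                       # node group added online
# New rows use all three groups automatically...
new_rows = {k: place(k, new_groups) for k in range(8, 16)}
# ...but the old rows only move after the explicit reorganize step:
reorganized = {k: place(k, new_groups) for k in rows}

print("old rows before reorganize:", sorted(set(rows.values())))
print("old rows after reorganize: ", sorted(set(reorganized.values())))
```

Until the reorganize runs, the new node group holds only newly inserted data, which is exactly why the documentation calls the redistribution "not automatic".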

ClusterLabs

Pacemaker keeps your applications running when they or the machines they’re running on fail. However, it can’t do this without connectivity to the other machines in the cluster – a significant problem in its own right.

Rather than reinvent the wheel, Pacemaker supports existing implementations such as Heartbeat. Heartbeat provides:

  • a mechanism to reliably send messages between nodes,
  • notifications when machines appear and disappear, and
  • a list of machines that are up that is consistent throughout the cluster.

Heartbeat was also the first stack supported by the Pacemaker codebase.

via FAQ – ClusterLabs.
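That third bullet — a consistent list of machines that are up — boils down to a simple idea, sketched below in Python. This is my own toy model, not Heartbeat's actual protocol: each node records a timestamp when it is heard from, and any node silent longer than a deadtime threshold drops out of the membership list.

```python
# Toy membership sketch (an assumption, not Heartbeat's real protocol):
# nodes that miss their heartbeat deadline are reported as down.

DEADTIME = 2.0  # seconds of silence before a node is declared dead

last_seen = {}  # node name -> last heartbeat timestamp

def heartbeat(node, now):
    """Record that `node` was heard from at time `now`."""
    last_seen[node] = now

def membership(now):
    """Nodes heard from within DEADTIME of `now`."""
    return sorted(n for n, t in last_seen.items() if now - t <= DEADTIME)

heartbeat("alpha", 0.0)
heartbeat("beta", 0.0)
heartbeat("alpha", 1.5)    # alpha keeps beating; beta goes silent

print(membership(now=1.0)) # both still up
print(membership(now=3.0)) # beta missed its deadline
```

Because every node computes membership from the same heartbeat stream, they all converge on the same up-list — which is what lets Pacemaker decide where to restart a failed application.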

Linux Server Cluster for Load Balancing

The Linux Virtual Server is a highly scalable and highly available server built on a cluster of real servers, with the load balancer running on the Linux operating system. The architecture of the server cluster is fully transparent to end users, who interact with it as if it were a single high-performance virtual server.

via The Linux Virtual Server Project – Linux Server Cluster for Load Balancing.
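The "fully transparent" part is the director's job: clients connect to one virtual address, and the director hands each connection to a real server behind it. Here is a minimal round-robin sketch of that idea in Python — an illustration of the scheduling concept only, not IPVS or ipvsadm; the server addresses are made up.

```python
from itertools import cycle

# Toy director (concept sketch, not IPVS): each new connection goes to
# the next real server in turn, so clients see one "virtual server".

class Director:
    def __init__(self, real_servers):
        self._pool = cycle(real_servers)

    def schedule(self):
        """Pick the real server for the next incoming connection."""
        return next(self._pool)

lvs = Director(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # hypothetical addresses
assignments = [lvs.schedule() for _ in range(6)]
print(assignments)
```

Round-robin is only the simplest of LVS's schedulers; the real project also offers weighted and least-connection variants for unequal servers, but the director-in-front-of-real-servers shape is the same.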

Set up a Web server cluster in 5 easy steps

This article illustrates the robust Apache Web server stack with 6 Apache server nodes (though 3 nodes are sufficient for following the steps outlined here) as well as 3 Linux Virtual Server (LVS) directors. We used 6 Apache server nodes to drive higher workload throughputs during testing and thereby simulate larger deployments.

via Set up a Web server cluster in 5 easy steps.