Rensselaer Polytechnic Institute and Lawrence Livermore Scientists Set a New Simulation Speed Record on the Sequoia Supercomputer

The records were set using the ROSS (Rensselaer’s Optimistic Simulation System) simulation package developed by Carothers and his students, together with the Time Warp synchronization algorithm originally developed by Jefferson.

“The significance of this demonstration is that direct simulation of ‘planetary scale’ models is now, in principle at least, within reach,” Barnes said. “‘Planetary scale’ in the context of the joint team’s work means simulations large enough to represent all 7 billion people in the world or the entire Internet’s few billion hosts.”

via RPI: News & Events – Rensselaer Polytechnic Institute and Lawrence Livermore Scientists Set a New Simulation Speed Record on the Sequoia Supercomputer.

Maybe they can get SimCity modeled correctly.
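
For the curious, here is a minimal C++ sketch of the optimistic idea behind Time Warp (hypothetical names throughout; ROSS's actual machinery is far more involved): a logical process applies events speculatively and checkpoints its state, so that a late-arriving "straggler" event can roll it back to an earlier virtual time.

#include <cstdio>
#include <map>

// Hypothetical single logical process (LP); real Time Warp systems run
// many LPs exchanging timestamped messages in parallel.
struct LogicalProcess {
    int state = 0;                     // the simulated state
    double lvt = 0.0;                  // local virtual time
    std::map<double, int> snapshots;   // virtual time -> state before that event

    void process(double t, int delta) {
        if (t < lvt) rollback(t);      // straggler arrived: undo optimistic work
        snapshots[t] = state;          // checkpoint, then apply the event
        state += delta;
        lvt = t;
    }

    void rollback(double t) {
        auto it = snapshots.lower_bound(t);      // first event at time >= t
        if (it != snapshots.end()) {
            state = it->second;                  // restore pre-event state
            snapshots.erase(it, snapshots.end());
        }
        lvt = t;
        // A full Time Warp LP would also re-execute the rolled-back events
        // from its input queue and send anti-messages to cancel any messages
        // it emitted after time t; both are omitted in this sketch.
    }
};

int main() {
    LogicalProcess lp;
    lp.process(1.0, +5);   // applied in order
    lp.process(3.0, +7);   // applied optimistically
    lp.process(2.0, +1);   // straggler: forces a rollback past t = 3.0
    std::printf("state = %d, lvt = %.1f\n", lp.state, lp.lvt);
}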

Fast Database Emerges from MIT Class, GPUs and Student’s Invention

During the class in the spring of 2012 he learned CUDA, NVIDIA’s parallel programming platform for GPUs, and that opened the door to dividing advanced computations across the GPU’s massively parallel architecture.

He knew he had something when he wrote an algorithm to connect millions of points on a map, joining the data together spatially. The performance of his GPU-based computations compared to the same operation done with CPU power on PostGIS, the GIS module for the open-source database PostgreSQL, was “mind-blowing,” he said.

via Fast Database Emerges from MIT Class, GPUs and Student’s Invention.
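
To make "massively parallel" concrete, here is a tiny CUDA C++ sketch of the one-thread-per-point pattern (hypothetical, and nothing like the student's actual code): each GPU thread computes the squared distance from one map point to a query point.

#include <cstdio>
#include <cuda_runtime.h>

// One GPU thread per map point: each thread computes the squared
// distance from its point to a fixed query point.
__global__ void squared_distances(const float* xs, const float* ys,
                                  float qx, float qy, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) {
        float dx = xs[i] - qx, dy = ys[i] - qy;
        out[i] = dx * dx + dy * dy;
    }
}

int main() {
    const int n = 1 << 20;                           // about a million points
    float *xs, *ys, *out;
    cudaMallocManaged(&xs, n * sizeof(float));       // unified memory, for brevity
    cudaMallocManaged(&ys, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) { xs[i] = float(i % 1000); ys[i] = float(i % 777); }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;        // enough blocks to cover all n points
    squared_distances<<<blocks, threads>>>(xs, ys, 500.f, 388.f, out, n);
    cudaDeviceSynchronize();

    std::printf("out[0] = %f\n", out[0]);
    cudaFree(xs); cudaFree(ys); cudaFree(out);
}

A spatial join like the one described would add an index structure on top, but the core appeal is the same: millions of independent per-point computations mapped onto thousands of GPU threads.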

A New Approach to Databases and Data Processing

The simplest way to handle more data using more cores (whether on a single machine or in a cluster) is to partition it into disjoint subsets and work on each subset in isolation. In the database world this is called sharding, and it makes scaling relatively simple; the only downside is that it makes writing compellingly complex applications hard.

via Parallel Universe • A New Approach to Databases and Data Processing — Simulating 10Ks of Spaceships on My Laptop.
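
As a concrete sketch of sharding (hypothetical names throughout, in C++): each key is hash-assigned to exactly one shard, so reads and writes stay shard-local, and anything that spans shards is where the complexity the excerpt mentions comes from.

#include <array>
#include <cstdio>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical key-value store split into disjoint shards by key hash.
// Each put or get touches exactly one shard, so the shards could live
// on separate cores or machines without coordinating.
struct ShardedStore {
    static constexpr size_t kShards = 4;
    std::array<std::unordered_map<std::string, int>, kShards> shards;

    size_t shard_of(const std::string& key) const {
        return std::hash<std::string>{}(key) % kShards;  // disjoint partition
    }
    void put(const std::string& key, int value) { shards[shard_of(key)][key] = value; }
    int  get(const std::string& key)            { return shards[shard_of(key)][key]; }
};

int main() {
    ShardedStore store;
    store.put("alice", 1);
    store.put("bob", 2);
    // A transaction or join over both keys may now span two shards;
    // that cross-shard coordination is exactly what makes complex
    // applications hard to write on a sharded store.
    std::printf("alice = %d, bob = %d\n", store.get("alice"), store.get("bob"));
}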

Amdahl’s law

Amdahl’s law models the expected speedup of a parallelized implementation of an algorithm relative to the serial implementation, under the assumption that the problem size stays the same when parallelized. For example, if for a given problem size a parallelized implementation can run 12% of the algorithm’s operations arbitrarily quickly while the remaining 88% of the operations are not parallelizable, Amdahl’s law states that the maximum speedup of the parallelized version is 1/(1 - 0.12) ≈ 1.136 times the speed of the non-parallelized implementation.

via Amdahl’s law – Wikipedia, the free encyclopedia.
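
In general, if a fraction p of the work can be sped up by a factor s, the law reads (standard form, stated here for reference):

S(s) = \frac{1}{(1 - p) + p/s}

so with p = 0.12 and s taken to infinity,

\lim_{s \to \infty} S(s) = \frac{1}{1 - p} = \frac{1}{0.88} \approx 1.136

which reproduces the figure above. The non-parallelizable 88% dominates no matter how many cores you throw at the other 12%.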

About the OpenMP ARB and OpenMP.org

The OpenMP Application Program Interface (API) supports multi-platform shared-memory parallel programming in C/C++ and Fortran on all architectures, including Unix platforms and Windows NT platforms. Jointly defined by a group of major computer hardware and software vendors, OpenMP is a portable, scalable model that gives shared-memory parallel programmers a simple and flexible interface for developing parallel applications for platforms ranging from the desktop to the supercomputer.

via OpenMP.org » About the OpenMP ARB and OpenMP.org.
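
For flavor, here is a minimal, standard OpenMP example in C++ (a sketch of the model, not taken from OpenMP.org): a single pragma splits the loop's iterations across cores, with a reduction clause combining per-thread partial sums.

#include <cstdio>
#include <omp.h>

int main() {
    const int n = 1000000;
    double sum = 0.0;

    // The pragma asks the runtime to divide the loop's iterations among
    // the available cores; reduction(+:sum) gives each thread a private
    // partial sum and adds them together at the end.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= n; ++i) {
        sum += 1.0 / i;                // each core handles a chunk of i
    }

    std::printf("H(%d) = %f (up to %d threads)\n", n, sum, omp_get_max_threads());
}

Built with any OpenMP-enabled compiler, e.g. g++ -fopenmp; without the flag, the pragma is ignored and the same code runs serially, which is much of OpenMP's portability appeal.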

IBM Parallel Sysplex

In computing, a Parallel Sysplex is a cluster of IBM mainframes acting together as a single system image with z/OS. Used for disaster recovery, Parallel Sysplex combines data sharing and parallel computing to allow a cluster of up to 32 systems to share a workload for high performance and high availability.

via IBM Parallel Sysplex – Wikipedia, the free encyclopedia.

Education | parallel.illinois.edu

For this reason, education is among the primary missions of the Parallel Computing Institute. Its offerings range from complete curricula in parallel computing, through the departments of Electrical and Computer Engineering and of Computer Science, to just-in-time workshops and seminars. PCI offers a broad selection of options for students and professionals, and it collaborates with organizations such as the CUDA Center of Excellence, the Universal Parallel Computing Research Center, the Cloud Computing Testbed, and NCSA, among others, to make these offerings widely available.

via Education | parallel.illinois.edu.