New 25 GPU Monster Devours Passwords In Seconds

In a test, the researcher’s system was able to churn through 348 billion NTLM password hashes per second. That renders even strong passwords vulnerable to compute-intensive brute-force and wordlist (or dictionary) attacks. A 14-character Windows XP password hashed with the legacy LM (LAN Manager) algorithm, for example, would fall in just six minutes, said Per Thorsheim, organizer of the Passwords^12 Conference.

via Update: New 25 GPU Monster Devours Passwords In Seconds | The Security Ledger.
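The arithmetic behind that 348-billion-hashes-per-second figure is sobering. A back-of-the-envelope sketch, assuming the commonly used 95-character printable-ASCII set:

```python
# Rough worst-case crack time for brute-forcing a fast hash like NTLM,
# at the 348 billion hashes/second rate quoted in the article.
RATE = 348e9          # hashes per second
CHARSET = 95          # printable ASCII characters
LENGTH = 8            # a typical "strong" password length

keyspace = CHARSET ** LENGTH      # every possible 8-character password
seconds = keyspace / RATE         # worst case: try them all
hours = seconds / 3600
print(f"{hours:.1f} hours to exhaust the full 8-character keyspace")
```

At that rate the entire 8-character keyspace falls in a little over five hours, which is why the fast, unsalted NTLM algorithm is such an attractive target.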

Hadoop Corona

Hadoop Corona is the next version of Map-Reduce. The current Map-Reduce has a single Job Tracker, which reached its limits at Facebook: it both manages cluster resources and tracks the state of every job. In Hadoop Corona, cluster resources are tracked by a central Cluster Manager, and each job gets its own Corona Job Tracker that tracks just that one job. The design provides several key improvements.

via hadoop-20/src/contrib/corona at master · facebook/hadoop-20 · GitHub.
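The split Corona describes, one central resource arbiter plus a lightweight per-job tracker, can be sketched in a few lines. The class and method names below are illustrative only, not Corona's actual API:

```python
# Toy model of the Corona design: one ClusterManager arbitrates
# resources, while each job gets its own tracker (unlike the old
# single global JobTracker, which did both).

class ClusterManager:
    def __init__(self, total_slots):
        self.free_slots = total_slots

    def grant(self, wanted):
        """Grant up to `wanted` compute slots to a requesting job."""
        granted = min(wanted, self.free_slots)
        self.free_slots -= granted
        return granted

class CoronaJobTracker:
    """Tracks exactly one job's state and resource grants."""
    def __init__(self, job_id, cluster):
        self.job_id = job_id
        self.cluster = cluster
        self.slots = 0

    def request_slots(self, wanted):
        self.slots += self.cluster.grant(wanted)
        return self.slots

cm = ClusterManager(total_slots=100)
job_a = CoronaJobTracker("job-a", cm)
job_b = CoronaJobTracker("job-b", cm)
print(job_a.request_slots(60), job_b.request_slots(60))  # 60 40
```

Because scheduling decisions live in the Cluster Manager and job bookkeeping lives in many small trackers, no single process has to scale with both the cluster size and the job count.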

Intel wants to micromanage tablet makers in the name of battery life

Intel even wants to dictate the components in displays—it wants manufacturers to begin putting small amounts of RAM into their display panels to make them capable of storing static images. That way, if a user is reading a document or webpage but not interacting with anything on the screen, the computer could display a static image of the screen rather than continuously refreshing it for no reason.

via Intel wants to micromanage tablet makers in the name of battery life | Ars Technica.
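What Intel is describing is essentially panel self-refresh: once the framebuffer stops changing, the panel repeats a copy cached in its own RAM instead of being fed identical frames. A toy model of the decision logic (not any real driver interface):

```python
# Toy model of panel self-refresh: if the incoming frame is unchanged,
# the panel serves its cached copy and the GPU/display link can idle.

class Panel:
    def __init__(self):
        self.cached_frame = None
        self.refreshes_from_gpu = 0

    def show(self, frame):
        if frame == self.cached_frame:
            return "self-refresh"      # static image: use panel RAM
        self.cached_frame = frame      # frame changed: full update
        self.refreshes_from_gpu += 1
        return "gpu-refresh"

panel = Panel()
results = [panel.show(f) for f in ["page1", "page1", "page1", "page2"]]
print(results, panel.refreshes_from_gpu)
```

In the reading-a-document case the article describes, almost every frame hits the cached path, which is where the battery savings come from.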

ARM Information Center

Welcome to the ARM Infocenter. The Infocenter contains all ARM non-confidential Technical Publications, including:

Via ARM Information Center.

Engineers Build Supercomputer Using Raspberry Pi, Lego

The team wants “Iridis-Pi” to become an inspiration for students, enabling them “to apply high-performance computing and data handling techniques to tackle complex engineering and scientific challenges.”

via Engineers Build Supercomputer Using Raspberry Pi, Lego – ParityNews.com: …Because Technology Matters.

Howto is here.

Steps to make a Raspberry Pi Supercomputer

The steps to make a Raspberry Pi supercomputer can be downloaded here: Raspberry Pi Supercomputer (PDF).

You can also follow the steps here Raspberry Pi Supercomputer (html).

Low-Power Slab Server Pairs ARM with Linux

While Baserock Linux was first developed around the x86-64 platform, its developers planned the leap to the ARM platform. Each Slab CPU node consists of a Marvell quad-core 1.33-GHz Armada XP ARM chip, 2 GB of ECC RAM, a Cogent Computer Systems CSB1726 SoM, and a 30 GB solid-state drive. The nodes are connected to the high-speed network fabric, which includes two links per compute node driving 5 Gbits/s of bonded bandwidth to each CPU, with wire-speed switching and routing at up to 119 million packets per second.

via Low-Power Slab Server Pairs ARM with Linux.

Rootbeer GPU Compiler Lets Almost Any Java Code Run On the GPU

Programs don’t magically become faster when they are run on GPUs. Linear-algebra algorithms, for example, work really well on CPUs, but ported one-to-one to a GPU (as this would do) their performance is abysmal. Usually one needs a specially designed algorithm that can actually exploit the massive parallelism of a GPU without getting stuck synchronizing or doing other kinds of communication. GPUs really like doing the same operation on independent data, which is basically what happens when rendering an image; they are not designed for operations that need information from all the other data, or from neighbouring cells in a grid. Just because something runs on a GPU does not mean it’s efficient, so performance could end up much worse on the GPU.

Balancing CPU and GPU usage is even harder (maybe impossible?), because you cannot predict what kind of system your software will run on. So these days the CPU usually just feeds the GPU with data (with the current Tesla cards, only one CPU core per GPU; this changes to 32 in the Kepler generation) and does whatever processing can’t be done on the GPU, but the two do not share workloads.

I don’t know how the H.264 codec is structured, or whether encoding can see performance gains at all. But I really doubt x264 could just be ported, as it relies heavily on CPU-specific features (SSE, etc.), which is quite different from the much higher-level bytecode that Java produces.

via Rootbeer GPU Compiler Lets Almost Any Java Code Run On the GPU – Slashdot.
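The commenter's distinction, work that is independent per element versus work that needs neighbours or global state, can be illustrated even without a GPU:

```python
# The three workload shapes from the comment above, in miniature.

data = [1.0, 2.0, 3.0, 4.0]

# Embarrassingly parallel: each output depends on exactly one input.
# This maps perfectly onto independent GPU threads.
squared = [x * x for x in data]

# Stencil: each output needs neighbouring values, so GPU threads must
# coordinate at the edges of the tiles they process.
smoothed = [(data[i - 1] + data[i] + data[i + 1]) / 3
            for i in range(1, len(data) - 1)]

# Reduction: one output depends on every input; on a GPU this takes a
# tree of synchronization steps rather than a single parallel pass.
total = sum(data)

print(squared, smoothed, total)
```

The first shape is where a mechanical Java-to-GPU translation can win; the other two are where a naive port "gets stuck synchronizing," as the comment puts it.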

How graphics card supercomputers could help us map the universe

Over three decades, video cards have transformed computer graphics from monochrome line drawings to near-photorealistic renderings.

But the processing power of the GPU is increasingly being used to tame the huge sums of data generated by modern industry and science. And now a project to build the world’s largest telescope is considering using a GPU cluster to stitch together more than an exabyte of data each day.

via How graphics card supercomputers could help us map the universe | TechRepublic.
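"More than an exabyte of data each day" is easier to grasp as a sustained rate. A quick back-of-the-envelope, using the decimal definition of an exabyte:

```python
# What "an exabyte per day" means as a sustained data rate.
EXABYTE = 10 ** 18            # bytes (decimal definition)
SECONDS_PER_DAY = 86_400

rate_bytes_per_s = EXABYTE / SECONDS_PER_DAY
rate_tb_per_s = rate_bytes_per_s / 10 ** 12
print(f"{rate_tb_per_s:.1f} TB/s sustained")   # roughly 11.6 TB/s
```

Stitching together data arriving at over eleven terabytes per second, around the clock, is exactly the kind of uniform, high-throughput workload GPU clusters are built for.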

Europe’s Most Powerful Supercomputer Inaugurated

The SuperMUC, ranked fourth in the June TOP500 supercomputing listing, contains 147,456 cores on 2.7-GHz, eight-core Intel Xeon E5-2680 chips. IBM, which built the supercomputer, stated in a recent press release that the system actually includes more than 155,000 processor cores. It is located at the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre) in Garching, Germany, near Munich.

via Europe’s Most Powerful Supercomputer Inaugurated.
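The 147,456 figure decomposes neatly given the E5-2680's eight cores per socket. A quick check of the arithmetic (the two-sockets-per-node assumption is the usual dual-socket layout, not stated in the article):

```python
# Socket arithmetic for SuperMUC's 147,456 Xeon E5-2680 compute cores.
CORES = 147_456
CORES_PER_CHIP = 8            # the E5-2680 is an eight-core part
chips = CORES // CORES_PER_CHIP
nodes = chips // 2            # assuming the usual two sockets per node
print(chips, nodes)           # 18432 chips, 9216 nodes
```

The gap between 147,456 and the "more than 155,000" in IBM's press release presumably comes from counting processors beyond the main compute cores.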

Imagination Technologies

Over the past two weeks, Imagination Technologies has announced new, higher-end versions of its PowerVR Series6 GPU, claiming that the new PowerVR G6230 and G6430 go “‘all out’, adding incremental extra area for maximum performance whilst minimising power consumption.” There is also a new ray-tracing SDK, and a post discussing how PowerVR is using GPU compute and OpenCL to offload and accelerate CPU-centric tasks.

Via: PowerVR Plans To Make Mobile Graphics, GPU Compute a Three-Way Race — Again