Lichtl says that people have tried to use wavelet compression before, and these particular simulations are based on work done by Jonathan Regele, a professor in the department of aerospace engineering at Iowa State University.
“The difference is that without GPU acceleration, and without the architecture and the techniques that we just described, it takes months on thousands of cores to run even the simplest of simulations. It is a very interesting approach but it doesn’t have industrial application without the hardware and the correct algorithms behind it. What the GPUs are doing here is enabling tremendous acceleration.”
via Rockets Shake And Rattle, So SpaceX Rolls Homegrown CFD.
To be more precise, if you get the temperature wrong in the simulation by a little, you get the kinetic energy of the gas wrong by a lot because there is an exponential relationship there. If you get the pressure or viscosity of the fluid wrong by a little bit, you will see different effects in the nozzle than will happen in the real motor.
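That exponential sensitivity is the familiar Arrhenius behaviour of reaction rates, k = A·exp(−Ea/(R·T)). A minimal sketch with made-up but plausible numbers (the activation energy, pre-exponential factor, and temperature here are illustrative assumptions, not values from the article) shows how a 1% temperature error becomes a much larger rate error:

```python
import math

# Arrhenius rate law: k = A * exp(-Ea / (R * T)).
# A small relative error in T is amplified exponentially in k.
R = 8.314        # J/(mol*K), universal gas constant
Ea = 2.0e5       # J/mol, illustrative activation energy (assumed)
A = 1.0          # pre-exponential factor (units absorbed, assumed)

def rate(T):
    """Reaction rate at absolute temperature T (kelvin)."""
    return A * math.exp(-Ea / (R * T))

T = 3000.0                 # K, roughly combustion-chamber temperatures
k_true = rate(T)
k_off = rate(T * 1.01)     # temperature wrong by just 1%
print(k_off / k_true)      # ~1.08: a 1% temperature error -> ~8% rate error
```

The amplification grows with Ea/(R·T), so the hotter and more energetic the chemistry, the less forgiving the simulation is of small temperature errors.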
Overall, this was a fascinating example of how improvements in GPU hardware have allowed companies to build simulation centers that weren’t really possible before. Shipping companies and airlines have used some type of simulation for decades, but the type and nature of the environments those simulations can include are rapidly expanding — and such improvements at the industrial scale inevitably trickle down to consumer hardware and applications.
via Touring A Carnival Cruise Simulator: 210 Degrees Of GeForce-Powered Projection Systems.
Litecoin confirms transactions faster (every 2.5 minutes, rather than every 10 minutes for Bitcoin) and it contains more coins — 84 million coins will be found in total under the LTC protocol, as opposed to 21 million for BTC. Bitcoin and Litecoin prices tend to move together; Bitcoin’s stratospheric leap over the past month (it’s down from a high of $1,200 but trading at $873 as of this writing) has created an odd situation where it’s easier to mine Litecoin and then convert LTC to BTC than it is to just mine BTC in the first place.
via Massive surge in Litecoin mining leads to graphics card shortage | ExtremeTech.
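The 84 million and 21 million supply caps above fall out of a simple geometric series: the block reward starts at 50 coins and halves at a fixed block interval, so total supply converges to twice the first halving period’s issuance. A quick sketch (ignoring the integer rounding real clients apply, which makes the true caps fractionally lower):

```python
# Total coin supply is a geometric series: the block reward starts at 50
# and halves every `halving_interval` blocks, so supply converges to
# initial_reward * halving_interval * 2.
def max_supply(initial_reward, halving_interval, halvings=64):
    total = 0.0
    reward = float(initial_reward)
    for _ in range(halvings):
        total += reward * halving_interval
        reward /= 2.0
    return total

# Litecoin halves every 840,000 blocks; Bitcoin every 210,000.
print(max_supply(50, 840_000))   # ~84,000,000 LTC
print(max_supply(50, 210_000))   # ~21,000,000 BTC
print(10 / 2.5)                  # LTC blocks arrive 4x as often as BTC's
```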
The task of monitoring networks requires reading all the data packets as they cross the network, which “requires a lot of data parallelism,” Wenji said.
Wenji has built a prototype at Fermilab to demonstrate the feasibility of a GPU-based network monitor, using an Nvidia M2070 GPU and an off-the-shelf NIC (network interface card) to capture network traffic. The system could easily be expanded with additional GPUs, he said.
via Super Computing 13: GPUs would make terrific network monitors – Network World.
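The data parallelism Wenji refers to comes from packets being independent of one another: each one can be matched against a signature without consulting its neighbours. A toy CPU-side stand-in (the byte pattern and function names are invented for illustration; a GPU version would assign one thread per packet):

```python
# Packets are independent work items: each can be matched against a
# signature with no communication between them, which is the
# data-parallel shape a GPU exploits (one thread per packet).
SIGNATURE = b"\xde\xad\xbe\xef"   # made-up byte pattern to search for

def inspect(packet: bytes) -> bool:
    """Flag a packet whose payload contains the signature."""
    return SIGNATURE in packet

packets = [b"hello", b"x" * 10 + SIGNATURE + b"y", b"clean traffic"]
flags = [inspect(p) for p in packets]   # sequential stand-in for the GPU map
print(flags)                            # [False, True, False]
```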
This time around, Intel is actually much more interested in telling us about that lowered power consumption, as is evident in the use of phrases like “biggest [generation-to-generation] battery life increase in Intel history.” By the company’s measurements, a laptop based on Haswell should in some circumstances be able to get as much as a third more battery life than the same laptop based on Ivy Bridge.
via Haswell is here: we detail Intel’s first 4th-generation Core CPUs | Ars Technica.
Haswell is the sort of CPU upgrade we’ve come to expect from Intel: a whole bunch of incremental improvements over last year’s model, all delivered basically on-time and as promised. Again, we’ll need to have test systems in hand to verify all of the lofty claims that the company is making here, but at least on paper Haswell looks like a big push in the right directions. It increases GPU power to fight off Nvidia and AMD, and it decreases overall power consumption to better battle ARM.
While these techniques are widely used and understood, they work primarily with a model of the abstract sound produced by an instrument or object, not a model of the instrument or object itself. A more recent approach is physical modeling-based audio synthesis, where the audio waveforms are generated using detailed numerical simulation of physical objects or instruments.
via Realtime GPU Audio – ACM Queue.
There are various approaches to physical modeling sound synthesis. One such approach, studied extensively by Stefan Bilbao,1 uses the finite difference approximation to simulate the vibrations of plates and membranes. The finite difference simulation produces realistic and dynamic sounds (examples can be found at http://unixlab.sfsu.edu/~whsu/FDGPU). Realtime finite difference-based simulations of large models are typically too computationally intensive to run on CPUs. In our work, we have implemented finite difference simulations in realtime on GPUs.
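The finite difference approach can be sketched in miniature with a 1-D vibrating string (the plates and membranes in Bilbao’s work are the 2-D analogue). Each grid point’s update reads only its two neighbours, which is exactly the locality that lets a GPU assign one thread per point. This is a generic leapfrog stencil, not the authors’ implementation:

```python
# Minimal 1-D finite difference scheme for a vibrating string.
# u[i] is displacement; the update is the standard leapfrog stencil:
#   u_next[i] = 2*u[i] - u_prev[i] + c2 * (u[i+1] - 2*u[i] + u[i-1])
N = 64                      # grid points along the string
c2 = 0.25                   # (c*dt/dx)^2, must be <= 1 for stability
u_prev = [0.0] * N
u = [0.0] * N
u[N // 2] = 1.0             # "pluck" the middle of the string

samples = []                # audio output: displacement at a pickup point
for _ in range(200):
    u_next = [0.0] * N      # fixed (zero) boundary conditions at both ends
    for i in range(1, N - 1):
        u_next[i] = (2 * u[i] - u_prev[i]
                     + c2 * (u[i + 1] - 2 * u[i] + u[i - 1]))
    u_prev, u = u, u_next
    samples.append(u[N // 4])   # "listen" at one point on the string

print(len(samples))
```

Every `u_next[i]` depends only on three adjacent values from the previous step, so all N updates per time step can run in parallel — the property that makes realtime GPU execution feasible where a serial CPU loop falls behind.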
During the class in the spring of 2012 he learned CUDA, Nvidia’s parallel programming platform, and that opened the door to dividing advanced computations across the GPU’s massively parallel architecture.
He knew he had something when he wrote an algorithm to connect millions of points on a map, joining the data together spatially. The performance of his GPU-based computations compared to the same operation done with CPU power on PostGIS, the GIS module for the open-source database PostgreSQL, was “mind-blowing,” he said.
via Fast Database Emerges from MIT Class, GPUs and Student’s Invention.
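A common way to join millions of points spatially — and one that parallelizes well because each point’s lookup is independent — is grid hashing: bucket points into cells the size of the search radius, then compare each point only against the nine surrounding cells. This is a standard technique sketched for illustration, not necessarily the student’s algorithm:

```python
from collections import defaultdict

# Grid-hash spatial join: bucket points into cells of side `r`; any
# neighbour within distance r must lie in one of the 9 surrounding cells.
# Every point's lookup is independent, which is what suits a GPU.
def neighbours_within(points, r):
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(x // r), int(y // r))].append(idx)
    pairs = set()
    for i, (x, y) in enumerate(points):
        cx, cy = int(x // r), int(y // r)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in grid[(cx + dx, cy + dy)]:
                    px, py = points[j]
                    if j > i and (x - px) ** 2 + (y - py) ** 2 <= r * r:
                        pairs.add((i, j))
    return pairs

pts = [(0.0, 0.0), (0.5, 0.0), (5.0, 5.0)]
print(neighbours_within(pts, 1.0))   # {(0, 1)}: only the first two are close
```

The binning step turns an all-pairs O(n²) comparison into near-linear work on roughly uniform data, and both phases map naturally onto GPU threads.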
Programs don’t magically become faster when they are run on GPUs. Linear algebra algorithms, for example, work really well on CPUs, but ported one-to-one (as this would do) to a GPU their performance is abysmal. Usually one needs a specially designed algorithm that can actually use the massive parallelism of a GPU without getting stuck synchronizing or doing other kinds of communication. GPUs really like doing the same operation on independent data, which is basically what happens when rendering an image; they are not really designed for operations that need information from all the other data, or from neighbouring data in a grid. Just because something works on a GPU does not mean it’s efficient; the performance could be much worse than on a CPU.
Also, balancing CPU and GPU usage is even harder (maybe impossible?), as you cannot predict what kind of system your software will run on. These days the CPU usually feeds the GPU with data (with the current Tesla cards only one core per GPU can do this; that rises to 32 in the Kepler generation) and does whatever processing can’t be done on the GPU, but the two do not share workloads.
I don’t know how the H.264 codec is structured or whether encoding can be sped up at all. But I really doubt that x264 can simply be ported, as it relies heavily on CPU-specific features (SSE and the like), which is quite different from the much higher-level bytecode that Java produces.
via Rootbeer GPU Compiler Lets Almost Any Java Code Run On the GPU – Slashdot.
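The commenter’s distinction can be shown in miniature: an elementwise map touches only independent data and is GPU-friendly, while a prefix sum makes every output depend on all earlier inputs and needs communication (parallel scan algorithms do exist, but they require log-depth synchronization phases):

```python
# GPU-friendly vs. communication-heavy work, in miniature.
data = [3, 1, 4, 1, 5]

# Independent per-element work: a GPU runs one thread per element.
squares = [x * x for x in data]

# Dependent work: output i needs inputs 0..i, so a naive port serializes.
prefix = []
acc = 0
for x in data:
    acc += x
    prefix.append(acc)

print(squares)   # [9, 1, 16, 1, 25]
print(prefix)    # [3, 4, 8, 9, 14]
```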
Over three decades, video cards have transformed computer graphics from monochrome line drawings to near-photorealistic renderings.
But the processing power of the GPU is increasingly being used to tame the huge volumes of data generated by modern industry and science. And now a project to build the world’s largest telescope is considering using a GPU cluster to stitch together more than an exabyte of data each day.
via How graphics card supercomputers could help us map the universe | TechRepublic.
Moving on to specific changes in DirectX 11.1, Microsoft has made it much better at redrawing only portions of the screen: on YouTube, for example, only the video region needs constant re-rendering. By reducing redundant redrawing of text and other static elements, it wastes less memory and fewer processor cycles.
via Windows 8 graphics: Microsoft has hardware accelerated all the things | ExtremeTech.
In all cases, faster and more efficient rendering results in fewer hardware resources being used, and thus lower power consumption. Throughout the Windows 8 cycle, improved efficiency and battery life have probably been the strongest and most recurring themes. If the unremovable Metro Start screen and the Surface tablets weren’t proof enough, this laser-tight focus on efficiency makes it patently clear that Microsoft’s primary concern is competing with Google and Apple in the mobile computing sector.
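The dirty-region idea behind those DirectX 11.1 changes reduces to simple bookkeeping: track which rectangles changed and repaint only those. A toy sketch with illustrative numbers (the rectangles and names here are invented for illustration, not the Direct3D 11.1 API):

```python
# Dirty-rectangle accounting: instead of repainting the whole window
# every frame, repaint only the regions marked dirty (e.g. a video
# element), saving memory bandwidth and GPU cycles.
def area(rect):
    x0, y0, x1, y1 = rect
    return (x1 - x0) * (y1 - y0)

window = (0, 0, 1920, 1080)
video = (200, 150, 1480, 870)    # the only element that animates

full_cost = area(window)         # naive: repaint every pixel, every frame
dirty_cost = area(video)         # dirty-rect: repaint only what changed
print(dirty_cost / full_cost)    # ~0.44: over half the pixels are skipped
```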