Google’s Tensor Processing Unit could advance Moore’s Law 7 years into the future

“We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law),” the blog said. “TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply these models more quickly, so users get more intelligent results more rapidly.”

Source: Google’s Tensor Processing Unit could advance Moore’s Law 7 years into the future | PCWorld
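The reduced-precision point in that quote is the interesting part, and it's easy to get a feel for. Here's a minimal sketch of symmetric 8-bit quantization (my own toy example; Google doesn't describe the TPU's actual number format in the post): a float32 vector is mapped onto int8 codes and back with only a small error, which is the kind of tolerance that lets a chip spend far fewer transistors per arithmetic operation.

```python
import numpy as np

# Toy symmetric 8-bit quantization of float32 activations.
# The TPU's real number format isn't described in the quote; this only
# illustrates why ML workloads tolerate reduced precision.
x = np.random.randn(8).astype(np.float32)

scale = np.abs(x).max() / 127.0                      # map the observed range onto int8
q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
x_hat = q.astype(np.float32) * scale                 # dequantize for comparison

print("original:  ", x)
print("int8 codes:", q)
print("max error: ", np.abs(x - x_hat).max())        # small relative to the data range
```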

Why the Z-80’s data pins are scrambled

I have been reverse-engineering the Z-80 processor using images and data from the Visual 6502 team. The image below is a photograph of the Z-80 die. Around the outside of the chip are the pads that connect to the external pins. (The die photo is rotated 180° compared to the datasheet pinout, if you try to match up the pins.) At the right are the 8 data pins for the Z-80’s 8-bit data bus in a strange order.

via Ken Shirriff’s blog: Why the Z-80’s data pins are scrambled.

The motivation behind splitting the data bus is to allow the chip to perform activities in parallel. For instance, an instruction can be read from the data pins into the instruction logic at the same time that data is being copied between the ALU and the registers. The partitioned data bus is described briefly in the Z-80 oral history[3], but doesn't appear in architecture diagrams.

The complex structure of the data buses is closely connected to the ordering of the data pins.

Knights Landing Details

Table 1 shows estimates of the critical characteristics of the 14nm Knights Landing, compared to known details of the 22nm Knights Corner, Haswell, and Ivy Bridge-EP. The estimates for Knights Landing differ from the rumored specifications primarily in the capacity of the shared L2 cache, which is estimated to be 512KB rather than 1MB. It is possible, although extremely unlikely, that the shared L2 cache is 256KB. The analysis also incorporates several other critical factors which were not mentioned in any rumors, specifically cache read bandwidth and the large shared L3 cache. The L3 cache is estimated at eight times the size of the L2 caches, or 144MB. In the unlikely scenario that the L2 cache is 256KB, the L3 cache is likely to be proportionately smaller.

via Knights Landing Details.
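A quick sanity check of the arithmetic in that excerpt, assuming 36 tiles each sharing a 512KB L2 (the tile count is my assumption, consistent with the rumored core counts, and isn't stated above): eight times the aggregate L2 gives exactly the 144MB L3 estimate.

```python
# Assumptions (not stated in the excerpt): 36 tiles, each sharing one L2 cache.
tiles = 36
l2_per_tile_kb = 512

total_l2_mb = tiles * l2_per_tile_kb / 1024   # aggregate L2 = 18 MB
l3_estimate_mb = 8 * total_l2_mb              # "eight times the size of the L2 caches"
print(total_l2_mb, l3_estimate_mb)            # 18.0 144.0

# The unlikely 256KB-per-tile scenario scales proportionately:
print(8 * tiles * 256 / 1024)                 # 72.0 MB L3
```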

Researchers can slip an undetectable trojan into Intel’s Ivy Bridge CPUs

The attack against the Ivy Bridge processors sabotages random number generator (RNG) instructions Intel engineers added to the processor. The exploit works by severely reducing the amount of entropy the RNG normally uses, from 128 bits to 32 bits. The hack is similar to stacking a deck of cards during a game of Bridge. Keys generated with an altered chip would be so predictable that an adversary could guess them with little time or effort. The severely weakened RNG isn't detected by any of the “Built-In Self-Tests” required for the SP 800-90 and FIPS 140-2 compliance certifications mandated by the National Institute of Standards and Technology.

via Researchers can slip an undetectable trojan into Intel’s Ivy Bridge CPUs | Ars Technica.
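The jump from 128 to 32 bits is the whole story: 2^32 candidate seeds can be enumerated on one machine, 2^128 cannot. The toy sketch below illustrates that search-space argument, assuming an attacker who can derive and test keys from candidate seeds; it is not a reconstruction of the published attack, and the key-derivation function here is invented for illustration.

```python
import hashlib

# Toy model of why 32 bits of entropy is guessable: if a key is derived from
# a 32-bit seed, an attacker can simply try every seed. This is an
# illustration of search-space size, not the actual Intel RNG design.
def key_from_seed(seed: int) -> bytes:
    return hashlib.sha256(seed.to_bytes(4, "little")).digest()

secret_seed = 0xBEEF                 # unknown to the attacker; kept small so the demo runs fast
target_key = key_from_seed(secret_seed)

# A real sweep would be range(2**32): hours of work on a single machine.
# A 128-bit seed (range(2**128)) is beyond any realistic brute force.
for candidate in range(2**16):
    if key_from_seed(candidate) == target_key:
        print(f"recovered seed: {candidate:#x}")
        break
```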

The Best CPU Coolers: 10-Way Roundup

With the recent arrival of Ivy Bridge-E (see our Core i7-4960X review), I felt it was a good time to check out the latest aftermarket coolers. The new chip is fully compatible with Sandy Bridge-E/EP’s LGA2011 socket. We contacted all the major players and received 10 heatsinks to test, including units from Noctua, Thermalright, Xigmatek, Silverstone and Thermaltake.

via The Best CPU Coolers: 10-Way Roundup – TechSpot.

Intel ‘Re-imagines’ The Data Center With New Avoton Server Architecture, Software-Defined Services

Intel isn’t just pushing Avoton as a low-power solution that’ll compete with products from ARM and AMD, but as the linchpin of a system for software defined networking and software defined storage capability. In a typical network, a switch is programmed to send arriving traffic to a particular location. Both the control plane (where traffic goes) and the data plane (the hardware responsible for actually moving the bits) are implemented in hardware and duplicated in every switch.

via Intel ‘Re-imagines’ The Data Center With New Avoton Server Architecture, Software-Defined Services – HotHardware.

Software defined networking replaces this by using software to manage traffic (OpenFlow in the example diagram below) and monitor it from a central controller. Intel is moving towards such a model and talking it up as an option because it moves control away from specialized hardware baked into expensive routers made by companies that aren’t Intel, and towards centralized technology Intel can bake into the CPU itself.
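A tiny sketch of the split being described, with invented class names (this is not OpenFlow or any Intel API): the control plane lives in one central controller that decides where traffic goes, while each switch keeps only a dumb match-action table for the data plane.

```python
# Conceptual sketch of SDN: one controller holds the control plane,
# switches hold only forwarding tables (the data plane).
# Class and method names are invented for illustration; this is not OpenFlow.

class Switch:
    def __init__(self, name: str):
        self.name = name
        self.flow_table = {}                     # match (dst address) -> action (output port)

    def install_rule(self, dst: str, port: int):
        self.flow_table[dst] = port

    def forward(self, dst: str) -> int:
        return self.flow_table.get(dst, -1)      # -1: no rule, would punt to the controller


class Controller:
    """Central control plane: computes routes and pushes rules to every switch."""
    def __init__(self, switches):
        self.switches = switches

    def program_route(self, dst: str, ports_by_switch: dict):
        for sw in self.switches:
            sw.install_rule(dst, ports_by_switch[sw.name])


edge, core = Switch("edge"), Switch("core")
ctrl = Controller([edge, core])
ctrl.program_route("10.0.0.5", {"edge": 3, "core": 7})
print(edge.forward("10.0.0.5"), core.forward("10.0.0.5"))   # 3 7
```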

Haswell is here: we detail Intel’s first 4th-generation Core CPUs

This time around, Intel is actually much more interested in telling us about that lowered power consumption, as is evident in the use of phrases like “biggest [generation-to-generation] battery life increase in Intel history.” By the company’s measurements, a laptop based on Haswell should in some circumstances be able to get as much as a third more battery life than the same laptop based on Ivy Bridge.

via Haswell is here: we detail Intel’s first 4th-generation Core CPUs | Ars Technica.

Haswell is the sort of CPU upgrade we’ve come to expect from Intel: a whole bunch of incremental improvements over last year’s model, all delivered basically on-time and as promised. Again, we’ll need to have test systems in hand to verify all of the lofty claims that the company is making here, but at least on paper Haswell looks like a big push in the right directions. It increases GPU power to fight off Nvidia and AMD, and it decreases overall power consumption to better battle ARM.

Samsung laying groundwork for server chips, analysts say

The faster 64-bit processors will appear in servers, high-end smartphones and tablets, and offer better performance-per-watt than ARM’s current 32-bit processors, which haven’t been able to expand beyond embedded and mobile devices. The first servers with 64-bit ARM processors are expected to become available in 2014.

via Samsung laying groundwork for server chips, analysts say – servers, Samsung Electronics, hardware systems, Components, processors – Computerworld.

“Samsung is a lead partner of ARM’s new Cortex A50 processors. However, we’re not in a position to comment on our plans for how we’ll use the Cortex A50 as part of our Exynos product family,” said Lisa Warren-Plungy, a Samsung Semiconductor spokeswoman, in an e-mail.

ARM Information Center

Welcome to the ARM Infocenter. The Infocenter contains all ARM non-confidential Technical Publications.

via ARM Information Center.

Sony develops thermal sheet as good as paste for CPU cooling

Sony Chemical & Information Device Corp. has demonstrated a thermal sheet that it claims matches thermal paste in terms of cooling ability while beating it on life span. The key to the sheet is a combination of silicone and carbon fibers, producing a thermally conductive layer that’s between 0.3mm and 2mm thick.

via Sony develops thermal sheet as good as paste for CPU cooling – Computer Chips & Hardware Technology | Geek.com.