Overview | fogproject.org

FOG is a Linux-based, free and open-source computer imaging solution for Windows XP, Vista, and 7 that ties together a few open-source tools with a PHP-based web interface. FOG doesn’t use any boot disks or CDs; everything is done via TFTP and PXE. Also, with FOG many drivers are built into the kernel, so you don’t really need to worry about drivers (unless there isn’t a Linux kernel driver for your hardware). FOG also supports putting an image that came from a computer with an 80GB partition onto a machine with a 40GB hard drive, as long as the data is less than 40GB.

via Overview | fogproject.org.

FOG is centralized. Most tasks done with FOG don’t require the user to visit the client PC. For example, if you are imaging a computer, all you need to do is start the task. After the task is started, WOL will turn the computer on if it is off, DHCP will give it an IP address, PXE will load the boot image, the client will tell the FOG server that the task is in progress, and PartImage will image the machine. When imaging is done, FOG tells PXE not to boot the machine into the FOG image again, and the computer boots normally. After the computer is booted, if the FOG service is installed, FOG will change the computer’s hostname and that computer is ready to use!
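
The Wake-on-LAN step is simple enough to sketch on its own: a magic packet is just six 0xFF bytes followed by the target’s MAC address repeated 16 times, broadcast over UDP (port 9 by convention). Here is a minimal generic Python example, not FOG’s actual implementation; the MAC address is a placeholder:

    import socket

    def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
        """Broadcast a Wake-on-LAN magic packet for the given MAC address."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("MAC address must be 6 bytes")
        # Magic packet: 6 bytes of 0xFF followed by the MAC repeated 16 times.
        packet = b"\xff" * 6 + mac_bytes * 16
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))
        sock.close()

    send_magic_packet("00:11:22:33:44:55")  # placeholder MAC address

Any machine with WOL enabled in its firmware powers on when its NIC sees this packet, which is what lets FOG start an imaging task without anyone touching the client.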

Xeon E5-2600 Review

http://www.anandtech.com/show/5553/the-xeon-e52600-dual-sandybridge-for-servers

Intel’s Sandy Bridge architecture was introduced to desktop users more than a year ago. Server parts however have been much slower to arrive, as it has taken Intel that long to transpose this new engine into a Xeon processor. Although the core architecture is the same, the system architecture is significantly different from the LGA-1155 CPUs, making this CPU quite a challenge, even for Intel.

Are Yahoo’s patents strong enough to topple Facebook?

“What likely will happen in the short term is Facebook will make a decision as to whether it thinks this lawsuit is having a significant impact on its forthcoming IPO,” Patras said. “If it thinks it is having a significant impact then I suspect Facebook will come to a relatively quick license agreement with Yahoo to make this issue go away. If Facebook concludes it’s not having a significant impact they will fight on and get to the merits of these claims down the road.”

via Are Yahoo’s patents strong enough to topple Facebook?.

Despite higher network speeds, no FaceTime calls over LTE

Apple’s nominally open source video chat spec (the source code has yet to be released by Apple) first launched in 2010 with a vague promise from then-CEO Steve Jobs that it might be available over cellular data connections once the cell networks “get ready for the future.” Since then, there have been plenty of rumors that the iPhone would soon gain the ability to make FaceTime calls over 3G, particularly as the launch of iOS 5 loomed last October, but it still hasn’t happened. Attempting to make a FaceTime call when not connected to a WiFi network results in a pop-up explaining that a WiFi network is necessary to complete the action.

via Despite higher network speeds, no FaceTime calls over LTE.

I wonder why FaceTime cares what network carries its data. The network layers should be independent of the application: if it works over WiFi, it should work over LTE.
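
To make the layering point concrete: the standard socket API never tells an application what link layer carries its bytes. A trivial Python sketch; a WiFi-only policy like FaceTime’s has to come from a separate, platform-specific check, not from the transport itself:

    import socket

    # The connection is established the same way whatever the link layer is;
    # the socket API exposes IP addresses and ports, not radio types.
    sock = socket.create_connection(("example.com", 443), timeout=5)
    print(sock.getsockname())  # local IP and port; nothing here says WiFi or LTE
    sock.close()

So the restriction is a policy decision (or a carrier concession), not a technical constraint.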

Mystery Men Forge Servers For Giants of Internet

Hyve Solutions was created to serve the world’s “large-scale internet companies” — companies increasingly interested in buying servers designed specifically for their sweeping online operations. Because their internet services are backed by such an enormous number of servers, these companies are looking to keep the cost and the power consumption of each system to a minimum. They want something a little different from the off-the-shelf machines purchased by the average business. “What we saw was a migration from traditional servers to more custom-built servers,” says Hyve senior vice president and general manager Steve Ichinaga. “The trend began several years ago with Google, and most recently, Facebook was added to the ranks of companies who want this kind of solution.”

via Mystery Men Forge Servers For Giants of Internet | Wired Enterprise | Wired.com.

Hyve is a place where internet giants can go if they want Open Compute servers. But even before Hyve was created, Synnex was working for the big internet names. It has long provided custom machines for Rackspace, the San Antonio, Texas company that offers infrastructure services across the net at a scale rivaled only by Amazon.

Super-secret Google builds servers in the dark

Google is one of many big-name Web outfits that lease data center space from Equinix—a company whose massive computing facilities serve as hubs for the world’s biggest Internet providers. All the big Web names set up shop in these data centers, so that they too can plug into the hub. The irony is that they must also share space with their biggest rivals, and this may cause some unease with companies that see their hardware as a competitive advantage best hidden from others.

via Super-secret Google builds servers in the dark.

Google declined to comment on Sharp’s little anecdote. But the tale is not surprising. Google designs its own servers and its own networking gear, and though it still leases space in third-party data centers such as the Equinix facility, it’s now designing and building its own data centers as well. These designs are meant to improve the performance of the company’s Web services but also save power and money. More so than any other outfit, Google views its data center work as an important advantage over competitors.

Standard optical fiber transmits 1.7Tbps over core network

Chinese telecommunications provider ZTE held a field demonstration of an optical network capable of transmitting 1.7Tbps, the company announced today. The network used Wavelength Division Multiplexing, which separates data into different wavelengths and transmits them over the same optical fiber, to achieve the terabit-plus speed. In ZTE’s demonstration, the company used 8 different channels, each transmitting 216.4Gbps. The transmission was conducted in China over 1,087 miles, on a standard fiber-optic cable.

via Standard optical fiber transmits 1.7Tbps over core network.
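
The arithmetic in the quote checks out; a quick Python sanity check:

    channels = 8
    per_channel_gbps = 216.4
    total_gbps = channels * per_channel_gbps
    print("%.1f Gbps = %.3f Tbps" % (total_gbps, total_gbps / 1000))
    # 1731.2 Gbps = 1.731 Tbps, which rounds to the headline 1.7Tbps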

From the linked article:

ZTE isn’t the only vendor that has conducted demonstrations to show its prowess when it comes to next-generation WDM systems. Last week, ZTE’s Chinese competitor Huawei showed a prototype system that can handle 400Gbps per channel and offer a total capacity of 20Tbps.

I wonder where Tellabs or Lucent equipment is in all of this.

Researchers send instant message with neutrinos

“Using neutrinos, it would be possible to communicate between any two points on Earth without using satellites or cables,” said Dan Stancil, professor of electrical and computer engineering at NC State and lead author of a paper describing the research. “Neutrino communication systems would be much more complicated than today’s systems, but may have important strategic uses.”

via Researchers send instant message with neutrinos | ScienceBlog.com.

From Wikipedia:

A neutrino detector is a physics apparatus designed to study neutrinos. Because neutrinos are only weakly interacting with other particles of matter, neutrino detectors must be very large in order to detect a significant number of neutrinos. Neutrino detectors are often built underground to isolate the detector from cosmic rays and other background radiation.[1] The field of neutrino astronomy is still very much in its infancy – the only confirmed extraterrestrial sources so far are the Sun and supernova SN1987A. Neutrino observatories will “give astronomers fresh eyes with which to study the universe.”[2]

Estimate: Amazon Cloud Backed by 450,000 Servers

How many servers does it take to power Amazon’s huge cloud computing operation? Like many large Internet companies, Amazon doesn’t disclose details of its infrastructure, including how many servers it uses. But a researcher estimates that Amazon Web Services is using at least 454,400 servers in seven data center hubs around the globe.

Huan Liu, a research manager at Accenture Technology Labs, analyzed Amazon’s EC2 compute service using internal and external IP addresses, which he extrapolated to come up with estimates for the number of racks in each data center location. Liu then applied an assumption of 64 blade servers per rack – four 10U chassis, each holding 16 blades – to arrive at the estimate.

via Estimate: Amazon Cloud Backed by 450,000 Servers » Data Center Knowledge.
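
Liu’s back-of-the-envelope method is easy to reproduce. In the Python sketch below, the per-region rack counts are hypothetical placeholders, chosen only so that they sum to the 7,100 racks implied by 454,400 servers at 64 per rack; his real figures are in the article:

    # Hypothetical per-region rack counts, for illustration only.
    racks_per_region = {"us-east": 5000, "us-west": 1000, "eu-west": 700, "other": 400}

    CHASSIS_PER_RACK = 4     # four 10U chassis per rack
    BLADES_PER_CHASSIS = 16  # the article's 64-servers-per-rack assumption
    servers_per_rack = CHASSIS_PER_RACK * BLADES_PER_CHASSIS  # 64

    total = sum(racks * servers_per_rack for racks in racks_per_region.values())
    print("Estimated servers: {:,}".format(total))  # 454,400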

Sweet Mother of God.