The reason HTTP/2.0 does not improve privacy is that the big corporate backers have built their business model on top of the lack of privacy. They are very upset about NSA spying on just about everybody in the entire world, but they do not want to do anything that prevents them from doing the same thing. The proponents of HTTP/2.0 are also trying to use it as a lever for the “SSL anywhere” agenda, despite the fact that many HTTP applications have no need for, no desire for, or may even be legally banned from using encryption.
via HTTP/2.0 – The IETF is Phoning It In – ACM Queue.
History has shown overwhelmingly that if you want to change the world for the better, you should deliver good tools for making it better, not policies for making it better. I recommend that anybody with a voice in this matter turn their thumbs down on the HTTP/2.0 draft standard: It is not a good protocol and it is not even good politics.
The technical details get very complicated very quickly, but what it all amounts to is simple enough. The proposal expects Internet users to provide “informed consent” that they “trust” intermediate sites (e.g. Verizon, AT&T, etc.) to decode their encrypted data, process it in some manner for “presumably” innocent purposes, re-encrypt it, then pass the re-encrypted data along to its original destination.
via Lauren Weinstein’s Blog: No, I Don’t Trust You! — One of the Most Alarming Internet Proposals I’ve Ever Seen.
In essence it’s a kind of sucker bait. Average users could easily believe they were “kinda sorta” doing traditional SSL but they really wouldn’t be, ’cause the ISP would have access to their unencrypted data in the clear. And as the proposal itself suggests, it would take significant knowledge for users to understand the ramifications of this — and most users won’t have that knowledge.
This editorial illustrates that, under this proposal, Man-In-The-Middle (MITM) interception cannot happen without user consent. The blogger fears that ISPs will require consent for all SSL sessions, making every user's end-to-end encryption vulnerable to a "trusted" proxy. Here is a blurb from the draft.
From the IETF draft: Explicit Trusted Proxy in HTTP/2.0 draft-loreto-httpbis-trusted-proxy20-01
This document describes two alternative methods for an user-agent to automatically discover and for an user to provide consent for a Trusted Proxy to be securely involved when he or she is requesting an HTTP URI resource over HTTP2 with TLS. The consent is supposed to be per network access. The draft also describes the role of the Trusted Proxy in helping the user to fetch HTTP URIs resource when the user has provided consent to the Trusted Proxy to be involved.
The consent is supposed to be granted on a per-network (or per-destination) basis, which means there may be a reason a user agent will want to use a trusted proxy: perhaps it does not trust the destination network. The blogger implies ISPs will want blanket consent over all destinations, but 1) they could implement that now without this standard, and 2) it would not make for a good PR move because it would not go unnoticed.
Yet one major caveat remains. While the IETF might be able to secure the pipes through which users’ data travel, users must also be able to trust the parties where their data is stored: software, hardware and services such as Cisco, Gmail and Facebook. These parties can hand over user data directly to government agencies.
via In response to NSA revelations, the internet’s engineers set out to PRISM-proof the net | Radio Netherlands Worldwide.
WebRTC (Web Real-Time Communication) is an HTML5 standard being drafted by the World Wide Web Consortium (W3C), with a mailing list created in April 2011, and jointly in the IETF with a working group chartered in May 2011. It is also the name of a framework that was open sourced on June 1, 2011, which implements early versions of the standard and allows web browsers to conduct real-time communication. The goal of WebRTC is to enable applications such as voice calling, video chat and P2P file sharing without plugins.
via WebRTC – Wikipedia, the free encyclopedia.
IETF explores new working group on identity management in the cloud.
A specification already exists for Simple Cloud Identity Management (SCIM) that is supported by security software vendors including Cisco, Courion, Ping Identity, UnboundID and SailPoint. SCIM also has support from key cloud vendors, including Salesforce, Google and VMware.
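For a sense of what SCIM traffic looks like: identities are represented as JSON resources exchanged over HTTP. A minimal sketch of a SCIM-style user record follows; the attribute names follow the SCIM core user schema, but the values and the user shown here are purely illustrative.

```python
import json

# Illustrative SCIM-style user resource. Attribute names follow the
# SCIM core user schema; the person and addresses are made up.
user = {
    "schemas": ["urn:scim:schemas:core:1.0"],
    "userName": "bjensen",
    "name": {"givenName": "Barbara", "familyName": "Jensen"},
    "emails": [{"value": "bjensen@example.com", "primary": True}],
}

# A SCIM client would POST this JSON body to the provider's /Users
# endpoint to provision the identity.
payload = json.dumps(user)
print(payload)
```

The point of the shared schema is that Cisco, Salesforce, Google and the rest can all provision and deprovision the same user record without vendor-specific connectors.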
What happens when a bunch of IETF super nerds show up in Paris for a major conference and discover their hotel’s Wi-Fi network has imploded?
They give it an Extreme Wi-Fi Makeover.
via IETF attendees reengineer their hotel’s Wi-Fi network.
This document specifies how DNS resource records are named and structured to facilitate service discovery. Given a type of service that a client is looking for, and a domain in which the client is looking for that service, this allows clients to discover a list of named instances of that desired service, using standard DNS queries. This is referred to as DNS-based Service Discovery, or DNS-SD.
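The naming scheme the abstract describes can be sketched in a few lines: a client looking for a service type in a domain issues a standard PTR query for "<Service>.<Domain>", and each answer names one service instance "<Instance>.<Service>.<Domain>". The helper functions and the printer name below are illustrative, not part of the specification.

```python
# Sketch of DNS-SD naming. The service type uses the
# "_service._proto" convention (e.g. "_ipp._tcp" for IPP printing).

def service_query_name(service: str, domain: str) -> str:
    """Build the PTR query name for a service type in a domain."""
    return f"{service}.{domain}"

def instance_name(instance: str, service: str, domain: str) -> str:
    """Build the full name of one discovered service instance."""
    return f"{instance}.{service}.{domain}"

# A client looking for IPP printers in example.com would query:
query = service_query_name("_ipp._tcp", "example.com")
# and each PTR record in the answer names one instance, such as:
printer = instance_name("2nd Floor Printer", "_ipp._tcp", "example.com")

print(query)    # _ipp._tcp.example.com
print(printer)  # 2nd Floor Printer._ipp._tcp.example.com
```

The client would then resolve each instance name's SRV and TXT records to get the host, port and attributes of that service, all with ordinary DNS queries.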