Winterton, a senior entomologist at the California Department of Food and Agriculture, has seen a lot of bugs. But he hadn’t seen this species before.
There’s no off switch when you’re the senior entomologist. If you’re browsing the web you find your way to Flickr photos of insects or start correcting Wikipedia articles on insects.
This specification defines a “Problem Detail” as an extensible way to carry machine-readable details of errors in a response, to avoid the need to invent new response formats.
During formalization of the WebFinger protocol [I-D.jones-appsawg-webfinger], much discussion occurred regarding the appropriate URI scheme to include when specifying a user’s account as a web link [RFC5988].
…
acctURI = "acct:" userpart "@" domainpart
A series of videos with math profs explaining a specific number (or interesting topic with a contrived connection to a particular number)
This page is a high-level overview of the project and provides guidance on how to implement the intents in your applications without the need for you to understand the entire spec.
Summary of one of the Chrome security exploits from pwn2own. Basically XSS into the chrome URI scheme which gives access to special APIs.
So this is another Stuxnet by Israel/US?
The analysis reinforces theories that researchers from Kaspersky Lab, CrySyS Lab, and Symantec published almost two weeks ago. Namely, Flame could only have been developed with the backing of a wealthy nation-state. … “It’s not a garden-variety collision attack, or just an implementation of previous MD5 collisions papers—which would be difficult enough,” Matthew Green, a professor specializing in cryptography in the computer science department at Johns Hopkins University, told Ars. “There were mathematicians doing new science to make Flame work.”
Use of my old Hotmail account has really snuck up on me as I end up caring more and more about all of the services with which it is associated. The last straw is Windows 8 login, but previous straws include Xbox, Zune, SkyDrive, and my Windows Phone 7. I like the features and syncing associated with the Windows Live ID, but I don't like my old, spam-filled Hotmail email address on the Live ID account.
A coworker told me about creating a Live ID from a custom domain, which sounded like just the ticket for me. Following the instructions above I was able to create a new deletethis.net Live ID but the next step of actually using this new Live ID was much more difficult. My first hope was there would be some way to link my new and old Live IDs so as to make them interchangeable. As it turns out there is a way to link Live IDs but all that does is make it easy to switch between accounts on Live Mail, SkyDrive and some other webpages.
Instead one must change over each service or start over depending on the service:
Very interesting - both technically and for the moral justifications the botnet operator provides. But equally interesting is the discussion on Hacker News: http://news.ycombinator.com/item?id=3960034. Especially the discussion of the Verified by Visa (3D Secure) system and how the goal is basically to move liability onto the consumer and off of the merchant or credit card company.
Specifically, Twitter has said that they will only use these assigned patent rights defensively to protect themselves against hostile actions. And further, that any company that acquires these patent rights from Twitter will need the inventor's consent to use them in an offensive action. Twitter has also provided the inventor with certain rights to license the patent to others for defensive purposes. You can read the entire set of provisions on GitHub.
Elaborating on a previous brief post on the topic of Web Worker initialization race conditions, there are two important points to avoid a race condition when setting up a Worker:
1. The worker must not post a message to the parent until it has received one from the parent, because there's no guarantee the parent's onmessage handler is set up before the worker runs.
2. The worker must set its onmessage handler in its first synchronous block of execution, because messages queued for the worker are delivered as soon as that block finishes.
For example the following has no race because the spec guarantees that messages posted to a worker during its first synchronous block of execution will be queued and handled after that block. So the worker gets a chance to set up its onmessage handler. No race:
'parent.js':
var worker = new Worker('worker.js');
worker.postMessage("initialize");
'worker.js':
onmessage = function(e) {
// ...
}
The following has a race because there's no guarantee that the parent's onmessage handler is set up before the worker executes postMessage. Race (violates 1):
'parent.js':
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
// ...
};
'worker.js':
postMessage("initialize");
The following has a race because the worker has no onmessage handler set in its first synchronous execution block, and so the parent's postMessage may be delivered before the worker sets its onmessage handler. Race (violates 2):
'parent.js':
var worker = new Worker('worker.js');
worker.postMessage("initialize");
'worker.js':
// By the time this runs, the worker's first synchronous block has already
// finished, so the parent's queued message may have been dispatched with
// no handler to receive it.
setTimeout(function () {
    onmessage = function (e) {
        // ...
    };
}, 0);
As a professional URI aficionado I deal with various levels of ignorance on URI percent-encoding (aka URI encoding, or URL escaping).
Getting into the more subtle levels of URI percent-encoding ignorance, folks try to apply their knowledge of percent-encoding to URIs as a whole, producing the concepts of escaped URIs and unescaped URIs. However there are no such things - URIs themselves aren't percent-encoded or decoded but rather contain characters that are percent-encoded or decoded. Applying percent-encoding or decoding to a URI as a whole produces a new and non-equivalent URI.
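For instance, here's a quick JavaScript sketch of that pitfall, using encodeURIComponent and decodeURIComponent as stand-ins for percent-encoding and percent-decoding a URI as a whole (the URI is made up):

// Percent-decoding the whole URI: the encoded '/' in the path becomes a
// real '/', changing the path structure - a different, non-equivalent URI.
var uri = "http://example.com/a%2Fb?q";
console.log(decodeURIComponent(uri)); // http://example.com/a/b?q

// Percent-encoding the whole URI: the reserved ':', '/', and '?' get
// encoded too - again a different, non-equivalent URI.
console.log(encodeURIComponent(uri)); // http%3A%2F%2Fexample.com%2Fa%252Fb%3Fq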
Instead of lingering on the incorrect concepts we'll just cover the correct ones: there's raw unencoded data, non-normal form URIs, and normal form URIs. For example:
(A) http://example.com/%74%68%65/%70%61%74%68%3F?query
(B) http://example.com/the/path%3F?query
(C) the raw data: the path segments 'the' and 'path?', and the query 'query'
In the above, (A) is not an 'encoded URI' but rather a non-normal form URI. The characters of 'the' and 'path' are percent-encoded but, as unreserved characters specified in the RFC, should not be encoded. In the normal form of the URI (B) those characters are decoded. But (B) is not a 'decoded URI' -- it still has an encoded '?' in it because that's a reserved character which, per the RFC, holds different meaning when appearing decoded versus encoded. Specifically in this case it appears encoded, which means it is data -- a literal '?' that appears as part of the path segment. This is as opposed to the decoded '?' that appears in the URI, which is not part of the path but rather the delimiter to the query.
Usually when developers talk about decoding the URI what they really want is the raw data from the URI. The raw decoded data is (C) above. The only thing to note beyond what's covered already is that to obtain the decoded data one must parse the URI before percent-decoding all percent-encoded octets.
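A minimal sketch of that order of operations in JavaScript, using the URL parser found in current browsers and a URI like (B) above:

var uri = new URL("http://example.com/the/path%3F?query");

// Split the path into segments *before* decoding, so the encoded '?'
// stays part of its segment instead of being confused with the query delimiter.
var segments = uri.pathname.split("/").slice(1).map(decodeURIComponent);

console.log(segments);   // ["the", "path?"]
console.log(uri.search); // "?query"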
Of course the exception here is when a URI is the raw data. In this case you must percent-encode the URI to have it appear in another URI. More on percent-encoding while constructing URIs later.
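For instance, a quick sketch of embedding one URI as data inside another (the 'redirect' parameter name is made up):

// The inner URI is raw data here, so it gets percent-encoded as a whole
// before being placed in the outer URI's query.
var inner = "http://example.com/the/path%3F?query";
var outer = "http://example.com/login?redirect=" + encodeURIComponent(inner);
console.log(outer);
// http://example.com/login?redirect=http%3A%2F%2Fexample.com%2Fthe%2Fpath%253F%3Fquery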
Interesting article on an expert attempting to modify an article on Wikipedia. Sounds like an issue when presented in this fashion, but looking at it from Wikipedia’s perspective, I don’t know how they could do better.
As a professional URI aficionado I deal with various levels of ignorance on URI percent-encoding (aka URI encoding, or URL escaping). The basest ignorance is with respect to the mere existence of percent-encoding. Percents in URIs are special: they always represent the start of a percent-encoded octet. That is to say, a percent is always followed by two hex digits representing a value between 0 and 255, and doesn't show up in a URI otherwise.
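For example, a space can only appear in a URI as '%20', the percent-encoded form of the space character's value 0x20.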
The IPv6 textual syntax for scoped addresses uses the '%' to delimit the zone ID from the rest of the address. When it came time to define how to represent scoped IPv6 addresses in URIs there were two camps: folks who wanted to use the IPv6 format as-is in the URI, and those who wanted to encode or replace the '%' with a different character. The resulting thread was livelier than what usually shows up on the IETF URI discussion mailing list. Ultimately we went with a percent-encoded '%', which means the percent maintains its special status and singular purpose.
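So a scoped address like fe80::1%eth0 ends up in a URI looking something like http://[fe80::1%25eth0]/, with the '%' carried as '%25'.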
Things you do in VIM but faster with more obscure and specific commands.
“The syntax for allowed Top-Level Domain (TLD) labels in the Domain Name System (DNS) is not clearly applicable to the encoding of Internationalised Domain Names (IDNs) as TLDs. This document provides a concise specification of TLD label syntax based on existing syntax documentation, extended minimally to accommodate IDNs.” Still irritated about arbitrary TLDs.
Cool and (relatively) new methods on the JavaScript Array object are here in the most recent versions of your favorite browser! More about them on ECMAScript5, MSDN, the IE blog, or Mozilla's documentation. Here's the list that's got me excited:
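To quickly sketch a few of the ES5 additions (map, filter, and reduce):

var nums = [1, 2, 3, 4, 5];

// map: build a new array by transforming each element.
var squares = nums.map(function (n) { return n * n; });        // [1, 4, 9, 16, 25]

// filter: keep only the elements that pass the test.
var evens = nums.filter(function (n) { return n % 2 === 0; }); // [2, 4]

// reduce: fold the array down to a single value.
var sum = nums.reduce(function (total, n) { return total + n; }, 0); // 15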
Shortly after joining the Internet Explorer team I got a bug from a PM on a popular Microsoft web server product that I'll leave unnamed (from now on UWS). The bug said that IE was handling empty path segments incorrectly by not removing them before resolving dotted path segments. For example UWS would do the following:
A.1. http://example.com/a/b//../
A.2. http://example.com/a/b/../
A.3. http://example.com/a/
In step 1 they are given a URI with a dotted path segment and an empty path segment. In step 2 they remove the empty path segment, and in step 3 they resolve the dotted path segment. Whereas, given the same initial URI, IE would do the following:
B.1. http://example.com/a/b//../
B.2. http://example.com/a/b/
IE simply resolves the dotted path segment against the empty path segment and removes them both. So, how did I resolve this bug? As "By Design" of course!
The URI RFC allows path segments of zero length and does not assign them any special meaning. So generic user agents that intend to work on the web must not treat an empty path segment any differently than a path segment with some text in it. In the case above IE is doing the correct thing.
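You can see the generic behavior with a quick sketch against the URL parser in modern browsers, which follows the RFC's dot-segment rules:

// The '..' consumes the empty segment that precedes it, the same as it
// would consume any other segment - the empty segment isn't dropped first.
var uri = new URL("http://example.com/a/b//../");
console.log(uri.href); // http://example.com/a/b/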
That's the case for generic user agents; however, servers may decide that a URI with an empty path segment returns the same resource as the same URI without that empty path segment. Essentially they can decide to ignore empty path segments. Both IIS and Apache work this way and thus return the same resource for the following URIs:
http://example.com/foo//bar///baz
http://example.com/foo/bar/baz
The issue for UWS is that it removes empty path segments before resolving dotted path segments. It must follow normal URI procedure before applying its own additional rules for empty path segments. Not doing that means it ends up violating URI equivalency rules: URIs (A.1) and (B.2) are equivalent but UWS will not return the same resource for them.
A bug came up the other day involving markup containing <input type="image" src="http://example.com/...". I knew that "image" was a valid input type, but it wasn't until that moment that I realized I didn't know what it did. Looking it up I found that it displays the specified image and, when the user clicks on the image, the form is submitted with an additional two name/value pairs: the x and y positions of the point at which the user clicked the image.
Take for example the following HTML:
<form action="http://example.com/">
<input type="image" name="foo" src="http://deletethis.net/dave/images/davebefore.jpg">
</form>
If the user clicks on the image, the browser will submit the form with a URI like the following: http://example.com/?foo.x=145&foo.y=124
This seemed like an incredibly specific feature to be built directly into the language when this could instead be done with JavaScript. I looked a bit further and saw that it's been in HTML since at least HTML 2, which of course makes much more sense. JavaScript barely existed at that point and sending off the user's click location in a form may have been the only way to do something interesting with that action.
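For comparison, a rough sketch of how the same thing might be done with script (the element IDs are made up, and the native behavior remains the simpler option):

var form = document.getElementById("myForm");
var img = document.getElementById("myImage");

img.addEventListener("click", function (e) {
    // offsetX/offsetY give the click position within the image.
    var x = document.createElement("input");
    x.type = "hidden";
    x.name = "foo.x";
    x.value = e.offsetX;

    var y = document.createElement("input");
    y.type = "hidden";
    y.name = "foo.y";
    y.value = e.offsetY;

    form.appendChild(x);
    form.appendChild(y);
    form.submit();
});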