There are two main differences in terms of security between a JavaScript UWP app and the Edge browser:
A JavaScript UWP app has one process (technically not true with background tasks and other edge cases, but ignore that for the moment) that runs in the appcontainer defined by the app's appx manifest. This one process is where EdgeHTML is loaded and where it renders HTML, talks to the network, and executes script. Specifically, the UWP main UI thread is the one where your script runs and calls into WinRT.
In the Edge browser there is a browser process running in the appcontainer defined by its appx manifest, but there are also tab processes. These tab processes run in restricted appcontainers that have fewer appx capabilities. The browser process has XAML loaded and coordinates between tabs and handles some (non-WinRT) brokering for the tab processes. The tab processes load EdgeHTML, and that is where they render HTML, talk to the network, and execute script.
There is no way to configure the JavaScript UWP app's process model, but using WebViews you can approximate it. You can create out-of-process WebViews and, to some extent, configure their capabilities, although not to the same extent as the browser. The WebView processes in this case are similar to the browser's tab processes. See the MSWebViewProcess object for configuring out-of-process WebView creation. I also implemented out-of-proc WebView tabs in my JSBrowser fork.
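For example, here is a minimal sketch of creating an out-of-process WebView (the option name passed to the constructor is from memory, so treat it as an assumption and check the MSWebViewProcess docs):

// a minimal sketch: create a WebView in its own restricted process
var webviewProcess = new MSWebViewProcess({
    privateNetworkClientServerCapability: false // limit the process's capabilities
});
webviewProcess.createWebViewAsync().then(function (webview) {
    // the returned element renders its content in the separate process
    document.body.appendChild(webview);
    webview.navigate("https://example.com/");
});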
The ApplicationContentUriRules (ACUR) section of the appx manifest lets an application define what URIs are considered app code. See a previous post for the list of ACUR effects.
Notably, app code is able to access WinRT APIs. Because of this, DOM security restrictions are loosened to match what is possible with WinRT.
Privileged DOM APIs like geolocation, camera, and mic require a user prompt in the browser before use. App code does not show the same browser prompt. There may still be an OS prompt, the same prompt that applies to any UWP app, but that's usually per app, not per origin.
App code also gets to use XMLHttpRequest or fetch to access cross-origin content. Because UWP apps have separate state, cross-origin here might not mean much to an attacker unless your app also has the user log in to Facebook or some other interesting cross-origin target.
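For example, app code can call the WinRT geolocation API directly with no browser prompt; a minimal sketch (the first call may still show the per-app OS location consent):

// a minimal sketch: app code calling WinRT geolocation directly
var locator = new Windows.Devices.Geolocation.Geolocator();
locator.getGeopositionAsync().then(function (position) {
    console.log("lat " + position.coordinate.point.position.latitude +
        ", long " + position.coordinate.point.position.longitude);
});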
Previously I described Application Content URI Rules (ACUR) parsing and ACUR ordering. This post describes what you get from putting a URI in ACUR.
URIs in the ACUR gain the following, which is otherwise unavailable: privileged DOM APIs such as geolocation, camera, and mic work without the browser-style per-origin prompt, and XMLHttpRequest and fetch may access cross-origin content.
URIs in the ACUR that also have full WinRT access additionally gain the ability to call WinRT APIs directly from script.
Application Content URI Rules (ACUR from now on) define the bounds on the web that make up a Microsoft Store application. The previous blog post discussed the syntax of the Rule's Match attribute, and this time I'll write about the interactions between the Rule elements.
A single ApplicationContentUriRules element may have up to 100 Rule child elements. When determining if a navigation URI matches any of the ACUR, the last Rule in the list whose Match wildcard URI matches the navigation URI is used. If that Rule is an include rule then the navigation URI is an application content URI, and if that Rule is an exclude rule then the navigation URI is not an application content URI. For example:
<Rule Type='include' Match='https://example.com/'/>
<Rule Type='exclude' Match='https://example.com/'/>
Given the above two rules in that order, the navigation URI https://example.com/ is not an application content URI because the last matching rule is the exclude rule. Reverse the order of the rules and get the opposite result.
In addition to determining whether a navigation URI is application content, a Rule may also confer varying levels of WinRT access via the optional WindowsRuntimeAccess attribute, which may be set to 'none', 'allowForWeb', or 'all'. If a navigation URI matches multiple include rules, only the last matching rule is applied, including its WindowsRuntimeAccess attribute. For example:
<Rule Type='include' Match='https://example.com/' WindowsRuntimeAccess='none'/>
<Rule Type='include' Match='https://example.com/' WindowsRuntimeAccess='all'/>
Given the above two rules in that order, the navigation URI https://example.com/ will have access to all WinRT APIs because the last matching rule wins. Reverse the rule order and the navigation URI https://example.com/ will have no access to WinRT. There is no summation or combining of multiple matching rules - only the last matching rule wins.
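To make the evaluation concrete, here is a sketch of the last-matching-rule-wins logic (my own simplified model, not the platform's implementation; Match wildcard URIs are reduced to a trailing-'*' prefix check for illustration):

// a simplified model of ACUR evaluation: the last matching rule wins
function matches(matchUri, navigationUri) {
    return matchUri.endsWith("*") ?
        navigationUri.startsWith(matchUri.slice(0, -1)) :
        navigationUri === matchUri;
}

function evaluateAcur(rules, navigationUri) {
    var result = { isAppContent: false, winrtAccess: "none" };
    rules.forEach(function (rule) {
        if (matches(rule.match, navigationUri)) {
            // a later matching rule completely replaces an earlier one
            result.isAppContent = rule.type === "include";
            result.winrtAccess = rule.windowsRuntimeAccess || "none";
        }
    });
    return result;
}

// the first example above: include then exclude for the same URI
evaluateAcur([
    { type: "include", match: "https://example.com/" },
    { type: "exclude", match: "https://example.com/" }
], "https://example.com/"); // { isAppContent: false, winrtAccess: "none" }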
Since I last posted about using Let's Encrypt with NearlyFreeSpeech, NFS has changed their process for setting TLS info. Instead of putting the various files in /home/protected/ssl and submitting an assistance request, there is now a command and a webpage for submitting the certificate info.
The webpage is https://members.nearlyfreespeech.net/{username}/sites/{sitename}/add_tls and has a textbox into which you paste all the cert info in PEM form. The domain key, the domain certificate, and the Let's Encrypt intermediate cert must be pasted into the textbox and submitted.
Alternatively, that same info may be provided as standard input to nfsn -i set-tls
To renew my certificate with the updated NFS process I followed the commands from Andrei Damian-Fekete's script which depends on acme_tiny.py:
python acme_tiny.py --account-key account.key --csr domain.csr --acme-dir /home/public/.well-known/acme-challenge/ > signed.crt
wget -O - https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem > intermediate.pem
cat domain.key signed.crt intermediate.pem > chained.pem
nfsn -i set-tls < chained.pem
Because my certificate had already expired I needed to comment out the section in acme_tiny.py that validates the challenge file. The filenames in the above map to the following: account.key is the Let's Encrypt account key, domain.csr is the certificate signing request for the domain, domain.key is the domain's private key, signed.crt is the signed domain certificate, intermediate.pem is the Let's Encrypt intermediate cert, and chained.pem is the domain key, domain certificate, and intermediate cert combined, which is what nfsn -i set-tls takes as input.
I've put my WPAD Fiddler extension source and the installer on GitHub.
Six years ago I made a WPAD DHCP server Fiddler extension (described previously and previously). The extension runs a WPAD DHCP server that tells any clients on the network to use the running Fiddler instance as their proxy. I've finally gotten around to putting the source on GitHub. I haven't touched it in five or so years, so this is either for posterity or education or something.
2016-Nov-5: Updated post on using Let's Encrypt with NearlyFreeSpeech.net
I use NearlyFreeSpeech.net to host my personal website, and I've just finished setting up TLS via Let's Encrypt. The process was slightly more complicated than what you'd like from Let's Encrypt, so for those interested in doing the same on NearlyFreeSpeech.net, I've taken the following notes.
The standard Let's Encrypt client requires su/sudo access which is not available on NearlyFreeSpeech.net's servers. Additionally NFSN's webserver doesn't have any Let's Encrypt plugins installed. So I used the Let's Encrypt Without Sudo client. I followed the instructions listed on the tool's page with the addition of providing the "--file-based" parameter to sign_csr.py.
One thing the script doesn't produce is the chain file. But this topic "Let's Encrypt - Quick HOWTO for NSFN" covers how to obtain that:
curl -o domain.chn https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem
Now that you have all the required files, on your NFSN server make the directory /home/protected/ssl and copy your files into it. This is described in the NFSN topic "provide certificates to NFSN". After copying the files and setting their permissions as described in the previous link, you submit an assistance request. For me it was only 15 minutes later that everything was set up.
After enabling HTTPS I wanted all HTTP requests to redirect to HTTPS. The normal Apache documentation on how to do this doesn't work on NFSN servers. Instead, the NFSN FAQ describes it in "redirect http to https and HSTS". You use X-Forwarded-Proto instead of the HTTPS variable because of how NFSN's virtual hosting is set up.
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
Turning on HSTS is as simple as adding the HSTS HTTP header. However, the description in the above link didn't work because my site's NFSN realm isn't on the latest Apache yet. Instead I added the following to my .htaccess. After I'm comfortable with everything working well for a few days I'll start turning up the max-age to the recommended minimum value of 180 days.
Header set Strict-Transport-Security "max-age=3600;"
Finally, to turn on CSP I started up Fiddler with my CSP Fiddler extension. It allows me to determine the most restrictive CSP rules I could apply and still have all resources on my page load. From there I found and removed inline script and some content loaded via http and otherwise continued tweaking my site and CSP rules.
After I was done I checked out my site on SSL Labs' SSL Test to see what I might have done wrong or what needed improving. The first time I went through these steps I hadn't included the chain file, which the SSL Test told me about. I added that file to the files I had already generated from the Let's Encrypt client, made another NFSN assistance request, and 15 minutes later the SSL Test had upgraded me from 'B' to 'A'.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION]
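; dword 0x22b8 is decimal 8888, which forces IE8 Standards Mode for the named executable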
"Fitbit Connect.exe"=dword:000022b8
For those familiar with the Windows registry the above should be enough. For those not familiar, copy and paste the above into Notepad, save it as a file named "fitbit.reg", and then double click the reg file and say 'Yes' to the prompt. Hopefully in the final release of Windows 8.1 this won't be an issue.
Level 5 of the Stripe CTF revolved around a design issue in an OpenID like protocol.
def authenticated?(body)
  body =~ /[^\w]AUTHENTICATED[^\w]*$/
end

...

if authenticated?(body)
  session[:auth_user] = username
  session[:auth_host] = host
  return "Remote server responded with: #{body}." \
         " Authenticated as #{username}@#{host}!"
This level is an implementation of a federated identity protocol. You give it an endpoint URI, a username, and a password; it POSTs the username and password to the endpoint URI, and if the response is 'AUTHENTICATED' then access is allowed. It is easy to be authenticated on a server you control, but this level requires you to authenticate from the server running the level. This level only talks to Stripe CTF servers, so the first step is to upload a document containing the text 'AUTHENTICATED' to the level 2 server, and we can now authenticate as level 2. Notice that the level 5 server will dump out the content of the endpoint URI's response and that the regexp it uses to detect the text 'AUTHENTICATED' can match on that dump. Accordingly, I uploaded an authenticated file to
https://level02-2.stripe-ctf.com/user-ajvivlehdt/uploads/authenticated
Using that as my endpoint URI means authenticating as level 2. I can then choose the following endpoint URI to authenticate as level 5.
https://level05-1.stripe-ctf.com/user-qtoyekwrod/?pingback=https%3A%2F%2Flevel02-2.stripe-ctf.com%2Fuser-ajvivlehdt%2Fuploads%2Fauthenticated&username=a&password=a
Navigating to that URI results in the level 5 server telling me I'm authenticated as level 2 and listing the text of the level 2 file 'AUTHENTICATED'. Feeding that URI back into the level 5 server as my endpoint URI means level 5 sees 'AUTHENTICATED' coming back from a level 5 URI.
I didn't see any particular code review red flags; really the issue here is that the regular expression testing for 'AUTHENTICATED' is too permissive and the protocol itself doesn't do enough. The protocol requires only a set piece of common literal text to be returned, which makes it easy for a server to accidentally fall into authenticating. Requiring the endpoint URI to return variable text based on the input would make it much harder for a server to accidentally authenticate.
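For illustration, here is a sketch of such a challenge-response variant in JavaScript (my own example, not the CTF's Ruby code; the protocol details are assumptions): the verifier sends a fresh nonce that the endpoint must echo back, so a static document can never pass the check.

// a sketch of a challenge-response check; names and protocol details
// here are illustrative assumptions, not the CTF's implementation
const crypto = require('crypto');

async function authenticated(endpointUri, username, password) {
    // a fresh nonce per attempt; a static 'AUTHENTICATED' file can't echo it
    const nonce = crypto.randomBytes(16).toString('hex');
    const response = await fetch(endpointUri, {
        method: 'POST',
        body: new URLSearchParams({ username, password, nonce })
    });
    const body = await response.text();
    // the endpoint must prove it processed this specific request
    return body.trim() === 'AUTHENTICATED ' + nonce;
}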
Use of my old Hotmail account has really snuck up on me as I end up caring more and more about all of the services with which it is associated. The last straw is Windows 8 login, but previous straws include Xbox, Zune, SkyDrive, and my Windows 7 Phone. I like the features and sync'ing associated with the Windows Live ID, but I don't like my old, spam filled, hotmail email address on the Live ID account.
A coworker told me about creating a Live ID from a custom domain, which sounded like just the ticket for me. Following the instructions above I was able to create a new deletethis.net Live ID but the next step of actually using this new Live ID was much more difficult. My first hope was there would be some way to link my new and old Live IDs so as to make them interchangeable. As it turns out there is a way to link Live IDs but all that does is make it easy to switch between accounts on Live Mail, SkyDrive and some other webpages.
Instead one must change over each service or start over depending on the service:
I'm done playing Fez. The style is atmospheric retro nostalgia, and on the surface the gameplay is a standard 2D platformer with one interesting Flatland-style game mechanic, but dig deeper to find Myst-style puzzles. Despite the following I thoroughly enjoyed the game and would recommend it to anyone intrigued by the previous. Five eighths of the way through the game I ran into one of Fez's infamous save game breaking issues, but I enjoyed the game enough that I started over before any patches were released and had no further issues.
While playing the game I created some tools to help keep track of my Fez notes (spoilers) including a Pixelated Image Creator that makes it easy to generate data URIs for large, black and white pixelated images, and (spoilers) a Fez Transliteration Tool to help me translate the in-game writing system.
The goal of this experiment was to combine the flipping tables emoticons with the Threw It On The Ground video using shiny new HTML5-ish features, and the end result is the table flipper flipping the Threw It On The Ground video.
The table flipper emoticon is CSS before content that changes on hover. Additionally, on hover a CSS transform is applied to flip the video upside down several times and move it to the right, and a CSS transition animates the flipping. The only issue I ran into is that (at least on Windows) Flash doesn't like having CSS transform rotations applied to it, so to get the most out of the flip experiment you must opt in to HTML5 video on YouTube. And of course you must use a browser that supports the various things I just mentioned, like the latest Chrome (or the not-yet-released IE10).
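As a rough illustration of the effect (the experiment itself used pure CSS :hover rules; this sketch applies equivalent inline styles from script, and the element id is made up):

// a sketch of the flip, assuming an element with id "video"
var video = document.getElementById("video");
video.addEventListener("mouseenter", function () {
    video.style.transition = "transform 1s";
    // two and a half turns ends upside down, moved to the right
    video.style.transform = "rotate(900deg) translateX(300px)";
});
video.addEventListener("mouseleave", function () {
    video.style.transform = "none";
});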
I've been working on the Glitch Helperator. It is a collection of tools and things I've put together for Glitch. It has a few features that I haven't seen elsewhere including:
Elaborating on a previous brief post on the topic of Web Worker initialization race conditions, there are two important points to avoid a race condition when setting up a Worker: (1) the parent's message handler must be set before the worker can post a message back to the parent, and (2) the worker must set its onmessage handler within its first synchronous block of execution.
For example, the following has no race because the spec guarantees that messages posted to a worker during its first synchronous block of execution will be queued and handled after that block, so the worker gets a chance to set up its onmessage handler. No race:
'parent.js':
var worker = new Worker('worker.js');
worker.postMessage("initialize");
'worker.js':
onmessage = function(e) {
// ...
}
The following has a race because there's no guarantee that the parent's onmessage handler is set up before the worker executes postMessage. Race (violates 1):
'parent.js':
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
// ...
};
'worker.js':
postMessage("initialize");
The following has a race because the worker has no onmessage handler set in its first synchronous execution block and so the parent's postMessage may be sent before the worker sets its onmessage handler. Race (violates 2):
'parent.js':
var worker = new Worker('worker.js');
worker.postMessage("initialize");
'worker.js':
setTimeout(
function() {
onmessage = function(e) {
// ...
}
},
0);
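A pattern that follows both points (my own sketch, not from the original post) is to have the parent post first with its handler already set, and have the worker reply only in response:

'parent.js':
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
    if (e.data === "ready") {
        // safe to start the real work now
        worker.postMessage({ command: "doWork" });
    }
    // ...
};
worker.postMessage("initialize");

'worker.js':
// handler set in the first synchronous block; the worker never posts unprompted
onmessage = function(e) {
    if (e.data === "initialize") {
        postMessage("ready");
    }
    // ...
}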
Sarah and I have been enjoying Glitch for a while now. Reviews are usually positive although occasionally biting (but mostly accurate).
I enjoy Glitch as a game of exploration: exploring the game's lands with hidden and secret rooms, and exploring the game's skills and mechanics. The issue with my enjoyment coming from exploration is that after I've explored all streets and learned all skills I've got nothing left to do. But I've found that even after that I can have fun writing client-side JavaScript against Glitch's web APIs, making tools for use in Glitch (I work on the Glitch Helperator). And on a semi-regular basis they add new features, reviving my interest in the game itself.
Last time I wrote about how I switched from Delicious to Google Reader's shared links feature only to find out that week that Google was removing the Google Reader shared links feature in favor of Google Plus social features (I'll save my Google Plus rant for another day).
Forced to find something new again, I'm now very pleased with Tumblr. Google Reader has Tumblr in its preset list of Send To sites which makes it relatively easy to add articles. And Tumblr's UX for adding things lets me easily pick a photo or video to display from the article - something which I had put together with a less convenient UX on my bespoke blogging system. For adding things outside of Google Reader I made a Tumblr accelerator to hookup to the Tumblr Add UX.
Of course Tumblr has an RSS feed, which I hooked up to my blog. The only issue I had there is that when you add a link (and not a video or photo) to Tumblr, the RSS feed entry title for that link is repeated in the entry description as a link, followed by a colon and then the actual description entered into Tumblr. I want my title separate so I can apply my own markup, so I did a bit of parsing of the description to remove the repeated title.
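The cleanup amounts to something like the following sketch (the exact markup Tumblr emits is from memory, so treat the pattern as an assumption):

// a sketch: strip the repeated title link ("<a ...>Title</a>: ") from the
// front of a link post's RSS description
function stripRepeatedTitle(title, description) {
    var match = description.match(/^<a[^>]*>([^<]*)<\/a>:\s*/);
    return (match && match[1] === title) ?
        description.slice(match[0].length) :
        description;
}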
A bug came up the other day involving markup containing <input type="image" src="http://example.com/.... I knew that "image" was a valid input type, but it wasn't until that moment that I realized I didn't know what it did. Looking it up, I found that it displays the specified image and, when the user clicks on the image, the form is submitted with an additional two name-value pairs: the x and y positions of the point at which the user clicked the image.
Take for example the following HTML:
<form action="http://example.com/">
<input type="image" name="foo" src="http://deletethis.net/dave/images/davebefore.jpg">
</form>
If the user clicks on the image, the browser will submit the form with a URI like the following: http://example.com/?foo.x=145&foo.y=124
This seemed like an incredibly specific feature to be built directly into the language when it could instead be done with JavaScript. I looked a bit further and saw that it's been in HTML since at least HTML 2.0, which of course makes much more sense: JavaScript barely existed at that point, and sending off the user's click location in a form may have been the only way to do something interesting with that action.
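For comparison, here is a sketch of replicating the behavior with script (the ids and names are made up for the example):

// a sketch reproducing input type="image" with JavaScript
var form = document.getElementById("myForm");
var image = document.getElementById("myImage");
image.addEventListener("click", function (e) {
    ["x", "y"].forEach(function (axis) {
        var input = document.createElement("input");
        input.type = "hidden";
        input.name = "foo." + axis;
        input.value = (axis === "x") ? e.offsetX : e.offsetY;
        form.appendChild(input);
    });
    form.submit();
});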