There are two main differences in terms of security between a JavaScript UWP app and the Edge browser:
A JavaScript UWP app has one process (technically not true with background tasks and other edge cases but ignoring that for the moment) that runs in the corresponding appcontainer defined by the app's appx manifest. This one process is where edgehtml is loaded and is rendering HTML, talking to the network, and executing script. Specifically, the UWP main UI thread is the one where your script is running and calling into WinRT.
In the Edge browser there is a browser process running in the same appcontainer defined by its appx manifest, but there are also tab processes. These tab processes are running in restricted app containers that have fewer appx capabilities. The browser process has XAML loaded and coordinates between tabs and handles some (non-WinRT) brokering from the tab processes. The tab processes load edgehtml and that is where they render HTML, talk to the network and execute script.
There is no way to configure the JavaScript UWP app's process model, but using WebViews you can approximate the browser's. You can create out of process WebViews and to some extent configure their capabilities, although not to the same extent as the browser. The WebView processes in this case are similar to the browser's tab processes. See the MSWebViewProcess object for configuring out of process WebView creation. I also implemented out of proc WebView tabs in my JSBrowser fork.
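For example, a minimal sketch of creating an out of process WebView (I'm taking the default MSWebViewProcess options here; check the MSWebViewProcess documentation for what it lets you configure, and example.com is a placeholder URI):

const webviewProcess = new MSWebViewProcess();
webviewProcess.createWebViewAsync().then(webview => {
    // This WebView runs in webviewProcess rather than in the app's main process.
    document.body.appendChild(webview);
    webview.navigate("https://example.com/");
});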
The ApplicationContentUriRules (ACUR) section of the appx manifest lets an application define what URIs are considered app code. See a previous post for the list of ACUR effects.
Notably app code is able to access WinRT APIs. Because of this, DOM security restrictions are loosened to match what is possible with WinRT.
Privileged DOM APIs like geolocation, camera, and mic require a user prompt in the browser before use. App code does not show the same browser prompt. There may still be an OS prompt – the same prompt that applies to any UWP app – but that's usually per app, not per origin.
App code also gets to use XMLHttpRequest or fetch to access cross origin content. Because UWP apps have separate state, cross origin here might not mean much to an attacker unless your app also has the user log in to Facebook or some other interesting cross origin target.
In Win8.1 JavaScript UWP apps we supported multiple windows using MSApp DOM APIs. In Win10 we instead use the standard window.open and window APIs plus a new MSApp.getViewId API; the previous MSApp APIs are gone:
 | Win10 | Win8.1 |
---|---|---|
Create new window | window.open | MSApp.createNewView |
New window object | window | MSAppView |
viewId | MSApp.getViewId(window) | MSAppView.viewId |
We use window.open and window for creating new windows, but then to interact with WinRT APIs we add the MSApp.getViewId API. It takes a window object as a parameter and returns a viewId number that can be used with the various Windows.UI.ViewManagement.ApplicationViewSwitcher APIs.
Views in WinRT normally start hidden and the end developer uses something like TryShowAsStandaloneAsync to display the view once it is fully prepared. In the web world, window.open shows a window immediately and the end user can watch as content is loaded and rendered. To have your new windows act like views in WinRT and not display immediately we have added a window.open option. For example:
let newWindow = window.open("https://example.com", null, "msHideView=yes");
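Continuing that example, here's a minimal sketch of showing the hidden window once its content has loaded (this assumes the new window is same origin so the opener can listen for its load event):

const viewId = MSApp.getViewId(newWindow);
newWindow.addEventListener("load", () => {
    // Display the fully prepared view alongside the current one.
    Windows.UI.ViewManagement.ApplicationViewSwitcher.tryShowAsStandaloneAsync(viewId);
});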
The primary window that is initially opened by the OS acts differently than the secondary windows that it opens:
 | Primary | Secondary |
---|---|---|
window.open | Allowed | Disallowed |
window.close | Close app | Close window |
Navigation restrictions | ACUR only | No restrictions |
The restriction that secondary windows cannot open additional secondary windows could change in the future depending on feedback.
Lastly, there is a very difficult technical issue preventing us from properly supporting synchronous, same-origin, cross-window, script calls. That is, when you open a window that's same origin, script in one window is allowed to directly call functions in the other window, and some of these calls will fail. postMessage calls work just fine and are the recommended way to do things if that's possible for you. Otherwise we continue to work on improving this.
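For instance, rather than having the opener call functions on the child window directly, a sketch of the postMessage approach (pageTwo.html and the message shape are hypothetical):

// In the opener:
const newWindow = window.open("pageTwo.html", null, "msHideView=yes");
newWindow.addEventListener("load", () => {
    newWindow.postMessage({ command: "initialize" }, location.origin);
});

// In pageTwo.html:
window.addEventListener("message", event => {
    if (event.origin === location.origin && event.data.command === "initialize") {
        // React to the opener's message instead of exposing functions
        // for the opener to call synchronously.
    }
});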
Application Content URI Rules (ACUR from now on) define the bounds of the web content that makes up a Microsoft Store application. Package content via the ms-appx URI scheme is automatically considered part of the app. But if you have content on the web via http or https you can use ACUR to declare to Windows that those URIs are also part of your application. When your app navigates to URIs on the web those URIs will be matched against the ACUR to determine if they are part of your app or not. The documentation for how matching is done on the wildcard URIs in the ACUR Rule elements is not very helpful on MSDN so here are some notes.
You can have up to 100 Rule XML elements per ApplicationContentUriRules element. Each has a Match attribute that can be up to 2084 characters long. The content of the Match attribute is parsed with CreateUri and when matching against URIs on the web additional wildcard processing is performed. I’ll call the URI from the ACUR Rule the rule URI and the URI we compare it to found during app navigation the navigation URI.
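As a reminder of what these rules look like, here's a sample ApplicationContentUriRules section (the uap namespace prefix and attribute names are from my memory of the Win10 manifest schema, and example.com is a placeholder):

<uap:ApplicationContentUriRules>
    <uap:Rule Type="include" Match="https://example.com/" WindowsRuntimeAccess="none" />
    <uap:Rule Type="include" Match="https://*.example.com/app/" WindowsRuntimeAccess="all" />
</uap:ApplicationContentUriRules>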
The rule URI is matched to a navigation URI by URI component: scheme, username, password, host, port, path, query, and fragment. If a component does not exist on the rule URI then it matches any value of that component in the navigation URI. For example, a rule URI with no fragment will match a navigation URI with no fragment, with an empty string fragment, or a fragment with any value in it.
Each component except the port may have up to 8 asterisks. Two asterisks in a row counts as an escape and will match 1 literal asterisk. For scheme, username, password, query and fragment the asterisk matches whatever it can within the component.
For the host, if the host consists of exactly one single asterisk then it matches anything. Otherwise an asterisk in a host only matches within its domain name label. For example, http://*.example.com will match http://a.example.com/ but not http://b.a.example.com/ or http://example.com/. And http://*/ will match http://example.com, http://a.example.com/, and http://b.a.example.com/. However the Store places restrictions on submitting apps that use the http://* rule or rules with an asterisk in the second effective domain name label. For example, http://*.com is also restricted for Store submission.
For the path, an asterisk matches within the path segment. For example, http://example.com/a/*/c will match http://example.com/a/b/c and http://example.com/a//c but not http://example.com/a/b/b/c or http://example.com/a/c
Additionally for the path, if the path ends with a slash then it matches any path that starts with that same path. For example, http://example.com/a/ will match http://example.com/a/b and http://example.com/a/b/c/d/e/, but not http://example.com/b/.
If the path doesn’t end with a slash then there is no suffix matching performed. For example, http://example.com/a will match only http://example.com/a and no URIs with a different path.
As a part of parsing the rule URI and the navigation URI, CreateUri will perform URI normalization and so the hostname and scheme will be made lower case (casing matters in all other parts of the URI and case sensitive comparisons will be performed), IDN normalization will be performed, ‘.’ and ‘..’ path segments will be resolved and other normalizations as described in the CreateUri documentation.
TL;DR: Web content in a JavaScript Windows Store app or WebView in a Windows Store app that has full access to WinRT also gets to use XHR unrestricted by cross origin checks.
By default web content in a WebView control in a Windows Store App has the same sort of limitations as that web content in a web browser. However, if you give the URI of that web content full access to WinRT, then the web content also gains the ability to use XMLHttpRequest unrestricted by cross origin checks. This means no CORS checks and no OPTIONS requests. This only works if the web content's URI matches a Rule in the ApplicationContentUriRules of your app's manifest and that Rule declares WindowsRuntimeAccess="all". If it declares WinRT access as 'None' or 'AllowForWebOnly' then XHR acts as it normally does.
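For example, once a page matches such a rule, a plain XHR to another origin that sends no CORS headers just works (example.org stands in for any cross origin endpoint):

const xhr = new XMLHttpRequest();
xhr.open("GET", "https://example.org/data.json");
xhr.onload = () => console.log(xhr.responseText); // no CORS check, no OPTIONS preflight
xhr.send();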
In terms of security, if you've already given a page access to all of WinRT which includes the HttpRequest class and other networking classes that don't perform cross origin checks, then allowing XHR to skip CORS doesn't make things worse.
2016-Nov-5: Updated post on using Let's Encrypt with NearlyFreeSpeech.net
I use NearlyFreeSpeech.net to host my personal website and I've just finished setting up TLS via Let's Encrypt. The process was slightly more complicated than what you'd like from Let's Encrypt. So for those interested in doing the same on NearlyFreeSpeech.net, I've taken the following notes.
The standard Let's Encrypt client requires su/sudo access which is not available on NearlyFreeSpeech.net's servers. Additionally NFSN's webserver doesn't have any Let's Encrypt plugins installed. So I used the Let's Encrypt Without Sudo client. I followed the instructions listed on the tool's page with the addition of providing the "--file-based" parameter to sign_csr.py.
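For reference, the signing step I ran looked roughly like the following (flag names are from my memory of the letsencrypt-nosudo README, so double check them there; user.pub and domain.csr come from the tool's earlier steps):

python sign_csr.py --public-key user.pub --file-based domain.csr > domain.crt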
One thing the script doesn't produce is the chain file. But the topic "Let's Encrypt - Quick HOWTO for NFSN" covers how to obtain that:
curl -o domain.chn https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem
Now that you have all the required files, on your NFSN server make the directory /home/protected/ssl and copy your files into it. This is described in the NFSN topic "provide certificates to NFSN". After copying the files and setting their permissions as described in the previous link, you submit an assistance request. For me it was only 15 minutes later that everything was set up.
After enabling HTTPS I wanted to have all HTTP requests redirect to HTTPS. The normal Apache documentation on how to do this doesn't work on NFSN servers. Instead the NFSN FAQ describes it in "redirect http to https and HSTS". You use the X-Forwarded-Proto variable instead of the HTTPS variable because of how NFSN's virtual hosting is set up.
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
Turning on HSTS is as simple as adding the HSTS HTTP header. However, the description in the above link didn't work because my site's NFSN realm isn't on the latest Apache yet. Instead I added the following to my .htaccess. After I'm comfortable with everything working well for a few days I'll start turning up the max-age to the recommended minimum value of 180 days.
Header set Strict-Transport-Security "max-age=3600;"
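Once I'm comfortable, the same header with the recommended 180 day minimum becomes the following (180 days is 15552000 seconds):

Header set Strict-Transport-Security "max-age=15552000;"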
Finally, to turn on CSP I started up Fiddler with my CSP Fiddler extension. It allows me to determine the most restrictive CSP rules I could apply and still have all resources on my page load. From there I found and removed inline script and some content loaded via http and otherwise continued tweaking my site and CSP rules.
After I was done I checked out my site on SSL Labs' SSL Test to see what I might have done wrong or needed improving. The first time I went through these steps I hadn't included the chain file, which the SSL Test told me about. I was able to add that file to the files I had already generated from the Let's Encrypt client, submit another NFSN assistance request, and 15 minutes later the SSL Test had upgraded me from 'B' to 'A'.
Level 7 of the Stripe CTF involved running a length extension attack on the level 7 server's custom crypto code.
@app.route('/logs/<id>')
@require_authentication
def logs(id):
    rows = get_logs(id)
    return render_template('logs.html', logs=rows)

...

def verify_signature(user_id, sig, raw_params):
    # get secret token for user_id
    try:
        row = g.db.select_one('users', {'id': user_id})
    except db.NotFound:
        raise BadSignature('no such user_id')
    secret = str(row['secret'])
    h = hashlib.sha1()
    h.update(secret + raw_params)
    print 'computed signature', h.hexdigest(), 'for body', repr(raw_params)
    if h.hexdigest() != sig:
        raise BadSignature('signature does not match')
    return True
The level 7 web app is a web API in which clients submit signed RESTful requests and some actions are restricted to particular clients. The goal is to view the response to one of the restricted actions. The first issue is that there is a logs path to display the previous requests for a user, and although the logs path requires the client to be authenticated, it doesn't restrict the logs you view to those of the user you're authenticated as. So you can manually change the number in '/logs/[#]' to '/logs/1' to view the logs for user ID 1, who can make restricted requests. The level 7 web app can be exploited with replay attacks, but you won't find in the logs any of the restricted requests we need to run for our goal. And we can't just modify the requests because they are signed.
However, they are signed using their own custom signing code, which can be exploited by a length extension attack. All Merkle–Damgård hash algorithms (which include MD5, SHA-1, and SHA-2) have the property that if you hash data of the form (secret + data), where data is known and the length but not the content of secret is known, you can construct the hash for a new message (secret + data + padding + newdata), where newdata is whatever you like and padding is the hash algorithm's standard padding for (secret + data), determined by the combined length of secret and data. You can find a sha-padding.py script on the VNSecurity blog that will tell you the new hash and padding per the above. With that I produced my new restricted request based on another user's previous request. The original request was the following.
count=10&lat=37.351&user_id=1&long=%2D119.827&waffle=eggo|sig:8dbd9dfa60ef3964b1ee0785a68760af8658048c
The new request with padding and my new content was the following.
count=10&lat=37.351&user_id=1&long=%2D119.827&waffle=eggo%80%02%28&waffle=liege|sig:8dbd9dfa60ef3964b1ee0785a68760af8658048c
My new data in the new request is able to overwrite the waffle parameter because their parser fills in a map without checking if the parameter existed previously.
Code review red flags included the custom crypto-looking code. However, I am not a crypto expert and it was difficult for me to find the solution to this level.
451 Unavailable for Legal Reasons: The 451 status code is optional; clients cannot rely upon its use. It is imaginable that certain legal authorities may wish to avoid transparency, and not only forbid access to certain resources, but also disclosure that the restriction exists.
That was fast.
A House subcommittee has passed the Global Online Freedom Act (GOFA), which would require disclosure from companies about their human rights practices and limit the export of technologies that “serve the primary purpose of” facilitating government surveillance or censorship to countries designated as “Internet-restricting.”
Cool and (relatively) new methods on the JavaScript Array object are here in the most recent versions of your favorite browser! More about them on ECMAScript5, MSDN, the IE blog, or Mozilla's documentation. Here's the list that's got me excited: indexOf, lastIndexOf, every, some, forEach, map, filter, reduce, and reduceRight.
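A quick taste of a few of them chained together (a made-up example; any array will do):

const scores = [98, 74, 42, 87];
const total = scores
    .filter(score => score >= 60)             // drop failing scores
    .map(score => score / 100)                // normalize to 0..1
    .reduce((sum, score) => sum + score, 0);  // add them up
console.log(total); // 2.59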
Raymond Chen has some thought experiments useful for discovering various kinds of stupidity in software design:
Tim Berners-Lee's principles of Web design includes my favorite: Test of Independent Invention. This has a thought experiment containing the construction of the MMM (Multi-Media Mesh) with MRIs (Media Resource Identifiers) and MMTP (Muli-Media Transport Protocol).
The Internet design principles (RFC 1958) include the Robustness Principle: be strict when sending and tolerant when receiving. A good one, but applied too liberally it can lead to interop issues. For instance, consider web browsers. Imagine one browser becomes so popular that web devs create web pages and just test their pages in this popular browser. They don't ensure their pages conform to standards and accidentally end up depending on the manner in which this popular browser tolerantly accepts non-standard input. This non-standard behavior ends up as de facto standard, and future updates to the standard essentially have the decision made for them.
I've found while debugging networking in IE it's often useful to quickly tell if a string is encoded in UTF-8. You can check for the Byte Order Mark (EF BB BF in UTF-8) but I rarely see the BOM on UTF-8 strings. Instead I apply a quick and dirty UTF-8 test that takes advantage of the well-formed UTF-8 restrictions.
Unlike other multibyte character encoding forms (see Windows supported character sets or IANA's list of character sets), for example Big5, where sticking together any two bytes is more likely than not to give a valid byte sequence, UTF-8 is more restrictive. And unlike other multibyte character encodings, UTF-8 bytes may be taken out of context and one can still know that it's a single byte character, the starting byte of a three byte sequence, etc.
The full rules for well-formed UTF-8 (the table below) are a little too complicated for me to commit to memory. Instead I've got my own simpler (this is the quick part) set of rules that will be mostly correct (this is the dirty part). For as many bytes in the string as you care to examine, check the most significant hex digit of the byte: 0 through 7 is a single byte character, 8 through B is a continuation byte of a multibyte sequence, C or D starts a two byte sequence, E starts a three byte sequence, and F starts a four byte sequence.
Code Points | 1st Byte | 2nd Byte | 3rd Byte | 4th Byte |
---|---|---|---|---|
U+0000..U+007F | 00..7F | | | |
U+0080..U+07FF | C2..DF | 80..BF | | |
U+0800..U+0FFF | E0 | A0..BF | 80..BF | |
U+1000..U+CFFF | E1..EC | 80..BF | 80..BF | |
U+D000..U+D7FF | ED | 80..9F | 80..BF | |
U+E000..U+FFFF | EE..EF | 80..BF | 80..BF | |
U+10000..U+3FFFF | F0 | 90..BF | 80..BF | 80..BF |
U+40000..U+FFFFF | F1..F3 | 80..BF | 80..BF | 80..BF |
U+100000..U+10FFFF | F4 | 80..8F | 80..BF | 80..BF |
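Here's a minimal sketch of that quick and dirty test in JavaScript. It classifies bytes only by their most significant hex digit, so it accepts some ill-formed sequences (like overlong encodings) that the full table forbids:

// bytes is any iterable of byte values, e.g. a Uint8Array or Node.js Buffer.
function looksLikeUtf8(bytes) {
    let expectedContinuations = 0;
    for (const b of bytes) {
        const topDigit = b >> 4;
        if (topDigit <= 0x7) {              // 0..7: single byte character
            if (expectedContinuations > 0) return false;
        } else if (topDigit <= 0xB) {       // 8..B: continuation byte
            if (expectedContinuations-- === 0) return false;
        } else {                            // C..F: starts a multibyte sequence
            if (expectedContinuations > 0) return false;
            expectedContinuations = topDigit <= 0xD ? 1 : topDigit === 0xE ? 2 : 3;
        }
    }
    return expectedContinuations === 0;
}

console.log(looksLikeUtf8(new TextEncoder().encode("héllo"))); // true
console.log(looksLikeUtf8(Uint8Array.from([0xC3, 0x28])));     // false: truncated sequence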