Folks familiar with JavaScript UWP apps in Win10 have often been confused by what PWAs in Win10 actually are. TLDR: PWAs in Win10 are simply JavaScript UWP apps. The main differences between these JS UWP apps and our non-PWA JS UWP apps are our target end developer audience and how we get Win10 PWAs into the Microsoft Store. See the Win10 blog post on PWAs on Win10 for related info.
On the web, a subset of web sites are web apps. These are web sites with app-like behavior - that is, a user might call them an app, like Outlook, Maps, or Gmail. They may also have a W3C app manifest.
A subset of web apps are progressive web apps. Progressive web apps are web apps that have a W3C app manifest and a service worker. Various OSes are beginning to support PWAs as first class apps on their platform. This is true for Win10 as well, where PWAs run as WWAs.
In Win10 a WWA (Windows Web App) is an unofficial term for a JavaScript UWP app. These are UWP apps so they have an AppxManifest.xml, they are packaged in an Appx package, they run in an App Container, they use WinRT APIs, and are installed via the Microsoft Store. Specific to WWAs though, is that the AppxManifest.xml specifies a StartPage attribute identifying some HTML content to be used as the app. When the app is activated the OS will create a WWAHost.exe process that hosts the HTML content using the EdgeHtml rendering engine.
Within that we have a notion of a packaged web app and an HWA (hosted web app). There's no real technical distinction between these two for the end developer. The only real difference is whether the StartPage identifies remote HTML content on the web (HWA) or packaged HTML content from the app's appx package (packaged web app). An end developer may also create an app that is a mix of these, with some HTML content in the package and some HTML content from the web. These terms are more like ends of a continuum identifying two different developer scenarios, since the underlying technical aspects are pretty much identical.
Win10 PWAs are simply HWAs whose StartPage is the URI of a PWA on the web. These are still JavaScript UWP apps with all the same behavior and abilities as other UWP apps. We have two ways of getting PWAs into the Microsoft Store as Win10 PWAs. The first is PWA Builder, a tool that helps PWA end developers create and submit a Win10 PWA appx package to the Microsoft Store. The second is a crawler that runs over the web looking for PWAs, which we convert and submit to the Store using an automated PWA Builder-like tool (see Welcoming PWAs to Win10 for more info). In both cases the conversion involves examining the PWA's W3C app manifest and producing a corresponding AppxManifest.xml. Not all features supported by AppxManifest.xml are available in the W3C app manifest, but the result of PWA Builder can be a working starting point for end developers, who can then update the AppxManifest.xml as they like to support features like share targets or others not available in W3C app manifests.
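To make the mapping concrete, here's a rough, hypothetical sketch of the kind of Application element such a conversion might produce, assuming a W3C app manifest whose start_url is https://example.com/index.html (heavily simplified; a real AppxManifest.xml also needs package identity, visual elements, capabilities, and so on, and may require namespace prefixes such as uap:):

<Applications>
  <Application Id="App" StartPage="https://example.com/index.html">
    <!-- VisualElements, ApplicationContentUriRules, etc. omitted -->
  </Application>
</Applications>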
JSBrowser is a basic browser built as a Win10 JavaScript UWP app around the WebView HTML element. It's fun and relatively simple to implement tiny browser features in JavaScript, and in this post I'm implementing crash resistance.
The normal DOM mechanisms for creating an HTML WebView create an in-process WebView, in which the WebView runs on a unique UI thread. But we can use the MSWebView constructor instead to create an out-of-process WebView in which the WebView runs in its own distinct WebView process. Unlike an in-process WebView, Web content running in an out-of-process WebView can only crash the WebView process and not the app process.
this.replaceWebView = () => {
    let webview = document.querySelector("#WebView");
    // Cannot access webview.src - anything that would need to communicate with
    // the webview process may fail.
    let oldSrc = browser.currentUrl;
    const webviewParent = webview.parentElement;
    webviewParent.removeChild(webview);

    // Create an out-of-process WebView to replace the in-process one.
    webview = new MSWebView();
    Object.assign(this, {
        "webview": webview
    });
    webview.setAttribute("id", "WebView");

    // During startup our currentUrl field is blank. If the WebView has crashed
    // and we were on a URI then we may obtain it from this property.
    if (browser.currentUrl && browser.currentUrl != "") {
        this.trigger("newWebview");
        this.navigateTo(browser.currentUrl);
    }

    webviewParent.appendChild(webview);
};
I run replaceWebView during startup to replace the in-process WebView created via HTML markup with an out-of-process WebView. I could be doing more to dynamically copy styles, attributes, etc., but I know what I need to set on the WebView and just do that.
When a WebView process crashes the corresponding WebView object is no longer useful and a new WebView element must be created. In fact, if the old WebView object is used it may throw and will no longer have valid state. Accordingly, when the WebView crashes I run replaceWebView again. Additionally, I need to store the last URI we've navigated to (browser.currentUrl in the code above) since the crashed WebView object won't know what URI it was on after it crashes.
webview.addEventListener("MSWebViewProcessExited", () => {
    if (browser.currentUrl === browser.lastCrashUrl) {
        ++browser.lastCrashUrlCrashCount;
    }
    else {
        browser.lastCrashUrl = browser.currentUrl;
        browser.lastCrashUrlCrashCount = 1;
    }

    // If we crash again and again on the same URI, maybe stop trying to load that URI.
    if (browser.lastCrashUrlCrashCount >= 3) {
        browser.lastCrashUrl = "";
        browser.lastCrashUrlCrashCount = 0;
        browser.currentUrl = browser.startPage;
    }

    this.replaceWebView();
});
I also keep track of the last URI we recovered from and how many times in a row we've recovered from that same URI. If the same URI crashes 3 times in a row, I assume it will keep happening, give up on that URI, and navigate to the start URI instead.
JSBrowser is a basic browser built as a Win10 JavaScript UWP app around the WebView HTML element. It's fun and relatively simple to implement tiny browser features in JavaScript, and in this post I'm implementing zoom.
My plan to implement zoom is to add a zoom slider to the settings div that controls the scale of the WebView element via CSS transform. My resulting zoom change is in git and you can try the whole thing out in my JSBrowser fork.
I can implement the zoom settings slider as a range type input HTML element. This conveniently provides me min, max, and step properties and suits my purposes exactly. I chose values that seemed reasonable, so the browser can scale between 0.5x and 3x in increments of one quarter. This is a tiny browser feature after all, so there's no custom zoom entry.
<a><label for="webviewZoom">Zoom</label><input type="range" min="50" max="300" step="25" value="100" id="webviewZoom" /></a>
To let the user know this slider is for controlling zoom, I make a label HTML element that says Zoom. The label HTML element has a for attribute which takes the id of another HTML element. This tells the browser what the label is labelling and lets the browser do things like put focus on the slider when the label is clicked.
There are no explicit scale APIs for WebView so to change the size of the content in the WebView we use CSS.
this.applyWebviewZoom = state => {
    const minValue = this.webviewZoom.getAttribute("min");
    const maxValue = this.webviewZoom.getAttribute("max");
    const scaleValue = Math.max(Math.min(parseInt(this.webviewZoom.value, 10), maxValue), minValue) / 100;

    // Use setAttribute so they all change together to avoid weird visual glitches
    this.webview.setAttribute("style", [
        ["width", (100 / scaleValue) + "%"],
        ["height", "calc(" + (-40 / scaleValue) + "px + " + (100 / scaleValue) + "%)"],
        ["transform", "scale(" + scaleValue + ")"]
    ].map(pair => pair[0] + ": " + pair[1]).join("; "));
};
Because the user changes the scale at runtime, I replace the static CSS for the WebView element with the script above and modify the WebView's style programmatically. I change the style with one setAttribute call to do my best to avoid the browser performing unnecessary work or displaying the WebView in an intermediate, incomplete state. Applying the scale to the element is as simple as adding 'transform: scale(X)', but then there are two interesting problems.
The first is that the size of the WebView is also scaled not just the content within it. To keep the WebView the same effective size so that it still fits properly into our browser UI, we must compensate for the scale in the WebView width and height. Accordingly, you can see that we scale up by scaleValue and then in width and height we divide by the scaleValue.
transform-origin: 0% 0%;
The other issue is that by default the scale transform's origin is the center of the WebView element. This means that when scaled up, all sides of the WebView would expand out. But the width and height changes apply relative to the upper left of the element, so the inverse scale applied to the width and height above isn't quite enough. We also have to change the origin of the scale transform to match the origin of the width and height changes.
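To tie it together, the slider just needs to reapply the zoom whenever its value changes. A minimal sketch, assuming the webviewZoom and applyWebviewZoom members shown above (JSBrowser's actual event wiring may differ):

// Reapply the WebView scale whenever the user moves the zoom slider.
this.webviewZoom.addEventListener("change", () => this.applyWebviewZoom());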
The x-ms-webview HTML element has the void addWebAllowedObject(string name, any value) method and the webview XAML element has the void AddWebAllowedObject(String name, Object value) method. The object parameter is projected into the webview’s top-level HTML document’s script engine as a new property on the global object with property name set to the name parameter. It is not injected into the current document but rather it is projected during initialization of the next top-level HTML document to which the webview navigates.
If AddWebAllowedObject is called during a NavigationStarting event handler the object will be injected into the document resulting from the navigation corresponding to that event.
If AddWebAllowedObject is called outside of the NavigationStarting event handler it will apply to the navigation corresponding to the next explicit navigate method called on the webview or the navigation corresponding to the next NavigationStarting event handler that fires, whichever comes first.
To avoid this potential race, you should use AddWebAllowedObject in one of two ways: 1. During a NavigationStarting event handler, 2. Before calling a Navigate method and without returning to the main loop.
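As a rough sketch of those two patterns using the HTML x-ms-webview element, where MyComponent.SharedObject is a hypothetical runtimeclass from a WinRT component in the app's package (marked AllowForWeb and agile, per the requirements described below):

// Create a WebView and put it in the document.
const webview = document.createElement("x-ms-webview");
document.body.appendChild(webview);

// Pattern 1: add the object during the NavigationStarting event handler so it
// applies to the document produced by that exact navigation.
webview.addEventListener("MSWebViewNavigationStarting", eventArgs => {
    if (eventArgs.uri === "https://example.com/") {
        webview.addWebAllowedObject("sharedObject", new MyComponent.SharedObject());
    }
});

// Pattern 2 (alternatively): add the object and navigate without returning to
// the main loop, so the projection applies to this navigation and cannot race
// with some other navigation.
webview.addWebAllowedObject("sharedObject", new MyComponent.SharedObject());
webview.navigate("https://example.com/");

Script in the projected document can then use the global sharedObject property directly.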
If called both before calling a navigate method and in the NavigationStarting event handler then the result is the aggregate of all those calls.
If called multiple times for the same document with the same name, the last call wins and the previous calls are silently ignored.
If AddWebAllowedObject is called for a navigation and that navigation fails or redirects to a different URI, the AddWebAllowedObject call is silently ignored.
After successfully adding an object to a document, the object will no longer be projected once a navigation to a new document occurs.
If AddWebAllowedObject is called for a document with All WinRT access then projection will succeed and the object will be added.
If AddWebAllowedObject is called for a document whose URI has no declared WinRT access via the ApplicationContentUriRules, then Allow for web only WinRT access is given to that document.
If the document has Allow for web only WinRT access then projection will succeed only if the object’s runtimeclass has the Windows.Foundation.Metadata.AllowForWeb metadata attribute.
The object must implement the IAgileObject interface. Because the XAML and HTML webview elements run on ASTA view threads, and the webview's content's JavaScript thread runs on another ASTA thread, a developer should not create their non-agile runtimeclass on the view thread. To encourage end developers to do this correctly, we require that the object implement IAgileObject.
The name parameter must be a valid JavaScript property name, otherwise the call will fail silently. If the name is already a property name on the global object, that property is overwritten if the property is configurable. Non-configurable properties on the global object are not overwritten and the AddWebAllowedObject call fails silently. On success, the projected property is writable, configurable, and enumerable.
Some errors as described above fail silently. Other issues, such as lack of IAgileObject or lack of the AllowForWeb attribute result in an error in the JavaScript developer console.
Application Content URI Rules (ACUR from now on) defines the bounds on the web that make up a Microsoft Store application. The previous blog post discussed the syntax of the Rule's Match attribute and this time I'll write about the interactions between the Rules elements.
A single ApplicationContentUriRules element may have up to 100 Rule child elements. When determining if a navigation URI matches any of the ACUR, the last Rule in the list with a matching Match wildcard URI is used. If that Rule is an include rule then the navigation URI is determined to be an application content URI, and if that Rule is an exclude rule then the navigation URI is not an application content URI. For example:
<Rule Type='include' Match='https://example.com/'/>
<Rule Type='exclude' Match='https://example.com/'/>
Given the above two rules in that order, the navigation URI https://example.com/ is not an application content URI because the last matching rule is the exclude rule. Reverse the order of the rules and get the opposite result.
In addition to determining if a navigation URI is application content or not, a Rule may also confer varying levels of WinRT access via the optional WindowsRuntimeAccess attribute, which may be set to 'none', 'allowForWeb', or 'all'. If a navigation URI matches multiple include rules, only the last matching rule is applied, and that includes its WindowsRuntimeAccess attribute. For example:
<Rule Type='include' Match='https://example.com/' WindowsRuntimeAccess='none'/>
<Rule Type='include' Match='https://example.com/' WindowsRuntimeAccess='all'/>
Given the above two rules in that order, the navigation URI https://example.com/ will have access to all WinRT APIs because the last matching rule wins. Reverse the rule order and the navigation URI https://example.com/ will have no access to WinRT. There is no summation or combining of multiple matching rules - only the last matching rule wins.
Application Content URI Rules (ACUR from now on) defines the bounds of the web that make up the Microsoft Store application. Package content via the ms-appx URI scheme is automatically considered part of the app. But if you have content on the web via http or https you can use ACUR to declare to Windows that those URIs are also part of your application. When your app navigates to URIs on the web those URIs will be matched against the ACUR to determine if they are part of your app or not. The documentation for how matching is done on the wildcard URIs in the ACUR Rule elements is not very helpful on MSDN so here are some notes.
You can have up to 100 Rule XML elements per ApplicationContentUriRules element. Each has a Match attribute that can be up to 2084 characters long. The content of the Match attribute is parsed with CreateUri and when matching against URIs on the web additional wildcard processing is performed. I’ll call the URI from the ACUR Rule the rule URI and the URI we compare it to found during app navigation the navigation URI.
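As an illustration (the URIs are placeholders, and depending on the manifest schema version the elements may need a namespace prefix such as uap:), an ACUR block in the AppxManifest.xml looks roughly like this:

<ApplicationContentUriRules>
  <Rule Type="include" Match="https://example.com/" WindowsRuntimeAccess="none" />
  <Rule Type="include" Match="https://*.example.com/app/" WindowsRuntimeAccess="allowForWeb" />
  <Rule Type="exclude" Match="https://example.com/app/external/" />
</ApplicationContentUriRules>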
The rule URI is matched to a navigation URI by URI component: scheme, username, password, host, port, path, query, and fragment. If a component does not exist on the rule URI then it matches any value of that component in the navigation URI. For example, a rule URI with no fragment will match a navigation URI with no fragment, with an empty string fragment, or a fragment with any value in it.
Each component except the port may have up to 8 asterisks. Two asterisks in a row counts as an escape and will match 1 literal asterisk. For scheme, username, password, query and fragment the asterisk matches whatever it can within the component.
For the host, if the host consists of exactly one single asterisk then it matches anything. Otherwise an asterisk in a host only matches within its domain name label. For example, http://*.example.com will match http://a.example.com/ but not http://b.a.example.com/ or http://example.com/. And http://*/ will match http://example.com, http://a.example.com/, and http://b.a.example.com/. However the Store places restrictions on submitting apps that use the http://* rule or rules with an asterisk in the second effective domain name label. For example, http://*.com is also restricted for Store submission.
For the path, an asterisk matches within a path segment. For example, http://example.com/a/*/c will match http://example.com/a/b/c and http://example.com/a//c, but not http://example.com/a/b/b/c or http://example.com/a/c.
Additionally for the path, if the path ends with a slash then it matches any path that starts with that same path. For example, http://example.com/a/ will match http://example.com/a/b and http://example.com/a/b/c/d/e/, but not http://example.com/b/.
If the path doesn’t end with a slash then there is no suffix matching performed. For example, http://example.com/a will match only http://example.com/a and no URIs with a different path.
As part of parsing the rule URI and the navigation URI, CreateUri performs URI normalization: the hostname and scheme are made lower case (casing matters in all other parts of the URI, and case-sensitive comparisons are performed), IDN normalization is performed, ‘.’ and ‘..’ path segments are resolved, and the other normalizations described in the CreateUri documentation are applied.
Parsing WinMD files, the containers of WinRT API metadata, is relatively simple using the appropriate .NET reflection APIs. However, figuring out which reflection APIs to use is not obvious. I've got a completed C# class for parsing WinMD files that you can check out for reference.
Use System.Reflection.Assembly.ReflectionOnlyLoad to load the WinMD file. Don't use the normal load methods because the WinMD files contain only metadata. This will load up info about the APIs defined in that WinMD, but any references to types outside of that WinMD, including types found in the normal OS system WinMD files, must be resolved by the app code via the System.Runtime.InteropServices.WindowsRuntime.WindowsRuntimeMetadata.ReflectionOnlyNamespaceResolve event. In this event handler you must resolve the unknown namespace reference by adding an assembly to the NamespaceResolveEventArgs's ResolvedAssemblies property. If you're only interested in OS system WinMD files you can use System.Runtime.InteropServices.WindowsRuntime.WindowsRuntimeMetadata.ResolveNamespace to turn a namespace into the expected OS system WinMD path and turn that path into an assembly with ReflectionOnlyLoad.
The other day I had to debug a JavaScript UWA that was failing when trying to use an undefined property. In a previous OS build this code would run and the property was defined. I wanted something similar to windbg/cdb's ba command that lets me set a breakpoint on read or writes to a memory location so I could see what was creating the object in the previous OS build and what that code was doing now in the current OS build. I couldn't find such a breakpoint mechanism in Visual Studio or F12 so I wrote a little script to approximate JavaScript data breakpoints.
The script creates a stub object with a getter and setter. It actually performs the get or set but also calls debugger; to break in the debugger. In order to handle my case of needing to break when window.object1.object2 was created or accessed, I further had it recursively set up such stub objects for the matching property names.
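Here's a minimal sketch of the idea (the actual script also recurses so it can cover the nested window.object1.object2 case, and the property names here are just placeholders):

function addPropertyBreakpoint(owner, name) {
    let value = owner[name];
    Object.defineProperty(owner, name, {
        enumerable: true,
        configurable: true,
        get: function () {
            debugger; // Break when the property is read.
            return value;
        },
        set: function (newValue) {
            debugger; // Break when the property is written.
            value = newValue;
        }
    });
}

// Break whenever window.object1 is created or accessed.
addPropertyBreakpoint(window, "object1");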
It's not perfect because the stub is an enumerable property and shows up in hasOwnProperty and likely other places. But for your average code that checks for the existence of a property via if (object.property) it works well.
2016-Nov-5: Updated post on using Let's Encrypt with NearlyFreeSpeech.net
I use NearlyFreeSpeech.net to host my personal website and I've just finished setting up TLS via Let's Encrypt. The process was slightly more complicated than what you'd like from Let's Encrypt, so for those interested in doing the same on NearlyFreeSpeech.net, I've taken the following notes.
The standard Let's Encrypt client requires su/sudo access which is not available on NearlyFreeSpeech.net's servers. Additionally NFSN's webserver doesn't have any Let's Encrypt plugins installed. So I used the Let's Encrypt Without Sudo client. I followed the instructions listed on the tool's page with the addition of providing the "--file-based" parameter to sign_csr.py.
One thing the script doesn't produce is the chain file. But this topic "Let's Encrypt - Quick HOWTO for NSFN" covers how to obtain that:
curl -o domain.chn https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem
Now that you have all the required files, on your NFSN server make the directory /home/protected/ssl and copy your files into it. This is described in the NFSN topic provide certificates to NFSN. After copying the files and setting their permissions as described in the previous link, you submit an assistance request. For me it was only 15 minutes later that everything was set up.
After enabling HTTPS I wanted to have all HTTP requests redirect to HTTPS. The normal Apache documentation on how to do this doesn't work on NFSN servers. Instead the NFSN FAQ describes it in "redirect http to https and HSTS". You use X-Forwarded-Proto instead of the HTTPS variable because of how NFSN's virtual hosting is set up.
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
Turning on HSTS is as simple as adding the HSTS HTTP header. However, the description in the above link didn't work because my site's NFSN realm isn't on the latest Apache yet. Instead I added the following to my .htaccess. After I'm comfortable with everything working well for a few days I'll start turning up the max-age to the recommended minimum value of 180 days.
Header set Strict-Transport-Security "max-age=3600;"
Finally, to turn on CSP I started up Fiddler with my CSP Fiddler extension. It allows me to determine the most restrictive CSP rules I could apply and still have all resources on my page load. From there I found and removed inline script and some content loaded via http and otherwise continued tweaking my site and CSP rules.
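For illustration only (this is not my actual policy), a very restrictive starting point in the .htaccess looks something like the following, which you'd then loosen based on what the Fiddler extension reports:

Header set Content-Security-Policy "default-src 'self'"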
After I was done I checked out my site on SSL Labs' SSL Test to see what I might have done wrong or needed improving. The first time I went through these steps I hadn't included the chain file, which the SSL Test told me about. I was able to add that file to the files I had already generated from the Let's Encrypt client, do another NFSN assistance request, and 15 minutes later the SSL Test had upgraded me from 'B' to 'A'.