There's no perfect way to change the user agent string for the UWP WebView (x-ms-webview in HTML, Windows.UI.Xaml.Controls.WebView in XAML, and Windows.Web.UI.Interop.WebViewControl in Win32), but there are two imperfect methods folks end up using.
The first is to call UrlMkSetSessionOption. This is an old public API that lets you configure various arcane options, including the default user agent string for requests running through urlmon. This API is allowed by the Microsoft Store for UWP apps. The change it applies is process-wide, which has two potential drawbacks. If you want different UA strings set for different requests from a WebView, that's not really possible with this solution. The other drawback is that if you're using the out-of-process WebView, you need to ensure you're calling UrlMkSetSessionOption in the WebView's process. You'll need to write a third-party WinRT component that calls UrlMkSetSessionOption, create the out-of-proc WebView, navigate it to some trusted local page, use AddWebAllowedObject or give that URI WinRT access, and call into your third-party WinRT component. You'll need to do that for any new WebView process you create.
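Ignoring the out-of-proc complications, the call itself is small. A minimal sketch (the UA string here is a made-up example):

#include <urlmon.h>
#pragma comment(lib, "urlmon.lib")

// Sets the default user agent for every urlmon request in this process.
HRESULT SetProcessUserAgent()
{
    const char userAgent[] = "MyApp/1.0"; // example value, not a real UA string
    return UrlMkSetSessionOption(URLMON_OPTION_USERAGENT,
        (LPVOID)userAgent,
        sizeof(userAgent) - 1, // length, not counting the null terminator
        0);                    // reserved, must be 0
}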
The second, less generally applicable, solution is to use NavigateWithHttpRequestMessage and set the User-Agent HTTP header. In this case you get to control the scope of the user agent string changes, but it has the limitations that not all subresource downloads will use this user agent string, and that for navigations you don't initiate you have to manually intercept and re-request, being careful to transfer over all POST body state and HTTP headers correctly. That last part is not actually possible for iframes.
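Here's a minimal sketch of the second approach from a JavaScript UWP app (the URI and UA values are placeholders):

const webview = document.querySelector("x-ms-webview");
const uri = new Windows.Foundation.Uri("https://example.com/");
const request = new Windows.Web.Http.HttpRequestMessage(
    Windows.Web.Http.HttpMethod.get, uri);
// Applies only to this navigation's request, not to subresource downloads.
request.headers.append("User-Agent", "MyApp/1.0");
webview.navigateWithHttpRequestMessage(request);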
Folks familiar with JavaScript UWP apps in Win10 have often been confused by what PWAs in Win10 actually are. TLDR: PWAs in Win10 are simply JavaScript UWP apps. The main differences between these JS UWP apps and our non-PWA JS UWP apps are our target end developer audience and how we get Win10 PWAs into the Microsoft Store. See the Win10 blog post on PWAs for related info.
On the web, a subset of web sites are web apps. These are web sites that have app-like behavior; that is, a user might call one an app, like Outlook, Maps, or Gmail. They may also have a W3C app manifest.
A subset of web apps are progressive web apps. Progressive web apps are web apps that have a W3C app manifest and a service worker. Various OSes are beginning to support PWAs as first-class apps on their platform. This is true for Win10 as well, where PWAs run as WWAs.
In Win10 a WWA (Windows Web App) is an unofficial term for a JavaScript UWP app. These are UWP apps, so they have an AppxManifest.xml, they are packaged in an Appx package, they run in an App Container, they use WinRT APIs, and they are installed via the Microsoft Store. Specific to WWAs, though, is that the AppxManifest.xml specifies a StartPage attribute identifying some HTML content to be used as the app. When the app is activated, the OS will create a WWAHost.exe process that hosts the HTML content using the EdgeHtml rendering engine.
Within WWAs we have the notions of a packaged web app and an HWA (hosted web app). There's no real technical distinction between these two for the end developer. The only real difference is whether the StartPage identifies remote HTML content on the web (HWA) or packaged HTML content from the app's appx package (packaged web app). An end developer may also create an app that is a mix of these, with HTML content both in the package and from the web. These terms are more like ends of a continuum identifying two different developer scenarios, since the underlying technical aspects are pretty much identical.
Win10 PWAs are simply HWAs whose StartPage is the URI of a PWA on the web. These are still JavaScript UWP apps with all the same behavior and abilities as other UWP apps. We have two ways of getting PWAs into the Microsoft Store as Win10 PWAs. The first is PWA Builder, a tool that helps PWA end developers create and submit to the Microsoft Store a Win10 PWA appx package. The second is a crawler that runs over the web looking for PWAs, which we convert and submit to the Store using an automated PWA Builder-like tool (see Welcoming PWAs to Win10 for more info). In both cases the conversion involves examining the PWA's W3C app manifest and producing a corresponding AppxManifest.xml. Not all features supported by AppxManifest.xml are available in the W3C app manifest. But the result of PWA Builder can be a working starting point for end developers, who can then update the AppxManifest.xml as they like to support features like share targets or others not available in W3C app manifests.
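As a rough sketch of that conversion (all names, URLs, and file paths here are hypothetical), a W3C app manifest like

{
  "name": "Example PWA",
  "start_url": "https://example.com/app/",
  "display": "standalone"
}

would yield an AppxManifest.xml whose Application element points at the same URL, roughly along these lines:

<Application Id="App" StartPage="https://example.com/app/">
  <uap:VisualElements DisplayName="Example PWA" Description="Example PWA"
    Square150x150Logo="images\logo150.png" Square44x44Logo="images\logo44.png"
    BackgroundColor="#ffffff" />
</Application>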
JSBrowser is a basic browser built as a Win10 JavaScript UWP app around the WebView HTML element. It's fun and relatively simple to implement tiny browser features in JavaScript, and in this post I'm implementing zoom.
My plan to implement zoom is to add a zoom slider to the settings div that controls the scale of the WebView element via CSS transform. My resulting zoom change is in git and you can try the whole thing out in my JSBrowser fork.
I can implement the zoom settings slider as a range-type input HTML element. This conveniently provides min, max, and step properties and suits my purposes exactly. I chose some values that I thought would be reasonable, so the browser can scale between 0.5x and 3x in increments of one quarter. This is a tiny browser feature after all, so there's no custom zoom entry.
<a><label for="webviewZoom">Zoom</label><input type="range" min="50" max="300" step="25" value="100" id="webviewZoom" /></a>
To let the user know this slider is for controlling zoom, I make a label HTML element that says Zoom. The label HTML element has a for attribute which takes the id of another HTML element. This lets the browser know what the label is labelling and lets it do things like move focus to the slider when the label is clicked.
There are no explicit scale APIs for WebView so to change the size of the content in the WebView we use CSS.
this.applyWebviewZoom = state => {
    const minValue = this.webviewZoom.getAttribute("min");
    const maxValue = this.webviewZoom.getAttribute("max");
    const scaleValue = Math.max(Math.min(parseInt(this.webviewZoom.value, 10), maxValue), minValue) / 100;

    // Use setAttribute so they all change together to avoid weird visual glitches
    this.webview.setAttribute("style", [
        ["width", (100 / scaleValue) + "%"],
        ["height", "calc(" + (-40 / scaleValue) + "px + " + (100 / scaleValue) + "%)"],
        ["transform", "scale(" + scaleValue + ")"]
    ].map(pair => pair[0] + ": " + pair[1]).join("; "));
};
Because the user changes the scale at runtime, I replace the static CSS for the WebView element with the script above to programmatically modify the style of the WebView. I change the style with one setAttribute call to do my best to avoid the browser performing unnecessary work or displaying the WebView in an intermediate and incomplete state. Applying the scale to the element is as simple as adding 'transform: scale(X)', but then there are two interesting problems.
The first is that the size of the WebView is also scaled, not just the content within it. To keep the WebView the same effective size so that it still fits properly into our browser UI, we must compensate for the scale in the WebView width and height. Accordingly, you can see that we scale up by scaleValue and then divide the width and height by scaleValue. For example, at a scale of 2 the width is set to 50% so that after scaling it still takes up 100% of the layout width.
The other issue is that by default the scale transform's origin is the center of the WebView element. This means when scaled up all sides of the WebView would expand out. But the width and height changes apply relative to the upper left of the element, so our inverse scale applied to the width and height above isn't quite enough. We also have to change the origin of the scale transform to match the origin of the changes to the width and height:
transform-origin: 0% 0%;
Application Content URI Rules (ACUR from now on) defines the bounds of the web that make up the Microsoft Store application. Package content via the ms-appx URI scheme is automatically considered part of the app. But if you have content on the web via http or https you can use ACUR to declare to Windows that those URIs are also part of your application. When your app navigates to URIs on the web those URIs will be matched against the ACUR to determine if they are part of your app or not. The documentation for how matching is done on the wildcard URIs in the ACUR Rule elements is not very helpful on MSDN so here are some notes.
You can have up to 100 Rule XML elements per ApplicationContentUriRules element. Each has a Match attribute that can be up to 2084 characters long. The content of the Match attribute is parsed with CreateUri and when matching against URIs on the web additional wildcard processing is performed. I’ll call the URI from the ACUR Rule the rule URI and the URI we compare it to found during app navigation the navigation URI.
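For reference, here's roughly what a couple of Rule elements look like in an AppxManifest.xml (the hosts are just examples):

<uap:ApplicationContentUriRules>
  <uap:Rule Type="include" Match="https://example.com/" WindowsRuntimeAccess="none" />
  <uap:Rule Type="include" Match="https://*.example.com/app/" WindowsRuntimeAccess="allowForWeb" />
</uap:ApplicationContentUriRules>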
The rule URI is matched to a navigation URI by URI component: scheme, username, password, host, port, path, query, and fragment. If a component does not exist on the rule URI then it matches any value of that component in the navigation URI. For example, a rule URI with no fragment will match a navigation URI with no fragment, with an empty string fragment, or a fragment with any value in it.
Each component except the port may have up to 8 asterisks. Two asterisks in a row count as an escape and match 1 literal asterisk; for example, a rule path of /a**b matches only the literal path /a*b. For scheme, username, password, query and fragment the asterisk matches whatever it can within the component.
For the host, if the host consists of exactly one single asterisk then it matches anything. Otherwise an asterisk in a host only matches within its domain name label. For example, http://*.example.com will match http://a.example.com/ but not http://b.a.example.com/ or http://example.com/. And http://*/ will match http://example.com, http://a.example.com/, and http://b.a.example.com/. However the Store places restrictions on submitting apps that use the http://* rule or rules with an asterisk in the second effective domain name label. For example, http://*.com is also restricted for Store submission.
For the path, an asterisk matches within a path segment. For example, http://example.com/a/*/c will match http://example.com/a/b/c and http://example.com/a//c but not http://example.com/a/b/b/c or http://example.com/a/c.
Additionally for the path, if the path ends with a slash then it matches any path that starts with that same path. For example, http://example.com/a/ will match http://example.com/a/b and http://example.com/a/b/c/d/e/, but not http://example.com/b/.
If the path doesn’t end with a slash then there is no suffix matching performed. For example, http://example.com/a will match only http://example.com/a and no URIs with a different path.
As part of parsing the rule URI and the navigation URI, CreateUri will perform URI normalization: the hostname and scheme will be made lower case (casing matters in all other parts of the URI and case-sensitive comparisons will be performed), IDN normalization will be performed, '.' and '..' path segments will be resolved, and other normalizations will be applied as described in the CreateUri documentation.
Since I last posted about using Let's Encrypt with NearlyFreeSpeech, NFS has changed their process for setting TLS info. Instead of putting the various files in /home/protected/ssl and submitting an assistance request, there is now a command and a webpage for submitting the certificate info.
The webpage is https://members.nearlyfreespeech.net/{username}/sites/{sitename}/add_tls and has a textbox into which you paste all the cert info in PEM form: the domain key, the domain certificate, and the Let's Encrypt intermediate cert. Alternatively, that same info may be provided as standard input to nfsn -i set-tls
To renew my certificate with the updated NFS process I followed the commands from Andrei Damian-Fekete's script which depends on acme_tiny.py:
python acme_tiny.py --account-key account.key --csr domain.csr --acme-dir /home/public/.well-known/acme-challenge/ > signed.crt
wget -O - https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem > intermediate.pem
cat domain.key signed.crt intermediate.pem > chained.pem
nfsn -i set-tls < chained.pem
Because my certificate had already expired I needed to comment out the section in acme_tiny.py that validates the challenge file. The filenames in the above map to the following: account.key is the Let's Encrypt account key, domain.csr is the certificate signing request for the domain, domain.key is the domain's private key, signed.crt is the newly signed domain certificate, intermediate.pem is the Let's Encrypt intermediate certificate, and chained.pem is the combined certificate info to submit to NFSN.
TL;DR: Web content in a JavaScript Windows Store app, or in a WebView in a Windows Store app, that has full access to WinRT also gets to use XHR unrestricted by cross origin checks.
By default web content in a WebView control in a Windows Store App has the same sort of limitations as that web content in a web browser. However, if you give the URI of that web content full access to WinRT, then the web content also gains the ability to use XMLHttpRequest unrestricted by cross origin checks. This means no CORS checks and no OPTIONS requests. This only works if the web content's URI matches a Rule in the ApplicationContentUriRules of your app's manifest and that Rule declares WindowsRuntimeAccess="all". If it declares WinRT access as 'None' or 'AllowForWebOnly' then XHR acts as it normally does.
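As a sketch, given a Rule in the manifest like <uap:Rule Type="include" Match="https://contoso.example/" WindowsRuntimeAccess="all" /> (the host name is hypothetical), script on that page can do the following with no CORS check or OPTIONS preflight:

// Running in a page whose URI matched a WindowsRuntimeAccess="all" Rule.
const xhr = new XMLHttpRequest();
xhr.open("GET", "https://some-other-origin.example/data.json");
xhr.onload = () => console.log(xhr.responseText);
xhr.send();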
In terms of security, if you've already given a page access to all of WinRT which includes the HttpRequest class and other networking classes that don't perform cross origin checks, then allowing XHR to skip CORS doesn't make things worse.
2016-Nov-5: Updated post on using Let's Encrypt with NearlyFreeSpeech.net
I use NearlyFreeSpeech.net for my webhosting for my personal website and I've just finished setting up TLS via Let's Encrypt. The process was slightly more complicated than what you'd like from Let's Encrypt. So for those interested in doing the same on NearlyFreeSpeech.net, I've taken the following notes.
The standard Let's Encrypt client requires su/sudo access which is not available on NearlyFreeSpeech.net's servers. Additionally NFSN's webserver doesn't have any Let's Encrypt plugins installed. So I used the Let's Encrypt Without Sudo client. I followed the instructions listed on the tool's page with the addition of providing the "--file-based" parameter to sign_csr.py.
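The signing step then looks roughly like this (a sketch; aside from --file-based, which is mentioned above, treat the flag names and filenames as assumptions to check against the tool's README):

python sign_csr.py --public-key user.pub --file-based domain.csr > signed.crt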
One thing the script doesn't produce is the chain file. But this topic "Let's Encrypt - Quick HOWTO for NSFN" covers how to obtain that:
curl -o domain.chn https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem
Now that you have all the required files, on your NFSN server make the directory /home/protected/ssl and copy your files into it. This is described in the NFSN topic provide certificates to NFSN. After copying the files and setting their permissions as described in the previous link you submit an assistance request. For me it was only 15 minutes later that everything was setup.
After enabling HTTPS I wanted to have all HTTP requests redirect to HTTPS. The normal Apache documentation on how to do this doesn't work on NFSN servers. Instead the NFSN FAQ describes it in "redirect http to https and HSTS". You use X-Forwarded-Proto instead of the HTTPS variable because of how NFSN's virtual hosting is set up.
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
Turning on HSTS is as simple as adding the HSTS HTTP header. However, the description in the above link didn't work because my site's NFSN realm isn't on the latest Apache yet. Instead I added the following to my .htaccess. After I'm comfortable with everything working well for a few days I'll start turning up the max-age to the recommended minimum value of 180 days.
Header set Strict-Transport-Security "max-age=3600;"
Finally, to turn on CSP I started up Fiddler with my CSP Fiddler extension. It allows me to determine the most restrictive CSP rules I could apply and still have all resources on my page load. From there I found and removed inline script and some content loaded via http and otherwise continued tweaking my site and CSP rules.
After I was done I checked out my site on SSL Labs' SSL Test to see what I might have done wrong or needed improving. The first time I went through these steps I hadn't included the chain file, which the SSL Test told me about. I was able to add that file to the files I had already generated from the Let's Encrypt client, make another NFSN assistance request, and 15 minutes later the SSL Test had upgraded me from 'B' to 'A'.
nasa:
This 30 day mission will help our researchers learn how isolation and close quarters affect individual and group behavior. This study at our Johnson Space Center prepares us for long duration space missions, like a trip to an asteroid or even to Mars.
The Human Research Exploration Analog (HERA) that the crew members will be living in is one compact, science-making house. But unlike in a normal house, these inhabitants won’t go outside for 30 days. Their communication with the rest of planet Earth will also be very limited, and they won’t have any access to internet. So no checking social media kids!
The only people they will talk with regularly are mission control and each other.
The crew member selection process is based on a number of criteria, including the same criteria for astronaut selection.
What will they be doing?
Because this mission simulates a 715-day journey to a Near-Earth asteroid, the four crew members will complete activities similar to what would happen during an outbound transit, on location at the asteroid, and the return transit phases of a mission (just in a bit of an accelerated timeframe). This simulation means that even when communicating with mission control, there will be a delay on all communications ranging from 1 to 10 minutes each way. The crew will also perform virtual spacewalk missions once they reach their destination, where they will inspect the asteroid and collect samples from it.
A few other details:
- The crew follows a timeline that is similar to one used for the ISS crew.
- They work 16 hours a day, Monday through Friday. This includes time for daily planning, conferences, meals and exercises.
- They will be growing and taking care of plants and brine shrimp, which they will analyze and document.
But beware! While we do all we can to avoid crises during missions, crews need to be able to respond in the event of an emergency. The HERA crew will conduct a couple of emergency scenario simulations, including one that will require them to maneuver through a debris field during the Earth-bound phase of the mission.
Throughout the mission, researchers will gather information about cohabitation, teamwork, team cohesion, mood, performance and overall well-being. The crew members will be tracked by numerous devices that each capture different types of data.
Past HERA crew members wore a sensor that recorded heart rate, distance, motion and sound intensity. When crew members were working together, the sensor would also record their proximity as well, helping investigators learn about team cohesion.
Researchers also learned about how crew members react to stress by recording and analyzing verbal interactions and by analyzing “markers” in blood and saliva samples.
In total, this mission will include 19 individual investigations across key human research elements. From psychological to physiological experiments, the crew members will help prepare us for future missions.
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com