Folks familiar with JavaScript UWP apps in Win10 are often confused about what PWAs in Win10 actually are. TL;DR: PWAs in Win10 are simply JavaScript UWP apps. The main differences between these and our non-PWA JS UWP apps are the target end developer audience and how we get Win10 PWAs into the Microsoft Store. See this Win10 blog post on PWAs for related info.
On the web, a subset of web sites are web apps: web sites with app-like behavior - that is, sites a user might call an app, like Outlook, Maps, or Gmail. They may also have a W3C app manifest.
A subset of web apps are progressive web apps. Progressive web apps are web apps that have a W3C app manifest and a service worker. Various OSes are beginning to support PWAs as first-class apps on their platforms. This is true for Win10 as well, where PWAs run as WWAs.
In Win10, WWA (Windows Web App) is an unofficial term for a JavaScript UWP app. These are UWP apps, so they have an AppxManifest.xml, are packaged in an Appx package, run in an App Container, use WinRT APIs, and are installed via the Microsoft Store. Specific to WWAs, though, the AppxManifest.xml specifies a StartPage attribute identifying the HTML content to be used as the app. When the app is activated, the OS creates a WWAHost.exe process that hosts that HTML content using the EdgeHtml rendering engine.
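For example, the relevant part of a WWA's AppxManifest.xml might look something like this (a simplified sketch with a placeholder URI, omitting the namespaces, identity, and visual elements a real manifest requires):

<Applications>
  <Application Id="App" StartPage="https://example.com/index.html">
    <!-- StartPage identifies the HTML content that WWAHost.exe loads with EdgeHtml -->
  </Application>
</Applications>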
Within that, we have the notions of a packaged web app and an HWA (hosted web app). There's no real technical distinction between the two for the end developer: the only difference is whether the StartPage identifies remote HTML content on the web (HWA) or packaged HTML content from the app's Appx package (packaged web app). An end developer may also create an app that mixes the two, with some HTML content in the package and some from the web. These terms are really ends of a continuum identifying two different developer scenarios, since the underlying technology is essentially identical.
Win10 PWAs are simply HWAs whose StartPage is the URI of a PWA on the web. These are still JavaScript UWP apps, with all the same behavior and abilities as other UWP apps. We have two ways of getting PWAs into the Microsoft Store as Win10 PWAs. The first is PWA Builder, a tool that helps PWA end developers create a Win10 PWA Appx package and submit it to the Microsoft Store. The second is a crawler that runs over the web looking for PWAs, which we convert and submit to the Store using an automated PWA Builder-like tool (see Welcoming PWAs to Win10 for more info). In both cases the conversion involves examining the PWA's W3C app manifest and producing a corresponding AppxManifest.xml. Not all features supported by AppxManifest.xml are available in the W3C app manifest, but the result of PWA Builder can be a working starting point for end developers, who can then update the AppxManifest.xml as they like to support features like share targets or others not available in W3C app manifests.
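To give a rough sense of that mapping (my own sketch, not PWA Builder's actual output), a W3C app manifest like the following provides the app name, start URL, and icons, which correspond to the display name, StartPage, and logo entries in the generated AppxManifest.xml:

{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "https://example.com/index.html",
  "display": "standalone",
  "icons": [ { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" } ]
}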
Since I last posted about using Let's Encrypt with NearlyFreeSpeech, NFS has changed its process for setting TLS info. Instead of putting the various files in /home/protected/ssl and submitting an assistance request, there is now a command and a webpage for submitting the certificate info.
The webpage is https://members.nearlyfreespeech.net/{username}/sites/{sitename}/add_tls and has a textbox into which you paste all the cert info in PEM form: the domain key, the domain certificate, and the Let's Encrypt intermediate certificate. Alternatively, that same info may be provided as standard input to nfsn -i set-tls.
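In either case, what you provide is just the PEM blocks concatenated in that order, something like the following (placeholders rather than real key and certificate data; the exact BEGIN/END labels depend on how the key and certificates were generated):

-----BEGIN RSA PRIVATE KEY-----
... domain key ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... domain certificate ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... Let's Encrypt intermediate certificate ...
-----END CERTIFICATE-----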
To renew my certificate with the updated NFS process, I followed the commands from Andrei Damian-Fekete's script, which depends on acme_tiny.py:
# Prove control of the domain via the ACME challenge directory and obtain a newly signed certificate
python acme_tiny.py --account-key account.key --csr domain.csr --acme-dir /home/public/.well-known/acme-challenge/ > signed.crt
# Download the Let's Encrypt intermediate certificate
wget -O - https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem > intermediate.pem
# Concatenate the domain key, signed certificate, and intermediate certificate
cat domain.key signed.crt intermediate.pem > chained.pem
# Submit the full chain to NearlyFreeSpeech
nfsn -i set-tls < chained.pem
Because my certificate had already expired, I needed to comment out the section in acme_tiny.py that validates the challenge file. The filenames in the above map to the following:
account.key - the private key for my Let's Encrypt account
domain.csr - the certificate signing request for my domain
domain.key - my domain's private key
signed.crt - the newly signed domain certificate
intermediate.pem - the Let's Encrypt intermediate certificate
chained.pem - the domain key, domain certificate, and intermediate certificate concatenated together for nfsn -i set-tls
During formalization of the WebFinger protocol [I-D.jones-appsawg-webfinger], much discussion occurred regarding the appropriate URI scheme to include when specifying a user’s account as a web link [RFC5988].
…
acctURI = "acct:" userpart "@" domainpart
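For example, a user "alice" at "example.com" would be identified by an acct URI like this (my illustration, not an example from the draft):

acct:alice@example.com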
It seems generally bad to embed sensitive info in the URI (the http+aes URI scheme puts the decryption key in the URI), much like the now-deprecated password field in http URIs.
The use case is covered here: http://lists.w3.org/Archives/Public/ietf-http-wg/2012JanMar/0811.html, along with discussion including someone mentioning the issue above.
Elaborating on a previous brief post on the topic of Web Worker initialization race conditions, there are two important points to avoid a race condition when setting up a Worker:
1. The parent's onmessage handler must be set before the worker posts its first message to the parent.
2. The worker must set its onmessage handler within its first synchronous block of execution.
For example, the following has no race because the spec guarantees that messages posted to a worker during its first synchronous block of execution will be queued and handled after that block, so the worker gets a chance to set up its onmessage handler. No race:
'parent.js':
var worker = new Worker('worker.js');
worker.postMessage("initialize");
'worker.js':
onmessage = function(e) {
    // ...
};
The following has a race because there's no guarantee that the parent's onmessage handler is set up before the worker executes postMessage. Race (violates 1):
'parent.js':
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
    // ...
};
'worker.js':
postMessage("initialize");
The following has a race because the worker has no onmessage handler set in its first synchronous block of execution, so the parent's message may be delivered before the worker sets its onmessage handler. Race (violates 2):
'parent.js':
var worker = new Worker('worker.js');
worker.postMessage("initialize");
'worker.js':
setTimeout(
    function() {
        onmessage = function(e) {
            // ...
        };
    },
    0);
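Putting both points together, a version with messages flowing in both directions and no race might look like this (my sketch, not code from the earlier post):

'parent.js':
// Point 1: set the parent's onmessage handler before the worker has any
// chance to post back to the parent.
var worker = new Worker('worker.js');
worker.onmessage = function(e) {
    // Handle the worker's reply.
};
// The parent sends the first message; it is queued and handled after the
// worker's first synchronous block of execution.
worker.postMessage("initialize");
'worker.js':
// Point 2: set the worker's onmessage handler in its first synchronous
// block of execution.
onmessage = function(e) {
    // Reply only in response to the parent's message, by which time the
    // parent's onmessage handler is already set.
    postMessage("initialized");
};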
It was relatively easy, although still more difficult than I would have guessed, to hook my bespoke website's Atom feed up to Google Buzz. I already have a Google email account and associated profile, so Buzz just showed up in my Gmail interface. When setting it up, it offered to connect to my YouTube account or my Google Chat account, but I didn't see an option to connect to an arbitrary RSS or Atom feed like I expected.
But of course hooking up an arbitrary Atom or RSS feed is documented. You hook it up the same way you claim a website as your own via your Google Profile (for some reason they want to ensure you own the feed connected to your Buzz account). You do this via Google's Social Graph API, which uses XFN or FOAF. I used XFN by adding a link to my feed to my Google profile (be sure to check 'This is a profile page about me', which ensures that a rel="me" attribute is added to the link in your profile's HTML; this is how XFN works), and by adding a corresponding link in my feed back to my Google profile page with the following:
<atom:link rel="me" href="http://www.google.com/profiles/david.risney" />
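On the Google profile side, checking that box presumably results in the link to my feed being marked up with rel="me" as well, something like this (an illustration with a placeholder URL, not Google's actual markup):

<a rel="me" href="http://example.com/feed.atom">my feed</a>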
I used this Google tool to check my XFN connections, and when I checked back the next day my feed showed up in Google Buzz's configuration dialog.
So it was more difficult than I would have expected (more than just an 'Add your feed' button and textbox), but not super difficult. And yet, after reading this Buzz from DeWitt Clinton, I feel better about opting in to Google's Social API.