The documentation for printing in JavaScript UWP apps is out of date: it all references MSApp.getHtmlPrintDocumentSource, but that method was replaced by MSApp.getHtmlPrintDocumentSourceAsync in WinPhone 8.1.

Before WinPhone 8.1, the WebView's HTML content ran on the UI thread of the app. This is troublesome for rendering arbitrary web content, since in the extreme case the JavaScript of some arbitrary web page might just sit in a loop and never return control to your app's UI. With WinPhone 8.1 we added the off-thread WebView, in which the WebView runs HTML content on a separate UI thread.

The off-thread WebView required changing our MSApp.getHtmlPrintDocumentSource API, which could no longer synchronously produce an HtmlPrintDocumentSource. With WebViews running on their own threads, it may take some time for them to generate their print content for the HtmlPrintDocumentSource, and we don't want to hang the app's UI thread in the interim. So MSApp.getHtmlPrintDocumentSource was replaced with MSApp.getHtmlPrintDocumentSourceAsync, which returns a promise whose resolved value is the eventual HtmlPrintDocumentSource.

The usage of the API is otherwise unchanged, so sample code you see referencing MSApp.getHtmlPrintDocumentSource is still reasonable; you just need to call MSApp.getHtmlPrintDocumentSourceAsync instead and wait for the promise to complete. For example, the PrintManager docs have an example implementing a PrintTaskRequested event handler in a JavaScript UWP app:
function onPrintTaskRequested(printEvent) {
    var printTask = printEvent.request.createPrintTask("Print Sample", function (args) {
        // The old synchronous API, no longer available as of WinPhone 8.1:
        args.setSource(MSApp.getHtmlPrintDocumentSource(document));
    });
}
Instead we need to obtain a deferral in the event handler so we can asynchronously wait for getHtmlPrintDocumentSourceAsync to complete:
function onPrintTaskRequested(printEvent) {
    var printTask = printEvent.request.createPrintTask("Print Sample", function (args) {
        // Take a deferral so we can finish setting the source asynchronously.
        const deferral = args.getDeferral();
        MSApp.getHtmlPrintDocumentSourceAsync(document).then(htmlPrintDocumentSource => {
            args.setSource(htmlPrintDocumentSource);
            deferral.complete();
        }, error => {
            console.error("Error: " + error.message + " " + error.stack);
            deferral.complete();
        });
    });
}
Application Content URI Rules (ACUR from now on) define the bounds of the web content that makes up a Microsoft Store application. Package content via the ms-appx URI scheme is automatically considered part of the app, but if you have content on the web via http or https you can use ACUR to declare to Windows that those URIs are also part of your application. When your app navigates to URIs on the web, those URIs are matched against the ACUR to determine whether or not they are part of your app. The MSDN documentation for how matching is done on the wildcard URIs in the ACUR Rule elements is not very helpful, so here are some notes.

You can have up to 100 Rule XML elements per ApplicationContentUriRules element. Each has a Match attribute that can be up to 2084 characters long. The content of the Match attribute is parsed with CreateUri, and when matching against URIs on the web additional wildcard processing is performed. I'll call the URI from the ACUR Rule the rule URI, and the URI it is compared against during app navigation the navigation URI.
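For illustration, the rules live in the package manifest and look something like this (a sketch: the element and attribute names are from the ApplicationContentUriRules schema, but the namespace prefix varies by schema version, and the URIs here are hypothetical):

<uap:ApplicationContentUriRules>
  <uap:Rule Type="include" Match="https://example.com/app/" />
  <uap:Rule Type="include" Match="https://*.example.com/" />
</uap:ApplicationContentUriRules>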
The rule URI is matched to a navigation URI by URI component: scheme, username, password, host, port, path, query, and fragment. If a component does not exist on the rule URI then it matches any value of that component in the navigation URI. For example, a rule URI with no fragment will match a navigation URI with no fragment, with an empty string fragment, or a fragment with any value in it.
Each component except the port may have up to 8 asterisks. Two asterisks in a row count as an escape and match one literal asterisk. For the scheme, username, password, query, and fragment, an asterisk matches whatever it can within its component.

For the host, if the host consists of exactly one asterisk then it matches anything. Otherwise an asterisk in a host only matches within its domain name label. For example, http://*.example.com will match http://a.example.com/ but not http://b.a.example.com/ or http://example.com/. And http://*/ will match http://example.com/, http://a.example.com/, and http://b.a.example.com/. However, the Store places restrictions on submitting apps that use the http://* rule or rules with an asterisk in the second effective domain name label. For example, http://*.com is also restricted for Store submission.
For the path, an asterisk matches within a path segment. For example, http://example.com/a/*/c will match http://example.com/a/b/c and http://example.com/a//c, but not http://example.com/a/b/b/c or http://example.com/a/c.
Additionally for the path, if the path ends with a slash then it matches any path that starts with that same path. For example, http://example.com/a/ will match http://example.com/a/b and http://example.com/a/b/c/d/e/, but not http://example.com/b/.
If the path doesn’t end with a slash then there is no suffix matching performed. For example, http://example.com/a will match only http://example.com/a and no URIs with a different path.
As part of parsing the rule URI and the navigation URI, CreateUri performs URI normalization: the hostname and scheme are lowercased (case matters in all other parts of the URI, and case-sensitive comparisons are performed), IDN normalization is applied, '.' and '..' path segments are resolved, and the other normalizations described in the CreateUri documentation are performed.
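To make the path rules concrete, here's a rough JavaScript sketch of the path matching described above (my own illustration, not the actual Windows implementation; it ignores the ** escape and the 8-asterisk limit):

function pathRuleMatches(rulePath, navigationPath) {
    // A rule path ending in a slash matches any navigation path with that prefix.
    if (rulePath.endsWith("/") && navigationPath.startsWith(rulePath)) {
        return true;
    }
    // Otherwise compare segment by segment; an asterisk matches within one segment.
    const ruleSegments = rulePath.split("/");
    const navigationSegments = navigationPath.split("/");
    if (ruleSegments.length !== navigationSegments.length) {
        return false;
    }
    const escapeRegExp = s => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    return ruleSegments.every((ruleSegment, i) => {
        const pattern = "^" + ruleSegment.split("*").map(escapeRegExp).join("[^/]*") + "$";
        return new RegExp(pattern).test(navigationSegments[i]);
    });
}

// Per the examples above:
pathRuleMatches("/a/*/c", "/a/b/c");   // true
pathRuleMatches("/a/*/c", "/a/b/b/c"); // false
pathRuleMatches("/a/", "/a/b/c");      // true
pathRuleMatches("/a", "/a/b");         // false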
2016-Nov-5: Updated post on using Let's Encrypt with NearlyFreeSpeech.net
I use NearlyFreeSpeech.net to host my personal website, and I've just finished setting up TLS via Let's Encrypt. The process was slightly more complicated than what you'd like from Let's Encrypt, so for those interested in doing the same on NearlyFreeSpeech.net, I've taken the following notes.
The standard Let's Encrypt client requires su/sudo access which is not available on NearlyFreeSpeech.net's servers. Additionally NFSN's webserver doesn't have any Let's Encrypt plugins installed. So I used the Let's Encrypt Without Sudo client. I followed the instructions listed on the tool's page with the addition of providing the "--file-based" parameter to sign_csr.py.
One thing the script doesn't produce is the chain file. But this topic "Let's Encrypt - Quick HOWTO for NSFN" covers how to obtain that:
curl -o domain.chn https://letsencrypt.org/certs/lets-encrypt-x1-cross-signed.pem
Now that you have all the required files, make the directory /home/protected/ssl on your NFSN server and copy your files into it. This is described in the NFSN topic "provide certificates to NFSN". After copying the files and setting their permissions as described in that link, submit an assistance request. For me it was only 15 minutes before everything was set up.

After enabling HTTPS I wanted all HTTP requests to redirect to HTTPS. The normal Apache documentation on how to do this doesn't work on NFSN servers. Instead, the NFSN FAQ describes it in "redirect http to https and HSTS". You use X-Forwarded-Proto instead of the HTTPS variable because of how NFSN's virtual hosting is set up:
RewriteEngine on
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
Turning on HSTS is as simple as adding the HSTS HTTP header. However, the description in the above link didn't work for me because my site's NFSN realm isn't on the latest Apache yet. Instead I added the following to my .htaccess. Once I'm comfortable that everything has been working well for a few days, I'll turn the max-age up to the recommended minimum value of 180 days.
Header set Strict-Transport-Security "max-age=3600;"
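At that point the header would look something like the following (15552000 seconds is 180 days):

Header set Strict-Transport-Security "max-age=15552000;"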
Finally, to turn on CSP I started up Fiddler with my CSP Fiddler extension. It lets me determine the most restrictive CSP rules I can apply while still having all the resources on my page load. From there I found and removed inline script and some content loaded via http, and otherwise continued tweaking my site and my CSP rules.
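The end result is another header in .htaccess. For example, a minimal starting policy might look like the following (the policy value here is hypothetical; yours comes out of the Fiddler experiment above):

Header set Content-Security-Policy "default-src 'self'"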
After I was done I checked my site with SSL Labs' SSL Test to see what I might have done wrong or could improve. The first time through these steps I hadn't included the chain file, which the SSL Test pointed out. I added that file to the files I had already generated with the Let's Encrypt client, submitted another NFSN assistance request, and 15 minutes later the SSL Test had upgraded me from 'B' to 'A'.
Internet Archive lets you play one of the earliest computer games, Space War!, emulated in JavaScript in the browser.

This entry covers the historical context of Space War! and instructions for working with our in-browser emulator. The system doesn't require installed plugins (although a more powerful machine and a recent browser version are suggested).
The JSMESS emulator (a conversion of the larger MESS project) also contains a real-time portrayal of the lights and switches of a Digital PDP-1, as well as links to documentation and manuals for this $800,000 (2014 dollars) minicomputer.
Param([Parameter(Mandatory=$true)][string]$Path)

$excel = New-Object -ComObject Excel.Application

# Workbooks.OpenText parameters.
$xlWindows = 2                  # Origin
$xlDelimited = 1                # DataType: 1 = delimited, 2 = fixed width
$xlTextQualifierDoubleQuote = 1 # TextQualifier: 1 = double quote, -4142 = no delimiter, 2 = single quote
$consecutiveDelim = $False      # Don't treat consecutive delimiters as one
$tabDelim = $False              # Tab is not a delimiter
$semicolonDelim = $False        # Semicolon is not a delimiter
$commaDelim = $True             # Comma is the delimiter
$StartRow = 1                   # Row at which to start parsing

$excel.Visible = $True
$excel.Workbooks.OpenText($Path, $xlWindows, $StartRow, $xlDelimited, $xlTextQualifierDoubleQuote, $consecutiveDelim, $tabDelim, $semicolonDelim, $commaDelim)
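To use it, save the script and pass the path of the CSV file to open (the script filename here is my own choice):

.\Open-CsvInExcel.ps1 -Path C:\data\example.csv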
See the Workbooks.OpenText documentation for more information.
HTTP Content Coding Token | gzip | deflate | compress
---|---|---|---
Description | An encoding format produced by the file compression program "gzip" (GNU zip) | The "zlib" format as described in RFC 1950 | The encoding format produced by the common UNIX file compression program "compress"
Data Format | GZIP file format | ZLIB Compressed Data Format | The compress program's file format
Compression Method | Deflate (LZ77 and Huffman coding) | Deflate (LZ77 and Huffman coding) | LZW
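These tokens are the values that appear in the Accept-Encoding request header and the Content-Encoding response header. For example:

Accept-Encoding: gzip, deflate
Content-Encoding: gzip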
Compress doesn't seem to be supported by current popular browsers, possibly due to its past patent encumbrance.

Deflate isn't always done correctly: some servers send the raw DEFLATE data format instead of the zlib data format, and at least some versions of Internet Explorer expect raw DEFLATE instead of the zlib data format.
Won't someone think of the URIs?! At some point in the not too distant past, MSDN changed how you link to documentation and broke all existing links. This included some of the links in documents on MSDN itself.
Documentation for the VS JS profiler for Win8 HTML Metro Apps on profiling apps running on remote machines.
“The syntax for allowed Top-Level Domain (TLD) labels in the Domain Name System (DNS) is not clearly applicable to the encoding of Internationalised Domain Names (IDNs) as TLDs. This document provides a concise specification of TLD label syntax based on existing syntax documentation, extended minimally to accommodate IDNs.” Still irritated about arbitrary TLDs.
Cool and (relatively) new methods on the JavaScript Array object are here in the most recent versions of your favorite browser! More about them on ECMAScript5, MSDN, the IE blog, or Mozilla's documentation. Here's the list that's got me excited:
I wanted to ensure that the switch statement in my implementation of IInternetSecurityManager::ProcessURLAction had a case for every possible documented URLACTION. I wrote the following short command line sequence to see the list of all URLACTIONs in the SDK header file that are not found in my source file:
grep URLACTION urlmon.idl | sed 's/.*\(URLACTION[a-zA-Z0-9_]*\).*/\1/g;' | sort | uniq > allURLACTIONs.txt
grep URLACTION MySecurityManager.cpp | sed 's/.*\(URLACTION[a-zA-Z0-9_]*\).*/\1/g;' | sort | uniq > myURLACTIONs.txt
comm -23 allURLACTIONs.txt myURLACTIONs.txt
I'm not a sed expert so I had to read the sed documentation, and I heard about comm from Kris Kowal's blog, which happily was in the Win32 GNU tools pack I already run.
But in my effort to learn and use PowerShell I found the following similar command line:
diff
(more urlmon.idl | %{ if ($_ -cmatch "URLACTION[a-zA-Z0-9_]*") { $matches[0] } } | sort -uniq)
(more MySecurityManager.cpp | %{ if ($_ -cmatch "URLACTION[a-zA-Z0-9_]*") { $matches[0] } } | sort -uniq)
In the PowerShell version I can skip the temporary files, which is nice. 'diff' is mapped to 'compare-object', which seems similar to comm but has no parameters to filter out the different streams (although this could be done more verbosely with the ?{ } filter syntax). In PowerShell, uniq functionality is built into sort. The builtin -cmatch operator (c is for case sensitive) for regexp matching is nice, plus it has the side effect of generating the $matches variable with the regexp results.
It was relatively easy, although still more difficult than I would have guessed, to hook my bespoke website's Atom feed up to Google Buzz. I already have a Google email account and associated profile, so Buzz just showed up in my Gmail interface. During setup it offered to connect to my YouTube account or my Google Chat account, but I didn't see an option to connect an arbitrary RSS or Atom feed like I expected.

But of course hooking up an arbitrary Atom or RSS feed is documented. You hook it up in the same manner you claim a website as your own via your Google Profile (for some reason they want to ensure you own the feed connected to your Buzz account). You do this via Google's social graph API, which uses XFN or FOAF. I used XFN by simply adding a link to my feed to my Google profile (be sure to check 'This is a profile page about me', which ensures that a rel="me" tag is added to the HTML on your profile; this is how XFN works) and by adding a corresponding link in my feed back to my Google profile page with the following:
<atom:link rel="me" href="http://www.google.com/profiles/david.risney"/>
I used this Google tool to check my XFN connections, and when I checked back the next day my feed showed up in Google Buzz's configuration dialog.
So, more difficult than I would have expected (more difficult than just an 'Add your feed' button and textbox), but not super difficult. And yet after reading this Buzz from DeWitt Clinton I feel better about opting in to Google's Social API.
Before we shipped IE8 there were no Accelerators, so we had some fun making our own for our favorite web services. I've got a small set of tips for creating Accelerators for other people's web services. I was planning on writing this up as an IE blog post, but Jon wrote a post covering a similar area, so rather than write a full and coherent blog post I'll just list a few points:
Working on Internet Explorer extensions in C++ & COM, I had to relearn or rediscover how to do several totally basic and important things. To save myself, and possibly others, trouble in the future, here are some pertinent links and tips.
First you must choose your IE extensibility point. Here's a very short list of the few I've used:
Once you've created your COM object that implements IObjectWithSite (and whatever other interfaces your extensibility point requires, as described in the links above), you'll see your SetSite method get called by IE. You might want to know how to get the top-level browser object from the IUnknown site object passed in via that method.
After that you may also want to listen for some events from the browser. To do this you'll need to:
If you want to check whether an IHTMLElement is not visible on screen due to how the page is scrolled, try comparing the body or document element's client height and width, which appear to be the dimensions of the visible document area, to the element's bounding client rect, which appears to be its position relative to the upper left corner of the visible document area. This has worked for me so far, but I'm not positive that frames, iframes, zooming, editable document areas, etc. won't mess it up.
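Sketched in JavaScript rather than COM (my own illustration of the same comparison; the C++ version goes through the corresponding IHTMLElement/IHTMLElement2 methods):

function isInVisibleDocumentArea(element) {
    // Dimensions of the visible document area.
    var viewWidth = document.documentElement.clientWidth;
    var viewHeight = document.documentElement.clientHeight;
    // Element's position relative to the upper left corner of the visible area.
    var rect = element.getBoundingClientRect();
    return rect.right > 0 && rect.bottom > 0 &&
        rect.left < viewWidth && rect.top < viewHeight;
}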
Be sure to use pointers you get from IWebBrowser/IHTMLDocument/etc. only on the thread on which you obtained them, or correctly marshal the pointers to other threads, to avoid weird crashes and hangs.
Obtaining the HTML document of a subframe is slightly more complicated than you might hope. On the other hand, this might be resolved by IHTMLFrameElement3::get_contentDocument, new in IE8.
Check out Eric's IE blog post on IE extensibility which has some great links on this topic as well.