JSBrowser is a basic browser built as a Win10 JavaScript UWP app around the WebView HTML element. It's fun and relatively simple to implement tiny browser features in JavaScript, and in this post I'm implementing zoom.
My plan is to add a zoom slider to the settings div that controls the scale of the WebView element via a CSS transform. The resulting zoom change is in git and you can try the whole thing out in my JSBrowser fork.
I can implement the zoom settings slider as a range type input HTML element. This conveniently provides min, max, and step properties, which suit my purposes exactly. I chose values I thought would be reasonable, so the browser can scale between 0.5x and 3x in increments of one quarter. This is a tiny browser feature after all, so there's no custom zoom entry.
<a><label for="webviewZoom">Zoom</label><input type="range" min="50" max="300" step="25" value="100" id="webviewZoom" /></a>
To let the user know this slider controls zoom, I add a label HTML element that says Zoom. The label element has a for attribute which takes the id of another HTML element. This tells the browser what the label is labelling, so that, for example, clicking the label puts focus on the slider.
There are no explicit scale APIs for WebView, so to change the size of the content in the WebView we use CSS.
this.applyWebviewZoom = state => {
    // Clamp the slider value to its declared range, then convert from percent to a scale factor.
    const minValue = parseInt(this.webviewZoom.getAttribute("min"), 10);
    const maxValue = parseInt(this.webviewZoom.getAttribute("max"), 10);
    const scaleValue = Math.max(Math.min(parseInt(this.webviewZoom.value, 10), maxValue), minValue) / 100;
    // Use setAttribute so the properties all change together to avoid weird visual glitches
    this.webview.setAttribute("style", [
        // Inverse-scale the width and height so the scaled WebView still fills the same space.
        ["width", (100 / scaleValue) + "%"],
        ["height", "calc(" + (-40 / scaleValue) + "px + " + (100 / scaleValue) + "%)"],
        ["transform", "scale(" + scaleValue + ")"]
    ].map(pair => pair[0] + ": " + pair[1]).join("; "));
};
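For completeness, here's a rough sketch of how a handler like this might be hooked up to the slider. The element lookups and the use of the change event are my assumptions for illustration, not necessarily how JSBrowser actually registers its settings handlers:

```js
// Assumed wiring: look up the elements and re-apply the zoom whenever the slider moves.
this.webviewZoom = document.getElementById("webviewZoom");
this.webview = document.querySelector("x-ms-webview"); // hypothetical selector for the WebView element
this.webviewZoom.addEventListener("change", () => this.applyWebviewZoom());
```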
Because the user changes the scale at runtime, I replace the static CSS for the WebView element with the script above and modify the style of the WebView programmatically. I change the style with one setAttribute call to avoid the browser performing unnecessary work or displaying the WebView in an intermediate, incomplete state. Applying the scale to the element is as simple as adding 'transform: scale(X)', but then there are two interesting problems.
The first is that the size of the WebView itself is scaled, not just the content within it. To keep the WebView the same effective size so that it still fits properly into our browser UI, we must compensate for the scale in the WebView width and height. Accordingly, you can see that we scale up by scaleValue and then divide the width and height by scaleValue.
transform-origin: 0% 0%;
The other issue is that by default the scale transform's origin is the center of the WebView element, which means that when scaled up, all sides of the WebView would expand outward. But changes to width and height apply relative to the upper left of the element, so the inverse scale applied to the width and height above isn't quite enough. We also have to change the origin of the scale transform to match the origin of the width and height changes.
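To make that concrete: with the slider at 200, scaleValue is 2, so the handler above sets width: 50%, height: calc(-20px + 50%), and transform: scale(2) (the 40px term presumably accounts for the browser chrome above the WebView). The WebView is laid out at half its normal size and then scaled back up by 2 from the top-left corner thanks to the 0% 0% transform-origin, so it occupies the same space in the browser UI while its content renders at twice the size.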
In Win8.1 JavaScript UWP apps we supported multiple windows using MSApp DOM APIs. In Win10 the previous MSApp APIs are gone; instead we use window.open, window, and a new MSApp API, getViewId:
| | Win10 | Win8.1 |
| --- | --- | --- |
| Create new window | window.open | MSApp.createNewView |
| New window object | window | MSAppView |
| viewId | MSApp.getViewId(window) | MSAppView.viewId |
We use window.open and window for creating new windows, but then to interact with WinRT APIs we add the MSApp.getViewId API. It takes a window object as a parameter and returns a viewId number that can be used with the various Windows.UI.ViewManagement.ApplicationViewSwitcher APIs.
Views in WinRT normally start hidden and the end developer uses something like TryShowAsStandaloneAsync to display the view once it is fully prepared. In the web world, window.open shows a window immediately and the end user can watch as content is loaded and rendered. To have your new windows act like views in WinRT and not display immediately, we have added a window.open option. For example:
let newWindow = window.open("https://example.com", null, "msHideView=yes");
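Continuing from that example, a rough sketch of showing the hidden window once it's ready might look like the following. The timing and error handling are my own; in the JavaScript projection the WinRT method appears as tryShowAsStandaloneAsync:

```js
// Get the WinRT view id for the window opened above.
const viewId = MSApp.getViewId(newWindow);

// When the new window's content is prepared, display it as a standalone view.
Windows.UI.ViewManagement.ApplicationViewSwitcher.tryShowAsStandaloneAsync(viewId)
    .done(shown => {
        if (!shown) {
            console.log("The view could not be shown as a standalone view.");
        }
    });
```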
The primary window that is initially opened by the OS acts differently than the secondary windows that it opens:
| | Primary | Secondary |
| --- | --- | --- |
| window.open | Allowed | Disallowed |
| window.close | Close app | Close window |
| Navigation restrictions | ACUR only | No restrictions |
The restriction that secondary windows cannot themselves open new windows could change in the future depending on feedback.
Lastly, there is a very difficult technical issue preventing us from properly supporting synchronous, same-origin, cross-window script calls. That is, when you open a window that's same origin, script in one window is allowed to directly call functions in the other window, and some of these calls will fail. postMessage calls work just fine and are the recommended way to do things if that's possible for you. Otherwise we continue to work on improving this.
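For reference, here is a minimal sketch of the postMessage route between the opener and a same-origin child window. The child.html page, the ready handshake, and the message shape are my own illustrations:

```js
// In the opener: open a same-origin window and message it once it signals that it's ready.
const childWindow = window.open("child.html", null, "msHideView=yes");
window.addEventListener("message", event => {
    if (event.source === childWindow && event.data === "ready") {
        childWindow.postMessage({ type: "hello", payload: "from the opener" }, location.origin);
    }
});

// In child.html: tell the opener we're ready, then handle its messages.
window.opener.postMessage("ready", location.origin);
window.addEventListener("message", event => {
    if (event.origin !== location.origin) {
        return; // Ignore messages from unexpected origins.
    }
    console.log("Received", event.data.type, event.data.payload);
});
```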
Previously I described Application Content URI Rules (ACUR) parsing and ACUR ordering. This post describes what you get from putting a URI in ACUR.
URIs in the ACUR gain the following which is otherwise unavailable:
URIs in the ACUR that also have full WinRT access additionally gain the following:
Since I last posted about using Let's Encrypt with NearlyFreeSpeech, NFS has changed their process for setting TLS info. Instead of putting the various files in /home/protected/ssl and submitting an assistance request, there is now a command and a webpage for submitting the certificate info.
The webpage is https://members.nearlyfreespeech.net/{username}/sites/{sitename}/add_tls and has a textbox into which you paste all the cert info in PEM form. The domain key, the domain certificate, and the Let's Encrypt intermediate cert must be pasted into the textbox and submitted.
Alternatively, that same info may be provided as standard input to nfsn -i set-tls.
To renew my certificate with the updated NFS process I followed the commands from Andrei Damian-Fekete's script which depends on acme_tiny.py:
python acme_tiny.py --account-key account.key --csr domain.csr --acme-dir /home/public/.well-known/acme-challenge/ > signed.crt
wget -O - https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem > intermediate.pem
cat domain.key signed.crt intermediate.pem > chained.pem
nfsn -i set-tls < chained.pem
Because my certificate had already expired I needed to comment out the section in acme_tiny.py that validates the challenge file. The filenames in the above map to the following:
- account.key: the Let's Encrypt account private key
- domain.csr: the certificate signing request for the domain
- domain.key: the domain's private key
- signed.crt: the newly signed domain certificate
- intermediate.pem: the Let's Encrypt intermediate certificate
- chained.pem: the domain key, signed certificate, and intermediate certificate concatenated together and provided to nfsn set-tls
nasa:
This 30 day mission will help our researchers learn how isolation and close quarters affect individual and group behavior. This study at our Johnson Space Center prepares us for long duration space missions, like a trip to an asteroid or even to Mars.
The Human Research Exploration Analog (HERA) that the crew members will be living in is one compact, science-making house. But unlike in a normal house, these inhabitants won’t go outside for 30 days. Their communication with the rest of planet Earth will also be very limited, and they won’t have any access to the internet. So no checking social media, kids!
The only people they will talk with regularly are mission control and each other.
The crew member selection process is based on a number of criteria, including the same criteria for astronaut selection.
What will they be doing?
Because this mission simulates a 715-day journey to a Near-Earth asteroid, the four crew members will complete activities similar to what would happen during an outbound transit, on location at the asteroid, and the return transit phases of a mission (just in a bit of an accelerated timeframe). This simulation means that even when communicating with mission control, there will be a delay on all communications ranging from 1 to 10 minutes each way. The crew will also perform virtual spacewalk missions once they reach their destination, where they will inspect the asteroid and collect samples from it.
A few other details:
- The crew follows a timeline that is similar to one used for the ISS crew.
- They work 16 hours a day, Monday through Friday. This includes time for daily planning, conferences, meals and exercise.
- They will be growing and taking care of plants and brine shrimp, which they will analyze and document.
But beware! While we do all we can to avoid crises during missions, crews need to be able to respond in the event of an emergency. The HERA crew will conduct a couple of emergency scenario simulations, including one that will require them to maneuver through a debris field during the Earth-bound phase of the mission.
Throughout the mission, researchers will gather information about cohabitation, teamwork, team cohesion, mood, performance and overall well-being. The crew members will be tracked by numerous devices that each capture different types of data.
Past HERA crew members wore a sensor that recorded heart rate, distance, motion and sound intensity. When crew members were working together, the sensor would also record their proximity, helping investigators learn about team cohesion.
Researchers also learned about how crew members react to stress by recording and analyzing verbal interactions and by analyzing “markers” in blood and saliva samples.
In total, this mission will include 19 individual investigations across key human research elements. From psychological to physiological experiments, the crew members will help prepare us for future missions.
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com