The scrollbars in the UWP WebView and in Edge have different default behavior, which has led to many emails to my team. (Everything I talk about here is for the EdgeHtml-based WebView and Edge browser, and does not apply to the Chromium-based Edge browser and WebView2.)
There is an Edge-only -ms-overflow-style CSS property that controls scrollbar behavior. The WebView has a different default for this property than the Edge browser, so if you want the appearance of the scrollbar in the WebView to match the browser you must set the property explicitly. The Edge browser default is scrollbar, which gives a Windows desktop-styled, non-auto-hiding scrollbar. The WebView default is -ms-autohiding-scrollbar, which is a sort of compromise between desktop and UWP app scrollbar behavior: the scrollbar auto-hides, appears Windows desktop-styled when used with the mouse, and appears UWP-styled when used with touch.
Since WebViews are intended to be used in apps, this style is the default so as to better match the app's scrollbars. However, this difference between the browser and the WebView has led to confusion.
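For example, a page hosted in the WebView can switch to the browser's default with a one-line style change (a minimal sketch; msOverflowStyle is the DOM projection of the -ms-overflow-style property and only has an effect in EdgeHtml):
// Match the Edge browser default: always-visible, desktop-styled scrollbars.
document.documentElement.style.msOverflowStyle = "scrollbar";
// Or, for comparison, the WebView default:
// document.documentElement.style.msOverflowStyle = "-ms-autohiding-scrollbar";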
Here’s an -ms-overflow-style JSFiddle showing the difference between the two styles. Try it in the Edge browser and in WebView. An easy way to try it in the Edge WebView is using the JavaScript Browser.
The GoBack and GoForward methods on the UWP WebView (x-ms-webview in HTML, Windows.UI.Xaml.Controls.WebView in XAML, and Windows.Web.UI.Interop.WebViewControl in Win32) act the same as the Back and Forward buttons in the Edge browser. They don't necessarily change the top-level document of the WebView. If an iframe inside the WebView navigates, that navigation is recorded in the forward/back history, and a GoBack or GoForward call may result in navigating that iframe. This makes sense for an end user of the Edge browser: if I click a link to navigate one place and then hit Back, I expect to sort of undo that most recent navigation regardless of whether it happened in an iframe or the top-level document.
If that doesn't make sense for your application and you want to navigate forward or back ignoring iframe navigations, unfortunately there's no perfect workaround.
One workaround is to call GoBack and then check whether a FrameNavigationStarting or a NavigationStarting event fires. If a frame navigated, call GoBack again. There could be async races here: other navigations could come in, send you the wrong signal, and interrupt your multi-step GoBack operation.
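Here's a rough sketch of that workaround for the HTML x-ms-webview (illustrative only, and still subject to the races just described; it assumes an x-ms-webview element with id "webview"):
var webview = document.getElementById("webview");

function goBackSkippingFrames() {
    if (!webview.canGoBack) {
        return;
    }
    function cleanup() {
        webview.removeEventListener("MSWebViewNavigationStarting", onTopLevel);
        webview.removeEventListener("MSWebViewFrameNavigationStarting", onFrame);
    }
    function onTopLevel() {
        cleanup(); // A top-level navigation: the back operation is done.
    }
    function onFrame() {
        cleanup();
        goBackSkippingFrames(); // An iframe navigated: step back again.
    }
    webview.addEventListener("MSWebViewNavigationStarting", onTopLevel);
    webview.addEventListener("MSWebViewFrameNavigationStarting", onFrame);
    webview.goBack();
}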
You could also try keeping track of all top-level document navigations and manually navigate back to the URIs you care about. However, GoBack and GoForward also restore some amount of user state (form fills, etc.) in addition to navigating. Manually calling Navigate will not give the same behavior.
Folks familiar with JavaScript UWP apps in Win10 have often been confused by what PWAs in Win10 actually are. TL;DR: PWAs in Win10 are simply JavaScript UWP apps. The main differences between these JS UWP apps and our non-PWA JS UWP apps are the target end developer audience and how we get Win10 PWAs into the Microsoft Store. See this blog post on PWAs in Win10 for related info.
On the web, a subset of web sites are web apps: web sites that have app-like behavior, such that a user might call them apps the way they would Outlook, Maps, or Gmail. They may also have a W3C app manifest.
A subset of web apps are progressive web apps: web apps that have a W3C app manifest and a service worker. Various OSes are beginning to support PWAs as first-class apps on their platforms. This is true for Win10 as well, where PWAs run as WWAs.
In Win10, WWA (Windows Web App) is an unofficial term for a JavaScript UWP app. These are UWP apps, so they have an AppxManifest.xml, they are packaged in an Appx package, they run in an App Container, they use WinRT APIs, and they are installed via the Microsoft Store. Specific to WWAs, though, is that the AppxManifest.xml specifies a StartPage attribute identifying some HTML content to be used as the app. When the app is activated, the OS creates a WWAHost.exe process that hosts the HTML content using the EdgeHtml rendering engine.
Within that we have the notions of a packaged web app and an HWA (hosted web app). There's no real technical distinction between the two for the end developer. The only real difference is whether the StartPage identifies remote HTML content on the web (HWA) or packaged HTML content from the app's appx package (packaged web app). An end developer may create an app that is a mix of these as well, with some HTML content in the package and some from the web. The terms are more like ends of a continuum identifying two different developer scenarios, since the underlying technology is pretty much identical.
Win10 PWAs are simply HWAs whose StartPage is the URI of a PWA on the web. These are still JavaScript UWP apps with all the same behavior and abilities as other UWP apps. We have two ways of getting PWAs into the Microsoft Store as Win10 PWAs. The first is PWA Builder, a tool that helps PWA end developers create a Win10 PWA appx package and submit it to the Microsoft Store. The second is a crawler that runs over the web looking for PWAs, which we convert and submit to the Store using an automated PWA Builder-like tool (see Welcoming PWAs to Win10 for more info). In both cases the conversion involves examining the PWA's W3C app manifest and producing a corresponding AppxManifest.xml. Not all features supported by AppxManifest.xml are available in the W3C app manifest, but the result of PWA Builder can be a working starting point for end developers, who can then update the AppxManifest.xml as they like to support features like share targets that aren't available in W3C app manifests.
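To make the manifest mapping concrete, here's a toy sketch (this is not PWA Builder's actual code, and the field choices are illustrative):
// Map a few W3C app manifest fields to the AppxManifest.xml values they feed.
function w3cManifestToAppxValues(manifest) {
    return {
        displayName: manifest.short_name || manifest.name, // DisplayName
        startPage: manifest.start_url, // Application/@StartPage (the PWA's URI)
        logo: (manifest.icons && manifest.icons.length) ? manifest.icons[0].src : null // Logo
    };
}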
JavaScript Microsoft Store apps have some poorly documented activation details specific to JavaScript Store apps, which I'll describe here.
The StartPage attributes in the AppxManifest.xml (Package/Applications/Application/@StartPage, Package/Applications/Extensions/Extension/@StartPage) define the HTML page entry point for that kind of activation. That is, Application/@StartPage defines the entry point for tile activation, Extension[@Category="windows.protocol"]/@StartPage defines the entry point for URI handling activation, and so on. There are two kinds of supported values in StartPage attributes: relative Windows file paths and absolute URIs. If the attribute doesn't parse as an absolute URI then it is instead interpreted as a relative Windows file path.
This implies a few things that I'll state explicitly here. Windows file paths, unlike URIs, don't have a query or fragment, so if you are using a relative Windows file path for your StartPage attribute you cannot include anything like '?param=value' at the end. Absolute URIs use percent-encoding for reserved characters like '%' and '#'. If you have a '#' in your HTML filename, you need to percent-encode the '#' for a URI but not for a relative Windows file path.
If you specify a relative Windows file path, it is turned into an ms-appx URI by changing all backslashes to forward slashes, percent-encoding reserved characters, and combining the result with a base URI of ms-appx:///. Accordingly, relative Windows file paths are relative to the root of your package. If you are using a relative Windows file path as your StartPage and need to switch to a URI so you can include a query or fragment, you can follow the same steps.
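The conversion is mechanical enough to sketch in a few lines of JavaScript (illustrative, not the OS's actual implementation):
// Convert a relative Windows file path from a StartPage attribute to an ms-appx URI:
// flip backslashes to forward slashes, percent-encode each path segment, and
// combine with the package root ms-appx:///.
function startPageToMsAppxUri(relativeWindowsPath) {
    var segments = relativeWindowsPath.replace(/\\/g, "/").split("/");
    return "ms-appx:///" + segments.map(encodeURIComponent).join("/");
}
// startPageToMsAppxUri("pages\\app#1.html") === "ms-appx:///pages/app%231.html"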
The validity of the StartPage is not determined before activation. If the StartPage is a relative Windows file path for a file that doesn't exist, or an absolute URI that is not in the Application Content URI Rules, or something that doesn't parse as a Windows file path or URI, or an absolute URI that fails to resolve (404, bad hostname, etc.), then the JavaScript app will navigate to the app's navigation error page (perhaps more on that in a future blog post). To call it out explicitly, because I have personally done this by accident: StartPage URIs are not automatically included in the Application Content URI Rules, and if you forget to include your StartPage in your ACUR you will always fail to navigate to that StartPage.
When your app is activated for a particular activation kind, the StartPage value from the entry in your app’s manifest that corresponds to that activation kind is used as the navigation target.
If the app is not already running, the app is activated and navigated to that StartPage value, and then the Windows.UI.WebUI.WebUIApplication activated event is fired (more details on the order of various events in a moment). If, however, your app is already running when an activation occurs, we navigate or don't navigate to the corresponding StartPage depending on the current page of the app. Take the app's current top-level document's URI: if, after removing the fragment, it already matches the StartPage value, then we won't navigate and will jump straight to firing the WebUIApplication activated event.
Since navigating the top-level document means destroying the current JavaScript engine instance and losing all your state, this behavior might be a problem for you. If so, you can use the MSApp.pageHandlesAllApplicationActivations(true) API to always skip navigating to the StartPage and instead always jump straight to firing the WebUIApplication activated event. This of course requires that every one of your pages handles all the activation kinds that any part of your app cares about.
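A minimal sketch of what that opt-in looks like in a page's script (the protocol-activation routing shown is illustrative):
// Opt this page into handling all activations itself so that activation never
// re-navigates to a StartPage and destroys the page's script state.
MSApp.pageHandlesAllApplicationActivations(true);

Windows.UI.WebUI.WebUIApplication.addEventListener("activated", function (e) {
    // Route on the activation kind; handle every kind this app supports.
    if (e.kind === Windows.ApplicationModel.Activation.ActivationKind.protocol) {
        // For protocol activations, e.uri holds the activating URI.
    }
});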
TL;DR: Web content in a JavaScript Windows Store app, or in a WebView in a Windows Store app, that has full access to WinRT also gets to use XHR unrestricted by cross-origin checks.
By default, web content in a WebView control in a Windows Store app has the same sort of limitations it would have in a web browser. However, if you give the URI of that web content full access to WinRT, then the web content also gains the ability to use XMLHttpRequest unrestricted by cross-origin checks. This means no CORS checks and no OPTIONS requests. This only works if the web content's URI matches a Rule in the ApplicationContentUriRules of your app's manifest and that Rule declares WindowsRuntimeAccess="all". If it declares WinRT access as 'allowForWebOnly' or 'none' then XHR acts as it normally does.
In terms of security, if you've already given a page access to all of WinRT, which includes the HttpRequest class and other networking classes that don't perform cross-origin checks, then allowing XHR to skip CORS doesn't make things worse.
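For instance, in a page whose URI has an ACUR entry with WindowsRuntimeAccess="all", a cross-origin request like this (the target URL is just illustrative) goes through with no OPTIONS preflight and no CORS checks:
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://example.com/some/data.json"); // a cross-origin target
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send();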
nasa:
This 30-day mission will help our researchers learn how isolation and close quarters affect individual and group behavior. This study at our Johnson Space Center prepares us for long-duration space missions, like a trip to an asteroid or even to Mars.
The Human Research Exploration Analog (HERA) that the crew members will be living in is one compact, science-making house. But unlike in a normal house, these inhabitants won't go outside for 30 days. Their communication with the rest of planet Earth will also be very limited, and they won't have any access to the internet. So no checking social media, kids!
The only people they will talk with regularly are mission control and each other.
The crew member selection process is based on a number of criteria, including the same criteria for astronaut selection.
What will they be doing?
Because this mission simulates a 715-day journey to a near-Earth asteroid, the four crew members will complete activities similar to those of the outbound transit, on-location, and return transit phases of a mission (just on a bit of an accelerated timeframe). This simulation means that even when communicating with mission control, there will be a delay on all communications ranging from 1 to 10 minutes each way. The crew will also perform virtual spacewalk missions once they reach their destination, where they will inspect the asteroid and collect samples from it.
A few other details:
- The crew follows a timeline that is similar to one used for the ISS crew.
- They work 16 hours a day, Monday through Friday. This includes time for daily planning, conferences, meals and exercise.
- They will be growing and taking care of plants and brine shrimp, which they will analyze and document.
But beware! While we do all we can to avoid crises during missions, crews need to be able to respond in the event of an emergency. The HERA crew will conduct a couple of emergency scenario simulations, including one that will require them to maneuver through a debris field during the Earth-bound phase of the mission.
Throughout the mission, researchers will gather information about cohabitation, teamwork, team cohesion, mood, performance and overall well-being. The crew members will be tracked by numerous devices that each capture different types of data.
Past HERA crew members wore a sensor that recorded heart rate, distance, motion and sound intensity. When crew members were working together, the sensor would also record their proximity, helping investigators learn about team cohesion.
Researchers also learned about how crew members react to stress by recording and analyzing verbal interactions and by analyzing “markers” in blood and saliva samples.
In total, this mission will include 19 individual investigations across key human research elements. From psychological to physiological experiments, the crew members will help prepare us for future missions.
Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com
If you want to represent a value larger than 32 bits in an enum in MSVC++, you can use C++0x-style syntax to tell the compiler exactly what integral type to use to store the enum values. Unfortunately, by default an enum is always 32 bits, and while you can specify constants larger than 32 bits for the enum values, they are silently truncated to 32 bits.
For instance, the following doesn't compile because Lorem::a and Lorem::b end up with the same value of 1:
enum Lorem {
    a = 0x1,
    b = 0x100000001 // silently truncated to 0x1, the same value as a
} val;

switch (val) {
case Lorem::a:
    break;
case Lorem::b: // error: duplicate case value
    break;
}
Unfortunately it is not an error to have b's constant truncated, and the previous example without the switch statement compiles just fine:
enum Lorem {
    a = 0x1,
    b = 0x100000001 // still truncated to 0x1, but no error without the switch
} val;
But you can explicitly specify that the enum should be backed by a 64-bit type and get the expected behavior with the following:
enum Lorem : UINT64 {
    a = 0x1,
    b = 0x100000001 // fits now that the underlying type is 64 bits
} val;

switch (val) {
case Lorem::a:
    break;
case Lorem::b:
    break;
}
Moral: laws should cover behavior, not specific technologies. Implementations can change; laws shouldn't take dependencies on them.
Levels 1 and 2 of Stripe's web security CTF had missing input validation issues; my solutions are described below.
$filename = 'secret-combination.txt';
extract($_GET);
if (isset($attempt)) {
    $combination = trim(file_get_contents($filename));
    if ($attempt === $combination) {
        // ... reveal the level's secret ...
    }
}
The issue here is the use of PHP's extract function, which takes a map of name-value pairs and creates corresponding local variables. This code hands it $_GET, the map of name-value pairs from the URI's query. The expected behavior is to get an attempt variable out, but since no input validation is done I can also provide a filename variable and overwrite the value of $filename. Providing an empty filename gives an empty string $combination, which I can match with an empty string $attempt. So without knowing the combination I can get past the combination check.
The code review red flag here is the direct use of $_GET with no validation. Rather than using extract, the developer could pull just the attempt variable out of $_GET manually.
$dest_dir = "uploads/";
$dest = $dest_dir . basename($_FILES["dispic"]["name"]);
$src = $_FILES["dispic"]["tmp_name"];
if (move_uploaded_file($src, $dest)) {
    $_SESSION["dispic_url"] = $dest;
    chmod($dest, 0644);
    echo "Successfully uploaded your display picture.";
}
This code accepts POST uploads of images but does no validation to ensure the upload isn't an arbitrary file. And even though it uses chmod to ensure the file is not executable, things like PHP don't require a file to be executable in order to run it. Accordingly, one can upload a PHP script, then navigate to that script to run it. My PHP script dumped out the contents of the file we're interested in for this level.
Code review red flags include manual file management, chmod, and use of file and filename inputs without any kind of validation. If this code controlled the filename and ensured that the extension was one of a set of image extensions, that would solve this issue. Due to browser MIME sniffing, it's additionally a good idea to serve a content-type that starts with "image/" for these uploads, to ensure browsers treat them as images and don't sniff for script or HTML.
Web apps really make the lack of URI APIs in the DOM and in JavaScript obvious. This blog post goes over using DOM API side effects to resolve relative URIs and to parse URIs. An additional benefit of this mechanism is that you avoid security issues caused by mismatched behavior between the browser's URI parsing and your app's URI parsing.
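The core of the trick: setting an anchor element's href runs the browser's own URI parser, which resolves against the document's base URI and exposes the parsed components:
var a = document.createElement("a");
a.href = "../other/page.html?x=1#frag"; // resolved against the document's base URI
console.log(a.href); // the resolved absolute URI
console.log(a.protocol, a.host, a.pathname, a.search, a.hash); // parsed components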
The following code compiled just fine but did not at all act in the manner I expected:
BOOL CheckForThing(__in CObj *pObj, __in IFigMgr* pFigMgr, __in_opt LPCWSTR url)
{
    BOOL fCheck = FALSE;
    if (SubCheck(pObj))
    {
        ...
I'm calling SubCheck, which looks like:
bool SubCheck(const CObj& obj);
Did you spot the bug? I should be passing in *pObj, not pObj, since the method takes a const CObj&, not a CObj*. But then why does it compile?
It compiles because CObj is derived from IUnknown and has a constructor in which all but one parameter has a default value:
CObj(__in_opt IUnknown * pUnkOuter, __in_opt LPCWSTR pszUrl = NULL);
Accordingly, C++ uses this constructor as an implicit conversion operator. So instead of passing in my CObj, I end up creating a new CObj on the stack, passing in the CObj I wanted as the outer object, which has a number of issues.
The lesson: unless you really want this behavior, don't write constructors in which all, or all but one, of the parameters have default values, since such constructors can be called with a single argument and so act as implicit conversions. If you need one, consider using the 'explicit' keyword on the constructor.
More info about forcing single argument constructors to be explicit is available on stack overflow.
CreateIUriBuilder(resolvedUri, 0, 0, &builder);
builder->SetHost(host);
builder->CreateUri(0xFFFFFFFF, 0, 0, &resolvedUri);
ResolveHost(resolvedUri, &resolvedUri); // resolvedUri is used as both the in-param and the out-param
operator T**()
{
    // Releases the held pointer, then returns the address of a stack local.
    // The caller's out-param write goes to that soon-dead local rather than
    // back into mPtrValue, and the returned pointer dangles after return.
    T *ptrValue = mPtrValue;
    mPtrValue->Release();
    mPtrValue = NULL;
    return &ptrValue;
}
Raymond Chen has some thought experiments useful for discovering various kinds of stupidity in software design.
Tim Berners-Lee's principles of Web design include my favorite: the Test of Independent Invention. This has a thought experiment involving the construction of the MMM (Multi-Media Mesh) with MRIs (Media Resource Identifiers) and MMTP (Multi-Media Transport Protocol).
The Internet design principles (RFC 1958) include the Robustness Principle: be strict when sending and tolerant when receiving. A good one, but applied too liberally it can lead to interop issues. For instance, consider web browsers. Imagine one browser becomes so popular that web devs create web pages and test them only in this popular browser. They don't ensure their pages conform to standards and accidentally end up depending on the manner in which this popular browser tolerantly accepts non-standard input. This non-standard behavior ends up as a de facto standard, and future updates to the actual standard essentially have decisions already made for them.
Looking at the HTTP traffic of Netflix under Fiddler, I could see the HTTP request that added a movie to my queue, and I didn't see anything obvious that would prevent a CSRF. Sure enough, it's pretty easy to create a page that, if the user has set Netflix to auto-login, will add movies to the user's queue without their knowledge. I thought this was pretty neat, because I could finally get people to watch Primer. However, when I searched for Netflix CSRF I found that this issue had been known and reported to Netflix since 2006. Again my thoughts were stolen from me, and the thief doesn't even have the common decency to let me have the thought first!
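The attack shape is as simple as it sounds; a sketch (the endpoint and parameter here are made up, not Netflix's actual API):
// Any page the victim visits can fire a GET that carries the victim's cookies.
var img = new Image();
img.src = "https://www.netflix.com/AddToQueue?movieid=12345"; // hypothetical endpoint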
With this issue known for nearly three years, it's hard to continue calling it an issue. Really they should just document it in their API docs and be done with it. Who knows what Netflix-based web sites and services they'd break if they tried to change this behavior? For instance, follow this link to add my Netflix recommended movies to your queue.