Client-side development on OS X using Windows hosted HTTP Web APIs

In a recent post I described my Android development environment, based on an OS X host, the Genymotion Android emulator, and a Windows VM to run the back-end HTTP APIs.
In this post I’ll describe a similar environment but now for browser-side applications, once again using Windows hosted HTTP APIs.

Recently I had to do some prototyping involving browser-based applications, using ES6 and React, that interact with IdentityServer3 and an HTTP API.
Both the IdentityServer3 server and the ASP.NET HTTP APIs run on a Windows VM; however, I prefer to use the host OS X environment for the client-side development (node, npm, webpack, babel, …).
Another requirement is that the server side uses HTTPS and multiple host names (e.g. id.example.com, app1.example.com, app2.example.com), as described in this previous post.

The solution that I ended up using for this environment is the following:

  • On the Windows VM side I have Fiddler running on port 8888 with “Allow remote computer to connect” enabled. This means that Fiddler will act as a proxy even for requests originating from outside the Windows VM.
  • On the OS X host I launch Chrome with open -a "/Applications/Google Chrome.app" --args --proxy-server=10.211.55.3:8888 --proxy-bypass-list=localhost, where 10.211.55.3 is the Windows VM address. To automate this procedure I use the Automator tool to create a shell script based workflow (the script is shown below).
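
For reference, the shell script inside that workflow amounts to the following (a minimal sketch; the VM address and port are the ones from my setup):

#!/bin/bash
# Launch Chrome proxied through the Fiddler instance on the Windows VM,
# bypassing the proxy for localhost requests.
open -a "/Applications/Google Chrome.app" --args \
  --proxy-server=10.211.55.3:8888 \
  --proxy-bypass-list=localhost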

The end result, depicted in the following diagram, is that all requests (except for localhost) will be forwarded to the Fiddler instance running on the Windows VM, which will use the Windows hosts file to direct the request to the multiple IIS sites.

[Diagram: requests from the OS X host flowing through the Fiddler proxy on the Windows VM to the multiple IIS sites]
As a bonus, I also have full visibility on the HTTP messages.

And that’s it. I hope it helps.

Using multiple IIS server certificates on Windows 7

Nowadays I do most of my Windows development on a Windows 7 VM running on OS X (Windows 8 and Windows Server 2012 left some scars, so I’m very reluctant to move to Windows 10). On this development environment I like to mimic some production environment characteristics, namely:

  • Using IIS based hosting
  • Having each site using different host names
  • Using HTTPS

For the site names I typically use example.com subdomains (e.g. id.example.com, app1.example.com, app2.example.com), which are reserved by IANA for documentation purposes (see RFC 6761). I associate these names with local addresses via the hosts file.

For generating the server certificates I use makecert and the scripts published in Appendix G of the Designing Evolvable Web APIs with ASP.NET book.
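
As an illustration, a makecert invocation for one of these server certificates looks roughly like the following (a sketch in the spirit of those scripts; I’m assuming the usual self-signed SSL flags, so adjust names and validity dates as needed):

makecert -r -pe -n "CN=id.example.com" -b 01/01/2016 -e 01/01/2020 ^
    -eku 1.3.6.1.5.5.7.3.1 -ss my -sr localMachine -sky exchange ^
    -sp "Microsoft RSA SChannel Cryptographic Provider" -sy 12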

However, having multiple sites using distinct certificates hosted on the same IP address and port presents some challenges. This is because IIS/HTTP.SYS uses the Host header to demultiplex the incoming requests to the different sites bound to the same IP and port.
Yet when using TLS, the server certificate must be provided during the TLS handshake, well before the TLS connection is established and the Host header is received. Since at that time HTTP.SYS does not know the target site, it also cannot select the appropriate certificate.

Server Name Indication (SNI) is a TLS extension (see RFC 3546) that addresses this issue, by letting the client send the host name in the TLS handshake, allowing the server to identify the target site and use the corresponding certificate.

Unfortunately, HTTP.SYS on Windows 7 does not support SNI (that’s what I get for using a 2009 operating system). To circumvent this, I took advantage of the fact that there are loopback addresses other than 127.0.0.1 (the whole 127.0.0.0/8 block is loopback). So, what I do is use a different loopback IP address for each site on my machine, as illustrated by the following excerpt of my hosts file:

127.0.0.2 app1.example.com
127.0.0.3 app2.example.com
127.0.0.4 id.example.com

When I configure the HTTPS IIS bindings, I explicitly set the listening IP address of each site to the corresponding loopback value, which allows me to use a different certificate for each site.
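
Since each certificate ends up bound to a distinct address and port pair, the resulting HTTP.SYS bindings can be inspected with netsh (illustrative; the addresses match the hosts file excerpt above):

netsh http show sslcert ipport=127.0.0.2:443
netsh http show sslcert ipport=127.0.0.3:443
netsh http show sslcert ipport=127.0.0.4:443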

And that’s it. Hope it helps.

The OpenID Connect Cast of Characters

Introduction

The OpenID Connect protocol provides support for both delegated authorization and federated authentication, unifying features that traditionally were provided by distinct protocols. As a consequence, the OpenID Connect protocol parties play multiple roles at the same time, which can sometimes be hard to grasp. This post aims to clarify this, describing how the OpenID Connect parties relate to each other and to the equivalent parties in previous protocols, namely OAuth 2.0.

OAuth 2.0

The OAuth 2.0 authorization framework introduced a new set of characters into the distributed access control story.

[Diagram: the OAuth 2.0 cast of characters]

  • The User (aka Resource Owner) is a human with the capability to authorize access to a set of protected resources (i.e. the user is the resource owner).
  • The Resource Server is the HTTP server exposing access to the protected resources via an HTTP API. This access is dependent on the presence and validation of access tokens in the HTTP request.
  • The Client Application is an HTTP client that accesses user resources on the Resource Server. To perform these accesses, the client application needs to obtain access tokens issued by the Authorization Server.
  • The Authorization Server is the party issuing the access tokens used by the Client Application on the requests to the Resource Server.
  • Access Tokens are strings created by the Authorization Server and targeted to the Resource Server. They are opaque to the Client Application, which just obtains them from the Authorization Server and uses them on the Resource Server without any further processing.

To make things a little bit more concrete, let’s look at an example:

  • The User is Alice and the protected resources are her repositories at GitHub.
  • The Resource Server is GitHub’s API.
  • The Client Application is a third-party application, such as Huboard or Travis CI, that needs to access Alice’s repositories.
  • The Authorization Server is also GitHub, providing the OAuth 2.0 protocol “endpoints” for the client application to obtain the access tokens.

OAuth 2.0 models the Resource Server and the Authorization Server as two distinct parties; however, they can be run by the same organization (GitHub, in the previous example).

[Diagram: the Resource Server and the Authorization Server operated by the same organization]

An important characteristic to emphasise is that the access token does not directly provide any information about the User to the Client Application – it simply provides access to a set of protected resources. The fact that some of these protected resources may be used to provide information about the User’s identity is out of scope of OAuth 2.0.
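
Continuing the GitHub example, the Client Application simply attaches the opaque token to its requests to the Resource Server (an illustrative request, where <access-token> stands for the value obtained from the Authorization Server):

curl -H "Authorization: token <access-token>" https://api.github.com/user/repos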

Delegated Authentication and Identity Federation

However, delegated authentication and identity federation protocols, such as the SAML protocols or the WS-Federation protocol, use a different terminology.

[Diagram: the Relying Party and the Identity Provider]

  • The Relying Party (or Service Provider, in the SAML protocol terminology) is typically a Web application that delegates user authentication to an external Identity Provider.
  • The Identity Provider is the entity authenticating the user and communicating her identity claims to the Relying Party.
  • The communication of identity claims between these two parties is made via identity tokens, which are protected containers for identity claims:
    • The Identity Provider creates the identity token.
    • The Relying Party consumes the identity token by validating it and using the contained identity claims.

Sometimes the same entity can play both roles. For example, an Identity Provider can re-delegate the authentication process to another Identity Provider:

  • An Organisational Web application (e.g. order management) delegates the user authentication process to the Organisational Identity Provider.
  • However, this Organisational Identity Provider re-delegates user authentication to a Partner Identity Provider.
  • In this case, the Organisational Identity Provider is simultaneously:
    • A Relying Party for the authentication made by the Partner Identity Provider.
    • An Identity Provider, providing identity claims to the Organisational Web Application.

[Diagram: an Organisational Identity Provider acting simultaneously as Relying Party and Identity Provider]

In these protocols, the main goal of the identity token is to provide identity information about the User to the Relying Party. Namely, the identity token is not meant to provide access to a set of protected resources. This characteristic sharply contrasts with OAuth 2.0 access tokens.

OpenID Connect

The OpenID Connect protocol is “a simple identity layer on top of the OAuth 2.0 protocol”, providing delegated authorization as well as authentication delegation and identity federation. It unifies in a single protocol the functionalities that previously were provided by distinct protocols. As a consequence, there are now multiple parties that play more than one role:

  • The OpenID Provider (new term introduced by the OpenID Connect specification) is an Identity Provider and an Authorization Server, simultaneously issuing identity tokens and access tokens.
  • The Relying Party is also a Client Application. It receives both identity tokens and access tokens from the OpenID Provider. However, there is a significant difference in how these tokens are used by this party:
    • The identity tokens are consumed by the Relying Party/Client Application to obtain the user’s identity.
    • The access tokens are not directly consumed by the Relying Party. Instead they are attached to requests made to the Resource Server, without ever being opened at the Relying Party.
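
For illustration, a decoded identity token payload might look like the following (hypothetical values; the claim names are the standard ones defined by OpenID Connect):

{
 "iss": "https://id.example.com",
 "sub": "24400320",
 "aud": "client-app-1",
 "exp": 1457712000,
 "iat": 1457708400,
 "name": "Alice"
}

The access token, in contrast, is never interpreted like this by the Relying Party: it is just attached to the Resource Server requests, typically in the Authorization header.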

[Diagram: the OpenID Connect parties and their dual roles]

I hope this post sheds some light on the dual nature of the parties in the OpenID Connect protocol.

Please feel free to use the comments section for any questions.

Using Fiddler for an Android and Windows VM development environment

In this post I describe the development environment that I use when creating Android apps that rely on ASP.NET based Web applications and Web APIs.

  • The development machine is a MBP running OS X with Android Studio.
  • Android virtual devices are run on Genymotion, which uses VirtualBox underneath.
  • Web applications and Web APIs are hosted on a Windows VM running on Parallels over the OS X host.

I use the Fiddler proxy to enable connectivity between Android and the ASP.NET apps, as well as to provide me full visibility on the HTTP messages. Fiddler also enables me to use HTTPS even on this development environment.

The main idea is to use Fiddler as the Android’s system HTTP proxy, in conjunction with a port forwarding rule that maps a port on the OS X host to the Windows VM. This is depicted in the following diagram.

[Diagram: the Android system proxy pointing to Fiddler on the Windows VM, via a Parallels port forwarding rule]

The required configuration steps are:

  1. Start Fiddler on the Windows VM and allow remote computers to connect
    • Fiddler – Tools – Fiddler Options – Connections – check “Allow remote computers to connect”.
    • This will make Fiddler listen on 0.0.0.0:8888.
  2. Enable Fiddler to intercept HTTPS traffic
    • Fiddler – Tools – Fiddler Options – HTTPS – check “Decrypt HTTPS traffic”.
    • This will add a new root certificate to the “Trusted Root Certification Authorities” Windows certificate store.
  3. Define a port forwarding rule mapping TCP port 8888 on the OS X host to port TCP 8888 on the Windows guest (where Fiddler is listening).
    • Parallels – Preferences – Network: change settings – Port forward rules – add “TCP:8888 -> Windows:8888”.
  4. Check which “host-only network” the Android VM is using
    • VirtualBox Android VM – Settings – Network – Name (e.g. “vboxnet1”).
  5. Find the IP for the identified adapter
    • VirtualBox – Preferences – Network – Host-only Networks – “vboxnet1”.
    • In my case the IP is 192.168.57.1.
  6. On Android, configure the Wi-Fi connection HTTP proxy (based on “Configure Fiddler for Android / Google Nexus 7”).
    • Settings – Wi-Fi – long tap on the chosen network – modify network – enable advanced options – manual proxy
      • Set “Proxy hostname” to the IP identified in the previous step (e.g. 192.168.57.1).
      • Set “Proxy port” to 8888.
    • With this step, all the HTTP traffic will be directed to the Fiddler HTTP proxy running on the Windows VM (an adb-based alternative is sketched after this list).
  7. The last step is to install the Fiddler root certificate, so that the Fiddler generated certificates are accepted by the Android applications, such as the system browser (based on “Configure Fiddler for Android / Google Nexus 7”).
    • Open the browser and navigate to http://ipv4.fiddler:8888
    • Select the link “FiddlerRoot certificate” and on the Android dialog select “Credential use: VPN and apps”.
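
As a possible shortcut for step 6, recent Android versions also allow setting the global proxy via adb. I haven’t verified this on Genymotion images, so treat it as an assumption:

adb shell settings put global http_proxy 192.168.57.1:8888
# and to remove it later
adb shell settings put global http_proxy :0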

And that’s it: all HTTP traffic that uses the Android system’s proxy settings will be directed to Fiddler, with the following advantages:

  • Visibility of the requests and responses on the Fiddler UI, namely the ones using HTTPS.
  • Access to Web applications running on the Windows VM, using either IIS hosting or self-hosting.
  • Access to external hosts on the Internet.
  • Use of the host name overrides in the Windows “hosts” file.
    • For development purposes I typically use host names other than “localhost”, such as “app1.example.com” or “id.example.com”.
    • Since the name resolution will be done by the Fiddler proxy, these host names can be used directly on Android.

Here is the screenshot of Chrome running on Android and presenting an ASP.NET MVC application running on the Windows VM. Notice the green “https” icon.

[Screenshot: Chrome on Android presenting the ASP.NET MVC application over HTTPS]

And here is the Chrome screenshot of an IdentityServer3 login screen, also running on the Windows VM.

[Screenshot: Chrome on Android presenting the IdentityServer3 login screen]

Hope this helps!

OAuth 2.0 and PKCE

Introduction

Both Google and IdentityServer have recently announced support for the PKCE (Proof Key for Code Exchange by OAuth Public Clients) specification defined by RFC 7636.

This is an excellent opportunity to revisit the OAuth 2.0 authorization code flow and illustrate how PKCE addresses some of the security issues that exist when this flow is implemented on native applications.

tl;dr

In the authorization code flow, the redirect from the authorization server back to the client is one of the most security-sensitive parts of the OAuth 2.0 protocol. The main reason is that this redirect contains the code representing the authorization delegation performed by the User. On public clients, such as native applications, this code is enough to obtain the access tokens allowing access to the User’s resources.

The PKCE specification addresses an attack vector where an attacker creates a native application that registers the same URL scheme used by the Client application, therefore gaining access to the authorization code. Succinctly, the PKCE specification requires the exchange of the code for the access token to use an ephemeral secret that is not available on the redirect, making knowledge of the code insufficient to use it. This extra information (or a transformation of it) is sent on the initial authorization request.

A slightly longer version

The OAuth 2.0 cast of characters

  • The User is typically a human entity capable of granting access to resources.
  • The Resource Server (RS) is the entity exposing an HTTP API to access these resources.
  • The Client is an application (e.g. server-based Web application or native application) wanting to access these resources, via an authorization delegation performed by the User. Clients can be:
    • confidential – client applications that can hold a secret. The typical example is a Web application, where a client secret is stored and used only on the server side.
    • public – client applications that cannot hold a secret, such as native applications running on the User’s mobile device.
  • The Authorization Server (AS) is the entity that authenticates the user, captures her authorization consent and issues access tokens that the Client application can use to access the resources exposed on the RS.

Authorization code flow for Web Applications

The following diagram illustrates the authorization code flow for Web applications (the Client application is a Web server).

[Diagram: the OAuth 2.0 authorization code flow for Web applications]

  1. The flow starts with the Client application server-side producing a redirect HTTP response (e.g. response with 302 status) with the authorization request URL in the Location header. This URL will contain the authorization request parameters such as the state, scope and redirect_uri.
  2. When receiving this response, the User’s browser automatically performs a GET HTTP request to the Authorization Server (AS) authorization endpoint, containing the OAuth 2.0 authorization request.
  3. The AS then starts an interaction sequence to authenticate the user (e.g. username and password, two-factor authentication, delegated authentication), and to obtain the user consent. This sequence is not defined by OAuth 2.0 and can take multiple steps.
  4. After having authenticated and obtained consent from the user, the AS returns an HTTP redirect response with the authorization response on the Location header. This URL points to the client application hostname and contains the authorization response parameters, such as the state and the (security-sensitive) code.
  5. When receiving this response, the user’s browser automatically performs a GET request to the Client redirect endpoint with the OAuth 2.0 authorization response. By using HTTPS on the request to the Client, the protocol minimises the chances of the code being leaked to an attacker.
  6. Having received that authorization code, the Client then uses it to obtain the access token from the AS token endpoint. Since the client is a confidential client, this request is authenticated with the client credentials (client ID and client secret), typically sent in the Authorization header using the basic scheme. The AS checks if this code is valid, namely if it was issued to the requesting authenticated client. If everything is verified, a 200 response with the access token is returned.
  7. Finally, the client can use the received access token to access the protected resources.
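
To make these steps more concrete, here are simplified versions of the three key messages (hypothetical host names, with values loosely based on the RFC 6749 examples and several parameters omitted):

# Steps 1-2: authorization request, sent by the browser to the AS
GET /authorize?response_type=code&client_id=app1&state=xyz&redirect_uri=https%3A%2F%2Fapp1.example.com%2Fcb HTTP/1.1
Host: id.example.com

# Steps 4-5: authorization response, relayed by the browser to the Client
HTTP/1.1 302 Found
Location: https://app1.example.com/cb?code=SplxlOBeZQQYbYS6WxSbIA&state=xyz

# Step 6: token request, authenticated with the client credentials
POST /token HTTP/1.1
Host: id.example.com
Authorization: Basic czZCaGRSa3F0MzpnWDFmQmF0M2JW
Content-Type: application/x-www-form-urlencoded

grant_type=authorization_code&code=SplxlOBeZQQYbYS6WxSbIA&redirect_uri=https%3A%2F%2Fapp1.example.com%2Fcb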

Authorization code flow for native Applications

For a native application, the flow is slightly different, namely on the first phase (the authorization request). Recall that in this case the Client application is running on the User’s device.

[Diagram: the OAuth 2.0 authorization code flow for native applications]

  1. The flow begins with the Client application starting the system’s browser (or a web view, more on this on another post) at a URL with the authorization request. For instance, on the Android platform this is achieved by sending an intent.
  2. The browser comes into the foreground and performs a GET request to the AS authorization endpoint containing the authorization request.
  3. The same authentication and consent dance occurs between the AS and the User’s browser.
  4. After having authenticated and obtained consent from the user, the AS returns an HTTP redirect response with the authorization response on the Location header. This URL contains the authorization response parameters. However, there is something special in this redirect URL: instead of using an http URL scheme, which would make the browser perform another HTTP request, it uses a custom URI scheme (an example is shown after this list).
  5. As a result, when the browser receives this response and processes the redirect an inter-application message (e.g. an intent in Android) is sent to the application associated to this scheme, which should be the Client application. This brings the Client application to the foreground and provides it with the authorization response parameters, namely the authorization code.
  6. From now on, the flow is similar to the Web-based one. Namely, the Client application uses the code to obtain the access token from the AS token endpoint. Since the client is a public client, this request is not authenticated, that is, no client secret is used.
  7. Finally, having received the access token, the client application running on the device can access the User’s resources.
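
The structural difference from the Web flow is thus on step 4: the final redirect uses a custom URI scheme instead of https (hypothetical scheme and values):

HTTP/1.1 302 Found
Location: com.example.app1:/oauth2/callback?code=SplxlOBeZQQYbYS6WxSbIA&state=xyz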

On both scenarios, the authorization code communication path, from the AS to the Client via the User’s browser, is very security sensitive. This is especially relevant in the native scenario, since the Client is public and knowledge of that authorization code is enough to obtain the access token.

Hijacking the redirect

On the Web application scenario, the GET request with the authorization response has an HTTPS URL, which means that the browser will only send the code if the server correctly authenticates itself. However, on the native scenario, the intent will be sent to any installed application that registered the custom scheme. Unfortunately, there isn’t a central entity controlling and validating these scheme registrations, so an application can hijack the message from the browser to the client application, as shown in the following diagram.

[Diagram: a malicious application hijacking the authorization code redirect]

Having obtained the authorization code, the attacker’s application has all the information required to retrieve a token and access the User’s resources.

The PKCE protection

The PKCE specification mitigates this vulnerability by requiring an extra code_verifier parameter on the exchange of the authorization code for the access token.

[Diagram: the authorization code flow extended with the PKCE code_challenge and code_verifier parameters]

  • On step 1, the Client application generates a random secret, stores it and uses its hash value on the new code_challenge authorization request parameter.
  • On step 4, the AS somehow associates the returned code to the code_challenge.
  • On step 6, the Client includes a code_verifier parameter with the secret on the token request message. The AS computes the hash of the code_verifier value and compares it with the original code_challenge associated with the code. Only if they are equal is the code accepted and an access token returned.

This ensures that only the entity that started the flow (i.e. that sent the code_challenge on the authorization request) can end the flow and obtain the access token. By using a cryptographic hash function to derive the code_challenge (the S256 method), the protocol is protected from attackers that have read access to the original authorization request. However, the protocol also allows the secret to be used directly as the code_challenge (the plain method).
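
As an illustration of the S256 method, a code_verifier and the corresponding code_challenge can be produced as follows (a sketch using openssl; any source with enough entropy works for the verifier):

# code_verifier: 32 random bytes, base64url-encoded without padding
code_verifier=$(openssl rand -base64 32 | tr '+/' '-_' | tr -d '=')
# code_challenge: base64url(SHA-256(code_verifier)), also without padding
code_challenge=$(printf '%s' "$code_verifier" | openssl dgst -sha256 -binary | openssl base64 | tr '+/' '-_' | tr -d '=')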

Finally, the PKCE support of an AS can be advertised on the OAuth 2.0 or OpenID Connect discovery document, using the code_challenge_methods_supported field. The following is Google’s OpenID Connect discovery document, located at https://accounts.google.com/.well-known/openid-configuration.

{
 "issuer": "https://accounts.google.com",
 "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
 "token_endpoint": "https://www.googleapis.com/oauth2/v4/token",
 "userinfo_endpoint": "https://www.googleapis.com/oauth2/v3/userinfo",
 "revocation_endpoint": "https://accounts.google.com/o/oauth2/revoke",
 "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
 "response_types_supported": [
  "code",
  "token",
  "id_token",
  "code token",
  "code id_token",
  "token id_token",
  "code token id_token",
  "none"
 ],
 "subject_types_supported": [
  "public"
 ],
 "id_token_signing_alg_values_supported": [
  "RS256"
 ],
 "scopes_supported": [
  "openid",
  "email",
  "profile"
 ],
 "token_endpoint_auth_methods_supported": [
  "client_secret_post",
  "client_secret_basic"
 ],
 "claims_supported": [
  "aud",
  "email",
  "email_verified",
  "exp",
  "family_name",
  "given_name",
  "iat",
  "iss",
  "locale",
  "name",
  "picture",
  "sub"
 ],
 "code_challenge_methods_supported": [
  "plain",
  "S256"
 ]
}

Using Vagrant to test ASP.NET 5 RC1

The recent Release Candidate 1 (RC1) for ASP.NET 5 includes support for Linux and OS X via .NET Core. After trying it out on OS X, I wanted to do some experiments on Linux as well. For that I used Vagrant to automate the creation and provision of the Linux development environments. In this post I describe the steps required for this task, using OS X as the host (the steps on a Windows host will be similar).

Short version

Start by ensuring Vagrant and VirtualBox are installed on your host machine.
Then open a shell and do the following commands.
The vagrant up command may take a while, since it will not only download and boot the base virtual machine image, but also provision ASP.NET 5 RC1 and all its dependencies.

git clone https://github.com/pmhsfelix/vagrant-aspnet-rc1.git (or your own fork URL instead)
cd vagrant-aspnet-rc1
vagrant up
vagrant ssh

After the last command completes you should have an SSH session into an Ubuntu Server with ASP.NET 5 RC1 installed, running on a virtual machine (VM). Port 5000 on the host is mapped into port 5000 on the guest.

The vagrant-aspnet-rc1 host folder is mounted into the /vagrant guest folder, so you can use this to share files between host and guest.
For instance, an ASP.NET project published to vagrant-aspnet-rc1/published on the host will be visible on the /vagrant/published guest path.
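
For example, assuming a project published to vagrant-aspnet-rc1/published whose approot contains a web command (the layout used later in this post), a quick smoke test could be:

vagrant ssh
# inside the guest: start the server, which listens on port 5000
/vagrant/published/approot/web
# and then, from another shell on the host:
curl http://localhost:5000/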

For any comment or issue that you have, please raise an issue at https://github.com/pmhsfelix/vagrant-aspnet-rc1.

Longer (and perhaps more instructive) version

First, start by installing Vagrant and also VirtualBox, which will be required to run the virtual machine with Linux.

Afterwards, create a new folder (e.g. vagrant-aspnet-rc1) to host the Vagrant configuration.

dotnet pedro$ mkdir vagrant-aspnet-rc1
dotnet pedro$ cd vagrant-aspnet-rc1
vagrant-aspnet-rc1 pedro$

Then, initialize the Vagrant configuration using the init command.

vagrant-aspnet-rc1 pedro$ vagrant init ubuntu/trusty64
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
vagrant-aspnet-rc1 pedro$ ls
Vagrantfile

The second parameter, ubuntu/trusty64, is the name of a box available on the Vagrant public catalog, which in this case contains an Ubuntu Server 14.04 LTS.
Notice also how a Vagrantfile file, containing the Vagrant configuration, was created in the current directory. We will be using this file later on.

The next step is to start the virtual machine.

vagrant-aspnet-rc1 pedro$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ubuntu/trusty64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Setting the name of the VM: vagrant-aspnet_default_1451428161431_85889
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2200 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2200
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: The guest additions on this VM do not match the installed version of
    default: VirtualBox! In most cases this is fine, but in rare cases it can
    default: prevent things such as shared folders from working properly. If you see
    default: shared folder errors, please make sure the guest additions within the
    default: virtual machine match the version of VirtualBox you have installed on
    default: your host and reload your VM.
    default:
    default: Guest Additions Version: 4.3.34
    default: VirtualBox Version: 5.0
==> default: Mounting shared folders...
    default: /vagrant => /Users/pedro/code/dotnet/vagrant-aspnet-rc1

As can be seen in the command output, a VM was booted and SSH was configured. So the next step is to open an SSH session into the machine to check if everything is working properly. This is accomplished using the ssh command.

vagrant-aspnet-rc1 pedro$ vagrant ssh
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Tue Dec 29 22:29:41 UTC 2015

  System load:  0.35              Processes:           80
  Usage of /:   3.4% of 39.34GB   Users logged in:     0
  Memory usage: 25%               IP address for eth0: 10.0.2.15
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

vagrant@vagrant-ubuntu-trusty-64:~$ hostname
vagrant-ubuntu-trusty-64
vagrant@vagrant-ubuntu-trusty-64:~$ id
uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant)

Notice how we end up with a session into a vagrant-ubuntu-trusty-64 machine, running under the vagrant user.
In addition to setting up SSH, Vagrant also mounted the vagrant-aspnet-rc1 host folder (the one where the Vagrantfile was created) into the /vagrant folder on the guest.

vagrant@vagrant-ubuntu-trusty-64:~$ ls /vagrant
Vagrantfile

We could now start to install ASP.NET 5 following the procedure outlined at http://docs.asp.net/en/latest/getting-started/installing-on-linux.html. However, that would be the “old way of doing things” and would not provide us with a reproducible development environment.
A better solution is to create a provision script, called bootstrap.sh, and use it with Vagrant.

The provision script is simply a copy of the procedures at http://docs.asp.net/en/latest/getting-started/installing-on-linux.html, slightly changed to allow unsupervised installation.

#!/usr/bin/env bash

# install dnvm pre-requisites
sudo apt-get install -y unzip curl
# install dnvm
curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh

# install dnx pre-requisites
sudo apt-get install -y libunwind8 gettext libssl-dev libcurl4-openssl-dev zlib1g libicu-dev uuid-dev
# install dnx via dnvm
dnvm upgrade -r coreclr

# install libuv from source
sudo apt-get install -y make automake libtool curl
curl -sSL https://github.com/libuv/libuv/archive/v1.4.2.tar.gz | sudo tar zxfv - -C /usr/local/src
cd /usr/local/src/libuv-1.4.2
sudo sh autogen.sh
sudo ./configure
sudo make
sudo make install
sudo rm -rf /usr/local/src/libuv-1.4.2 && cd ~/
sudo ldconfig

The next step is to edit the Vagrantfile so this provision script is run automatically by Vagrant.
We also change the port forwarding rule so that it matches the default port 5000 used by ASP.NET.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision :shell, path: "bootstrap.sh", privileged: false
  config.vm.network "forwarded_port", guest: 5000, host: 5000
end

To check that everything is properly configured we redo the whole process by destroying the VM and creating it again.

vagrant-aspnet-rc1 pedro$ vagrant destroy
    default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...
vagrant-aspnet-rc1 pedro$ vagrant up
( ... lots of things that take a while to happen ... )

Finally, do vagrant ssh and check that dnx is fully functional.

How about publish with runtime?

Instead of having to previously provision ASP.NET, wouldn’t it be nice to include all the dependencies on the published project so that we could deploy it on a plain vanilla Ubuntu or Debian machine?
Well, on one hand, it is possible to configure the publish process to also include the runtime, via the --runtime parameter.

dnu publish --out ~/code/dotnet/vagrant-ubuntu/published --no-source --runtime dnx-coreclr-linux-x64.1.0.0-rc1-update1

On the other hand, in order to have the Linux DNX runtime available on OS X, we just need to explicitly specify the OS on the dnvm command:

dnvm install latest -OS linux -r coreclr

Unfortunately, this approach does not work because the published runtime is not self-sufficient.
For it to work properly it still requires some dependencies to be previously provisioned on the deployed machine.
This can be seen if we try to run the ASP.NET project:

vagrant@vagrant-ubuntu-trusty-64:~$ /vagrant/published/approot/web
failed to locate libcoreclr with error libunwind-x86_64.so.8: cannot open shared object file: No such file or directory
vagrant@vagrant-ubuntu-trusty-64:~$ Connection to 127.0.0.1 closed by remote host.
Connection to 127.0.0.1 closed.

Notice how the libunwind-x86_64.so.8 failed to be opened.
So, for the time being, we need to provision at least the runtime dependencies on the deployed machine.
The runtime itself can be contained in the published project.

A first look at .NET Core and the dotnet CLI tool

A recent post by Scott Hanselman triggered my curiosity about the new dotnet Command Line Interface (CLI) tool for .NET Core, which aims to be a “cross-platform general purpose managed framework”. In this post I present my first look on using .NET Core and the dotnet tool on OS X.

Installation

For OS X, the recommended installation procedure is to use the “official PKG”. Unfortunately, this PKG doesn’t seem to be signed, so trying to run it directly from the browser will result in an error. The workaround is to use Finder to locate the downloaded file and then select “Open” on it. Notice that this PKG requires administrative privileges to run, so proceed at your own risk (the .NET Core home page uses an https URI and the PKG is hosted on Azure Blob Storage, also using HTTPS – https://dotnetcli.blob.core.windows.net/dotnet/dev/Installers/Latest/dotnet-osx-x64.latest.pkg).

After installation, the dotnet tool will be available on your shell.

~ pedro$ which dotnet
/usr/local/bin/dotnet

I confess that I was expecting the recommended installation procedure to use homebrew instead of a downloaded PKG.

Creating the application

To create an application we start by making an empty folder (e.g. HelloDotNet) and then run dotnet new on it.

dotnet pedro$ mkdir HelloDotNet
dotnet pedro$ cd HelloDotNet
HelloDotNet pedro$ dotnet new
Created new project in /Users/pedro/code/dotnet/HelloDotNet.

This new command creates three new files in the current folder.

HelloDotNet pedro$ tree .
.
├── NuGet.Config
├── Program.cs
└── project.json

0 directories, 3 files

The first one, NuGet.Config, is an XML file containing the NuGet package sources, namely the http://www.myget.org feed that hosts the .NET Core packages.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!--To inherit the global NuGet package sources remove the <clear/> line below -->
    <clear />
    <add key="dotnet-core" value="https://www.myget.org/F/dotnet-core/api/v3/index.json" />
    <add key="api.nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>

The second one is a C# source file containing the classical static void Main(string[] args) application entry point.

HelloDotNet pedro$ cat Program.cs
using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

Finally, the third file is the project.json containing the project definitions, such as compilation options and library dependencies.

HelloDotNet pedro$ cat project.json
{
  "version": "1.0.0-*",
  "compilationOptions": {
    "emitEntryPoint": true
  },

  "dependencies": {
    "NETStandard.Library": "1.0.0-rc2-23616"
  },

  "frameworks": {
    "dnxcore50": { }
  }
}

Resolving dependencies

The next step is to ensure all dependencies required by our project are available. For that we use the restore command.

HelloDotNet pedro$ dotnet restore
Microsoft .NET Development Utility CoreClr-x64-1.0.0-rc1-16231

  GET https://www.myget.org/F/dotnet-core/api/v3/index.json
  OK https://www.myget.org/F/dotnet-core/api/v3/index.json 778ms
  GET https://api.nuget.org/v3/index.json
  ...
Restore complete, 40937ms elapsed

NuGet Config files used:
    /Users/pedro/code/dotnet/HelloDotNet/nuget.config

Feeds used:
    https://www.myget.org/F/dotnet-core/api/v3/flatcontainer/
    https://api.nuget.org/v3-flatcontainer/

Installed:
    69 package(s) to /Users/pedro/.dnx/packages

After figuratively downloading almost half of the Internet, or 69 packages to be more precise, the restore process ends stating that the required dependencies were installed at ~/.dnx/packages.
Notice the dnx in the path, which shows the DNX heritage of the dotnet tool. I presume these names will change before the RTM version. Notice also that the only thing added to the current folder is the project.lock.json file, containing the complete dependency graph created by the restore process based on the direct dependencies.

HelloDotNet pedro$ tree .
.
├── NuGet.Config
├── Program.cs
├── project.json
└── project.lock.json

0 directories, 4 files

Namely, no dependencies were copied to the local folder.
Instead the global ~/.dnx/packages/ repository is used.

Running the application

After changing the greetings message to Hello dotnet we can run the application using the run command.

HelloDotNet pedro$ dotnet run
Hello dotnet!

Looking again into the current folder, we notice that no extra files were created when running the application.

HelloDotNet pedro$ tree .
.
├── NuGet.Config
├── Program.cs
├── project.json
└── project.lock.json

0 directories, 4 files

This happens because the compilation produces in-memory assemblies, which aren’t persisted to any file. The CoreCLR virtual machine uses these in-memory assemblies when running the application.

Well, it seems I was wrong: the dotnet run command does indeed produce persisted files. This is a change compared with dnx, which did use in-memory assemblies.

We can see this behaviour by using the -v switch:

HelloDotNet pedro$ dotnet -v run
Running /usr/local/bin/dotnet-compile --output "/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba" --temp-output "/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba" --framework "DNXCore,Version=v5.0" --configuration "Debug" /Users/pedro/code/dotnet/HelloDotNet
Process ID: 20580
Compiling HelloDotNet for DNXCore,Version=v5.0
Running /usr/local/bin/dotnet-compile-csc @"/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba/dotnet-compile.HelloDotNet.rsp"
Process ID: 20581
Running csc -noconfig @"/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba/dotnet-compile-csc.rsp"
Process ID: 20582

Compilation succeeded.
0 Warning(s)
0 Error(s)

Time elapsed 00:00:01.4388306

Running /Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba/HelloDotNet
Process ID: 20583
Hello dotnet!

Notice how it first calls the compile command (addressed in the next section) before running the application.

Compiling the application

The dotnet tool also allows the explicit compilation via its compile command.

HelloDotNet pedro$ dotnet compile
Compiling HelloDotNet for DNXCore,Version=v5.0

Compilation succeeded.
    0 Warning(s)
    0 Error(s)

Time elapsed 00:00:01.4249439

The resulting artifacts are stored in two new folders:

HelloDotNet pedro$ tree .
.
├── NuGet.Config
├── Program.cs
├── bin
│   └── Debug
│       └── dnxcore50
│           ├── HelloDotNet
│           ├── HelloDotNet.deps
│           ├── HelloDotNet.dll
│           ├── HelloDotNet.pdb
│           └── NuGet.Config
├── obj
│   └── Debug
│       └── dnxcore50
│           ├── dotnet-compile-csc.rsp
│           ├── dotnet-compile.HelloDotNet.rsp
│           └── dotnet-compile.assemblyinfo.cs
├── project.json
└── project.lock.json

6 directories, 12 files

The bin/Debug/dnxcore50 folder contains the most interesting outputs from the compilation process. HelloDotNet is a native executable, as shown by the _main symbol inside it, that loads the CoreCLR virtual machine and uses it to run the application.

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/HelloDotNet | grep _main
_main:

otool is the object file displaying tool for OS X.

We can also see that the libcoreclr dynamic library is used by this bootstrap executable:

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/HelloDotNet | grep libcoreclr.dylib
00000001000025b3    leaq    0x7351(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
00000001000074eb    leaq    0x2419(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
000000010000784b    leaq    0x20b9(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"

The HelloDotNet.dll file is a .NET assembly (has dll extension and starts with the 4d 5a magic number) containing the compiled application.

HelloDotNet pedro$ hexdump -n 32 bin/Debug/dnxcore50/HelloDotNet.dll
0000000 4d 5a 90 00 03 00 00 00 04 00 00 00 ff ff 00 00
0000010 b8 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00
0000020

Directly executing the HelloDotNet file runs the application.

HelloDotNet pedro$ bin/Debug/dnxcore50/HelloDotNet
Hello dotnet!

We can also see that the CoreCLR is hosted in the executing process by examining the loaded libraries.

dotnet pedro$ ps | grep Hello
18981 ttys001    0:00.23 bin/Debug/dnxcore50/HelloDotNet
19311 ttys002    0:00.00 grep Hello
dotnet pedro$ sudo vmmap 18981 | grep libcoreclr
__TEXT                 0000000105225000-000000010557a000 [ 3412K] r-x/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010557b000-000000010575a000 [ 1916K] r-x/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010575b000-0000000105813000 [  736K] r-x/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__LINKEDIT             0000000105859000-00000001059e1000 [ 1568K] r--/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010557a000-000000010557b000 [    4K] rwx/rwx SM=PRV  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010575a000-000000010575b000 [    4K] rwx/rwx SM=PRV  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__DATA                 0000000105813000-0000000105841000 [  184K] rw-/rwx SM=PRV  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__DATA                 0000000105841000-0000000105859000 [   96K] rw-/rwx SM=ZER  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib

Native compilation

One of the most interesting features on .NET Core and the dotnet tool is the ability to create a native executable containing the complete program and not just a boot strap into the virtual machine. For that, we use the --native option on the compile command.

HelloDotNet pedro$ ls
NuGet.Config        Program.cs      project.json        project.lock.json
HelloDotNet pedro$ dotnet compile --native
Compiling HelloDotNet for DNXCore,Version=v5.0

Compilation succeeded.
    0 Warning(s)
    0 Error(s)

Time elapsed 00:00:01.1267350

The output of this compilation is a new native folder containing another HelloDotNet executable.

HelloDotNet pedro$ tree .
.
├── NuGet.Config
├── Program.cs
├── bin
│   └── Debug
│       └── dnxcore50
│           ├── HelloDotNet
│           ├── HelloDotNet.deps
│           ├── HelloDotNet.dll
│           ├── HelloDotNet.pdb
│           ├── NuGet.Config
│           └── native
│               ├── HelloDotNet
│               └── HelloDotNet.dSYM
│                   └── Contents
│                       ├── Info.plist
│                       └── Resources
│                           └── DWARF
│                               └── HelloDotNet
├── obj
│   └── Debug
│       └── dnxcore50
│           └── HelloDotNet.obj
├── project.json
└── project.lock.json

11 directories, 13 files

Running the executable produces the expected result:

HelloDotNet pedro$ bin/Debug/dnxcore50/native/HelloDotNet
Hello dotnet!

At first sight, this new executable is rather bigger than the first one, since it isn’t just a bootstrap into the virtual machine: it contains the complete application.

HelloDotNet pedro$ ls -la bin/Debug/dnxcore50/HelloDotNet
-rwxr-xr-x  1 pedro  staff  66368 Dec 28 10:12 bin/Debug/dnxcore50/HelloDotNet
HelloDotNet pedro$ ls -la bin/Debug/dnxcore50/native/HelloDotNet
-rwxr-xr-x  1 pedro  staff  987872 Dec 28 10:12 bin/Debug/dnxcore50/native/HelloDotNet

There are two more signs that this new executable is the application. First, there aren’t any references to the libcoreclr dynamic library.

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/HelloDotNet | grep libcoreclr.dylib
00000001000025b3    leaq    0x7351(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
00000001000074eb    leaq    0x2419(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
000000010000784b    leaq    0x20b9(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/native/HelloDotNet | grep libcoreclr.dylib
HelloDotNet pedro$

Second, it contains a ___managed__Main symbol with the native code for static void Main(string[] args):

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/native/HelloDotNet | grep -A 8 managed__Main:
___managed__Main:
0000000100001b20    pushq   %rax
0000000100001b21    movq    ___ThreadStaticRegionStart(%rip), %rdi
0000000100001b28    movq    (%rdi), %rdi
0000000100001b2b    callq   _System_Console_System_Console__WriteLine_13
0000000100001b30    nop
0000000100001b31    addq    $0x8, %rsp
0000000100001b35    retq
0000000100001b36    nop

In addition to the HelloDotNet executable, the compile --native command also creates a bin/Debug/dnxcore50/native/HelloDotNet.dSYM folder containing the native debug information.

Unfortunately, the .NET Core native support seems to be in the very early stages and I was unable to compile anything more complex than a simple “Hello World”. However, I’m looking forward to further developments in this area.