Focus on the representation semantics, leave the transfer semantics to HTTP

A couple of days ago I was reading the latest version of the OAuth 2.0 Authorization Server Metadata document and one sentence caught my eye. In section 3.2, the document states:

A successful response MUST use the 200 OK HTTP status code and return a JSON object using the “application/json” content type (…)

My first reaction was to think that the specification was being redundant: of course a 200 OK HTTP status code should be returned on a successful response. However, that “MUST” in the text made me think: is a 200 really the only acceptable response status code for a successful response? In my opinion, the answer is no.

For instance, if caching and ETags are being used, the client can send a conditional GET request (see Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests) using the If-None-Match header, for which a 304 (Not Modified) status code is perfectly acceptable. Another example is when the metadata location changes and the server responds with a 301 (Moved Permanently) or a 302 (Found) status code. Does that mean the request was unsuccessful? In my opinion, no. It just means that the request should be followed by a subsequent request to another location.
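As an illustration (a sketch; the metadata URL and ETag value below are just placeholders), a conditional GET for the metadata document could look like this, with the 304 response carrying no body at all:

# First request: fetch the metadata and note the ETag returned by the server.
curl -i https://as.example.com/.well-known/oauth-authorization-server

# Later revalidation: if the representation hasn't changed, the server can
# legitimately answer with 304 (Not Modified) and an empty body.
curl -i https://as.example.com/.well-known/oauth-authorization-server \
  -H 'If-None-Match: "33a64df551425fcc55e4d42a148795d9f25f89d4"'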

So, why does this little observation deserve a blog post?
Well, mainly because it reflects two common tendencies when designing HTTP APIs (or HTTP interfaces):

  • First, the tendency to redefine transfer semantics that are already defined by HTTP.
  • Secondly, a very simplistic view of HTTP, ignoring parts such as caching and optimistic concurrency.

The HTTP specification already defines quite a rich set of mechanisms for representation transfer, and HTTP-related specifications should take advantage of that. What HTTP does not define is the semantics of the representation itself. That should be the focus of specifications such as the OAuth 2.0 Authorization Server Metadata.

When defining HTTP APIs, focus on the representation semantics. The transfer semantics are already defined by the HTTP protocol.


Client-side development on OS X using Windows hosted HTTP Web APIs

In a recent post I described my Android development environment, based on an OS X host, the Genymotion Android emulator, and a Windows VM to run the back-end HTTP APIs.
In this post I’ll describe a similar environment but now for browser-side applications, once again using Windows hosted HTTP APIs.

Recently I had to do some prototyping involving browser-based applications, using ES6 and React, that interact with IdentityServer3 and an HTTP API.
Both the IdentityServer3 server and the ASP.NET HTTP APIs run on a Windows VM; however, I prefer to use the host OS X environment for the client-side development (node, npm, webpack, babel, …).
Another requirement is that the server side uses HTTPS and multiple host names (e.g. id.example.com, app1.example.com, app2.example.com), as described in this previous post.

The solution that I ended up using for this environment is the following:

  • On the Windows VM side I have Fiddler running on port 8888 with “Allow remote computer to connect” enabled. This means that Fiddler will act as a proxy even for requests originating from outside the Windows VM.
  • On the OS X host I launch Chrome with open -a "/Applications/Google Chrome.app" --args --proxy-server=10.211.55.3:8888 --proxy-bypass-list=localhost, where 10.211.55.3 is the Windows VM address. To automate this procedure I use the Automator tool to create a shell script based workflow.

The end result, depicted in the following diagram, is that all requests (except for localhost) will be forwarded to the Fiddler instance running on the Windows VM, which will use the Windows hosts file to direct the request to the multiple IIS sites.

[Diagram: hosting]
As a bonus, I also have full visibility on the HTTP messages.

And that’s it. I hope it helps.

Using multiple IIS server certificates on Windows 7

Nowadays I do most of my Windows development on a Windows 7 VM running on OS X (Windows 8 and Windows Server 2012 left some scars, so I’m very reluctant to move to Windows 10). On this development environment I like to mimic some production environment characteristics, namely:

  • Using IIS based hosting
  • Having each site using different host names
  • Using HTTPS

For the site names I typically use example.com subdomains (e.g. id.example.com, app1.example.com, app2.example.com), which are reserved by IANA for documentation purposes (see RFC 6761). I associate these names to local addresses via the hosts file.

For generating the server certificates I use makecert and the scripts published in Appendix G of the Designing Evolvable Web APIs with ASP.NET book.

However, having multiple sites using distinct certificates hosted on the same IP and port presents some challenges. This is because IIS/HTTP.SYS uses the Host header to demultiplex the incoming requests to the different sites bound to the same IP and port.
Yet, when using TLS, the server certificate must be provided during the TLS handshake, before any HTTP request (and therefore the Host header) is received. Since at that time HTTP.SYS does not yet know the target site, it also cannot select the appropriate certificate.

Server Name Indication (SNI) is a TLS extension (see RFC 3546) that addresses this issue by letting the client send the host name in the TLS handshake, allowing the server to identify the target site and use the corresponding certificate.
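To see SNI in action (a sketch, assuming HTTPS sites are already listening on these names), openssl's s_client can send the host name in the handshake and show which certificate the server selected:

# Send the host name in the TLS handshake (SNI) and inspect the certificate the server returns.
openssl s_client -connect id.example.com:443 -servername id.example.com < /dev/null \
  | openssl x509 -noout -subject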

Unfortunately, HTTP.SYS on Windows 7 does not support SNI (that’s what I get for using a 2009 operating system). To circumvent this I took advantage of the fact that there are more loopback addresses than just 127.0.0.1. So, what I do is use a different loopback IP address for each site on my machine, as illustrated by the following excerpt of my hosts file:

127.0.0.2 app1.example.com
127.0.0.3 app2.example.com
127.0.0.4 id.example.com

When configuring the HTTPS IIS bindings, I explicitly set each site to listen on one of these IP addresses, which allows me to use a different certificate for each site.
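At the HTTP.SYS level, each of these bindings associates a server certificate with a specific IP:port pair. A hedged sketch of what this looks like with netsh (the certificate thumbprints and appid values below are placeholders):

# Bind a different server certificate to each loopback address (thumbprints are placeholders).
netsh http add sslcert ipport=127.0.0.2:443 certhash=0102030405060708090a0b0c0d0e0f1011121314 appid={00000000-0000-0000-0000-000000000001}
netsh http add sslcert ipport=127.0.0.4:443 certhash=1112131415161718191a1b1c1d1e1f2021222324 appid={00000000-0000-0000-0000-000000000002}

# List the configured certificate bindings.
netsh http show sslcert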

And that’s it. Hope it helps.

The OpenID Connect Cast of Characters

Introduction

The OpenID Connect protocol provides support for both delegated authorization and federated authentication, unifying features that traditionally were provided by distinct protocols. As a consequence, the OpenID Connect protocol parties play multiple roles at the same time, which can sometimes be hard to grasp. This post aims to clarify this, describing how the OpenID Connect parties relate to each other and to the equivalent parties in previous protocols, namely OAuth 2.0.

OAuth 2.0

The OAuth 2.0 authorization framework introduced a new set of characters into the distributed access control story.

[Diagram: OAuth 2.0 parties]

  • The User (aka Resource Owner) is a human with the capability to authorize access to a set of protected resources (i.e. the user is the resource owner).
  • The Resource Server is the HTTP server exposing access to the protected resources via an HTTP API. This access is dependent on the presence and validation of access tokens in the HTTP request.
  • The Client Application is an HTTP client that accesses user resources on the Resource Server. To perform these accesses, the client application needs to obtain access tokens issued by the Authorization Server.
  • The Authorization Server is the party issuing the access tokens used by the Client Application on requests to the Resource Server.
  • Access Tokens are strings created by the Authorization Server and targeted to the Resource Server. They are opaque to the Client Application, which just obtains them from the Authorization Server and uses them on the Resource Server without any further processing.

To make things a little bit more concrete, let’s look at an example:

  • The User is Alice and the protected resources are her repositories at GitHub.
  • The Resource Server is GitHub’s API.
  • The Client Application is a third-party application, such as Huboard or Travis CI, that needs to access Alice’s repositories.
  • The Authorization Server is also GitHub, providing the OAuth 2.0 protocol “endpoints” for the client application to obtain the access tokens.

OAuth 2.0 models the Resource Server and the Authorization Server as two distinct parties; however, they can be run by the same organization (GitHub, in the previous example).

[Diagram: Resource Server and Authorization Server run by the same organization]

An important characteristic to emphasise is that the access token does not directly provide any information about the User to the Client Application – it simply provides access to a set of protected resources. The fact that some of these protected resources may be used to provide information about the User’s identity is out of the scope of OAuth 2.0.
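As a concrete sketch of this opaqueness (the token value is made up), the Client Application simply attaches the access token to its requests and never inspects it:

# The client does not interpret the access token; it just sends it on each request
# to the Resource Server (GitHub's API, in the example above).
curl -H "Authorization: Bearer <access-token>" https://api.github.com/user/repos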

Delegated Authentication and Identity Federation

However, delegated authentication and identity federation protocols, such as the SAML protocols or the WS-Federation protocol, use a different terminology.

[Diagram: identity federation parties]

  • The Relying Party (or Service Provider in the SAML protocol terminology) is typically a Web application that delegates user authentication to an external Identity Provider.
  • The Identity Provider is the entity authenticating the user and communicating her identity claims to the Relying Party.
  • The identity claims communication between these two parties is made via identity tokens, which are protected containers for identity claims:
    • The Identity Provider creates the identity token.
    • The Relying Party consumes the identity token by validating it and using the contained identity claims.

Sometimes the same entity can play both roles. For instance, an Identity Provider can re-delegate the authentication process to another Identity Provider:

  • An Organisational Web application (e.g. order management) delegates the user authentication process to the Organisational Identity Provider.
  • However, this Organisational Identity Provider re-delegates user authentication to a Partner Identity Provider.
  • In this case, the Organisational Identity Provider is simultaneously
    • A Relying Party for the authentication made by the Partner Identity Provider.
    • An Identity Provider, providing identity claims to the Organisational Web Application.

[Diagram: Identity Provider re-delegating authentication to a Partner Identity Provider]

In these protocols, the main goal of the identity token is to provide identity information about the User to the Relying Party. Namely, the identity token is not meant to provide access to a set of protected resources. This characteristic sharply contrasts with OAuth 2.0 access tokens.

OpenID Connect

The OpenID Connect protocol is “a simple identity layer on top of the OAuth 2.0 protocol”, providing both delegated authorisation as well as authentication delegation and identity federation. It unifies in a single protocol the functionalities that previously were provided by distinct protocols. As a consequence, there are now multiple parties that play more than one role:

  • The OpenID Provider (new term introduced by the OpenID Connect specification) is an Identity Provider and an Authorization Server, simultaneously issuing identity tokens and access tokens.
  • The Relying Party is also a Client Application. It receives both identity tokens and access tokens from the OpenID Provider. However, there is a significant difference in how this party uses these tokens (see the sketch after the diagram below):
    • The identity tokens are consumed by the Relying Party/Client Application to obtain the user’s identity.
    • The access tokens are not directly consumed by the Relying Party. Instead they are attached to requests made to the Resource Server, without ever being opened at the Relying Party.

[Diagram: OpenID Connect parties and roles]
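A minimal sketch of this dual usage (the token values and the Resource Server URL are placeholders; in a real application both tokens come from the OpenID Provider's token endpoint):

# ID_TOKEN and ACCESS_TOKEN are placeholders for tokens obtained from the OpenID Provider.

# The identity token is consumed by the Relying Party itself: it validates it and reads
# the identity claims (e.g. with a JWT library); it is never forwarded to the Resource Server.

# The access token is the opposite: the Relying Party never opens it, it just attaches it
# to requests sent to the Resource Server.
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://api.example.com/protected/resource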

I hope this post sheds some light on the dual nature of the parties in the OpenID Connect protocol.

Please, feel free to use the comments section to place any question.

Using Fiddler for an Android and Windows VM development environment

In this post I describe the development environment that I use when creating Android apps that rely on ASP.NET based Web applications and Web APIs.

  • The development machine is a MacBook Pro running OS X with Android Studio.
  • Android virtual devices are run on Genymotion, which uses VirtualBox underneath.
  • Web applications and Web APIs are hosted on a Windows VM running on Parallels over the OS X host.

I use the Fiddler proxy to enable connectivity between Android and the ASP.NET apps, as well as to give me full visibility into the HTTP messages. Fiddler also enables me to use HTTPS even in this development environment.

The main idea is to use Fiddler as the Android’s system HTTP proxy, in conjunction with a port forwarding rule that maps a port on the OS X host to the Windows VM. This is depicted in the following diagram.

[Diagram: Android development environment with Fiddler as the system proxy]


The required configuration steps are:

  1. Start Fiddler on the Windows VM and allow remote computers to connect
    • Fiddler – Tools – Fiddler Options – Connections – check “Allow remote computers to connect”.
    • This will make Fiddler listen on 0.0.0.0:8888.
  2. Enable Fiddler to intercept HTTPS traffic
    • Fiddler – Tools – Fiddler Options –  HTTPS – check “Decrypt HTTPS traffic”.
    • This will add a new root certificate to the “Trusted Root Certification Authorities” Windows certificate store.
  3. Define a port forwarding rule mapping TCP port 8888 on the OS X host to TCP port 8888 on the Windows guest (where Fiddler is listening).
    • Parallels – Preferences – Network: change settings – Port forward rules – add “TCP:8888 -> Windows:8888”.
  4. Check which “host-only network” the Android VM is using
    • VirtualBox Android VM – Settings – Network – Name (e.g. “vboxnet1”).
  5. Find the IP for the identified adapter
    • VirtualBox – Preferences – Network – Host-only Networks – “vboxnet1”.
    • In my case the IP is 192.168.57.1.
  6. On Android, configure the Wi-Fi connection HTTP proxy (based on “Configure Fiddler for Android / Google Nexus 7”).
    • Settings – Wi-Fi – long tap on the chosen network – modify network – enable advanced options – manual proxy
      • Set “Proxy hostname” to the IP identified in the previous step (e.g. 192.168.57.1).
      • Set “Proxy port” to 8888.
    • With this step, all the HTTP traffic will be directed to the Fiddler HTTP proxy running on the Windows VM
  7. The last step is to install the Fiddler root certificate, so that the Fiddler generated certificates are accepted by the Android applications, such as the system browser (based on “Configure Fiddler for Android / Google Nexus 7”).
    • Open the browser and navigate to http://ipv4.fiddler:8888
    • Select the link “FiddlerRoot certificate” and on the Android dialog select “Credential use: VPN and apps”.

And that’s it: all HTTP traffic that uses the Android system’s proxy settings will be directed to Fiddler, with the following advantages

  • Visibility of the requests and responses on the Fiddler UI, namely the ones using HTTPS.
  • Access to Web applications running on the Windows VM, using either IIS hosting or self-hosting.
  • Access to external hosts on the Internet.
  • Use of the Windows “hosts” file host name overrides.
    • For development purposes I typically use host names other than “localhost”, such as “app1.example.com” or “id.example.com”.
    • Since the name resolution will be done by the Fiddler proxy, these host names can be used directly on Android.

Here is a screenshot of Chrome running on Android and presenting an ASP.NET MVC application running on the Windows VM. Notice the green “https” icon.

[Screenshot: Chrome on Android showing an ASP.NET MVC application over HTTPS]

And here is the Chrome screenshot of an IdentityServer3 login screen, also running on the Windows VM.

[Screenshot: IdentityServer3 login screen in Chrome on Android]

Hope this helps!

OAuth 2.0 and PKCE

Introduction

Both Google and IdentityServer have recently announced support for the PKCE (Proof Key for Code Exchange by OAuth Public Clients) specification defined by RFC 7636.

This is an excellent opportunity to revisit the OAuth 2.0 authorization code flow and illustrate how PKCE addresses some of the security issues that exist when this flow is implemented on native applications.

tl;dr

In the authorization code flow, the redirect from the authorization server back to the client is one of the most security-sensitive parts of the OAuth 2.0 protocol. The main reason is that this redirect contains the code representing the authorization delegation performed by the User. On public clients, such as native applications, this code is enough to obtain the access tokens allowing access to the User’s resources.

The PKCE specification addresses an attack vector where an attacker creates a native application that registers the same URL scheme used by the Client application, therefore gaining access to the authorization code. Succinctly, the PKCE specification requires the exchange of the code for the access token to use an ephemeral secret that is not available on the redirect, making knowledge of the code insufficient to use it. This extra information (or a transformation of it) is sent on the initial authorization request.
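A minimal sketch of how a client could generate this ephemeral secret (the code_verifier) and the transformed value sent on the initial request (the code_challenge), using openssl and assuming the S256 method:

# Generate a high-entropy code_verifier (base64url-encoded, padding removed).
code_verifier=$(openssl rand -base64 64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')

# code_challenge = BASE64URL(SHA256(code_verifier)), i.e. the "S256" method.
code_challenge=$(printf '%s' "$code_verifier" | openssl dgst -sha256 -binary \
  | openssl base64 | tr -d '\n' | tr '+/' '-_' | tr -d '=')

echo "code_verifier:  $code_verifier"
echo "code_challenge: $code_challenge"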

A slightly longer version

The OAuth 2.0 cast of characters

  • The User is typically a human entity capable of granting access to resources.
  • The Resource Server (RS) is the entity exposing an HTTP API to access these resources.
  • The Client is an application (e.g. a server-based Web application or a native application) wanting to access these resources, via an authorization delegation performed by the User. Clients can be
    • confidential – client applications that can hold a secret. The typical example is a Web application, where a client secret is stored and used only on the server side.
    • public – client applications that cannot hold a secret, such as native applications running on the User’s mobile device.
  • The Authorization Server (AS) is the entity that authenticates the user, captures her authorization consent and issues access tokens that the Client application can use to access the resources exposed on the RS.

Authorization code flow for Web Applications

The following diagram illustrates the authorization code flow for Web applications (the Client application is a Web server).

[Diagram: authorization code flow for Web applications]


  1. The flow starts with the Client application server-side producing a redirect HTTP response (e.g. response with 302 status) with the authorization request URL in the Location header. This URL will contain the authorization request parameters such as the state, scope and redirect_uri.
  2. When receiving this response, the User’s browser automatically performs a GET HTTP request to the Authorization Server (AS) authorization endpoint, containing the OAuth 2.0 authorization request.
  3. The AS then starts an interaction sequence to authenticate the user (e.g. username and password, two-factor authentication, delegated authentication), and to obtain the user consent. This sequence is not defined by OAuth 2.0 and can take multiple steps.
  4. After having authenticated and obtained consent from the user, the AS returns an HTTP redirect response with the authorization response in the Location header. This URL points to the client application hostname and contains the authorization response parameters, such as the state and the (security sensitive) code.
  5. When receiving this response, the user’s browser automatically performs a GET request to the Client redirect endpoint with the OAuth 2.0 authorization response. By using HTTPS on the request to the Client, the protocol minimises the chances of the code being leaked to an attacker.
  6. Having received that authorization code, the Client then uses it to obtain the access token from the AS token endpoint. Since the client is a confidential client, this request is authenticated with the client credentials (client ID and client secret), typically sent in the Authorization header using the basic scheme. The AS checks if this code is valid, namely if it was issued to the requesting authenticated client. If everything is verified, a 200 response with the access token is returned (a sketch of this exchange follows the list).
  7. Finally, the client can use the received access token to access the protected resources.
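As an illustrative sketch of steps 1–2 and 6 (the authorization server endpoints, client_id, secret and code values below are all placeholders):

# Steps 1-2: the authorization request URL the browser is redirected to.
# https://as.example.com/authorize?response_type=code&client_id=web-app-1
#   &redirect_uri=https%3A%2F%2Fapp1.example.com%2Fcallback&scope=repo&state=af0ifjsldkj

# Step 6: the confidential client exchanges the code for an access token,
# authenticating with its client ID and secret (HTTP basic scheme).
curl -u web-app-1:client-secret \
  -d grant_type=authorization_code \
  -d code=SplxlOBeZQQYbYS6WxSbIA \
  -d redirect_uri=https://app1.example.com/callback \
  https://as.example.com/token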

Authorization code flow for native Applications

For a native application, the flow is slightly different, namely in the first phase (the authorization request). Recall that in this case the Client application is running on the User’s device.

[Diagram: authorization code flow for native applications]

  1. The flow begins with the Client application starting the system’s browser (or a web view, more on this on another post) at a URL with the authorization request. For instance, on the Android platform this is achieved by sending an intent.
  2. The browser comes into the foreground and performs a GET request to the AS authorization endpoint containing the authorization request.
  3. The same authentication and consent dance occurs between the AS and the User’s browser.
  4. After having authenticated and obtained consent from the user, the AS returns an HTTP redirect response with the authorization response in the Location header. This URL contains the authorization response parameters. However, there is something special in the redirect URL: instead of using an http URL scheme, which would make the browser perform another HTTP request, the redirect URL contains a custom URI scheme.
  5. As a result, when the browser receives this response and processes the redirect, an inter-application message (e.g. an intent in Android) is sent to the application associated with this scheme, which should be the Client application. This brings the Client application to the foreground and provides it with the authorization response parameters, namely the authorization code.
  6. From now on, the flow is similar to the Web based one. Namely, the Client application uses the code to obtain the access token from the AS token endpoint. Since the client is a public client, this request is not authenticated, that is, no client secret is used.
  7. Finally, having received the access token, the client application running on the device can access the User’s resources.

In both scenarios, the authorization code communication path, from the AS to the Client via the User’s browser, is very security sensitive. This is especially relevant in the native scenario, since the Client is public and knowledge of the authorization code is enough to obtain the access token.

Hijacking the redirect

In the Web application scenario, the GET request with the authorization response has an HTTPS URL, which means that the browser will only send the code if the server correctly authenticates itself. However, in the native scenario, the intent will be sent to any installed application that registered the custom scheme. Unfortunately, there isn’t a central entity controlling and validating these scheme registrations, so an application can hijack the message from the browser to the client application, as shown in the following diagram.

[Diagram: authorization code redirect hijacking by a malicious application]

Having obtained the authorization code, the attacker’s application has all the information required to retrieve a token and access the User’s resources.

The PKCE protection

The PKCE specification mitigates this vulnerability by requiring an extra code_verifier parameter on the exchange of the authorization code for the access token.

[Diagram: authorization code flow with PKCE]

  • On step 1, the Client application generates a random secret, stores it and uses its hash value on the new code_challenge authorization request parameter.
  • On step 4, the AS somehow associates the returned code with the code_challenge.
  • On step 6, the Client includes a code_verifier parameter with the secret on the token request message. The AS computes the hash of the code_verifier value and compares it with the original code_challenge associated with the code. Only if they are equal is the code accepted and an access token returned.

This ensures that only the entity that started the flow (and sent the code_challenge on the authorization request) can end the flow and obtain the access token. By using a cryptographic hash function for the code_challenge, the protocol is protected from attackers that have read access to the original authorization request. However, the protocol also allows the secret to be used directly as the code_challenge (the plain method).
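Tying this to the messages themselves, here is a sketch of where these new parameters travel (endpoints, client_id and code values are placeholders; the verifier/challenge pair is the kind of value produced by the generation sketch above):

# Step 1: the authorization request now carries the challenge and the method used.
# https://as.example.com/authorize?response_type=code&client_id=native-app-1
#   &redirect_uri=com.example.app%3A%2Fcallback&state=af0ifjsldkj
#   &code_challenge=E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM&code_challenge_method=S256

# Step 6: the public client sends the original secret; no client secret is involved.
curl \
  -d grant_type=authorization_code \
  -d code=SplxlOBeZQQYbYS6WxSbIA \
  -d redirect_uri=com.example.app:/callback \
  -d client_id=native-app-1 \
  -d code_verifier=dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk \
  https://as.example.com/token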

Finally, the PKCE support of an AS can be advertised on the OAuth 2.0 or OpenID Connect discovery document, using the code_challenge_methods_supported field. The following is Google’s OpenID Connect discovery document, located at https://accounts.google.com/.well-known/openid-configuration.

{
 "issuer": "https://accounts.google.com",
 "authorization_endpoint": "https://accounts.google.com/o/oauth2/v2/auth",
 "token_endpoint": "https://www.googleapis.com/oauth2/v4/token",
 "userinfo_endpoint": "https://www.googleapis.com/oauth2/v3/userinfo",
 "revocation_endpoint": "https://accounts.google.com/o/oauth2/revoke",
 "jwks_uri": "https://www.googleapis.com/oauth2/v3/certs",
 "response_types_supported": [
  "code",
  "token",
  "id_token",
  "code token",
  "code id_token",
  "token id_token",
  "code token id_token",
  "none"
 ],
 "subject_types_supported": [
  "public"
 ],
 "id_token_signing_alg_values_supported": [
  "RS256"
 ],
 "scopes_supported": [
  "openid",
  "email",
  "profile"
 ],
 "token_endpoint_auth_methods_supported": [
  "client_secret_post",
  "client_secret_basic"
 ],
 "claims_supported": [
  "aud",
  "email",
  "email_verified",
  "exp",
  "family_name",
  "given_name",
  "iat",
  "iss",
  "locale",
  "name",
  "picture",
  "sub"
 ],
 "code_challenge_methods_supported": [
  "plain",
  "S256"
 ]
}


Using Vagrant to test ASP.NET 5 RC1

The recent Release Candidate 1 (RC1) for ASP.NET 5 includes support for Linux and OS X via .NET Core. After trying it out on OS X, I wanted to do some experiments on Linux as well. For that I used Vagrant to automate the creation and provisioning of the Linux development environments. In this post I describe the steps required for this task, using OS X as the host (the steps on a Windows host will be similar).

Short version

Start by ensuring Vagrant and VirtualBox are installed on your host machine.
Then open a shell and run the following commands.
The vagrant up command may take a while, since it will not only download and boot the base virtual machine image, but also provision ASP.NET 5 RC1 and all its dependencies.

git clone https://github.com/pmhsfelix/vagrant-aspnet-rc1.git (or your own fork URL instead)
cd vagrant-aspnet-rc1
vagrant up
vagrant ssh

After the last command completes you should have an SSH session into an Ubuntu Server with ASP.NET 5 RC1 installed, running on a virtual machine (VM). Port 5000 on the host is mapped into port 5000 on the guest.

The vagrant-aspnet-rc1 host folder is mounted into the /vagrant guest folder, so you can use this to share files between host and guest.
For instance, an ASP.NET project published to vagrant-aspnet-rc1/published on the host will be visible on the /vagrant/published guest path.
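For example (a sketch, assuming an ASP.NET 5 project on the host and the folder layout described above):

# On the OS X host: publish the project into the shared folder.
dnu publish --out ~/code/dotnet/vagrant-aspnet-rc1/published --no-source

# On the guest (after vagrant ssh): the published output is visible under /vagrant.
ls /vagrant/published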

For any comment or issue that you have, please raise an issue at https://github.com/pmhsfelix/vagrant-aspnet-rc1.

Longer (and perhaps more instructive) version

First, start by installing Vagrant and also VirtualBox, which will be required to run the virtual machine with Linux.

Afterwards, create a new folder (e.g. vagrant-aspnet-rc1) to host the Vagrant configuration.

dotnet pedro$ mkdir vagrant-aspnet-rc1
dotnet pedro$ cd vagrant-aspnet-rc1
vagrant-aspnet-rc1 pedro$

Then, initialize the Vagrant configuration using the init command.

vagrant-aspnet-rc1 pedro$ vagrant init ubuntu/trusty64
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
vagrant-aspnet-rc1 pedro$ ls
Vagrantfile

The second parameter, ubuntu/trusty64, is the name of a box available on the Vagrant public catalog, which in this case contains a Ubuntu Server 14.04 LTS.
Notice also how a Vagrantfile file, containing the Vagrant configuration, was created in the current directory. We will be using this file later on.

The next step is to start the virtual machine.

vagrant-aspnet-rc1 pedro$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ubuntu/trusty64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Setting the name of the VM: vagrant-aspnet_default_1451428161431_85889
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2200 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2200
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: The guest additions on this VM do not match the installed version of
    default: VirtualBox! In most cases this is fine, but in rare cases it can
    default: prevent things such as shared folders from working properly. If you see
    default: shared folder errors, please make sure the guest additions within the
    default: virtual machine match the version of VirtualBox you have installed on
    default: your host and reload your VM.
    default:
    default: Guest Additions Version: 4.3.34
    default: VirtualBox Version: 5.0
==> default: Mounting shared folders...
    default: /vagrant => /Users/pedro/code/dotnet/vagrant-aspnet-rc1

As can be seen in the command output, a VM was booted and SSH was configured. So the next step is to open an SSH session into the machine to check if everything is working properly. This is accomplished using the ssh command.

vagrant-aspnet-rc1 pedro$ vagrant ssh
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Tue Dec 29 22:29:41 UTC 2015

  System load:  0.35              Processes:           80
  Usage of /:   3.4% of 39.34GB   Users logged in:     0
  Memory usage: 25%               IP address for eth0: 10.0.2.15
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

vagrant@vagrant-ubuntu-trusty-64:~$ hostname
vagrant-ubuntu-trusty-64
vagrant@vagrant-ubuntu-trusty-64:~$ id
uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant)

Notice how we end up with a session into a vagrant-ubuntu-trusty-64 machine, running under the vagrant user.
In addition to setting up SSH, Vagrant also mounted the vagrant-aspnet-rc1 host folder (the one where the Vagrantfile was created) into the /vagrant folder on the guest.

vagrant@vagrant-ubuntu-trusty-64:~$ ls /vagrant
Vagrantfile

We could now start to install ASP.NET 5 following the procedure outlined at http://docs.asp.net/en/latest/getting-started/installing-on-linux.html. However, that would be the “old way of doing things” and would not provide us with a reproducible development environment.
A better solution is to create a provision script, called bootstrap.sh, and use it with Vagrant.

The provision script is simply a copy of the procedures at http://docs.asp.net/en/latest/getting-started/installing-on-linux.html, slightly changed to allow unsupervised installation.

#!/usr/bin/env bash

# install dnvm pre-requisites
sudo apt-get install -y unzip curl
# install dnvm
curl -sSL https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.sh | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/dnvm.sh

# install dnx pre-requisites
sudo apt-get install -y libunwind8 gettext libssl-dev libcurl4-openssl-dev zlib1g libicu-dev uuid-dev
# install dnx via dnvm
dnvm upgrade -r coreclr

# install libuv from source
sudo apt-get install -y make automake libtool curl
curl -sSL https://github.com/libuv/libuv/archive/v1.4.2.tar.gz | sudo tar zxfv - -C /usr/local/src
cd /usr/local/src/libuv-1.4.2
sudo sh autogen.sh
sudo ./configure
sudo make
sudo make install
sudo rm -rf /usr/local/src/libuv-1.4.2 && cd ~/
sudo ldconfig

The next step is to edit the Vagrantfile so this provision script is run automatically by Vagrant.
We also change the port forwarding rule so that it matches the default 5000 port used by ASP.NET.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision :shell, path: "bootstrap.sh", privileged: false
  config.vm.network "forwarded_port", guest: 5000, host: 5000
end

To check that everything is properly configured we redo the whole process by destroying the VM and creating it again.

vagrant-aspnet-rc1 pedro$ vagrant destroy
    default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...
vagrant-aspnet-rc1 pedro$ vagrant up
( ... lots of things that take a while to happen ... )

Finally, do vagrant ssh and check that dnx is fully functional.

How about publishing with the runtime?

Instead of having to provision ASP.NET beforehand, wouldn’t it be nice to include all the dependencies in the published project, so that we could deploy it on a plain vanilla Ubuntu or Debian machine?
Well, on the one hand it is possible to configure the publish process to also include the runtime, via the --runtime parameter.

dnu publish --out ~/code/dotnet/vagrant-ubuntu/published --no-source --runtime dnx-coreclr-linux-x64.1.0.0-rc1-update1

On the other hand, in order to have the Linux DNX runtime available on OS X, we just need to explicitly specify the OS on the dnvm command.

dnvm install latest -OS linux -r coreclr

Unfortunately, this approach does not work because the published runtime is not self-sufficient.
For it to work properly it still requires some dependencies to be previously provisioned on the deployed machine.
This can be seen if we try to run the ASP.NET project

vagrant@vagrant-ubuntu-trusty-64:~$ /vagrant/published/approot/web
failed to locate libcoreclr with error libunwind-x86_64.so.8: cannot open shared object file: No such file or directory
vagrant@vagrant-ubuntu-trusty-64:~$ Connection to 127.0.0.1 closed by remote host.
Connection to 127.0.0.1 closed.

Notice how the libunwind-x86_64.so.8 library failed to be opened.
So, for the time being, we need to provision at least the runtime dependencies on the deployed machine.
The runtime itself can be contained in the published project.
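In practice this means provisioning at least the native libraries that the CoreCLR-based runtime depends on, i.e. the same packages installed by the bootstrap script above (plus libuv, built from source as in that script):

# Native dependencies needed before running a published project (from the bootstrap script above).
sudo apt-get install -y libunwind8 gettext libssl-dev libcurl4-openssl-dev zlib1g libicu-dev uuid-dev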