The OpenID Connect Cast of Characters


The OpenID Connect protocol provides support for both delegated authorization and federated authentication, unifying features that were traditionally provided by distinct protocols. As a consequence, the OpenID Connect protocol parties play multiple roles at the same time, which can sometimes be hard to grasp. This post aims to clarify this, describing how the OpenID Connect parties relate to each other and to the equivalent parties in previous protocols, namely OAuth 2.0.

OAuth 2.0

The OAuth 2.0 authorization framework introduced a new set of characters into the distributed access control story.


  • The User (aka Resource Owner) is a human with the capability to authorize access to a set of protected resources (i.e. the user is the resource owner).
  • The Resource Server is the HTTP server exposing access to the protected resources via an HTTP API. This access is dependent on the presence and validation of access tokens in the HTTP request.
  • The Client Application is an HTTP client that accesses user resources on the Resource Server. To perform these accesses, the client application needs to obtain access tokens issued by the Authorization Server.
  • The Authorization Server is the party issuing the access tokens used by the Client Application on the requests to the Resource Server.
  • Access Tokens are strings created by the Authorization Server and targeted to the Resource Server. They are opaque to the Client Application, which just obtains them from the Authorization Server and uses them on the Resource Server without any further processing.

To make things a little bit more concrete, let’s look at an example:

  • The User is Alice and the protected resources are her repositories at GitHub.
  • The Resource Server is GitHub’s API.
  • The Client Application is a third-party application, such as Huboard or Travis CI, that needs to access Alice’s repositories.
  • The Authorization Server is also GitHub, providing the OAuth 2.0 protocol “endpoints” for the client application to obtain the access tokens.

OAuth 2.0 models the Resource Server and the Authorization Server as two distinct parties; however, they can be run by the same organization (GitHub, in the previous example).


An important characteristic to emphasise is that the access token does not directly provide any information about the User to the Client Application – it simply provides access to a set of protected resources. The fact that some of these protected resources may be used to provide information about the User’s identity is out of scope of OAuth 2.0.
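To make this opacity concrete, here is a minimal Python sketch of how a Client Application attaches an access token to a request sent to the Resource Server; the endpoint URL and token value are hypothetical:

```python
# Sketch: a Client Application attaching an opaque access token to a
# Resource Server request. The URL and token value are made up.
from urllib.request import Request

access_token = "ya29.opaque-token-value"  # obtained from the Authorization Server

req = Request("https://api.example.com/user/repos")
req.add_header("Authorization", f"Bearer {access_token}")

# The Client never inspects the token; only the Resource Server
# validates it before serving the protected resource.
print(req.get_header("Authorization"))  # Bearer ya29.opaque-token-value
```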

Delegated Authentication and Identity Federation

However, delegated authentication and identity federation protocols, such as the SAML protocols or the WS-Federation protocol, use a different terminology.


  • The Relying Party (or Service Provider, in SAML terminology) is typically a Web application that delegates user authentication to an external Identity Provider.
  • The Identity Provider is the entity authenticating the user and communicating her identity claims to the Relying Party.
  • The identity claims communication between these two parties is made via identity tokens, which are protected containers for identity claims.
    • The Identity Provider creates the identity token.
    • The Relying Party consumes the identity token by validating it and using the contained identity claims.

Sometimes the same entity can play both roles; for instance, an Identity Provider can re-delegate the authentication process to another Identity Provider:

  • An Organisational Web application (e.g. order management) delegates the user authentication process to the Organisational Identity Provider.
  • However, this Organisational Identity Provider re-delegates user authentication to a Partner Identity Provider.
  • In this case, the Organisational Identity Provider is simultaneously:
    • A Relying Party for the authentication made by the Partner Identity Provider.
    • An Identity Provider, providing identity claims to the Organisational Web Application.


In these protocols, the main goal of the identity token is to provide identity information about the User to the Relying Party. Namely, the identity token is not aimed at providing access to a set of protected resources. This characteristic sharply contrasts with OAuth 2.0 access tokens.
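The Relying Party’s consumption of an identity token can be sketched as follows. This is a conceptual Python example: signature validation is elided, the claim names follow JWT conventions, and the issuer and audience values are made up.

```python
# Conceptual sketch of a Relying Party validating identity claims,
# after the identity token signature has been verified (elided here).
import time

def validate_claims(claims: dict, expected_issuer: str, expected_audience: str) -> bool:
    return (
        claims.get("iss") == expected_issuer        # issued by the trusted Identity Provider
        and claims.get("aud") == expected_audience  # intended for this Relying Party
        and claims.get("exp", 0) > time.time()      # not expired
    )

claims = {"iss": "https://idp.example.com", "aud": "my-web-app",
          "exp": time.time() + 300, "sub": "alice"}
print(validate_claims(claims, "https://idp.example.com", "my-web-app"))  # True
```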

OpenID Connect

The OpenID Connect protocol is “a simple identity layer on top of the OAuth 2.0 protocol”, providing both delegated authorization and authentication delegation with identity federation. It unifies in a single protocol the functionalities that were previously provided by distinct protocols. As a consequence, there are now multiple parties that play more than one role:

  • The OpenID Provider (new term introduced by the OpenID Connect specification) is an Identity Provider and an Authorization Server, simultaneously issuing identity tokens and access tokens.
  • The Relying Party is also a Client Application. It receives both identity tokens and access tokens from the OpenID Provider. However, there is a significant difference in how these tokens are used by this party:
    • The identity tokens are consumed by the Relying Party/Client Application to obtain the user’s identity.
    • The access tokens are not directly consumed by the Relying Party. Instead they are attached to requests made to the Resource Server, without ever being opened at the Relying Party.
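To illustrate the first point: in OpenID Connect the identity token (id_token) is a JWT, i.e. three base64url-encoded parts separated by dots, and the Relying Party reads the identity claims from the payload part (after validating the signature, which is elided here). A Python sketch using a made-up, unsigned token:

```python
# Decode the payload of a JWT-formatted identity token.
# The token below is fabricated for illustration; real id_tokens
# must have their signature validated before the claims are trusted.
import base64, json

def decode_payload(jwt: str) -> dict:
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore the base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(b'{"sub":"alice","iss":"https://op.example.com"}').rstrip(b"=").decode()
token = f"{header}.{payload}."

print(decode_payload(token)["sub"])  # alice
```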


I hope this post sheds some light on the dual nature of the parties in the OpenID Connect protocol.

Please feel free to use the comments section to ask any questions.

Using Fiddler for an Android and Windows VM development environment

In this post I describe the development environment that I use when creating Android apps that rely on ASP.NET based Web applications and Web APIs.

  • The development machine is a MBP running OS X with Android Studio.
  • Android virtual devices are run on Genymotion, which uses VirtualBox underneath.
  • Web applications and Web APIs are hosted on a Windows VM running on Parallels over the OS X host.

I use the Fiddler proxy to enable connectivity between Android and the ASP.NET apps, as well as to provide me full visibility on the HTTP messages. Fiddler also enables me to use HTTPS even on this development environment.

The main idea is to use Fiddler as the Android’s system HTTP proxy, in conjunction with a port forwarding rule that maps a port on the OS X host to the Windows VM. This is depicted in the following diagram.



The required configuration steps are:

  1. Start Fiddler on the Windows VM and allow remote computers to connect
    • Fiddler – Tools – Fiddler Options – Connections – check “Allow remote computers to connect”.
    • This will make Fiddler listen on port 8888 on all network interfaces.
  2. Enable Fiddler to intercept HTTPS traffic
    • Fiddler – Tools – Fiddler Options –  HTTPS – check “Decrypt HTTPS traffic”.
    • This will add a new root certificate to the “Trusted Root Certification Authorities” Windows certificate store.
  3. Define a port forwarding rule mapping TCP port 8888 on the OS X host to TCP port 8888 on the Windows guest (where Fiddler is listening).
    • Parallels – Preferences – Network:change settings – Port forward rules  – add “TCP:8888 -> Windows:8888”.
  4. Check which “host-only network” the Android VM is using
    • VirtualBox Android VM – Settings – Network – Name (e.g. “vboxnet1”).
  5. Find the IP for the identified adapter
    • VirtualBox – Preferences – Network – Host-only Networks – “vboxnet1”.
    • In my case the IP is
  6. On Android, configure the Wi-Fi connection HTTP proxy (based on “Configure Fiddler for Android / Google Nexus 7”).
    • Settings – Wi-Fi – long tap on chosen network – modify network – enable advanced options – manual proxy
      • Set “Proxy hostname” to the IP identified in the previous step.
      • Set “Proxy port” to 8888.
    • With this step, all the HTTP traffic will be directed to the Fiddler HTTP proxy running on the Windows VM.
  7. The last step is to install the Fiddler root certificate, so that the Fiddler generated certificates are accepted by the Android applications, such as the system browser (based on “Configure Fiddler for Android / Google Nexus 7”).
    • Open the browser and navigate to http://ipv4.fiddler:8888
    • Select the link “FiddlerRoot certificate” and on the Android dialog select “Credential use: VPN and apps”.

And that’s it: all HTTP traffic that uses the Android system’s proxy settings will be directed to Fiddler, with the following advantages

  • Visibility of the requests and responses on the Fiddler UI, namely the ones using HTTPS.
  • Access to Web applications running on the Windows VM, using either IIS hosting or self-hosting.
  • Access to external hosts on the Internet.
  • Use of the Windows “hosts” file host name overrides.
    • For development purposes I typically use host names other than “localhost”, such as “” or “”.
    • Since the name resolution will be done by the Fiddler proxy, these host names can be used directly on Android.

Here is a screenshot of Chrome running on Android and presenting an ASP.NET MVC application running on the Windows VM. Notice the green “https” icon.

Screen Shot 2016-03-05 at 19.31.53

And here is a Chrome screenshot of an IdentityServer3 login screen, also running on the Windows VM.

Screen Shot 2016-03-05 at 19.34.42

Hope this helps!

OAuth 2.0 and PKCE


Both Google and IdentityServer have recently announced support for the PKCE (Proof Key for Code Exchange by OAuth Public Clients) specification defined by RFC 7636.

This is an excellent opportunity to revisit the OAuth 2.0 authorization code flow and illustrate how PKCE addresses some of the security issues that exist when this flow is implemented on native applications.


In the authorization code flow, the redirect from the authorization server back to the client is one of the most security-sensitive parts of the OAuth 2.0 protocol. The main reason is that this redirect contains the code representing the authorization delegation performed by the User. On public clients, such as native applications, this code is enough to obtain the access tokens allowing access to the User’s resources.

The PKCE specification addresses an attack vector where an attacker creates a native application that registers the same URL scheme used by the Client application, therefore gaining access to the authorization code. Succinctly, the PKCE specification requires the exchange of the code for the access token to use an ephemeral secret that is not available on the redirect, making knowledge of the code insufficient to use it. This extra information (or a transformation of it) is sent on the initial authorization request.

A slightly longer version

The OAuth 2.0 cast of characters

  • The User is typically a human entity capable of granting access to resources.
  • The Resource Server (RS) is the entity exposing an HTTP API to access these resources.
  • The Client is an application (e.g. server-based Web application or native application) wanting to access these resources, via an authorization delegation performed by the User. Clients can be:
    • confidential – client applications that can hold a secret. The typical example is a Web application, where a client secret is stored and used only on the server side.
    • public – client applications that cannot hold a secret, such as native applications running on the User’s mobile device.
  • The Authorization Server (AS) is the entity that authenticates the user, captures her authorization consent and issues access tokens that the Client application can use to access the resources exposed on the RS.

Authorization code flow for Web Applications

The following diagram illustrates the authorization code flow for Web applications (the Client application is a Web server).



  1. The flow starts with the Client application server-side producing a redirect HTTP response (e.g. response with 302 status) with the authorization request URL in the Location header. This URL will contain the authorization request parameters such as the state, scope and redirect_uri.
  2. When receiving this response, the User’s browser automatically performs a GET HTTP request to the Authorization Server (AS) authorization endpoint, containing the OAuth 2.0 authorization request.
  3. The AS then starts an interaction sequence to authenticate the user (e.g. username and password, two-factor authentication, delegated authentication), and to obtain the user consent. This sequence is not defined by OAuth 2.0 and can take multiple steps.
  4. After having authenticated and obtained consent from the user, the AS returns a HTTP redirect response with the authorization response on the Location header. This URL points to the client application hostname and contains the authorization response parameters, such as the state and the (security sensitive) code.
  5. When receiving this response, the user’s browser automatically performs a GET request to the Client redirect endpoint with the OAuth 2.0 authorization response. By using HTTPS on the request to the Client, the protocol minimises the chances of the code being leaked to an attacker.
  6. Having received that authorization code, the Client then uses it to obtain the access token from the AS token endpoint. Since the client is a confidential client, this request is authenticated with the client credentials (client ID and client secret), typically sent in the Authorization header using the basic scheme. The AS checks if this code is valid, namely if it was issued to the requesting authenticated client. If everything is verified, a 200 response with the access token is returned.
  7. Finally, the client can use the received access token to access the protected resources.
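Step 1 can be sketched in Python as follows; the authorization endpoint, client_id and redirect_uri are hypothetical values:

```python
# Sketch of building the authorization request URL that the Client
# places in the Location header (step 1). All values are made up.
from urllib.parse import urlencode
import secrets

params = {
    "response_type": "code",
    "client_id": "my-client-id",
    "redirect_uri": "https://client.example.com/callback",
    "scope": "repo",
    "state": secrets.token_urlsafe(16),  # anti-CSRF value, checked on the redirect back
}
authorization_url = "https://as.example.com/authorize?" + urlencode(params)
print(authorization_url)
```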

Authorization code flow for native Applications

For a native application, the flow is slightly different, namely on the first phase (the authorization request). Recall that in this case the Client application is running on the User’s device.


  1. The flow begins with the Client application starting the system’s browser (or a web view – more on this in another post) at a URL with the authorization request. For instance, on the Android platform this is achieved by sending an intent.
  2. The browser comes into the foreground and performs a GET request to the AS authorization endpoint containing the authorization request.
  3. The same authentication and consent dance occurs between the AS and the User’s browser.
  4. After having authenticated and obtained consent from the user, the AS returns a HTTP redirect response with the authorization response on the Location header. This URL contains the authorization response parameters. However, there is something special in the redirect URL. Instead of using a http URL scheme, which would make the browser perform another HTTP request, the redirect URL contains a custom URI scheme.
  5. As a result, when the browser receives this response and processes the redirect, an inter-application message (e.g. an intent in Android) is sent to the application associated with this scheme, which should be the Client application. This brings the Client application to the foreground and provides it with the authorization response parameters, namely the authorization code.
  6. From now on, the flow is similar to the Web-based one. Namely, the Client application uses the code to obtain the access token from the AS token endpoint. Since the client is a public client, this request is not authenticated, that is, no client secret is used.
  7. Finally, having received the access token, the client application running on the device can access the User’s resources.
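The token request of step 6 can be sketched as follows; all values are hypothetical, and note the absence of a client secret:

```python
# Sketch of the token request body sent by a public (native) client
# in step 6. Values are made up; no client_secret is included because
# public clients cannot keep one.
from urllib.parse import urlencode

token_request_body = urlencode({
    "grant_type": "authorization_code",
    "code": "SplxlOBeZQQYbYS6WxSbIA",
    "redirect_uri": "myapp://oauth-callback",  # the custom scheme used in step 4
    "client_id": "my-native-client",
})
print(token_request_body)
```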

In both scenarios, the authorization code communication path, from the AS to the Client via the User’s browser, is very security sensitive. This is especially relevant in the native scenario, since the Client is public and knowledge of the authorization code is enough to obtain the access token.

Hijacking the redirect

In the Web application scenario, the GET request with the authorization response has an HTTPS URL, which means that the browser will only send the code if the server correctly authenticates itself. However, in the native scenario, the intent will be sent to any installed application that registered the custom scheme. Unfortunately, there isn’t a central entity controlling and validating these scheme registrations, so an application can hijack the message from the browser to the client application, as shown in the following diagram.


Having obtained the authorization code, the attacker’s application has all the information required to retrieve a token and access the User’s resources.

The PKCE protection

The PKCE specification mitigates this vulnerability by requiring an extra code_verifier parameter on the exchange of the authorization code for the access token.

  • On step 1, the Client application generates a random secret, stores it and uses its hash value on the new code_challenge authorization request parameter.
  • On step 4, the AS somehow associates the returned code to the code_challenge.
  • On step 6, the Client includes a code_verifier parameter with the secret on the token request message. The AS computes the hash of the code_verifier value and compares it with the original code_challenge associated with the code. Only if they are equal is the code accepted and an access token returned.

This ensures that only the entity that started the flow (i.e. that sent the code_challenge on the authorization request) can end the flow and obtain the access token. By using a cryptographic hash function to derive the code_challenge, the protocol is protected from attackers that have read access to the original authorization request. However, the protocol also allows the secret to be used directly as the code_challenge.
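The relation between the code_verifier and the code_challenge (the S256 method defined by RFC 7636) can be sketched in Python; the verifier below is the test vector from the RFC’s appendix B:

```python
# PKCE S256 method (RFC 7636): the code_challenge is the base64url
# encoding (without padding) of the SHA-256 hash of the code_verifier.
import base64, hashlib

def make_challenge(verifier: str) -> str:
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# RFC 7636 appendix B test vector; in practice the verifier is a
# freshly generated high-entropy random string.
verifier = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
print(make_challenge(verifier))  # E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```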

Finally, the PKCE support by an AS can be advertised on the OAuth 2.0 or OpenID Connect discovery document, using the code_challenge_methods_supported field. The following is an excerpt of Google’s OpenID Connect discovery document.

{
 "issuer": "",
 "authorization_endpoint": "",
 "token_endpoint": "",
 "userinfo_endpoint": "",
 "revocation_endpoint": "",
 "jwks_uri": "",
 "response_types_supported": [
  "code token",
  "code id_token",
  "token id_token",
  "code token id_token",
  ...
 ],
 "subject_types_supported": [ ... ],
 "id_token_signing_alg_values_supported": [ ... ],
 "scopes_supported": [ ... ],
 "token_endpoint_auth_methods_supported": [ ... ],
 "claims_supported": [ ... ],
 "code_challenge_methods_supported": [ ... ]
}





Using Vagrant to test ASP.NET 5 RC1

The recent Release Candidate 1 (RC1) for ASP.NET 5 includes support for Linux and OS X via .NET Core. After trying it out on OS X, I wanted to do some experiments on Linux as well. For that I used Vagrant to automate the creation and provision of the Linux development environments. In this post I describe the steps required for this task, using OS X as the host (the steps on a Windows host will be similar).

Short version

Start by ensuring Vagrant and VirtualBox are installed on your host machine.
Then open a shell and run the following commands.
The vagrant up command may take a while, since it will not only download and boot the base virtual machine image but also provision ASP.NET 5 RC1 and all its dependencies.

git clone (or your own fork URL instead)
cd vagrant-aspnet-rc1
vagrant up
vagrant ssh

After the last command completes you should have an SSH session into an Ubuntu Server with ASP.NET 5 RC1 installed, running on a virtual machine (VM). Port 5000 on the host is mapped into port 5000 on the guest.

The vagrant-aspnet-rc1 host folder is mounted into the /vagrant guest folder, so you can use this to share files between host and guest.
For instance, an ASP.NET project published to vagrant-aspnet-rc1/published on the host will be visible on the /vagrant/published guest path.

For any comment or issue that you have, please raise an issue at

Longer (and perhaps more instructive) version

First, start by installing Vagrant and also VirtualBox, which will be required to run the virtual machine with Linux.

Afterwards, create a new folder (e.g. vagrant-aspnet-rc1) to host the Vagrant configuration.

dotnet pedro$ mkdir vagrant-aspnet-rc1
dotnet pedro$ cd vagrant-aspnet-rc1
vagrant-aspnet-rc1 pedro$

Then, initialize the Vagrant configuration using the init command.

vagrant-aspnet-rc1 pedro$ vagrant init ubuntu/trusty64
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`` for more information on using Vagrant.
vagrant-aspnet-rc1 pedro$ ls

The second parameter, ubuntu/trusty64, is the name of a box available on the Vagrant public catalog, which in this case contains a Ubuntu Server 14.04 LTS.
Notice also how a Vagrantfile file, containing the Vagrant configuration, was created in the current directory. We will be using this file later on.

The next step is to start the virtual machine.

vagrant-aspnet-rc1 pedro$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ubuntu/trusty64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Setting the name of the VM: vagrant-aspnet_default_1451428161431_85889
==> default: Clearing any previously set forwarded ports...
==> default: Fixed port collision for 22 => 2222. Now on port 2200.
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2200 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address:
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: The guest additions on this VM do not match the installed version of
    default: VirtualBox! In most cases this is fine, but in rare cases it can
    default: prevent things such as shared folders from working properly. If you see
    default: shared folder errors, please make sure the guest additions within the
    default: virtual machine match the version of VirtualBox you have installed on
    default: your host and reload your VM.
    default: Guest Additions Version: 4.3.34
    default: VirtualBox Version: 5.0
==> default: Mounting shared folders...
    default: /vagrant => /Users/pedro/code/dotnet/vagrant-aspnet-rc1

As can be seen in the command output, a VM was booted and SSH was configured. So the next step is to open an SSH session into the machine to check if everything is working properly. This is accomplished using the ssh command.

vagrant-aspnet-rc1 pedro$ vagrant ssh
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:

  System information as of Tue Dec 29 22:29:41 UTC 2015

  System load:  0.35              Processes:           80
  Usage of /:   3.4% of 39.34GB   Users logged in:     0
  Memory usage: 25%               IP address for eth0:
  Swap usage:   0%

  Graph this data and manage this system at:

  Get cloud support with Ubuntu Advantage Cloud Guest:

0 packages can be updated.
0 updates are security updates.

vagrant@vagrant-ubuntu-trusty-64:~$ hostname
vagrant@vagrant-ubuntu-trusty-64:~$ id
uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant)

Notice how we end up with a session into a vagrant-ubuntu-trusty-64 machine, running under the vagrant user.
In addition to setting up SSH, Vagrant also mounted the vagrant-aspnet-rc1 host folder (the one where the Vagrantfile was created) into the /vagrant folder on the guest.

vagrant@vagrant-ubuntu-trusty-64:~$ ls /vagrant

We could now install ASP.NET 5 by following the officially documented procedure. However, that would be the “old way of doing things” and would not provide us with a reproducible development environment.
A better solution is to create a provision script and use it with Vagrant.

The provision script is simply a copy of that documented procedure, slightly changed to allow unsupervised installation.

#!/usr/bin/env bash

# install dnvm pre-requisites
sudo apt-get install -y unzip curl
# install dnvm
curl -sSL | DNX_BRANCH=dev sh && source ~/.dnx/dnvm/

# install dnx pre-requisites
sudo apt-get install -y libunwind8 gettext libssl-dev libcurl4-openssl-dev zlib1g libicu-dev uuid-dev
# install dnx via dnvm
dnvm upgrade -r coreclr

# install libuv from source
sudo apt-get install -y make automake libtool curl
curl -sSL | sudo tar zxfv - -C /usr/local/src
cd /usr/local/src/libuv-1.4.2
sudo sh
sudo ./configure
sudo make
sudo make install
sudo rm -rf /usr/local/src/libuv-1.4.2 && cd ~/
sudo ldconfig

The next step is to edit the Vagrantfile so this provision script is run automatically by Vagrant.
We also change the port forwarding rule so that is matches the default 5000 port used by ASP.NET.

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision :shell, path: "", privileged: false
  config.vm.network "forwarded_port", guest: 5000, host: 5000
end

To check that everything is properly configured we redo the whole process by destroying the VM and creating it again.

vagrant-aspnet-rc1 pedro$ vagrant destroy
    default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Forcing shutdown of VM...
==> default: Destroying VM and associated drives...
vagrant-aspnet-rc1 pedro$ vagrant up
( ... lots of things that take a while to happen ... )

Finally, do vagrant ssh and check that dnx is fully functional.

How about publish with runtime?

Instead of having to previously provision ASP.NET, wouldn’t it be nice to include all the dependencies in the published project, so that we could deploy it on a plain vanilla Ubuntu or Debian machine?
Well, on one hand, it is possible to configure the publish process to also include the runtime, via the --runtime parameter.

dnu publish --out ~/code/dotnet/vagrant-ubuntu/published --no-source --runtime dnx-coreclr-linux-x64.1.0.0-rc1-update1

On the other hand, in order to have the Linux DNX runtime available on OS X, we just need to explicitly specify the OS on the dnvm command:

dnvm install latest -OS linux -r coreclr

Unfortunately, this approach does not work because the published runtime is not self-sufficient.
For it to work properly it still requires some dependencies to be previously provisioned on the deployed machine.
This can be seen if we try to run the ASP.NET project:

vagrant@vagrant-ubuntu-trusty-64:~$ /vagrant/published/approot/web
failed to locate libcoreclr with error cannot open shared object file: No such file or directory
vagrant@vagrant-ubuntu-trusty-64:~$ Connection to closed by remote host.
Connection to closed.

Notice how a shared library failed to be opened.
So, for the time being, we need to provision at least the runtime dependencies on the deployed machine.
The runtime itself can be contained in the published project.

A first look at .NET Core and the dotnet CLI tool

A recent post by Scott Hanselman triggered my curiosity about the new dotnet Command Line Interface (CLI) tool for .NET Core, which aims to be a “cross-platform general purpose managed framework”. In this post I present my first look on using .NET Core and the dotnet tool on OS X.


For OS X, the recommended installation procedure is to use the “official PKG”. Unfortunately, this PKG doesn’t seem to be signed, so trying to run it directly from the browser will result in an error. The workaround is to use Finder to locate the downloaded file and then select “Open” on the file. Notice that this PKG requires administrative privileges to run, so proceed at your own risk (the .NET Core home page uses an https URI and the PKG is hosted on Azure Blob Storage, also using HTTPS).

After installation, the dotnet tool will be available on your shell.

~ pedro$ which dotnet

I confess that I was expecting the recommended installation procedure to use homebrew instead of a downloaded PKG.

Creating the application

To create an application we start by making an empty folder (e.g. HelloDotNet) and then run dotnet new on it.

dotnet pedro$ mkdir HelloDotNet
dotnet pedro$ cd HelloDotNet
HelloDotNet pedro$ dotnet new
Created new project in /Users/pedro/code/dotnet/HelloDotNet.

This new command creates three new files in the current folder.

HelloDotNet pedro$ tree .
├── NuGet.Config
├── Program.cs
└── project.json

0 directories, 3 files

The first one, NuGet.Config, is an XML file containing the NuGet package sources, namely the feed containing .NET Core.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!--To inherit the global NuGet package sources remove the <clear/> line below -->
    <clear />
    <add key="dotnet-core" value="" />
    <add key="" value="" />
  </packageSources>
</configuration>

The second one is a C# source file containing the classical static void Main(string[] args) application entry point.

HelloDotNet pedro$ cat Program.cs
using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            Console.WriteLine("Hello World!");
        }
    }
}

Finally, the third file is project.json, containing the project definitions, such as compilation options and library dependencies.

HelloDotNet pedro$ cat project.json
{
  "version": "1.0.0-*",
  "compilationOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "NETStandard.Library": "1.0.0-rc2-23616"
  },
  "frameworks": {
    "dnxcore50": { }
  }
}

Resolving dependencies

The next step is to ensure all dependencies required by our project are available. For that we use the restore command.

HelloDotNet pedro$ dotnet restore
Microsoft .NET Development Utility CoreClr-x64-1.0.0-rc1-16231

  OK 778ms
Restore complete, 40937ms elapsed

NuGet Config files used:

Feeds used:

    69 package(s) to /Users/pedro/.dnx/packages

After figuratively downloading almost half of the Internet, or 69 packages to be more precise, the restore process ends stating that the required dependencies were installed at ~/.dnx/packages.
Notice the dnx in the path, which shows the DNX heritage of the dotnet tool. I presume these names will change before the RTM version. Notice also that the only thing added to the current folder is the project.lock.json file, containing the complete dependency graph created by the restore process based on the direct dependencies.

HelloDotNet pedro$ tree .
├── NuGet.Config
├── Program.cs
├── project.json
└── project.lock.json

0 directories, 4 files

Namely, no dependencies were copied to the local folder.
Instead, the global ~/.dnx/packages/ repository is used.

Running the application

After changing the greeting message to Hello dotnet, we can run the application using the run command.

HelloDotNet pedro$ dotnet run
Hello dotnet!

Looking again into the current folder, we notice that no extra files were created when running the application.

HelloDotNet pedro$ tree .
├── NuGet.Config
├── Program.cs
├── project.json
└── project.lock.json

0 directories, 4 files

This happens because the compilation produces in-memory assemblies, which aren’t persisted in any file. The CoreCLR virtual machine uses these in-memory assemblies when running the application.

Well, it seems I was wrong: the dotnet run command does indeed produce persisted files. This is a change when compared with dnx, which did use in-memory assemblies.

We can see this behaviour by using the -v switch:

HelloDotNet pedro$ dotnet -v run
Running /usr/local/bin/dotnet-compile --output "/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba" --temp-output "/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba" --framework "DNXCore,Version=v5.0" --configuration "Debug" /Users/pedro/code/dotnet/HelloDotNet
Process ID: 20580
Compiling HelloDotNet for DNXCore,Version=v5.0
Running /usr/local/bin/dotnet-compile-csc @"/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba/dotnet-compile.HelloDotNet.rsp"
Process ID: 20581
Running csc -noconfig @"/Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba/dotnet-compile-csc.rsp"
Process ID: 20582

Compilation succeeded.
0 Warning(s)
0 Error(s)

Time elapsed 00:00:01.4388306

Running /Users/pedro/code/dotnet/HelloDotNet/bin/.dotnetrun/3326e7b6940b4d50a30a12a02b5cdaba/HelloDotNet
Process ID: 20583
Hello dotnet!

Notice how it first calls the compile command (addressed in the next section) before running the application.

Compiling the application

The dotnet tool also allows the explicit compilation via its compile command.

HelloDotNet pedro$ dotnet compile
Compiling HelloDotNet for DNXCore,Version=v5.0

Compilation succeeded.
    0 Warning(s)
    0 Error(s)

Time elapsed 00:00:01.4249439

The resulting artifacts are stored in two new folders

HelloDotNet pedro$ tree .
├── NuGet.Config
├── Program.cs
├── bin
│   └── Debug
│       └── dnxcore50
│           ├── HelloDotNet
│           ├── HelloDotNet.deps
│           ├── HelloDotNet.dll
│           ├── HelloDotNet.pdb
│           └── NuGet.Config
├── obj
│   └── Debug
│       └── dnxcore50
│           ├── dotnet-compile-csc.rsp
│           ├── dotnet-compile.HelloDotNet.rsp
│           └── dotnet-compile.assemblyinfo.cs
├── project.json
└── project.lock.json
6 directories, 12 files

The bin/Debug/dnxcore50 folder contains the most interesting outputs of the compilation process. HelloDotNet is a native executable, as shown by the _main symbol inside it, that loads the CoreCLR virtual machine and uses it to run the application.

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/HelloDotNet | grep _main

otool is the object file displaying tool for OS X.

We can also see that the libcoreclr dynamic library is used by this bootstrap executable:

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/HelloDotNet | grep libcoreclr.dylib
00000001000025b3    leaq    0x7351(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
00000001000074eb    leaq    0x2419(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"
000000010000784b    leaq    0x20b9(%rip), %rsi      ## literal pool for: "libcoreclr.dylib"

The HelloDotNet.dll file is a .NET assembly (has dll extension and starts with the 4d 5a magic number) containing the compiled application.

HelloDotNet pedro$ hexdump -n 32 bin/Debug/dnxcore50/HelloDotNet.dll
0000000 4d 5a 90 00 03 00 00 00 04 00 00 00 ff ff 00 00
0000010 b8 00 00 00 00 00 00 00 40 00 00 00 00 00 00 00
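
The same magic-number check can be scripted. Here is a minimal Python sketch (the helper name and the example path are illustrative, not part of the dotnet tooling):

```python
# Check that a file starts with the 0x4d 0x5a ("MZ") magic number,
# the signature used by PE files and therefore by .NET assemblies.
def has_mz_magic(path):
    with open(path, "rb") as f:
        return f.read(2) == b"\x4d\x5a"  # b"MZ"

# Usage (path is illustrative):
# has_mz_magic("bin/Debug/dnxcore50/HelloDotNet.dll")
```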

Directly executing the HelloDotNet file runs the application.

HelloDotNet pedro$ bin/Debug/dnxcore50/HelloDotNet
Hello dotnet!

We can also see that the CoreCLR is hosted in the executing process by examining the loaded libraries.

dotnet pedro$ ps | grep Hello
18981 ttys001    0:00.23 bin/Debug/dnxcore50/HelloDotNet
19311 ttys002    0:00.00 grep Hello
dotnet pedro$ sudo vmmap 18981 | grep libcoreclr
__TEXT                 0000000105225000-000000010557a000 [ 3412K] r-x/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010557b000-000000010575a000 [ 1916K] r-x/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010575b000-0000000105813000 [  736K] r-x/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__LINKEDIT             0000000105859000-00000001059e1000 [ 1568K] r--/rwx SM=COW  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010557a000-000000010557b000 [    4K] rwx/rwx SM=PRV  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__TEXT                 000000010575a000-000000010575b000 [    4K] rwx/rwx SM=PRV  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__DATA                 0000000105813000-0000000105841000 [  184K] rw-/rwx SM=PRV  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib
__DATA                 0000000105841000-0000000105859000 [   96K] rw-/rwx SM=ZER  /usr/local/share/dotnet/runtime/coreclr/libcoreclr.dylib

Native compilation

One of the most interesting features of .NET Core and the dotnet tool is the ability to create a native executable containing the complete program, and not just a bootstrap into the virtual machine. For that, we use the --native option of the compile command.

HelloDotNet pedro$ ls
NuGet.Config        Program.cs      project.json        project.lock.json
HelloDotNet pedro$ dotnet compile --native
Compiling HelloDotNet for DNXCore,Version=v5.0

Compilation succeeded.
    0 Warning(s)
    0 Error(s)

Time elapsed 00:00:01.1267350

The output of this compilation is a new native folder containing another HelloDotNet executable.

HelloDotNet pedro$ tree .
├── NuGet.Config
├── Program.cs
├── bin
│   └── Debug
│       └── dnxcore50
│           ├── HelloDotNet
│           ├── HelloDotNet.deps
│           ├── HelloDotNet.dll
│           ├── HelloDotNet.pdb
│           ├── NuGet.Config
│           └── native
│               ├── HelloDotNet
│               └── HelloDotNet.dSYM
│                   └── Contents
│                       ├── Info.plist
│                       └── Resources
│                           └── DWARF
│                               └── HelloDotNet
├── obj
│   └── Debug
│       └── dnxcore50
│           └── HelloDotNet.obj
├── project.json
└── project.lock.json

11 directories, 13 files

Running the executable produces the expected result

HelloDotNet pedro$ bin/Debug/dnxcore50/native/HelloDotNet
Hello dotnet!

At first sight, this new executable is rather bigger than the first one, since it isn’t just a bootstrap into the virtual machine: it contains the complete application.

HelloDotNet pedro$ ls -la bin/Debug/dnxcore50/HelloDotNet
-rwxr-xr-x  1 pedro  staff  66368 Dec 28 10:12 bin/Debug/dnxcore50/HelloDotNet
HelloDotNet pedro$ ls -la bin/Debug/dnxcore50/native/HelloDotNet
-rwxr-xr-x  1 pedro  staff  987872 Dec 28 10:12 bin/Debug/dnxcore50/native/HelloDotNet

There are two more signs that this new executable is the application. First, there aren’t any references to the libcoreclr dynamic library.

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/HelloDotNet | grep libcoreclr.dylib
00000001000025b3    leaq    0x7351(%rip), %rsi      ## literal pool for: &amp;amp;amp;amp;quot;libcoreclr.dylib&amp;amp;amp;amp;quot;
00000001000074eb    leaq    0x2419(%rip), %rsi      ## literal pool for: &amp;amp;amp;amp;quot;libcoreclr.dylib&amp;amp;amp;amp;quot;
000000010000784b    leaq    0x20b9(%rip), %rsi      ## literal pool for: &amp;amp;amp;amp;quot;libcoreclr.dylib&amp;amp;amp;amp;quot;
HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/native/HelloDotNet | grep libcoreclr.dylib
HelloDotNet pedro$

Second, it contains a ___managed__Main symbol with the native code for static void Main(string[] args):

HelloDotNet pedro$ otool -tvV bin/Debug/dnxcore50/native/HelloDotNet | grep -A 8 managed__Main:
0000000100001b20    pushq   %rax
0000000100001b21    movq    ___ThreadStaticRegionStart(%rip), %rdi
0000000100001b28    movq    (%rdi), %rdi
0000000100001b2b    callq   _System_Console_System_Console__WriteLine_13
0000000100001b30    nop
0000000100001b31    addq    $0x8, %rsp
0000000100001b35    retq
0000000100001b36    nop

In addition to the HelloDotNet executable, the compile --native command also creates a bin/Debug/dnxcore50/native/HelloDotNet.dSYM folder containing the native debug information.

Unfortunately, the .NET Core native support seems to be in the very early stages and I was unable to compile anything more complex than a simple “Hello World”. However, I’m looking forward to further developments in this area.

How to fail in HTTP APIs

In the HTTP protocol, clients use request messages to perform operations, defined by request methods, on resources identified by request URIs.
However, servers aren’t always able or willing to completely and successfully perform these requested operations.
The subject of this post is to present proper ways for HTTP servers to express these non-success outcomes.

Status codes

The primary way to communicate the request completion result is via the response message’s status code.
The status code is a three-digit integer divided into five classes (list adapted from RFC 7231):

  • “1xx (Informational): The request was received, continuing process”
  • “2xx (Successful): The request was successfully received, understood, and accepted”
  • “3xx (Redirection): Further action needs to be taken in order to complete the request”
  • “4xx (Client Error): The request contains bad syntax or cannot be fulfilled”
  • “5xx (Server Error): The server failed to fulfill an apparently valid request”

The last two of these five classes, 4xx and 5xx, are used to represent non-success outcomes.
The 4xx class is used when the request is not completely understood by the server (e.g. incorrect HTTP syntax) or fails to satisfy the server’s requirements for successful handling (e.g. the client must be authenticated).
These are commonly referred to as client errors.
On the other hand, 5xx codes should be strictly reserved for server errors, i.e., situations where the request is not successfully completed due to an abnormal behavior on the server.

Here are some basic rules that I tend to use when choosing status codes:

  • Never use a 2xx to represent a non-success outcome.
    Namely, always use a 4xx or 5xx to represent those situations, except when the request can be completed by taking further actions, in which case a 3xx could be used.
  • Reserve the 5xx status codes for errors where the fault is indeed on the server side.
    Examples are infrastructural problems, such as the inability to connect to an external system (e.g. a database or service), or programming errors such as an index out of bounds or a null dereference.
    Inability to successfully fulfill a request due to malformed or invalid information in the request must instead be signaled with 4xx status codes.
    Some examples are: the request URI does not match any known resource; the request body uses an unsupported format; the request body has invalid information.

As a rule of thumb, and perhaps a little hyperbolically, if an error does not require waking someone up in the middle of the night then it probably shouldn’t be signaled using a 5xx class code, because it does not signal a server malfunction.

The HTTP specification also defines a set of 41 concrete status codes and associated semantics, from which 19 belong to the 4xx class and 6 belong to the 5xx class.
These standard codes are a valuable resource for the Web API designer, who should simultaneously respect and take advantage of this semantic richness when designing the API responses.
Here are some rules of thumb:

  • Use 500 for server unexpected errors, reserving 503 for planned service unavailability.
  • Reserve the 502 and 504 codes for reverse proxies.
    A failure when contacting an internal third-party system should still use a 500 when this internal system is not visible to the client.
  • Use 401 when the request has invalid or missing authentication/authorization information required to perform the operation.
    If this authentication/authorization information is valid but the operation is still not allowed, then use 403.
  • Use 404 when the resource identified by the request URI does not exist or the server does not want to reveal its existence.
  • Use 400 if parts of the request are not valid, such as fields in the request body.
    For invalid query string parameters I tend to use 404, since the query string is an integral part of the URI; however, using 400 is also acceptable.

HTTP status codes are extensible, meaning that other specifications, such as WebDAV, can define additional values.
The complete list of codes is maintained by IANA at the Hypertext Transfer Protocol (HTTP) Status Code Registry.
This extensibility means that HTTP clients and intermediaries are not obliged to understand all status codes.
However, they must understand each code class semantics.
For instance, if a client receives the (not yet defined) 499 status code, then it should treat it as a 400 and not as a 200 or a 500.
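
This class-fallback rule is easy to capture in code. A Python sketch (the set of known codes is an illustrative subset, not the full IANA registry):

```python
# Map a possibly unknown status code to one the client understands,
# falling back to the x00 code of its class: an unrecognized 499
# is treated as a 400, an unrecognized 599 as a 500.
KNOWN_CODES = {200, 301, 400, 401, 403, 404, 500, 502, 503, 504}  # illustrative subset

def effective_status(code):
    if code in KNOWN_CODES:
        return code
    return (code // 100) * 100  # class-semantics fallback
```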

Despite this richness, there aren’t HTTP status codes for all possible failure scenarios.
Namely, by being uniform, these status codes don’t have any domain-specific semantics.
However, there are scenarios where the server needs to provide the client with a more detailed error cause, namely using domain-specific information.
Two common anti-patterns are:

  • Redefining the meaning of a standard code for a particular set of resources.
    This solution breaks the uniform interface contract: the semantics of the status code should be the same independently of the request’s target resource.
  • Using an unassigned status code in the 4xx or 5xx classes.
    Unless this is done via a proper registration of the new status code in IANA, this decision will hinder evolution and most probably will collide with future extensions to the HTTP protocol.

Error representations

Instead of fiddling with the status codes, a better solution is to use the response payload to provide a complementary representation of the error cause.
And yes, a response message may (and probably should) contain a body even when it represents an error outcome – response bodies are not exclusive to successful responses.

The Problem Details for HTTP APIs is an Internet Draft defining JSON and XML formats to represent such error information.
The following excerpt, taken from the draft specification, exemplifies how further information can be conveyed on a response with 403 (Forbidden) status code, stating the domain specific reason for the request prohibition.

HTTP/1.1 403 Forbidden
Content-Type: application/problem+json
Content-Language: en

    "type": "",
    "title": "You do not have enough credit.",
    "detail": "Your current balance is 30, but that costs 50.",
    "instance": "/account/12345/msgs/abc",
    "balance": 30,
    "accounts": ["/account/12345","/account/67890"]

The application/problem+json media type informs the receiver that the payload is using this format and should be processed according to its rules.
The payload is a JSON object containing both fields defined by the specification and fields that are kept domain specific.
The type, title, detail and instance fields are of the first kind, having their semantics defined by the specification:

  • type – URI identifier defining the domain-specific error type. If it is a URL, then dereferencing it can provide further information on the error type.
  • title – Human-readable description of the error type.
  • detail – Human-readable description of this specific error occurrence.
  • instance – URI identifier for this specific error occurrence.

On the other hand, the balance and accounts fields are domain-specific extensions and their semantics is scoped to the type identifier.
This allows the same extensions to be used by different Web APIs with different semantics, as long as the type identifiers used are different.
I recommend that an HTTP API have a central place documenting all type values as well as the domain-specific fields associated with each of these values.
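
As a sketch of producing such a payload, here is a minimal Python helper (the function name and return shape are my own assumptions, not part of the draft; the draft also defines an optional status member, used here):

```python
import json

def problem_payload(type_uri, title, status, detail=None, instance=None, **extensions):
    """Build an application/problem+json body: the specification-defined
    members plus any domain-specific extension members (e.g. balance)."""
    body = {"type": type_uri, "title": title, "status": status}
    if detail is not None:
        body["detail"] = detail
    if instance is not None:
        body["instance"] = instance
    body.update(extensions)  # extension members, scoped to the type identifier
    return "application/problem+json", json.dumps(body)
```

A server would send the first element as the Content-Type header and the second as the response body, alongside the matching 4xx or 5xx status code.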

Using this format presents several advantages when compared with constantly “reinventing the wheel” with ad-hoc formats:

  • Taking advantage of rich and well defined semantics for the specification defined fields – type, title, detail and instance.
  • Making the non-success responses easier to understand and handle, namely for developers that are familiar with this common format.
  • Being able to use common libraries to produce and consume this format.

When using a response payload to represent the error details, one might wonder if there is still a need to use proper 4xx or 5xx class codes to represent errors.
Namely, can’t we just use 200 for every response, independently of the outcome, and have the client use the payload to distinguish them?
My answer is an emphatic no: using 2xx status codes to represent non-success breaks the HTTP contract, which can have consequences on the behavior of intermediary components.
For instance, a cache will happily cache a 200 response even if its payload is in the application/problem+json format.
Notice that the operation of most intermediaries is independent of the messages payload.
And yes, HTTP intermediaries are still relevant in an HTTPS world: intermediaries can live before (e.g. client caching) and after (e.g. output caching) the TLS connection endpoints.

The HTTP protocol and associated ecosystem provide richer ways to express non-success outcomes, via response status codes and error representations.
Taking advantage of those is harnessing the power of the Web for HTTP APIs.

Additional Resources

Some thoughts on the recent JWT library vulnerabilities

Recently, a great post by Tim McLean about some “Critical vulnerabilities in JSON Web Token libraries” made the headlines, bringing the focus to the JWT spec, its usages and apparent security issues.

In this post, I want to share some of my assorted ideas on these subjects.

On the usefulness of the “none” algorithm

One of the problems identified in the aforementioned post is the “none” algorithm.

It may seem strange for a secure packaging format to support “none” as a valid protection; however, this algorithm is useful in situations where the token’s integrity is verified by other means, namely by the transport protocol.
One such example happens on the authorization code flow of OpenID Connect, where the ID token is retrieved via a direct TLS protected communication between the Client and the Authorization Server.

In the words of the specification: “If the ID Token is received via direct communication between the Client and the Token Endpoint (which it is in this flow), the TLS server validation MAY be used to validate the issuer in place of checking the token signature”.

Supporting multiple algorithms and the “alg” field

Another problem identified by Tim’s post was the usage of the “alg” field and the way some libraries handle it, namely using keys in an incorrect way.

In my opinion, supporting algorithm agility (i.e. the ability to support more than one algorithm in a specification) is essential for having evolvable systems.
Also, being explicit about what was used to protect the token is typically a good security decision.

In this case, the problem lies on the library side. Namely, having a verify(string token, string verificationKey) function signature seems really awkward, for several reasons:

  • First, representing a key as a string is a typical case of primitive obsession. A key is not a string. A key is a potentially composite object (e.g. two integers in the case of a public key for RSA-based schemes) with associated metadata, namely the algorithms and usages for which it applies. Encoding that as a string opens the door to ambiguity and incorrect usages.
    A key representation should always contain not only the algorithm to which it applies but also the usage conditions (e.g. encryption vs. signature for an RSA key).

  • Second, it makes phased key rotation really difficult. What happens when the token signer wants to change the signing key or the algorithm? Must all the consumers synchronously change the verification key at the same moment in time? Preferably, consumers should be able to simultaneously support two or more keys, identified by the “kid” parameter.
    The same applies to algorithm changes and the use of the “alg” parameter.
    So, I don’t think that removing the “alg” header is a good idea.

A verification function should allow a set of possible keys (bound to explicit algorithms) or receive a call back to fetch the key given both the algorithm and the key id.
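
A sketch of what such a key representation and lookup could look like in Python (the names are illustrative and not taken from any specific JWT library):

```python
from dataclasses import dataclass

@dataclass
class VerificationKey:
    kid: str         # key identifier, matched against the token's "kid" header
    alg: str         # the single algorithm this key may be used with
    material: bytes  # opaque key material

def resolve_key(keys, alg, kid):
    """Return the key bound to this (alg, kid) pair, or None.
    Binding each key to an explicit algorithm prevents the token's
    "alg" header from coercing a key into an unintended scheme."""
    for key in keys:
        if key.kid == kid and key.alg == alg:
            return key
    return None
```

During a phased rotation, the key list simply contains both the old and the new key, each with its own kid.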

Don’t assume, always verify

Verifying a JWT before using the claims that it asserts is always more than just checking a signature. Who was the issuer? Is the token valid at the time of usage? Was the token explicitly revoked? Who is the intended audience? Is the protection algorithm compatible with the usage scenario? These are all questions that must be explicitly verified by a JWT consumer application or component.

For instance, OpenID Connect lists the verification steps that must be done by a client application (the relying party) before using the claims in a received ID token.
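
As an illustration, here is a Python sketch of some of these checks (a simplification of the full OpenID Connect validation rules; revocation and algorithm checks are deliberately left out):

```python
import time

def validate_claims(claims, expected_issuer, expected_audience, now=None):
    """Check issuer, audience and time validity of an already
    signature-verified set of JWT claims."""
    now = time.time() if now is None else now
    if claims.get("iss") != expected_issuer:
        return False
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_audience not in audiences:
        return False
    if "exp" in claims and now >= claims["exp"]:
        return False  # expired
    if "nbf" in claims and now < claims["nbf"]:
        return False  # not yet valid
    return True
```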

And so it begins …

If the recent history of SSL/TLS related problems has taught us anything, it is that security protocol design and implementation is far from easy, and that “obvious” vulnerabilities can remain undetected for long periods of time.
If these problems happen on well known and commonly used designs and libraries such as SSL and OpenSSL, we must be prepared for similar occurrences on JWT based protocols and implementations.
In this context, security analyses such as the one described in Tim’s post are of the utmost importance, even if I don’t agree with some of the proposed measures.