Should I PUT or should I POST? (Darling you gotta let me know)

(yes, it doesn’t rhyme, but I couldn’t resist the association)

Selecting the proper methods (e.g. GET, POST, PUT, …) to use when designing HTTP-based APIs is typically a subject of much debate, and sometimes some bike-shedding. In this post I briefly present the rules that I normally follow when faced with this design task.

Don’t go against the HTTP specification

First and foremost, make sure the properties of the chosen methods aren’t violated in the scenario under analysis. The typical offender is using GET for an interaction that requests a state change on the server.
This is because GET is defined to have the safe property, described by RFC 7231 as

Request methods are considered “safe” if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource.

Another example is choosing PUT for requests that aren’t idempotent, such as appending an item to a collection.
The idempotent property is defined by RFC 7231 as

A request method is considered “idempotent” if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.

Violating these properties is harmful because there may exist system components whose correct behavior depends on these properties holding. An example is a crawler that freely follows all GET links in a document, assuming that no state change will be performed by these requests, and that ends up changing the system state.

Another example is an intermediary (e.g. a reverse proxy) that automatically retries any failed PUT request (e.g. on a timeout), assuming it is idempotent. If the PUT is appending items to a collection (append is not idempotent), and the first PUT request was successfully performed but the response message was lost, then the retry will end up adding the same item to the collection twice.
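
As a sketch of this failure mode (URIs and payloads are made up), consider a PUT misused as an “append” operation; a retry after a lost response duplicates the item:

    PUT /orders/123/items HTTP/1.1
    Content-Type: application/json

    {"product": "book", "quantity": 1}

    (response lost; the intermediary retries the exact same request)

    PUT /orders/123/items HTTP/1.1
    Content-Type: application/json

    {"product": "book", "quantity": 1}

    (the collection now contains the item twice)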

This violation can also have security implications. For instance, most server frameworks don’t protect GET requests against CSRF (Cross-Site Request Forgery) because this method is not supposed to change state, and reads are already protected by the browser’s same-origin policy.

Take advantage of the method properties

After addressing the correctness concerns, i.e., ensuring requests don’t violate any property of the chosen methods, we can invert the analysis and check whether there is a method that best fits the intended functionality. With correctness already ensured, at this stage the main concern becomes optimization.

For instance, if a request defines the complete state for a resource and is idempotent, perhaps a PUT is a better fit than a POST. This is not because a POST would produce incorrect behavior but because using a PUT may induce better system properties. For instance, an intermediary (e.g. a reverse proxy or framework middleware) may automatically retry failed requests, and thereby provide some fault recovery.
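
As a sketch (URI and payload are made up), a request that carries the complete resource state can be replayed without harm, so an automatic retry after a timeout or lost response is safe:

    PUT /customers/42 HTTP/1.1
    Content-Type: application/json

    {"name": "Alice", "email": "alice@example.com"}

    (applying this request once or several times leaves the resource in the same state)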

When nothing else fits, use POST

Contrary to some HTTP myths, POST is not solely intended to create resources. In fact, the newer RFC 7231 states

The POST method requests that the target resource process the representation enclosed in the request according to the resource’s own specific semantics

The “according to the resource’s own specific semantics” clause effectively allows us to use POST for requests with any semantics. However, the fact that we can doesn’t mean that we always should. Again, if another method (e.g. GET or PUT) better fits the request’s purpose, not choosing it may mean throwing away interesting properties, such as caching or fault recovery.

Does my API look RESTful in this method?

One thing that I always avoid is deciding based on the apparent “RESTfulness” of the method – for instance, an API doesn’t have to use PUT to be RESTful.

First and foremost we should think in terms of system properties and use HTTP accordingly. That implies:

  • Not violating its rules – what can go wrong if I choose PUT for this request?
  • Taking advantage of its benefits – what do I lose if I don’t choose PUT for this request?

Hope this helps.

Health check APIs, 500 status codes and media types

A status or health check resource (or endpoint, to use the more popular terminology) is a common way for a system to provide an aggregated representation of its operational status. This status representation typically includes a list of the individual system components or health check points and their individual statuses (e.g. database connectivity, memory usage threshold, deadlocked threads).

For instance, the popular Dropwizard Java framework already provides an out-of-the-box health check resource, located by default on the /healthcheck URI of the administration port, for this purpose.
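
As a sketch of how an individual check is typically added in Dropwizard (the HealthCheck base class, the Result factory methods and the healthChecks() registration are part of the framework; the DatabaseHealthCheck class and the use of a DataSource are my own illustration):

    import com.codahale.metrics.health.HealthCheck;

    import javax.sql.DataSource;
    import java.sql.Connection;

    // Reports whether a database connection can be obtained and validated.
    public class DatabaseHealthCheck extends HealthCheck {
        private final DataSource dataSource;

        public DatabaseHealthCheck(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        @Override
        protected Result check() throws Exception {
            try (Connection connection = dataSource.getConnection()) {
                return connection.isValid(2)
                        ? Result.healthy()
                        : Result.unhealthy("database connection is not valid");
            }
        }
    }

    // Registered in the application's run method, e.g.:
    // environment.healthChecks().register("database", new DatabaseHealthCheck(dataSource));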

The following is an example of such representation, defined by a JSON object containing a field for each health check verification.
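
(The check names below are illustrative; the shape mimics Dropwizard’s output, with one JSON field per registered check.)

    {
      "database": { "healthy": true },
      "deadlocks": { "healthy": true },
      "memory-usage": { "healthy": false, "message": "used memory above the 90% threshold" }
    }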


Apparently, it is also common practice for these resources to return a 500 status code to a GET request if any of the internal components reports a problem. For instance, the Dropwizard documentation states

If all health checks report success, a 200 OK is returned. If any fail, a 500 Internal Server Error is returned with the error messages and exception stack traces (if an exception was thrown).

In my opinion, this practice goes against the HTTP status code semantics, because the server was indeed capable of processing the request and producing a valid response with a correct resource state representation, that is, a correct representation of the system status. The fact that this status includes information about an error does not change that.

So, why is this incorrect practice so common? My conjecture is that there are two reasons for it.

  • First, an incomplete knowledge of the HTTP status code semantics, which may induce the following reasoning: if the response reports an error, then a 500 must be used.
  • Second, and perhaps more importantly, because this practice really comes in handy when using external monitoring systems (e.g. Nagios) to periodically check these statuses. Since these monitoring systems do not commonly understand the health check representation, namely because each API or framework uses a different one, the easiest solution is to rely solely on the status code: 200 if everything is apparently working properly, 500 if something is not OK.

Does this difference between a 200 and a 500 matter, or are we just being pedantic here? Well, I do think it really matters: by returning a 500 status code on a correctly handled request, the status resource is hiding errors in its own behaviour. For instance, let’s consider the common scenario where the status resource is implemented by a third-party provider. A failure of this provider will be indistinguishable from a failure of the system being checked, because a 500 will be returned in both cases.

This example shows the consequences of the lack of effort in designing and standardizing media types. The availability of a standard media type would allow a many-to-many relationship between monitoring systems and health check resources:

  • A health check resource could easily be monitored/queried by any monitoring system.
  • A monitoring system could easily inspect multiple health check resources, implemented over different technologies.



Also, by using a media type, the monitoring result could be much richer than “ok” vs. “not ok”.

To conclude with a call to action, we really need to create a media type to represent health check or status outcomes, possibly based on an already existing media type (a sketch follows this list):

  • E.g. building upon “application/problem+json” (RFC 7807), extended to represent multiple problem statuses.
  • E.g. building upon the “application/status+json” media type proposal.
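
As a rough sketch of what such a representation could look like (the "checks" member and the individual property names are made-up extensions; "type" and "title" come from the RFC 7807 vocabulary):

    {
      "type": "https://example.com/probs/unhealthy",
      "title": "One or more health checks are failing",
      "checks": [
        { "name": "database", "healthy": true },
        { "name": "memory-usage", "healthy": false, "detail": "used memory above the 90% threshold" }
      ]
    }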

Comments are welcome.




On contracts and HTTP APIs

Reading the Twitter conversation started by this tweet

made me put into written words some of the ideas that I have about HTTP APIs, contracts and “out-of-band” information.
Since it’s vacation time, I’ll be brief and incomplete.

  • On any interface, it is impossible to avoid having contracts (i.e. shared “out-of-band” information) between provider and consumer. In an HTTP API, the syntax and semantics of HTTP itself are an example of this shared information. If JSON is used as the base representation format, then its syntax and semantics rules are another example of shared “out-of-band” information.
  • However, not all contracts are equal in the generality, flexibility and evolvability they allow. Having the contract include a fixed resource URI is very different from having the contract define a link relation. The former prohibits any change to the URI structure (e.g. host name, HTTP vs HTTPS, embedded information), while the latter enables it (see the sketch after this list). Therefore, designing the contract is a very important task when creating HTTP APIs. And since the transfer contract is already rather well defined by HTTP, most of the design emphasis should be on the representation contract, including the hypermedia components.
  • Also, not all contracts have the same cost to implement (e.g. having hardcoded URIs is probably simpler than having to find links in representations), so (as usual) trade-offs have to be taken into account.
  • When implementing HTTP APIs, it is also very important to have the contract-related areas clearly identified. For me, this typically involves being able to easily answer questions such as: will I be breaking the contract if
    • I change this property name on this model?
    • I add a new property to this model?
    • I change the routing rules (e.g. adding a new path segment)?
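
A small sketch of the fixed-URI vs. link-relation difference discussed above (representation shape and names are made up, loosely following the HAL style): with a fixed-URI contract the client builds the URI itself, while with a link-relation contract it only needs to know the relation name and finds the URI in the representation.

    Fixed-URI contract: the client hardcodes
        GET https://api.example.com/orders/123/payments

    Link-relation contract: the client follows the "payments" relation,
    wherever the server decides to place it:

    {
      "id": 123,
      "_links": {
        "payments": { "href": "https://api.example.com/v2/orders/123/payments" }
      }
    }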

Hope this helps.
Looking forward to feedback.


Focus on the representation semantics, leave the transfer semantics to HTTP

A couple of days ago I was reading the latest version of the OAuth 2.0 Authorization Server Metadata document and one sentence caught my eye. In section 3.2, the document states

A successful response MUST use the 200 OK HTTP status code and return a JSON object using the “application/json” content type (…)

My first reaction was thinking that this specification was being redundant: of course a 200 OK HTTP status should be returned on a successful response. However, that “MUST” in the text made me think: is a 200 really the only acceptable response status code for a successful response? In my opinion, the answer is no.

For instance, if caching and ETags are being used, the client can send a conditional GET request (see Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests) using the If-None-Match header, for which a 304 (Not Modified) status code is perfectly acceptable. Another example is if the metadata location changes and the server responds with a 301 (Moved Permanently) or a 302 (Found) status code. Does that mean the request was unsuccessful? In my opinion, no. It just means that the request should be followed by a subsequent request to another location.
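
A sketch of the conditional GET case (header values are made up; the metadata path follows the well-known URI proposed by the metadata document): the revalidation request succeeds even though its status code is not 200.

    GET /.well-known/oauth-authorization-server HTTP/1.1
    Host: as.example.com

    HTTP/1.1 200 OK
    Content-Type: application/json
    ETag: "abc123"

    { ...metadata... }

    (later, revalidating the cached representation)

    GET /.well-known/oauth-authorization-server HTTP/1.1
    Host: as.example.com
    If-None-Match: "abc123"

    HTTP/1.1 304 Not Modified
    ETag: "abc123"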

So, why does this little observation deserve a blog post?
Well, mainly because it reflects two common tendencies when designing HTTP APIs (or HTTP interfaces):

  • First, the tendency to redefine transfer semantics that are already defined by HTTP.
  • Secondly, a very simplistic view of HTTP, ignoring parts such as caching and optimistic concurrency.

The HTTP specification already defines a quite rich set of mechanisms for representation transfer, and HTTP-related specifications should take advantage of that. What HTTP does not define is the semantics of the representation itself. That should be the focus of specifications such as the OAuth 2.0 Authorization Server Metadata.

When defining HTTP APIs, focus on the representation semantics. The transfer semantics is already defined by the HTTP protocol.


Client-side development on OS X using Windows hosted HTTP Web APIs

In a recent post I described my Android development environment, based on an OS X host, the Genymotion Android emulator, and a Windows VM to run the back-end HTTP APIs.
In this post I’ll describe a similar environment but now for browser-side applications, once again using Windows hosted HTTP APIs.

Recently I had to do some prototyping involving browser-based applications, using ES6 and React, that interact with IdentityServer3 and an HTTP API.
Both the IdentityServer3 server and the ASP.NET HTTP APIs are running on a Windows VM; however, I prefer to use the host OS X environment for the client-side development (node, npm, webpack, babel, …).
Another requirement is that the server side uses HTTPS and multiple host names (e.g. subdomains of example.com), as described in this previous post.

The solution that I ended up using for this environment is the following:

  • On the Windows VM side I have Fiddler running on port 8888 with “Allow remote computer to connect” enabled. This means that Fiddler will act as a proxy even for requests originating from outside the Windows VM.
  • On the OS X host I launch Chrome with open -a “/Applications/Google” --args --proxy-server= --proxy-bypass-list=localhost, where the proxy-server value is the Windows VM address and the Fiddler port (see the command sketch below). To automate this procedure I use the Automator tool to create a shell-script-based workflow.
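
For reference, a sketch of that launch command (the VM address 192.168.56.101 is a made-up placeholder; use the actual Windows VM IP and the Fiddler port):

    open -a "Google Chrome" --args \
      --proxy-server=192.168.56.101:8888 \
      --proxy-bypass-list=localhost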

The end result, depicted in the following diagram, is that all requests (except those for localhost) will be forwarded to the Fiddler instance running on the Windows VM, which will use the Windows hosts file to direct each request to the corresponding IIS site.

As a bonus, I also have full visibility on the HTTP messages.

And that’s it. I hope it helps.

Using multiple IIS server certificates on Windows 7

Nowadays I do most of my Windows development on a Windows 7 VM running on OS X (Windows 8 and Windows Server 2012 left some scars, so I’m very reluctant to move to Windows 10). On this development environment I like to mimic some production environment characteristics, namely:

  • Using IIS based hosting
  • Having each site using different host names
  • Using HTTPS

For the site names I typically use subdomains of example.com, which is reserved by IANA for documentation purposes (see RFC 6761). I associate these names with local addresses via the hosts file.

For generating the server certificates I use makecert and the scripts published in Appendix G of Designing Evolvable Web APIs with ASP.NET.

However, having multiple sites using distinct certificates hosted on the same IP address and port presents some challenges. This is because IIS/HTTP.SYS uses the Host header to demultiplex the incoming requests to the different sites bound to the same IP and port.
Yet, when using TLS, the server certificate must be provided during the TLS handshake, well before the HTTP request, and therefore the Host header, is received. Since at that time HTTP.SYS does not know the target site, it also cannot select the appropriate certificate.

Server Name Indication (SNI) is a TLS extension (see RFC 3546) that addresses this issue, by letting the client send the host name in the TLS handshake, allowing the server to identify the target site and use the corresponding certificate.

Unfortunately, HTTP.SYS on Windows 7 does not support SNI (that’s what I get for using 2009 operating systems). To circumvent this I took advantage of the fact that there are more loopback addresses besides 127.0.0.1 (the whole 127.0.0.0/8 block is reserved for loopback). So, what I do is use a different loopback IP address for each site on my machine, as illustrated by the following excerpt from my hosts file.
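
The excerpt below is illustrative: the names are example.com subdomains and any addresses from the 127.0.0.0/8 loopback block can be used.

    127.0.0.2    id.example.com
    127.0.0.3    api.example.com
    127.0.0.4    www.example.com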

When I configure the HTTPS IIS bindings, I explicitly set the listening IP address of each site to one of these values, which allows me to use a different certificate per site.

And that’s it. Hope it helps.

The OpenID Connect Cast of Characters


The OpenID Connect protocol provides support for both delegated authorization and federated authentication, unifying features that traditionally were provided by distinct protocols. As a consequence, the OpenID Connect protocol parties play multiple roles at the same time, which can sometimes be hard to grasp. This post aims to clarify this, describing how the OpenID Connect parties relate to each other and to the equivalent parties in previous protocols, namely OAuth 2.0.

OAuth 2.0

The OAuth 2.0 authorization framework introduced a new set of characters into the distributed access control story.


  • The User (aka Resource Owner) is a human with the capability to authorize access to a set of protected resources (e.g. because the user owns those resources).
  • The Resource Server is the HTTP server exposing access to the protected resources via an HTTP API. This access is dependent on the presence and validation of access tokens in the HTTP request.
  • The Client Application is an HTTP client that accesses user resources on the Resource Server. To perform these accesses, the client application needs to obtain access tokens issued by the Authorization Server.
  • The Authorization Server is the party issuing the access tokens used by the Client Application in its requests to the Resource Server.
  • Access Tokens are strings created by the Authorization Server and targeted to the Resource Server. They are opaque to the Client Application, which just obtains them from the Authorization Server and uses them on the Resource Server without any further processing.

To make things a little bit more concrete, let’s look at an example:

  • The User is Alice and the protected resources are her repositories at GitHub.
  • The Resource Server is GitHub’s API.
  • The Client Application is a third-party application, such as Huboard or Travis CI, that needs to access Alice’s repositories (a request sketch follows this list).
  • The Authorization Server is also GitHub, providing the OAuth 2.0 protocol “endpoints” for the client application to obtain the access tokens.
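
Continuing this example, a request from the Client Application to the Resource Server might look like the following sketch (the token value is made up; the Authorization header uses the bearer scheme from RFC 6750):

    GET /user/repos HTTP/1.1
    Host: api.github.com
    Authorization: Bearer 2YotnFZFEjr1zCsicMWpAA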

OAuth 2.0 models the Resource Server and the Authorization Server as two distinct parties; however, they can be run by the same organization (GitHub, in the previous example).


An important characteristic to emphasise is that the access token does not directly provide any information about the User to the Client Application – it simply provides access to a set of protected resources. The fact that some of these protected resources may be used to provide information about the User’s identity is out of the scope of OAuth 2.0.

Delegated Authentication and Identity Federation

However, delegated authentication and identity federation protocols, such as the SAML protocols or the WS-Federation protocol, use different terminology.


  • The Relying Party (or Service Provider in the SAML protocol terminology) is typically a Web application that delegates user authentication to an external Identity Provider.
  • The Identity Provider is the entity authenticating the user and communicating her identity claims to the Relying Party.
  • The identity claims are communicated between these two parties via identity tokens, which are protected containers for identity claims:
    • The Identity Provider creates the identity token.
    • The Relying Party consumes the identity token by validating it and using the contained identity claims.

Sometimes the same entity can play both roles. For instance, an Identity Provider can re-delegate the authentication process to another Identity Provider:

  • An Organisational Web application (e.g. order management) delegates the user authentication process to the Organisational Identity Provider.
  • However, this Organisational Identity Provider re-delegates user authentication to a Partner Identity Provider.
  • In this case, the Organisational Identity Provider is simultaneously
    • A Relying Party for the authentication made by the Partner Identity Provider.
    • An Identity Provider, providing identity claims to the Organisational Web Application.


In these protocols, the main goal of the identity token is to provide identity information about the User to the Relying Party. Notably, the identity token is not meant to provide access to a set of protected resources. This characteristic sharply contrasts with OAuth 2.0 access tokens.

OpenID Connect

The OpenID Connect protocol is “a simple identity layer on top of the OAuth 2.0 protocol”, providing delegated authorisation as well as authentication delegation and identity federation. It unifies in a single protocol the functionalities that previously were provided by distinct protocols. As a consequence, there are now multiple parties that play more than one role:

  • The OpenID Provider (new term introduced by the OpenID Connect specification) is an Identity Provider and an Authorization Server, simultaneously issuing identity tokens and access tokens.
  • The Relying Party is also a Client Application. It receives both identity tokens and access tokens from the OpenID Provider. However, there is a significant difference in how these tokens are used by this party:
    • The identity tokens are consumed by the Relying Party/Client Application to obtain the user’s identity.
    • The access tokens are not directly consumed by the Relying Party. Instead, they are attached to requests made to the Resource Server, without ever being opened at the Relying Party (see the sketch below).
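
A sketch to make this contrast concrete (all values are made up): the identity token payload below is validated and read by the Relying Party, while the access token is only forwarded.

    Decoded identity token payload, consumed by the Relying Party:

    {
      "iss": "https://op.example.com",
      "sub": "alice",
      "aud": "client-app-123",
      "iat": 1700000000,
      "exp": 1700003600
    }

    Access token, merely attached to requests sent to the Resource Server:

    GET /resource HTTP/1.1
    Host: api.example.com
    Authorization: Bearer 2YotnFZFEjr1zCsicMWpAA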


I hope this post shed some light on the dual nature of the parties in the OpenID Connect protocol.

Please feel free to use the comments section to ask any questions.