Accessing the HTTP Context on ASP.NET Core

TL;DR

On ASP.NET Core, the request context can be accessed via the new IHttpContextAccessor interface, which exposes this information through its HttpContext property. An IHttpContextAccessor instance is obtained via dependency injection or directly from the service locator. However, it requires an explicit service collection registration, mapping the IHttpContextAccessor interface to the HttpContextAccessor concrete class, with singleton scope.

Not so short version

System.Web

On classical ASP.NET, the current HTTP context, containing both request and response information, can be accessed anywhere via the omnipresent System.Web.HttpContext.Current static property. Internally, this property uses information stored in the CallContext object representing the current call flow. This CallContext is preserved even when the same flow crosses multiple threads, so it can handle async methods.
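
As a minimal illustration (the AuditService class and SomeIoOperationAsync method are hypothetical, just to show the pattern), any component can reach the current context through the static property, even across awaits:

using System.Threading.Tasks;
using System.Web;

public class AuditService
{
    public async Task LogCurrentUserAsync()
    {
        // The static property exposes the context of the current call flow.
        var userName = HttpContext.Current?.User?.Identity?.Name;

        await SomeIoOperationAsync(userName);

        // After the await the flow may resume on a different thread, but the
        // context still flows with it (on ASP.NET 4.5+ with async/await).
        var context = HttpContext.Current;
    }

    // Hypothetical I/O operation, used here only as a placeholder.
    private Task SomeIoOperationAsync(string userName) => Task.CompletedTask;
}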

ASP.NET Web API

On ASP.NET Web API, obtaining the current HTTP context without having to flow it explicitly on every call is typically achieved with the help of the dependency injection container.
For instance, Autofac provides the RegisterHttpRequestMessage extension method on the ContainerBuilder, which allows classes to have HttpRequestMessage constructor dependencies.
This extension method configures a delegating handler that registers the input HttpRequestMessage instance into the current lifetime scope.
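
For illustration, a possible configuration might look like the following sketch (the RequestInfoProvider class is hypothetical; RegisterHttpRequestMessage, RegisterApiControllers and AutofacWebApiDependencyResolver come from the Autofac.Integration.WebApi package):

using System.Net.Http;
using System.Reflection;
using System.Web.Http;
using Autofac;
using Autofac.Integration.WebApi;

public static class AutofacConfig
{
    public static void Register(HttpConfiguration config)
    {
        var builder = new ContainerBuilder();

        // Makes the current HttpRequestMessage resolvable from the
        // request lifetime scope.
        builder.RegisterHttpRequestMessage(config);

        builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
        builder.RegisterType<RequestInfoProvider>().InstancePerRequest();

        config.DependencyResolver =
            new AutofacWebApiDependencyResolver(builder.Build());
    }
}

// Hypothetical per-request component with an HttpRequestMessage dependency.
public class RequestInfoProvider
{
    private readonly HttpRequestMessage _request;

    public RequestInfoProvider(HttpRequestMessage request)
    {
        _request = request;
    }

    public string RequestUri => _request.RequestUri.ToString();
}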

ASP.NET Core

ASP.NET Core uses a different approach. Access to the current context is provided via an IHttpContextAccessor service, containing a single HttpContext property with both a getter and a setter. So, instead of directly injecting the context, the solution is based on injecting an accessor to the context.
This apparently superfluous indirection provides one benefit: the accessor can have singleton scope, meaning that it can be injected into singleton components.
Notice that injecting a per-request dependency, such as the request message, directly into another component is only possible if that component has the same lifetime scope.
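
For instance, a minimal sketch of a singleton component using the accessor could look like this (CurrentUserProvider is a hypothetical class):

using Microsoft.AspNetCore.Http;

// Hypothetical singleton component; only the accessor (not the context)
// is injected, so the singleton lifetime is not a problem.
public class CurrentUserProvider
{
    private readonly IHttpContextAccessor _accessor;

    public CurrentUserProvider(IHttpContextAccessor accessor)
    {
        _accessor = accessor;
    }

    public string GetUserName()
    {
        // The context is fetched at call time, for the current request flow.
        var context = _accessor.HttpContext;
        return context?.User?.Identity?.Name;
    }
}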

In the current ASP.NET Core 1.0.0 implementation, the IHttpContextAccessor service is implemented by the HttpContextAccessor concrete class and must be configured as a singleton.
 

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
}

Notice that this registration is not done by default and must be explicitly performed. If not, any IHttpContextAccessor dependency will result in an activation exception.
On the other hand, no additional configuration is needed to capture the context at the beginning of each request, because this happens automatically.

The following implementation details shed some light on this behavior:

  • Each time a new request starts to be handled, a common IHttpContextFactory reference is used to create the HttpContext. This common reference is obtained by the WebHost during startup and used for all requests.

  • The HttpContextFactory concrete implementation in use is initialized with an optional IHttpContextAccessor implementation. When available, the accessor is assigned each created context. This means that if an accessor is registered in the service collection, it will automatically receive every created context.

  • How can the same accessor instance hold different contexts, one for each call flow? The answer lies in the HttpContextAccessor concrete implementation and its use of AsyncLocal to store the context separately for each logical call flow. It is this characteristic that allows a singleton-scoped accessor to provide request-scoped contexts, as sketched below.
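
The following simplified sketch illustrates the idea (it is not the actual framework source, just an approximation of the technique):

using System.Threading;
using Microsoft.AspNetCore.Http;

// Simplified sketch of the idea behind HttpContextAccessor.
public class AsyncLocalHttpContextAccessor : IHttpContextAccessor
{
    // AsyncLocal<T> gives each logical call flow its own slot, so a single
    // (singleton) accessor instance can expose a different context per request.
    private static readonly AsyncLocal<HttpContext> _current =
        new AsyncLocal<HttpContext>();

    public HttpContext HttpContext
    {
        get { return _current.Value; }
        set { _current.Value = value; }
    }
}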

To conclude:

  • Everywhere the HTTP context is needed, declare an IHttpContextAccessor dependency and use it to fetch the context.

  • Don’t forget to explicitly register the IHttpContextAccessor interface on the service collection, mapping it to the concrete HttpContextAccessor type.

  • Also, don’t forget to make this registration with singleton scope.

Should I PUT or should I POST? (Darling you gotta let me know)

(yes, it doesn’t rhyme, but I couldn’t resist the association)

Selecting the proper method (e.g. GET, POST, PUT, …) to use when designing HTTP-based APIs is typically a subject of much debate, and occasionally some bike-shedding. In this post I briefly present the rules that I normally follow when faced with this design task.

Don’t go against the HTTP specification

First and foremost, make sure the properties of the chosen methods aren’t violated in the scenario under analysis. The typical offender is using GET for an interaction that requests a state change on the server.
This is a problem because GET is required to have the safe property, which RFC 7231 defines as

Request methods are considered “safe” if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource.

Another example is choosing PUT for requests that aren’t idempotent, such as appending an item to a collection.
The idempotent property is defined by RFC 7231 as

A request method is considered “idempotent” if the intended effect on the server of multiple identical requests with that method is the same as the effect for a single such request.

Violating these properties is harmful because there may exist system components whose correct behavior depends on them being true. An example is a crawler program that freely follows all GET links in a document, assuming that no state change will be performed by these requests, and that ends up changing the system state.

Another example is an intermediary (e.g. reverse proxy) that automatically retries any failed PUT request (e.g. timeout), assuming they are idempotent. If the PUT is appending items to a collection (append is not idempotent), and the first PUT request was successfully performed and only the response message was lost, then the retry will end up adding two replicated items to the collection.

This violation can also have security implications. For instance, most server frameworks don’t protect GET requests against CSRF (Cross-Site Request Forgery), because this method is not supposed to change state and reads are already protected by the browser’s same-origin policy.

Take advantage of the method properties

After addressing the correctness concerns, i.e., ensuring requests don’t violate any property of the chosen methods, we can invert our analysis and check whether there is a method that better fits the intended functionality. With correctness ensured, at this stage the main concern becomes optimization.

For instance, if a request defines the complete state for a resource and is idempotent, perhaps a PUT is a better fit than a POST. This is not because a POST would produce incorrect behavior, but because using a PUT may induce better system properties. For instance, an intermediary (e.g. reverse proxy or framework middleware) may automatically retry failed requests, thereby providing some fault recovery.
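
For illustration, the following hypothetical ASP.NET Core controller sketch uses PUT for the full, idempotent replacement of a document’s state and POST for the non-idempotent append of a comment (all names and routes are made up for the example):

using Microsoft.AspNetCore.Mvc;

[Route("documents")]
public class DocumentsController : Controller
{
    // PUT: the request carries the complete resource state and repeating it
    // produces the same final state, so intermediaries may safely retry it.
    [HttpPut("{id}")]
    public IActionResult ReplaceDocument(string id, [FromBody] Document doc)
    {
        // store.Replace(id, doc) — hypothetical storage call
        return NoContent();
    }

    // POST: appending is not idempotent (a retry adds a second entry),
    // so POST is the adequate method here.
    [HttpPost("{id}/comments")]
    public IActionResult AppendComment(string id, [FromBody] Comment comment)
    {
        // store.Append(id, comment) — hypothetical storage call;
        // the location below is equally hypothetical.
        return Created($"documents/{id}/comments/123", comment);
    }
}

public class Document { public string Title { get; set; } public string Body { get; set; } }
public class Comment { public string Text { get; set; } }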

When nothing else fits, use POST

Contrary to some HTTP myths, POST is not intended solely for creating resources. In fact, RFC 7231 states

The POST method requests that the target resource process the representation enclosed in the request according to the resource’s own specific semantics

The “according to the resource’s own specific semantics” part effectively allows us to use POST for requests with any semantics. However, the fact that we can doesn’t mean that we always should. Again, if another method (e.g. GET or PUT) better fits the request’s purpose, not choosing it may mean throwing away useful properties, such as caching or fault recovery.

Does my API look RESTful in this method?

One thing that I always avoid is deciding based on the apparent “RESTfulness” of the method – for instance, an API doesn’t have to use PUT to be RESTful.

First and foremost we should think in terms of system properties and use HTTP accordingly. That implies:

  • Not violating its rules – what can go wrong if I choose PUT for this request?
  • Taking advantage of its benefits – what do I lose if I don’t choose PUT for this request?

Hope this helps.
Cheers.

Health check APIs, 500 status codes and media types

A status or health check resource (or endpoint, to use the more popular terminology) is a common way for a system to provide an aggregated representation of its operational status. This status representation typically includes a list of the individual system components or health check points and their individual statuses (e.g. database connectivity, memory usage threshold, deadlocked threads).

For instance, the popular Dropwizard Java framework already provides an out-of-the-box health check resource, located by default on the /healthcheck URI of the administration port, for this purpose.

The following is an example of such a representation, defined by a JSON object containing a field for each health check verification.

{
    "deadlocks":{
        "healthy":true
    },
    "database":{
        "healthy":true
    }
}

Apparently, it is also a common practice for a GET request to these resources to return a 500 status code if any of the internal components reports a problem. For instance, the Dropwizard documentation states

If all health checks report success, a 200 OK is returned. If any fail, a 500 Internal Server Error is returned with the error messages and exception stack traces (if an exception was thrown).

In my opinion, this practice goes against the HTTP status code semantics, because the server was indeed capable of processing the request and producing a valid response with a correct resource state representation, that is, a correct representation of the system status. The fact that this status includes error information does not change that.

So, why is this incorrect practice used so often? I conjecture two reasons for it.

  • First, an incomplete knowledge of the HTTP status code semantics that may induce the following reasoning: if the response contains an error then a 500 must be used.
  • Second, and perhaps more important, because this practice really comes in handy when using external monitoring systems (e.g. Nagios) to periodically check these statuses. Since these monitoring systems do not commonly understand the health check representation, namely because each API or framework uses a different one, the easiest solution is to rely solely on the status code: 200 if everything is apparently working properly, 500 if something is not ok.

Does this difference between a 200 and a 500 matter, or are we just being pedantic here? Well, I do think it really matters: by returning a 500 status code on a correctly handled request, the status resource is hiding errors in its own behaviour. For instance, let’s consider the common scenario where the status resource is implemented by a third-party provider. A failure of this provider will be indistinguishable from a failure of the system under checking, because a 500 will be returned in both cases.
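
As a minimal sketch of the alternative I’m advocating (the IHealthCheck interface and HealthCheckController class below are hypothetical), the status code reports the handling of the request, while the individual component outcomes go in the representation:

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

// Hypothetical abstraction for an individual health check point.
public interface IHealthCheck
{
    string Name { get; }
    bool IsHealthy();
}

[Route("healthcheck")]
public class HealthCheckController : Controller
{
    private readonly IEnumerable<IHealthCheck> _checks;

    public HealthCheckController(IEnumerable<IHealthCheck> checks)
    {
        _checks = checks;
    }

    [HttpGet]
    public IActionResult Get()
    {
        // The request was handled correctly, so a 200 is returned even if
        // some component is unhealthy; the details go in the body.
        var body = _checks.ToDictionary(
            c => c.Name,
            c => new { healthy = c.IsHealthy() });
        return Ok(body);
    }
}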

This example shows the consequences of the lack of effort put into designing and standardizing media types. The availability of a standard media type would allow a many-to-many relation between monitoring systems and health check resources.

  • A health check resource could easily be monitored/queried by any monitoring system.
  • A monitoring system could easily inspect multiple health check resources, implemented over different technologies.

 


Also, by using a media type, the monitoring result could be much richer than “ok” vs. “not ok”.

To conclude with a call to action: we really need to create a media type to represent health check or status outcomes, possibly based on an already existing media type:

  • E.g. building upon the “application/problem+json” media type (RFC 7807), extended to represent multiple problem statuses (e.g. example).
  • E.g. building upon the “application/status+json” media type proposal.

Comments are welcome.

 

 

 

On contracts and HTTP APIs

Reading the Twitter conversation started by this tweet made me put into written words some of the ideas that I have about HTTP APIs, contracts and “out-of-band” information.
Since it’s vacation time, I’ll be brief and incomplete.

  • On any interface, it is impossible to avoid having contracts (i.e. shared “out-of-band” information) between provider and consumer. On an HTTP API, the syntax and semantics of HTTP itself are an example of this shared information. If JSON is used as a base for the representation format, then its syntax and semantics rules are another example of shared “out-of-band” information.
  • However, not all contracts are equal in the generality, flexibility and evolvability they allow. Having the contract include a fixed resource URI is very different from having the contract define a link relation (see the sketch after this list). The former prohibits any change to the URI structure (e.g. host name, HTTP vs. HTTPS, embedded information), while the latter enables it. Therefore, designing the contract is a very important task when creating HTTP APIs. And since the transfer contract is already rather well defined by HTTP, most of the design emphasis should be on the representation contract, including the hypermedia components.
  • Also, not all contracts have the same cost to implement (e.g. having hardcoded URIs is probably simpler than having to find links in representations), so (as usual) trade-offs have to be taken into account.
  • When implementing HTTP APIs, it is also very important to have the contract-related areas clearly identified. For me, this typically involves being able to easily answer questions such as: will I be breaking the contract if
    • I change this property name on this model?
    • I add a new property to this model?
    • I change the routing rules (e.g. adding a new path segment)?
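
For instance, the following sketch contrasts the two kinds of contract on the client side (the entry point URI, the “_links” layout and the “orders” relation name are all hypothetical; JSON parsing uses Newtonsoft.Json):

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public class OrdersClient
{
    private readonly HttpClient _client = new HttpClient();

    // Contract based on a fixed URI: any change to the URI structure
    // breaks the client.
    public Task<string> GetOrdersHardcodedAsync()
        => _client.GetStringAsync("https://api.example.com/v1/orders");

    // Contract based on a link relation: the entry point representation is
    // fetched and the "orders" link is followed, so the URI itself can
    // change without breaking the client.
    public async Task<string> GetOrdersViaLinkAsync()
    {
        var home = JObject.Parse(
            await _client.GetStringAsync("https://api.example.com/"));
        var ordersUri = (string)home["_links"]["orders"]["href"];
        return await _client.GetStringAsync(ordersUri);
    }
}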

Hope this helps.
Looking forward to feedback.

 

Focus on the representation semantics, leave the transfer semantics to HTTP

A couple of days ago I was reading the latest version of the OAuth 2.0 Authorization Server Metadata document and one sentence caught my eye. In section 3.2, the document states

A successful response MUST use the 200 OK HTTP status code and return a JSON object using the “application/json” content type (…)

My first reaction was thinking that this specification was being redundant: of course a 200 OK HTTP status should be returned on a successful response. However, that “MUST” in the text made me think: is a 200 really the only acceptable response status code for a successful response? In my opinion, the answer is no.

For instance, if caching and ETags are being used, the client can send a conditional GET request (see Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests) using the If-None-Match header, for which a 304 (Not Modified) status code is perfectly acceptable. Another example is if the metadata location changes and the server responds with a 301 (Moved Permanently) or a 302 (Found) status code. Does that mean the request was unsuccessful? In my opinion, no. It just means that the request should be followed by a subsequent request to another location.
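
As an illustration, here is a minimal sketch of a client performing such a conditional GET with HttpClient (the metadata URI is hypothetical and the server is assumed to emit an ETag); the 304 response is still a successful outcome:

using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public class MetadataClient
{
    private readonly HttpClient _client = new HttpClient();
    private EntityTagHeaderValue _cachedEtag;
    private string _cachedBody;

    public async Task<string> GetMetadataAsync()
    {
        var request = new HttpRequestMessage(
            HttpMethod.Get,
            "https://as.example.com/.well-known/oauth-authorization-server");

        // Make the GET conditional when a cached representation exists.
        if (_cachedEtag != null)
        {
            request.Headers.IfNoneMatch.Add(_cachedEtag);
        }

        var response = await _client.SendAsync(request);

        if (response.StatusCode == HttpStatusCode.NotModified)
        {
            // 304: still a successful interaction, served from the local cache.
            return _cachedBody;
        }

        response.EnsureSuccessStatusCode();
        _cachedEtag = response.Headers.ETag;
        _cachedBody = await response.Content.ReadAsStringAsync();
        return _cachedBody;
    }
}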

So, why does this little observation deserve a blog post?
Well, mainly because it reflects two common tendencies when designing HTTP APIs (or HTTP interfaces):

  • First, the tendency to redefine transfer semantics that are already defined by HTTP.
  • Secondly, a very simplistic view of HTTP, ignoring parts such as caching and optimistic concurrency.

The HTTP specification already defines a quite rich set of mechanisms for representation transfer, and HTTP related specifications should take advantage of that. What HTTP does not define is the semantics of the representation itself. That should be the focus of specifications such as the OAuth 2.0 Authorization Server Metadata.

When defining HTTP APIs, focus on the representation semantics. The transfer semantics is already defined by the HTTP protocol.

 

Client-side development on OS X using Windows hosted HTTP Web APIs

In a recent post I described my Android development environment, based on an OS X host, the Genymotion Android emulator, and a Windows VM to run the back-end HTTP APIs.
In this post I’ll describe a similar environment but now for browser-side applications, once again using Windows hosted HTTP APIs.

Recently I had to do some prototyping involving browser-based applications, using ES6 and React, that interact with IdentityServer3 and an HTTP API.
Both the IdentityServer3 server and the ASP.NET HTTP APIs run on a Windows VM; however, I prefer to use the host OS X environment for the client-side development (node, npm, webpack, babel, …).
Another requirement is that the server side uses HTTPS and multiple host names (e.g. id.example.com, app1.example.com, app2.example.com), as described in this previous post.

The solution that I ended up using for this environment is the following:

  • On the Windows VM side I have Fiddler running on port 8888 with “Allow remote computer to connect” enabled. This means that Fiddler will act as a proxy even for requests originating from outside the Windows VM.
  • On the OS X host I launch Chrome with open -a "/Applications/Google Chrome.app" --args --proxy-server=10.211.55.3:8888 --proxy-bypass-list=localhost, where 10.211.55.3 is the Windows VM address. To automate this procedure I use the Automator tool to create a shell-script-based workflow.

The end result, depicted in the following diagram, is that all requests (except for localhost) will be forwarded to the Fiddler instance running on the Windows VM, which will use the Windows hosts file to direct the request to the multiple IIS sites.

As a bonus, I also get full visibility into the HTTP messages.

And that’s it. I hope it helps.

Using multiple IIS server certificates on Windows 7

Nowadays I do most of my Windows development on a Windows 7 VM running on OS X (macOS); Windows 8 and Windows Server 2012 left some scars, so I’m very reluctant to move to Windows 10. In this development environment I like to mimic some production environment characteristics, namely:

  • Using IIS based hosting
  • Having each site using different host names
  • Using HTTPS

For the site names I typically use example.com subdomains (e.g. id.example.com, app1.example.com, app2.example.com), which are reserved by IANA for documentation purposes (see RFC 6761). I associate these names with local addresses via the hosts file.

For generating the server certificates I use makecert and the scripts published in Appendix G of the Designing Evolvable Web APIs with ASP.NET book.

However, having multiple sites with distinct certificates hosted on the same IP address and port presents some challenges. This is because IIS/HTTP.SYS uses the Host header to demultiplex the incoming requests to the different sites bound to the same IP and port.
However, when using TLS, the server certificate must be provided during the TLS handshake, well before the HTTP request (and its Host header) is received. Since at that time HTTP.SYS does not yet know the target site, it cannot select the appropriate certificate.

Server Name Indication (SNI) is a TLS extension (see RFC 3546) that addresses this issue by letting the client send the host name in the TLS handshake, allowing the server to identify the target site and use the corresponding certificate.

Unfortunately, HTTP.SYS on Windows 7 does not support SNI (that’s what I get for using 2009 operating systems). To circumvent this, I took advantage of the fact that there are loopback addresses other than 127.0.0.1 (the whole 127.0.0.0/8 block is loopback). So, what I do is use a different loopback IP address for each site on my machine, as illustrated by the following excerpt from my hosts file:

127.0.0.2 app1.example.com
127.0.0.3 app2.example.com
127.0.0.4 id.example.com

When I configure the HTTPS IIS bindings, I explicitly set the listening IP address of each site to the corresponding loopback value, which allows me to use a different certificate per site.

And that’s it. Hope it helps.