Idempotency, APIs, and Retries — Oh My!

Written by rogerjin12 | Published 2017/03/30
Tech Story Tags: api | web-development | dotnet | csharp | architecture


In today’s world, it’s common for applications to be distributed across many networked components. Whether these components are microservices within your own stack or third-party SaaS APIs, the components that depend on them need to be able to talk to them. This is most commonly done with an API client, often just a simple class that provides easy-to-use methods that wrap HTTP requests.

An example client

ButterCMS is a good example of a SaaS API with associated clients. ButterCMS is a “Content Management System as a service” — the database, logic, and administrative dashboard of a CMS is provided as a hosted service and its content is made available through a web API. With Butter’s API-first CMS and content API you can retrieve the content through its API clients and plug it into your website. In C#, the API methods can be called through a single class.

Let’s take a look at the structure of the class. It has a number of public methods that send API requests through the private Execute(string queryString) and ExecuteAsync(string queryString) methods. We'll just deal with the Execute method and its synchronous callers for simplicity's sake. Here's one of the public methods, used for retrieving a single blog post:

private string authTokenParam; // Authorization token query parameter, set in the ButterCMSClient constructor
private const string retrievePostEndpoint = "v2/posts/{0}"; // URL path for a single blog post on the API

// ... Code excluded for brevity ...

public PostResponse RetrievePost(string postSlug)
{
    var queryString = new StringBuilder();
    queryString.Append(string.Format(retrievePostEndpoint, postSlug));
    queryString.Append("?");
    queryString.Append(authTokenParam);
    var postResponse = JsonConvert.DeserializeObject<PostResponse>(Execute(queryString.ToString()), serializerSettings);
    return postResponse;
}

Nice and simple. As you can see, it takes a postSlug parameter (which is just the unique URL segment that identifies the blog post we want to load), assembles it into the post's URL on the ButterCMS server, and passes it to the Execute(string queryString) method, which fetches the JSON response and returns it for marshaling into our PostResponse class. We can then take that data and render it in a page template on our public website.
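For context, calling code might look roughly like this. The constructor signature and the shape of PostResponse shown here are assumptions for illustration rather than the documented ButterCMS API:

// Rough usage sketch -- constructor parameters and response shape are assumed, not documented here.
var client = new ButterCMSClient("your_api_token");
PostResponse post = client.RetrievePost("my-first-post");
// ... pass `post` to a page template for rendering ...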

Let’s dive a little deeper into what happens inside the Execute method:

private HttpClient httpClient; // System.Net.Http.HttpClient instance, set in the ButterCMSClient constructor

// ... Code excluded for brevity ...

private string Execute(string queryString)
{
    try
    {
        var response = httpClient.GetAsync(queryString).Result;
        if (response.IsSuccessStatusCode)
        {
            return response.Content.ReadAsStringAsync().Result;
        }
        if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
        {
            throw new InvalidKeyException("No valid API key provided.");
        }
        if (response.StatusCode >= System.Net.HttpStatusCode.InternalServerError)
        {
            throw new Exception("There is a problem with the ButterCMS service");
        }
    }
    catch (TaskCanceledException taskException)
    {
        if (!taskException.CancellationToken.IsCancellationRequested)
        {
            throw new Exception("Timeout expired trying to reach the ButterCMS service.");
        }
        throw;
    }
    catch (HttpRequestException)
    {
        throw;
    }
    catch (Exception)
    {
        throw;
    }
    return string.Empty;
}

This method simply makes an HTTP GET request to the given URL and returns the response body as a string, which can be parsed by the caller as JSON, XML, etc. It also has some built-in error checking that throws exceptions for bad responses, which prevents callers from accidentally trying to parse error responses as legitimate data.

This API client gets the job done, but you know what would be nice to have? The ability to automatically retry failed requests. Requests may occasionally fail because of intermittent connection problems. Suppose the connection is broken as we’re making the request, or the server receives the request but the connection is dropped before it finishes sending a response. These are likely to be intermittent problems that can be resolved by simply re-sending the request. It would be a shame to show an error page to the user when we could have just tried again and shown them the content they wanted.

Idempotency

Let’s try to implement auto-retry functionality in this API client. It’s important to note here that this is relatively simple with a client like this one — all we have to do is catch any exceptions thrown by the Execute method and call it again with the same parameters up to a limited number of tries (we don't want the retries to go on forever if there's a persistent problem). This is because this client only makes GET requests. GET requests, if implemented and used correctly, have an important property called idempotency.

Idempotency and Safety

Idempotency sounds like a fancy word, but it’s a simple concept — it’s the ability to perform the same action multiple times while only producing “side effects” on the server once. Side effects are defined as changes to the persistent data on the server. Properly implemented GET requests are only used to retrieve data from the server, never to make changes to it, so they're naturally idempotent. In fact, they fall into a special category called safe methods: methods that never produce any side effects at all, and are therefore trivially idempotent. The HTTP OPTIONS and HEAD verbs also share this property.

Idempotency and Unsafety

There are two HTTP verbs that are idempotent but unsafe: PUT and DELETE. They produce side effects the first time they succeed, but repeating the same request does nothing further. For example, if I call DELETE on the resource at myrestapi.com/resources/{id}, the resource at that URL will be deleted. If I call it again, nothing happens because that resource no longer exists. Same thing with PUT — call it once to replace a resource with some new data, then call it again and nothing happens because now you're "updating" it to the same data that's already there.
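To make the distinction concrete, here is a small, self-contained sketch (hypothetical operations, not part of the ButterCMS client) that models the three cases in plain C#:

// Illustrative only -- hypothetical operations, not ButterCMS API calls.
using System.Collections.Generic;

static class IdempotencyDemo
{
    static readonly Dictionary<string, string> Store = new Dictionary<string, string>();
    static readonly List<decimal> Charges = new List<decimal>();

    // Safe and idempotent: reading never changes server state (like HTTP GET).
    public static string Read(string key) =>
        Store.TryGetValue(key, out var value) ? value : null;

    // Idempotent but unsafe: the first call changes state; identical repeats change nothing further.
    public static void Put(string key, string value) => Store[key] = value;   // like HTTP PUT
    public static void Delete(string key) => Store.Remove(key);               // like HTTP DELETE

    // Neither safe nor idempotent: every call produces a new side effect.
    public static void ChargeCard(decimal amount) => Charges.Add(amount);     // retrying this double-charges!
}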

Now that we understand idempotency, it’s easy to see why a simple retry mechanism isn’t safe for all types of requests. If a non-idempotent request succeeds on the server but the response fails to reach us, a “dumb” retry mechanism would send that request again, and its side effects would happen a second time. That could be disastrous (or at least lead to some angry customers — “DOUBLE CHARGED MY CREDIT CARD, EH?!”).

Since our example API client is effectively read-only (only makes GET requests), we can use a "dumb" retry mechanism that simply re-sends requests until one succeeds or we exceed our maximum allowed number of retries. Constructing a retry mechanism for non-idempotent requests requires cooperation from the server. Namely, the client attaches a unique ID to each request (a GUID/UUID would suffice). When the server processes a request successfully, it saves the ID and a copy of the response it wants to send back. If that response never makes it back to the client, the client will send the request again, reusing the same ID. The server will recognize the ID, skip the actual processing of the request, and just send back the stored response. This makes all requests effectively idempotent from the client's point of view. While not a particularly complicated mechanism to implement on either the client or the server, this article is only an introduction to idempotency and retries, so we'll stick with the simpler case of GET requests and "dumb" retries for our example.
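As a rough sketch of what the client side of that cooperation could look like, something like the method below would work. The "Idempotency-Key" header name is a common convention, but it only helps if the server is built to honor it; nothing here is part of the ButterCMS API, and the method reuses the client's httpClient field:

// Hypothetical client-side sketch of retrying a non-idempotent request.
// The "Idempotency-Key" header is a convention the server must explicitly support.
private string PostWithIdempotencyKey(string url, string jsonBody, int maxTries = 3)
{
    var requestId = Guid.NewGuid().ToString();   // one ID reused for every attempt of this logical request

    for (var attempt = 0; attempt < maxTries; attempt++)
    {
        try
        {
            var request = new HttpRequestMessage(HttpMethod.Post, url)
            {
                Content = new StringContent(jsonBody, Encoding.UTF8, "application/json")
            };
            request.Headers.Add("Idempotency-Key", requestId);

            var response = httpClient.SendAsync(request).Result;
            if (response.IsSuccessStatusCode)
            {
                return response.Content.ReadAsStringAsync().Result;
            }
        }
        catch (HttpRequestException)
        {
            // connection-level failure; fall through and retry with the same Idempotency-Key
        }
    }
    throw new Exception("Request failed after " + maxTries + " attempts.");
}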

Implementing Auto-Retry

Let’s get back to the code. We need to “watch” the Execute method so that we can re-execute it if it throws an exception. This can be done with a simple wrapper method that catches the exceptions. First, let's rename our old Execute method to ExecuteSingle to more accurately express its purpose.

- private string Execute(string queryString)
+ private string ExecuteSingle(string queryString)

Now let’s build our wrapper method. We’ll call it Execute so that our existing public methods will call it instead of the function we just renamed. For now we'll just make it a simple wrapper that doesn't add any functionality:

private string Execute(string queryString)
{
    return ExecuteSingle(queryString);
}

The API client should now function exactly as before, so we really haven’t accomplished anything yet. Let’s start by writing a simple loop to retry the request up to a certain number of times. To “keep the loop going” in the event that ExecuteSingle throws an exception, we need to catch those exceptions inside the loop.

private string Execute(string queryString)
{
    // maxRequestTries is a private class member set to 3 by default,
    // optionally set via a constructor parameter (not shown)
    var remainingTries = maxRequestTries;

    do
    {
        --remainingTries;
        try
        {
            return ExecuteSingle(queryString);
        }
        catch (Exception)
        {
            // swallow the exception and let the loop try again
        }
    }
    while (remainingTries > 0);

    return null; // every attempt failed
}

This code will escape the loop via the return statement if the request is successful. If an exception is thrown by ExecuteSingle it will be swallowed and the loop will continue up to maxRequestTries times. The do { ... } while () syntax ensures that requests will always execute at least once, even if maxRequestTries is misconfigured and set to something like 0 or -10.

Of course, this code has a glaring problem — it swallows all the exceptions. If all the requests fail, it will just return a null string. But how can we handle this? We can't throw the exceptions from inside the catch (Exception) { } block or execution will escape the loop, defeating the purpose of the entire method. We should throw the exceptions after, and only if, all of the requests fail. We can do this by aggregating them in a List<Exception> and throwing an AggregateException at the end of the method.

private string Execute(string queryString)
{
    var remainingTries = maxRequestTries;
    var exceptions = new List<Exception>();

    do
    {
        --remainingTries;
        try
        {
            return ExecuteSingle(queryString);
        }
        catch (Exception e)
        {
            exceptions.Add(e);
        }
    }
    while (remainingTries > 0);

    throw new AggregateException(exceptions);
}

If all the requests fail, this method will now throw an AggregateException containing a list of all the exceptions thrown on each request. If any request succeeds, no exceptions will be thrown and we'll just get our response string. This is definitely sufficient. But let's make it just a little nicer — most repeated failures will be caused by a persistent problem, so each request will throw the exact same exception. If all our requests throw an InvalidKeyException (which happens when our API auth token is invalid), do we really want to throw an AggregateException with, say, 3 identical InvalidKeyExceptions? Wouldn't it be more ergonomic to just throw a single InvalidKeyException?

To do this, we need to "collapse" any duplicates in our exceptions list into a single "representative" exception. We can use LINQ's Distinct method to do this, but it won't collapse the exceptions by default because they're...well...distinct objects, and Distinct will compare them by reference. We can use its overload that accepts a custom IEqualityComparer<T>, which lets us decide which exceptions count as duplicates for our purposes. Here's our implementation:

private class ExceptionEqualityComparer : IEqualityComparer<Exception>
{
    public bool Equals(Exception e1, Exception e2)
    {
        if (e1 == null && e2 == null)
            return true;
        else if (e1 == null || e2 == null)
            return false;
        else if (e1.GetType().Name.Equals(e2.GetType().Name) && e1.Message.Equals(e2.Message))
            return true;
        else
            return false;
    }

    public int GetHashCode(Exception e)
    {
        return (e.GetType().Name + e.Message).GetHashCode();
    }
}

This equality comparer considers two exceptions to be equal if they share the same type and Message property. For our purposes, this is a good enough definition of "duplicates".
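As a quick standalone sanity check (not part of the client), Distinct with this comparer collapses exceptions that share a type and message:

// Standalone demonstration of the comparer -- not part of the ButterCMS client.
var exceptions = new List<Exception>
{
    new InvalidOperationException("No valid API key provided."),
    new InvalidOperationException("No valid API key provided."),
    new TimeoutException("Timeout expired trying to reach the ButterCMS service.")
};

var unique = exceptions.Distinct(new ExceptionEqualityComparer()).ToList();
Console.WriteLine(unique.Count);   // 2 -- the duplicate InvalidOperationExceptions collapse into one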

Now we can collapse the duplicate exceptions thrown by our request attempts:

private string Execute(string queryString)
{
    var remainingTries = maxRequestTries;
    var exceptions = new List<Exception>();

    do
    {
        --remainingTries;
        try
        {
            return ExecuteSingle(queryString);
        }
        catch (Exception e)
        {
            exceptions.Add(e);
        }
    }
    while (remainingTries > 0);

    var uniqueExceptions = exceptions.Distinct(new ExceptionEqualityComparer());

    if (uniqueExceptions.Count() == 1)
        throw uniqueExceptions.First();

    throw new AggregateException("Could not process request", uniqueExceptions);
}

This is a little more ergonomic. In short, we throw only distinct exceptions generated by the request attempts. If there’s only one, either because we only made one attempt or because multiple attempts all failed for the same reason, we throw that exception. If there are multiple exceptions, we throw an AggregateException with one of each type/message combo.
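From the caller's point of view, handling those outcomes might look something like this (a sketch; the calling code is hypothetical):

// Hypothetical calling code showing how the retry wrapper's exceptions surface.
try
{
    var post = client.RetrievePost("my-first-post");
    // ... render the post ...
}
catch (InvalidKeyException)
{
    // every attempt failed for the same reason, so the single underlying exception comes back
    Console.WriteLine("Check your ButterCMS API token.");
}
catch (AggregateException ex)
{
    // attempts failed for different reasons; inspect each one
    foreach (var inner in ex.InnerExceptions)
    {
        Console.WriteLine(inner.Message);
    }
}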

Wrapping Up

Implementing retry functionality for idempotent requests on an API client is as simple as that. Even for non-idempotent requests, we could just create a new Guid before our loop and include it in each request attempt. The server would be responsible for keeping track of the request IDs and responses.
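To round out the picture, here is a minimal sketch of that server-side bookkeeping (an in-memory store for illustration only; a real service would persist the IDs and expire them eventually):

// Hypothetical server-side idempotency store -- illustration only.
using System.Collections.Concurrent;

public class IdempotencyStore
{
    private readonly ConcurrentDictionary<string, string> savedResponses =
        new ConcurrentDictionary<string, string>();

    // Returns the response previously saved for this request ID, or null if the ID is new.
    public string TryGetSavedResponse(string requestId)
    {
        return savedResponses.TryGetValue(requestId, out var response) ? response : null;
    }

    // Called once the request has been processed successfully.
    public void SaveResponse(string requestId, string responseBody)
    {
        savedResponses[requestId] = responseBody;
    }
}

// In the request handler (pseudocode):
//   var saved = store.TryGetSavedResponse(idempotencyKey);
//   if (saved != null) return saved;            // replay the stored response, skip processing
//   var result = ProcessRequest(...);           // do the real work exactly once
//   store.SaveResponse(idempotencyKey, result);
//   return result;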

Be sure to check out ButterCMS, a hosted API-first CMS and content API that lets you build CMS-powered apps using any programming language, including Ruby, Rails, Node.js, .NET, Python, Phoenix, Django, Flask, React, Angular, Go, PHP, Laravel, Elixir, and Meteor.

I hope you found this tutorial helpful. May your APIs always be ergonomic, may your websites be reliable, and may you never double-charge a customer.

