Why We Love Concurrency (And You Should, Too!)

Written by drizzentic | Published 2017/06/18
Tech Story Tags: payments | golang | concurrency | mobile-money | api-gateway


Normally Friday evenings are my off days, time to unwind and drain a few MLs of Tennessee gold. But on this night we started talking about how to handle many transactions in our system without affecting performance. It was an interesting, brain-draining discussion. Our non-technical friends had fallen asleep before my friend Chebon Dennis and I reached a consensus.

After going back and forth we decided to give the discussion a name: concurrency.

Concurrency is when computational tasks can be executed in overlapping time periods. Unlike parallelism, concurrency does not require the subtasks to run at the same instant. Concurrency beats parallelism when resources are limited, because tasks can still make interleaved progress on the hardware you have.
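To make the definition concrete, here is a minimal Go sketch (Go is the language we eventually settled on, as described further down; the task names are made up): two tasks confined to a single OS thread still make interleaved progress, which is concurrency without parallelism.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// One OS thread for Go code: no parallelism, only concurrency.
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	for _, name := range []string{"task-A", "task-B"} {
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			for i := 0; i < 3; i++ {
				fmt.Println(n, "step", i)
				runtime.Gosched() // yield so the other task can interleave
			}
		}(name)
	}
	wg.Wait()
}
```

With `runtime.GOMAXPROCS(1)` nothing ever runs in parallel, yet both tasks finish because the scheduler interleaves them.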

The definition sounded obvious, but in the real world it hides a whole lot of complexity.

Genesis of the discussion

For almost 5 months I have been working on a payment aggregation platform. It has been an interesting journey and I have learned loads of things. I didn't do it alone: I had the backing of 6 guys, a mix of beginner and intermediate developers eager to learn new things. Initially we had everything figured out, from system analysis to design and modelling. Hell broke loose when I got down to integrating with the card processing platform. At this point I realised that PHP executes a request on a single thread and would not let me process any other transaction while that thread was busy.

I discovered this as I was sending a payment request to a third-party gateway. While the request was being processed, I realised that the system could not handle any other transaction. What this meant in practice was that I could not run anything else on the system whenever a Visa/Mastercard payment was being processed. It was disheartening, and it was not something I had experienced before.

This prompted me to find a quicker solution, so I decided to embrace a microservice architecture. I developed a PHP service specifically to handle all card payments. The main application would receive a request and then invoke a synchronous HTTP request to the PHP service. This worked like a charm at first. Later I discovered that I was only pushing the problem to another instance, not actually solving it. The process was exhibiting a fisi-like (hyena-like) tendency, i.e. hoarding all the resources until it was done with them.

This wasn't something I would recommend to anyone, and I wasn't proud of it at all. So I shared my concerns with Chebon Dennis.

Our initial goal was to find a scalable workaround for the problem, so we decided to explore various alternatives. PHP was suggested, but since I had worked with it before I knew some of its problems. First, it doesn't support multithreading unless you perform a few hacks.

Multithreading is a technique by which a single program can run multiple threads of execution at the same time, sharing the same code and memory.

So we ultimately had to look for a language built with multithreading in mind. The first thing that hit me was Java, so I thought of building a microservice in Java to run the payment processing aspects of our system. After a few hours of research I confirmed that Java provides multithreading and concurrency. The problem is that each Java thread maps directly onto an OS thread, so the runtime does not multiplex them for you to ensure efficient use of resources.

Let’s go to GO

After a few hours, silence had struck the living room. My pal was still struggling to find ways in which PHP could handle the issue (he is a PHP diehard) and the rest of the guys were asleep. I stumbled upon a Google I/O presentation on concurrency, in which the speakers described a new language Google had created to help make workloads like search more efficient.

I jumped into the documentation after watching the presentation and finally found what I was looking for: Golang could handle my concurrency issues. It was a joyful moment, since I had won a soul over in this concurrency debate. Based on our understanding of our system, we divided the problem into two services:

1. Notification service.

2. Payment processing service.

Notification processing problem

In the payment ecosystem, things have to be realtime. This means that as payment information hits your system, you have an obligation to notify the systems integrated with your payment gateway. One challenge we faced was the variable amount of time each system took to process the asynchronous notification. We had to wait for the response to one request, which blocked other requests from being sent to other systems.

To solve this problem we developed a Go service that spawned a goroutine for every transaction detail we received.

A goroutine is a function that is capable of running concurrently with other functions.

This means the systems capable of processing transactions faster would not be delayed by slow systems. It allowed us to handle payment requests without any delay, since we had achieved concurrency.
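A minimal sketch of the pattern, with made-up names such as notify, Transaction, and FanOut (this is not our production code): the service spawns one goroutine per integrated system, so a slow endpoint never holds up the fast ones.

```go
// Package notifier sketches fanning out payment notifications concurrently.
package notifier

import (
	"fmt"
	"log"
	"net/http"
	"strings"
	"sync"
	"time"
)

// Transaction is an illustrative payload for one payment notification.
type Transaction struct {
	Ref    string
	Amount string
}

// notify posts the transaction to one integrated system.
func notify(client *http.Client, endpoint string, tx Transaction) error {
	body := strings.NewReader(fmt.Sprintf(`{"ref":%q,"amount":%q}`, tx.Ref, tx.Amount))
	resp, err := client.Post(endpoint, "application/json", body)
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

// FanOut notifies every endpoint concurrently and returns once all calls
// have finished; a fast endpoint is never stuck behind a slow one.
func FanOut(endpoints []string, tx Transaction) {
	client := &http.Client{Timeout: 10 * time.Second}
	var wg sync.WaitGroup
	for _, ep := range endpoints {
		wg.Add(1)
		go func(ep string) { // one goroutine per downstream system
			defer wg.Done()
			if err := notify(client, ep, tx); err != nil {
				log.Printf("notify %s failed: %v", ep, err)
			}
		}(ep)
	}
	wg.Wait()
}
```

With a `sync.WaitGroup` the caller only waits for its own batch; each downstream system is contacted independently.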

Another thing that helped us enormously was Go channels, which we used to collect and process the transaction responses.
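To illustrate the channel part, here is a standalone sketch in which deliver and Result are hypothetical stand-ins for the real HTTP call and response record: every goroutine sends its result into a buffered channel, and a single reader collects the responses as they arrive.

```go
// Package collector sketches gathering notification responses over a channel.
package collector

import "time"

// Result is a hypothetical record of one downstream system's response.
type Result struct {
	Endpoint string
	Status   string
	Err      error
}

// deliver stands in for the HTTP call that notifies one system.
func deliver(endpoint string) Result {
	time.Sleep(50 * time.Millisecond) // simulate variable processing time
	return Result{Endpoint: endpoint, Status: "ACCEPTED"}
}

// collectResponses fans out one goroutine per endpoint and reads every
// response back over a single channel.
func collectResponses(endpoints []string) []Result {
	results := make(chan Result, len(endpoints)) // buffered: senders never block
	for _, ep := range endpoints {
		go func(ep string) {
			results <- deliver(ep)
		}(ep)
	}

	out := make([]Result, 0, len(endpoints))
	for range endpoints {
		out = append(out, <-results) // arrives in completion order, not request order
	}
	return out
}
```

The buffered channel means a goroutine can send its result and exit even if the reader has not gotten to it yet.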

Payment processing service

In the payment processing service we had two main challenges: how to handle incoming mobile payment notifications from the MNOs (Mobile Network Operators), and how to handle card payment processing. We wanted a non-blocking, effective service that could handle any transaction load without depleting the allocated resources. We designed and developed a service that processes each incoming transaction in its own goroutine, which means requests from the MNOs never experience a delay, since there is no waiting at all. The same technique was reused for card payments, although that flow is a bit different from processing a single push request. Card payments involve two steps, payment authorization and payment completion, so we had to make an asynchronous call whose response data we then used for the second request. The assumption we made was that the payment processing system could handle concurrent calls from our system.
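A rough sketch of the two-step card flow, assuming hypothetical authorize and complete helpers rather than the gateway's real API: each incoming payment is processed in its own goroutine, and the token returned by authorization feeds the completion call.

```go
// Package cards sketches the two-step card payment flow per transaction.
package cards

import (
	"fmt"
	"time"
)

// CardPayment is an illustrative representation of one incoming payment.
type CardPayment struct {
	Ref    string
	Amount int
}

// authorize simulates the first call to the gateway and returns a token
// needed by the second step.
func authorize(p CardPayment) (string, error) {
	time.Sleep(100 * time.Millisecond) // stand-in for the gateway round trip
	return "AUTH-" + p.Ref, nil
}

// complete finalizes the payment using the authorization token.
func complete(p CardPayment, authToken string) error {
	time.Sleep(100 * time.Millisecond)
	fmt.Println("completed", p.Ref, "with", authToken)
	return nil
}

// Process handles one incoming payment: authorize first, then complete.
func Process(p CardPayment) {
	token, err := authorize(p)
	if err != nil {
		fmt.Println("authorization failed for", p.Ref, ":", err)
		return
	}
	if err := complete(p, token); err != nil {
		fmt.Println("completion failed for", p.Ref, ":", err)
	}
}
```

Each incoming request would be handed off with `go cards.Process(p)`, so a slow authorization never blocks the next MNO notification or card request.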

Service discovery layer

After all was done and dusted we had achieved concurrency in our system. One last thing that kept bothering me was how to discover the locations of these services dynamically, given that most of the service links were hardcoded. In the next article I am going to discuss how we achieved this, because we are still building on it. Any feedback or suggestions would be appreciated.

Five hours later we had dashed almost 7 cups of coffee down our throats and killed at least 3 brain cells. All was done and it was time to “keep walking”: the sleeping lions woke up and we dashed out of the house to go mark our register at a local joint just a few metres from HQ.

