API | Posted by Craig Hughes Fri, November 09, 2018 15:52:45
For the first time in history, we have registered over a billion passenger cars on our
roads, globally. It is estimated that by 2050, there will be 2.5 billion cars
on our roads! Clearly, cars are a big part of our lives and will continue to
be so for a while.
That said, have you ever opened the bonnet of your car and had a look inside the
engine compartment? It looks incredibly complex, with many individual parts each
responsible for their own task. Look underneath your car the next time it’s
raised up on a lift and notice how complicated the drivetrain and suspension are.
Yet, despite being a massively complex machine, cars have become a fundamental
part of our lives – most of us are able to operate them with relative ease.
What makes this possible are the incredibly simple interfaces cars use to hide their
complex internals from us. The steering wheel allows us to change the
direction of the car. The pedals allow us to change the speed (accelerate or decelerate)
of the car. In fact, all the human interfaces within the car allow us to
operate this incredibly complex piece of machinery with relative ease, by
following simple standards.
APIs should adopt the same principle. The interface you expose to your API consumers should
be as simple and easy to understand as possible. Complex internal integrations
and business logic should not be exposed to your consumer – this is called
abstraction, and it's something the car does incredibly well. What it basically
means is that a consumer of your API should not have to be concerned with
understanding the internals of your API in order to use it.
That is not
to say they should be completely oblivious or ignorant of the boundaries or relevant
ancillary requirements of your API. Just like a driver of a car needs to know
that a car needs fuel to run, air in its tires, keys to start, etc., so too should
the consumer of your API be aware of any external constraints like intended
audience, security, data structure, etc.
Another important aspect of a car that comes to mind is modularization, and I’ve mentioned
it already. The complexity of a car is the sum of all its components working
together; each performing their own unique function. Your API could be composed
of internal components, orchestrated or composed to produce the functionality
defined. I’ve discussed this in my previous articles “Making coffee to explain
APIs” and “Most of us think we know what an API is but cannot seem to agree on
what an API should be”.
Photo by rawpixel.com from Pexels
API | Posted by Craig Hughes Thu, November 01, 2018 13:15:33
Everyone has at least one set of keys; don’t they? You can recognize your keys and you make
sure you take them wherever you go. Look at each key on your keyring and you know
which door or cabinet it opens (well, we should know anyway). Some of us have
keys we no longer know the purpose of – what we do know is that the lock for
that key is probably useless without it.
APIs, and the content or function they provide, can also be protected by one or more keys.
Each key has a purpose and it’s important to understand what each key is for.
The only difference here, is we often refer to these keys as tokens.
API keys are used to protect the API endpoint from consumption by non-registered
consumers. They are usually issued by an API gateway, whose responsibility it
is to manage the APIs published via them. API keys are issued to the person or
organization who intends to use the API for their application – this key allows
them to access the API endpoint, but not necessarily access to a user’s data or
to execute a user function behind the endpoint. That’s potentially the purpose
of another key – the JWT (JSON Web Token).
JWTs perform a number of key functions (pun intended). They can be used to provide
the information of an authenticated user as well as their permissions within
the context of their authentication. They can also include custom data and other
values relevant to the API endpoint, including relevant time bounds, issuer,
subject and intended audience of the request.
To ensure JWTs
are URL safe, they are transported as base64url encoded strings. This means that although
they may not seem readable, any decoder can reconstitute them as the characters
they represent (a JSON object); well, almost. A single JWT is made up of three parts:
- the header contains information about
the JWT including the algorithm used to sign it – this part is readable.
- the body contains the actual payload
(claims) of the JWT as a JSON object – this part is also readable.
- the signature of the JWT – this is
not readable but used to verify that the header and body of the JWT have not
been tampered with (changed).
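The three-part structure can be seen with nothing more than the standard library. The sketch below builds a demo token (the claims, secret, and issuer names are made up for illustration) and then decodes the first two parts without ever knowing the secret – exactly what any consumer, or attacker, can do:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(part: str) -> bytes:
    # Restore the padding that was stripped during encoding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

# Build a demo token: header.body.signature
header = {"alg": "HS256", "typ": "JWT"}
body = {"sub": "user-42", "scope": "accounts:read", "iss": "example-bank"}
secret = b"demo-secret"

signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(body).encode())
signature = b64url(hmac.new(secret, signing_input.encode(), hashlib.sha256).digest())
token = signing_input + "." + signature

# Anyone can read the first two parts without knowing the secret
parts = token.split(".")
print(json.loads(b64url_decode(parts[0])))  # {'alg': 'HS256', 'typ': 'JWT'}
print(json.loads(b64url_decode(parts[1])))  # the claims, in plain JSON
```

The signature (the third part) is the only piece that cannot be reproduced without the secret – which is the point of the next section.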
It’s important to understand this – simply because the JWT being sent seems illegible,
does not mean it is. JWTs protect the integrity of the information inside them,
not its visibility. So, please don’t assume they can safely carry sensitive data – that’s
another kind of key; an encrypted JWT or JWE (JSON Web Encryption).
OK, this has started to get technical; let’s bring it back up a level.
A JWT carries
the information about what an authenticated user can do, their authorization. This
information is signed using a defined algorithm and a secret key (yep, another
one…), thereby enforcing its integrity. Essentially, we use a JWT to transport information
about the user, their permissions, and any other details about the request in a
way that any changes can be recognized.
In the same
way that we can look at the keys on our keyring and recognize what each is for,
we can also see the contents of a JWT. Like the profile of a key needs to
line up with the tumblers in a lock in order to open it, the API producer
should use the profile (user and scopes) of the JWT to decide whether access to
the resource should be allowed. If you tamper with the profile of a key, it
will stop opening its corresponding lock. Tamper with the contents of a JWT and
the API producer should not trust it and therefore not allow access.
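The “tampered key stops opening the lock” idea can be sketched with HMAC signing (the token contents and secret below are made up for illustration; real producers would use a JWT library rather than hand-rolled code):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(header_b64: str, body_b64: str, secret: bytes) -> str:
    mac = hmac.new(secret, f"{header_b64}.{body_b64}".encode(), hashlib.sha256)
    return b64url(mac.digest())

def verify(token: str, secret: bytes) -> bool:
    header_b64, body_b64, signature = token.split(".")
    # Recompute the signature: any change to header or body breaks the match
    return hmac.compare_digest(signature, sign(header_b64, body_b64, secret))

secret = b"demo-secret"
header_b64 = b64url(json.dumps({"alg": "HS256"}).encode())
body_b64 = b64url(json.dumps({"sub": "user-42", "scope": "accounts:read"}).encode())
token = f"{header_b64}.{body_b64}.{sign(header_b64, body_b64, secret)}"

print(verify(token, secret))  # True: the profile still lines up with the lock

# Tamper with the body (swap in a more powerful claim) and verification fails
evil_body = b64url(json.dumps({"sub": "admin", "scope": "accounts:write"}).encode())
tampered = f"{header_b64}.{evil_body}.{token.split('.')[2]}"
print(verify(tampered, secret))  # False: the producer should not trust it
```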
Just as you would not leave your house keys outside your front door, please don’t leave
your API keys where others can use them.
Photo by PhotoMIX Ltd. from Pexels
API | Posted by Craig Hughes Wed, October 17, 2018 14:21:53
Have you ever visited India? How did you cope with the traffic there? It’s clearly different to the regular, organised approach we are used to in Europe.
After spending a couple of days in Bangalore recently, I became aware that my reaction to traffic had changed; I was not fazed (read terrified) by traffic behaviour anymore and actually started to notice how well it seemed to work. This surprised me and got me thinking: “is this an example of choreography and if so, is driving in Europe orchestration?” Let me explain.
In Europe (as in most of the world), traffic is managed by lanes, signs and traffic lights. Obeying these is paramount and heavy fines are imposed on those who flout the law. Everything seems to flow in unison and order seems to be present. But is it? What happens if we don’t respond to instructions in time, or don’t follow proper lane etiquette? Traffic blockages and potentially; road rage.
Counter that with the scenario in India; everyone seems to react to changes by other drivers, lanes seem to be indicators of general direction and traffic signals are only placed at major intersections (sometimes enforced by a traffic official). This apparent chaos often scares those of us not from the region, and yet it still seems to work.
APIs, by nature and design, decouple us from the complexity of the underlying systems. Inversely, when we expose our data via APIs we cannot expose our complexity (like we may have done in the past). So, with APIs, the general practice seems to follow our driving behaviour; we want to control what and who we share our data with. We want to be the masters of orchestration to maintain control. We want to feel like we are still empowered.
Choreography is a completely different approach, here we are no longer the master of what happens to our data or who uses it. As I mentioned in my article on the Hollywood Principle; we expose our data as events when our work is done. Essentially, we’re sharing the result of that piece of work with the world – “I’ve done something you may want to know about”. How and what the consumers do with the data should be of no consequence to me, but it feels like I’ve lost control. Consumers (or subscribers) react to my data changes – each choreographing their own behaviour. It’s synonymous with driving in India.
Now, I am by no means saying we should abandon all structure and formality and start driving like we are in India – that was merely a metaphor to explain choreography. Perceived chaos seems to work in India but would not necessarily work in Europe. An event driven architecture, which induces choreography, has its place, but then so does orchestration – it depends on the agenda, use and purpose of the integration. I find the problem is that people appear to be afraid of choreography, because they cannot control it, and therefore dismiss a reactive event driven architecture and the opportunities that go with it.
In summary, Orchestration is the process where a single API consumer gathers information from various API endpoints, using the data received from initial calls to make further API calls to other endpoints. Choreography is the process where multiple subscribers react to a single event, using the data received for their own purposes.
I introduced the concept of a layered API architecture in a previous article where I mentioned the creation of core domain APIs. These core domain APIs, being data-based, are in my mind the best data components for sharing in an event driven architecture – since they describe a single entity within a single domain – allowing maximum coverage by multiple subscribers.
Finally, what about composition - another strategy used with APIs which I have not mentioned? Using the traffic metaphor; composition is similar to putting the passengers into the car – a collection of objects (people) defined as one (car). Composition is the process where a single API consumer gathers data from multiple endpoints. However, unlike orchestration, composition does not require one or more previous API calls to provide data for subsequent calls.
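The difference between the two consumer-side patterns can be sketched in a few lines. The endpoint stubs below are hypothetical stand-ins for real API calls; only the shape of the call flow matters:

```python
# Hypothetical endpoint stubs standing in for real API calls
def get_customer(customer_id):
    return {"id": customer_id, "account_ids": ["a1", "a2"]}

def get_account(account_id):
    return {"id": account_id, "balance": 100}

def get_profile(customer_id):
    return {"id": customer_id, "name": "Sam"}

# Orchestration: data from the first call feeds the subsequent calls
def orchestrate(customer_id):
    customer = get_customer(customer_id)
    return [get_account(a) for a in customer["account_ids"]]

# Composition: independent calls whose results the consumer merges itself
def compose(customer_id):
    return {**get_profile(customer_id), **get_customer(customer_id)}
```

In `orchestrate`, the second round of calls cannot happen without the first response; in `compose`, the two calls could run in any order, or in parallel.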
Image sourced from medium.com
API | Posted by Craig Hughes Wed, October 10, 2018 08:03:47
I was out looking
for a sandwich for lunch the other day and found myself faced by two options; I
could either go to the convenience store and pick up a pre-made sandwich or go
to the deli and have one made for me.
The pre-made option sounds simple, but choice is usually limited to the imagination
of the sandwich maker. Going to a deli to have a sandwich made offers freedom
of choice, but I’d have to deal with queues of people trying to make up their
minds about which ingredients they want (including me when I get there).
From the store’s perspective, the pre-made option is easy – display what’s on offer and
let the customer pick one; it’s quick and simple. However, reduced choice may
push customers towards the Deli. The Deli owner can offer his customers choice,
but at the cost of extra staff to make sandwiches based on each customer’s requirements.
The extra effort required of customers and possible delays could push them back
to the convenience store for the “best fit” option.
APIs could follow a similar approach.
As an API producer, using the inside-out approach, I could decide “what” my
consumers need and make a range of specific endpoints for them to consume. If
one endpoint does not cover all their requirements, they could compose the data
they need by consuming a number of my APIs; extracting the bits needed from
each. This will give me full control over my API implementation and allow me to
optimize them for better performance. Eventually however, my consumers will
complain and ask for more “specific” APIs to meet their needs. In my
experience, this is all too often the reality.
As an API producer, using the outside-in approach, I could offer my consumers
the ability to select which elements in my API domain should be returned. This would
require me to provide proper documentation to explain my domain and I’d need to
implement the selection logic (and consequences) in my services. Like the Deli
needs to maintain the cost of a sandwich maker, I’d need to accept the cost of
maintaining this logic and implementing it for new elements in the future. The benefit,
of course, is that my API will offer complete customization, making it as
consumable as possible by more than one consumer – a desirable trait for APIs.
There is an
alternative – I could provide the full data object in the response. In other
words, provide all the data elements of the object being modelled and let the
consumer extract the values needed, ignoring the rest (pun intended). This approach
creates less work for the producer – only one simple endpoint is required per
object. The effort is passed to the consumer, similar to the Deli style approach,
only the selection of elements is made on the response instead of providing
input into the request. Taking advantage of HATEOAS will reduce the payload
size and offer discoverability but could force the consumer to perform
composition in order to get all the data elements required.
So, as API
producers we have options. We can offer our consumers flexibility through
choosing data elements before or after request, or we can offer them what we
think they need – the choice is ours. Each approach has its merits and its consequences,
but they are nonetheless choices.
I’ve worked on APIs that follow all three of these approaches. The Deli style approach offers
benefit to the consumer and reduces payload. The challenge comes in selecting data
elements for the request input in a hierarchical data structure. The approach
we took was simple; use dot.notation to identify the data elements just as you
would when reading the JSON response.
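A minimal sketch of that dot-notation selection idea (this is an illustrative reimplementation, not the actual service code; the response document and field names are made up):

```python
def select(document: dict, paths: list) -> dict:
    """Extract only the requested dot-notation paths from a nested response."""
    result = {}
    for path in paths:
        # Walk down to the requested leaf value
        node = document
        for key in path.split("."):
            node = node[key]
        # Rebuild the nested structure for just the selected leaf
        target = result
        keys = path.split(".")
        for key in keys[:-1]:
            target = target.setdefault(key, {})
        target[keys[-1]] = node
    return result

response = {
    "customer": {"name": "Sam", "address": {"city": "Cape Town", "zip": "8001"}},
    "accounts": {"count": 2},
}
print(select(response, ["customer.address.city", "accounts.count"]))
# {'customer': {'address': {'city': 'Cape Town'}}, 'accounts': {'count': 2}}
```

The same paths a consumer would use to read the JSON response (`customer.address.city`) identify the elements in the request – no separate query language to learn.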
Image provided by Pexels.com
API | Posted by Craig Hughes Sun, September 30, 2018 14:15:56
The general consensus amongst API providers is that we
should build reusable APIs, but consumers want more - they want an API that
gives them everything they need. These tactics challenge one another.
Building for the consumer
Consumers of data or functionality want everything in one single and simple
package. Facilitating this approach can lead to a plethora of bespoke
interfaces, each fulfilling only a single user’s requirement. Prospective
consumers will find it hard to find the “right” API and, due to the precedent set,
may ask the API producer to create a new interface for them.
Building for reuse
On the contrary, if an API is too generic then the API consumer will
feel that they have too much work to do. These consumers will have to use composition
and/or orchestration on these generic APIs to get the level of functionality
they need. Issues of performance and the need to understand the underlying
business may become strong arguments for the consumer.
How do we address these two divergent methodologies?
My wife and I do not work in the same industry - she does people and I
do technology. I often write as if I was explaining the topic at hand to her,
but for some reason, this topic has me flummoxed. I’m struggling to explain
this one to my wife in terms she can understand. So, please bear with me, and let me know in your feedback or comments how to simplify the following descriptions.
Domain Driven Design (DDD) allows us to break business domains into
smaller objects called domain entities or value objects – this concept is briefly explained in my previous post on Making
Coffee. In banking we have a number of business domains: customer, account,
product, etc. Customer, in turn, can be represented as a number of domain entities
including: core information, addresses, contact information, etc. The same applies for the other business domains. Domain entities can be exposed as our most reusable
APIs. We call these core or domain APIs.
However, business practices are not only about managing domain entities, we also need to be able
to use and/or manipulate them to fulfil some business process. To facilitate this, we can compose or orchestrate core APIs to perform a defined function, which we expose
as a reusable interface (payments are a good example, see my previous post on The
Butterfly Effect). We call these process or composite APIs.
Finally, we have the user interaction channels. This is where the most
specific APIs are usually required by consumers and where it is OK to have bespoke
APIs for a defined purpose, as long as these APIs are composed of core or
process APIs and do not go directly to our core systems. We call these presentation or experience APIs.
Essentially, we create a layered API architecture. Core APIs, the building
blocks, form the foundation and are predominantly data-driven APIs. Process APIs offer the reusable business functionality
by manipulating the core APIs for a defined process. Finally, experience APIs offer custom user
interaction by orchestrating or composing core and/or process APIs for a specific user interaction.
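The three layers can be sketched as plain functions. Everything below is hypothetical stub code – the names and data exist only to show how each layer builds on the one beneath it:

```python
# Core (domain) APIs: data-driven building blocks, one per domain entity
def core_customer(customer_id):
    return {"id": customer_id, "name": "Sam"}

def core_account(customer_id):
    return {"owner": customer_id, "balance": 250}

# Process (composite) API: a reusable business function built on core APIs
def process_account_overview(customer_id):
    return {"customer": core_customer(customer_id),
            "account": core_account(customer_id)}

# Experience API: a bespoke shape for one channel, built on core/process APIs,
# never going directly to the core systems
def experience_mobile_summary(customer_id):
    overview = process_account_overview(customer_id)
    return {"greeting": f"Hi {overview['customer']['name']}",
            "balance": overview["account"]["balance"]}
```

Note that the experience layer never touches the core systems directly; it only recombines what the layers below already expose.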
Now, I admit this is overly simplistic and does not cover the
performance argument. That can perhaps be addressed by an event driven
architecture which publishes core domain entities (see the Hollywood
Principle) and implementing the CQRS (Command Query Responsibility
Segregation) pattern. Perhaps the topic for a future discussion.
Image from Pexels.com
API | Posted by Craig Hughes Thu, September 06, 2018 12:24:37
Inversion of Control: Don't call us, we'll call you.
Remember when you were a child, going on a long trip with your parents? Remember the frustration; “Are we there yet?”, “Are we there yet?” As an adult now - imagine or remember your own frustration from being nagged!
For both parties, this pattern is not ideal; children are frustrated with not knowing, while parents are frustrated with constantly being nagged. So, why do we think this pattern is OK in software architecture? Why do we ask systems to constantly check other systems for updates? Why do we allow this tight coupling?
Fortunately, computers are not human; they have no emotion nor a concept of time. They don’t mind constantly asking other systems for an update and other systems don’t mind being asked. All they do is check again and again so they can do something. This constant chatter between systems can, and does, have an impact. So, instead of asking systems to constantly "check for an update", why not configure them to react to an event – tell them; “I’ve done something you may want to know about”.
When something happens in Hollywood – the world knows about it; whether we choose to do something about it is our own decision. A reporter writing the story about an event does not know who will read the story or what they will do with the information they receive. The reporter only expects the newspaper will publish the story for all its subscribers to read.
In software architecture, we can publish business events using the publish-subscribe pattern (PubSub). Interested systems can subscribe to events based on an event topic and decide, for themselves, what to do with the data received. In essence, when the source system has done its bit, it can publish data about the event via PubSub to the subscribers. Decoupling is important; the source system does not, and should not, care about what the subscribers do with the data – it has already completed its task.
Since the subscriber consumes the event data for their own purposes, it is their responsibility to complete their task, within their bounded context. So what happens if the subscriber fails to react to the event (via an error or broken communication)? Is it viable to ask the publisher to replay the event? No! The subscriber cannot assume they are the only audience of the event. Just like the publisher is not aware of the subscribers, subscribers should also not be aware of one another. Imagine republishing an event that leads to duplicate communications to customers or duplicate actions!
We can mitigate this risk by:
1. publishing event data via persistent queues to ensure delivery, and/or
2. exposing APIs for retrieving published event data for reconstitution queries in the event of an error by the subscriber.
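The PubSub decoupling described above can be sketched with a minimal in-memory broker (purely illustrative – a real system would use a message broker, not a Python dict):

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory publish-subscribe broker (illustrative only)."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The publisher neither knows nor cares who reacts, or how
        for handler in self.subscribers[topic]:
            handler(event)

broker = Broker()
received = []
# Two independent subscribers choreograph their own reactions
broker.subscribe("payment.completed", lambda e: received.append(("notify", e)))
broker.subscribe("payment.completed", lambda e: received.append(("ledger", e)))
broker.publish("payment.completed", {"amount": 100})
# Both subscribers reacted to the same event without knowing about each other
```

Neither subscriber knows the other exists – which is exactly why replaying an event for one failed subscriber risks duplicate actions by the others.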
While PubSub does seem like a simple pattern, implementation can be challenging. Publishing a single event using Change Data Capture (CDC), off multiple tables, is just one of those challenges. That's a whole new ball-game, and a subject for another day.
Photo provided by pexels.com
API | Posted by Craig Hughes Wed, September 05, 2018 21:11:36
We are currently doing some domain driven design to understand our primary business domains and identify the appropriate core APIs required. This is an exciting yet daunting time - some domains look far too simple while others seem overly complex and almost impossible to model. One such domain is the payments domain.
A few weeks ago, during one of the payment domain workshops, one of my colleagues compared payments to a butterfly and challenged me with the question: "How do you model a butterfly?" My knee-jerk reaction was to rise to the challenge and prove it possible, but I soon ran into the problem. Butterflies are the end state of metamorphosis!
The butterfly has four stages to its lifecycle; egg, caterpillar, pupa, butterfly. Each stage is different and has a different goal. The challenging part is, while the creature is the same, its life stages are completely different. The egg is simply a sphere with something growing inside. The caterpillar is a long cylindrical creature with a lot of legs. The pupa is a mass of thread woven into a cocoon shape. The butterfly has wings, and six legs. How can the same creature be modeled if it is a completely different object during the stages of its life?
This stuck with me for a while as I consciously left it to percolate in the back of my mind. Yesterday, while on my way home on the metro, I had the "Aha" moment!
The answer is you can model a butterfly - by the individual stages of the creature at any moment. The question leads you to assume you need to model all stages of the creature as one model since it is one creature. This you cannot model. The relationship between metamorphosis and the butterfly is the same as payment to transaction. Both are a process, not an entity.
So, simply put, model the stages of the payment process as separate entities and serve these as core domain APIs. Next create process APIs that manage and orchestrate these core domain APIs to fulfil the payment process.
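A rough sketch of that idea – the stage entities and field names below are invented for illustration, not a real payments model:

```python
from dataclasses import dataclass

# Each stage of the payment process as its own entity; each could be
# served by its own (hypothetical) core domain API
@dataclass
class PaymentInstruction:
    account: str
    beneficiary: str
    amount: float

@dataclass
class AuthorisedPayment:
    instruction: PaymentInstruction
    authorised_by: str

@dataclass
class Transaction:
    reference: str
    amount: float

# The "payment" itself is not an entity but the process API that
# orchestrates the stage entities, like metamorphosis moving the
# creature from egg to butterfly
def pay(account: str, beneficiary: str, amount: float, user: str) -> Transaction:
    instruction = PaymentInstruction(account, beneficiary, amount)
    authorised = AuthorisedPayment(instruction, authorised_by=user)
    return Transaction(reference="txn-001", amount=authorised.instruction.amount)
```

Each stage is a clean, modellable entity; only the process that moves between them defies a single model.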
It's just as important to phrase the question as it is to understand it.
Photo by rawpixel.com
API | Posted by Craig Hughes Wed, September 05, 2018 20:08:17
I was invited to a meeting recently, to explain APIs and why they are important for our digital blueprint. I had no idea who the audience was nor their level of understanding of the basics of APIs. So, with no handy deck prepared for the meeting I entered the room, armed with my laptop full of previous presentations.
I was introduced to the team and the scene was set - "we've heard about APIs and need to plan to do them next year". This prompted the obvious question: "does anyone know what an API is?". Being an honest audience, the response was "no, but we've heard they are reusable and will make our life easier when we have them".
I first explained the simple stuff; that an API is simply a resource made available to a computer in the same way a website is made available to us humans - via an address. Next came the challenging part; time for coffee.
Making coffee is second nature to us, but imagine trying to get a computer to make you some coffee? You have to tell it explicitly what you need done and the more information we give the computer, the more questions may be raised: What is coffee? How much milk? Do you want milk? What is Milk? Where does milk come from? What is a cow? Where do I put the coffee? What do you do with coffee? These are some of the notions we take for granted as humans, because we've learnt about coffee. Now, computers can also learn about coffee, but that is a completely different topic; not one we will cover here. Suffice to say, getting a computer to make coffee requires a lot of attention to detail.
So, how do we do this? Well, we do the same thing we do with most problems; we break them down into smaller pieces. Making coffee involves a number of "things"; water, cup, teaspoon, coffee grains, sugar, milk, kettle, etc. Let's call these domains; the 'coffee grain' domain, the 'cup' domain, the 'milk' domain, and so on. Now, we can take each domain and define the attributes (properties) of it. For example 'milk'; it's white in colour, is a liquid, is usually cold. The 'cup'; it's a solid, it can hold a certain volume, it can hold liquids, it has a handle. Once we have defined these domains and their properties, we can now tell the computer how to use them to make coffee in simple steps.
- Boil water in the kettle
- Put one teaspoon of coffee in a cup
- Put one teaspoon of sugar into the same cup
- When the water in the kettle has boiled, pour 200ml of the water from kettle into the same cup
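The steps above could be sketched in code. The domains and their properties here are toy stand-ins, but they show how explicit every detail must be for a computer:

```python
# Hypothetical domain objects with a few defined attributes
kettle = {"water_ml": 500, "boiled": False}
cup = {"capacity_ml": 250, "contents": []}

def boil(kettle):
    kettle["boiled"] = True

def add_teaspoon(cup, ingredient):
    cup["contents"].append((ingredient, "1 tsp"))

def pour(kettle, cup, ml):
    # The computer must be told even this: no pouring before boiling
    assert kettle["boiled"], "water must be boiled first"
    kettle["water_ml"] -= ml
    cup["contents"].append(("water", f"{ml} ml"))

# The explicit steps from the list above
boil(kettle)
add_teaspoon(cup, "coffee")
add_teaspoon(cup, "sugar")
pour(kettle, cup, 200)
```

Swap `"coffee"` for `"tea bag"` and every other domain is reused unchanged – which is the reusability point made below.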
APIs can be created in the same way. We define the basic domains of the functionality we want to expose as services (another word for API). In our world, these domains could be customer, account, product, transaction, etc. These form the reusable core APIs. Now, we can use these core APIs to do more complex stuff, like creating a payment API (a process API) which orchestrates the core APIs.
- Authenticate the customer
- Get the customers accounts
- Select the right account
- Create an instruction to pay the customers registered beneficiary from the selected account
- Authorise the payment
- View the transaction
With these basic core domain APIs we can now perform multiple other processes. Using most of the domains associated with making coffee, I can replace the 'coffee grains' domain with the 'tea bag' domain and have the computer make me some tea instead.
Photo by Jessica Lewis from Pexels