The Complete Guide to BFF (backend for frontend)


A backend for frontend (BFF) is one of the newer architectural patterns that became especially relevant after the rise of microservices and domain-driven design. It simplifies the communication between the frontend and the backend, which in turn makes frontend development simpler.

In this post, I will cover BFFs with Angular, and whether and how you should use one in your project.

What is a BFF?

A BFF is a dedicated backend for the frontend. It is a server responsible for translating communication between domain services and the frontend, easing the work on the frontend.

Benefits of a BFF

Let’s go over a few benefits of having a BFF.

Simplifying the interface

It simplifies the communication between the frontend and the backend. This is especially valuable when you have an architecture with many microservices, as the frontend only needs to communicate with one service that has an interface tailored to the frontend's use cases.

Aggregating requests

One immediate drawback of adding an extra service to delegate requests through is that it adds extra latency, and that cost can outweigh the benefits if the BFF only delegates to a single downstream service or database. The big benefits come when you have multiple domain services or databases to get data from, because the BFF can aggregate these requests while sitting on the same network as the downstream services. Having the BFF on the same network as the downstream services keeps the latency between them low, compared to issuing all of the requests from the client’s browser. Sequential requests especially benefit from being executed from the BFF.
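As a minimal sketch of such aggregation, a hypothetical BFF endpoint could fan out to a profile service and an orders service in parallel and return one response shaped for the frontend (the data shapes and fetcher signatures are assumptions for illustration):

```typescript
interface UserProfile { id: string; name: string; }
interface Order { id: string; total: number; }

// Run the downstream requests in parallel from the BFF, which sits on
// the same network as the domain services, instead of issuing two
// separate round trips from the browser.
async function getDashboard(
  fetchProfile: (userId: string) => Promise<UserProfile>,
  fetchOrders: (userId: string) => Promise<Order[]>,
  userId: string
) {
  const [profile, orders] = await Promise.all([
    fetchProfile(userId),
    fetchOrders(userId),
  ]);
  // Return one aggregated response tailored to the frontend's view.
  return { name: profile.name, orderCount: orders.length };
}
```

The fetchers are injected here so the same function works against real HTTP clients or mocks in tests.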

Better security

While you shouldn’t rely on security by obscurity, you are hiding a lot of the infrastructure details if your frontend only communicates with the BFF. Also, you might only need to expose the BFF externally, and can thus secure your domain services better.

More independent teams

When teams are able to work on their own servers, they can develop features more independently. To get the most out of this benefit, you want to have at least one BFF per team.

Error handling

The BFF is a translation layer between the frontend and the domain services, so it should also handle errors and map server errors to meaningful error messages for the frontend.
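Such a mapping can be as simple as a function from downstream status codes to client-facing errors; the codes and message texts below are illustrative, not a fixed convention:

```typescript
// Map a downstream HTTP status code to an error the frontend can
// show directly, hiding internal details of server failures.
function toClientError(status: number): { code: string; message: string } {
  switch (status) {
    case 401:
    case 403:
      return { code: "FORBIDDEN", message: "You are not allowed to see this." };
    case 404:
      return { code: "NOT_FOUND", message: "We could not find that item." };
    default:
      // Anything else (including 5xx) becomes a generic message, so
      // infrastructure details never leak to the client.
      return { code: "INTERNAL", message: "Something went wrong. Please try again." };
  }
}
```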

Mocking

The BFF is a good place to mock out data when, for example, the domain services or business logic are not ready yet. Automated tests can also benefit from using mock data served from the BFF.

Caching

The BFF is a good place to put the server caching logic, both by setting caching headers and by using a cache database such as Redis.
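As a minimal sketch of the second approach, here is a small in-memory cache with a TTL standing in for a Redis lookup; the key names and TTL are illustrative, and the clock is injectable so the expiry logic is testable:

```typescript
// A tiny TTL cache. In a real BFF this role would typically be played
// by Redis; the shape of get/set mirrors that usage.
function createCache<T>(ttlMs: number, now: () => number = Date.now) {
  const entries = new Map<string, { value: T; expires: number }>();
  return {
    get(key: string): T | undefined {
      const entry = entries.get(key);
      // Treat expired entries as misses.
      if (!entry || entry.expires < now()) return undefined;
      return entry.value;
    },
    set(key: string, value: T): void {
      entries.set(key, { value, expires: now() + ttlMs });
    },
  };
}
```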

Server side rendering (SSR)

If you do server-side rendering, you are going to need a BFF anyways to render the frontend.

What technologies to use for the BFF?

First of all, I recommend you go with a Node-based server, as this allows you to share JavaScript/TypeScript between the frontend and the backend, and to keep it all in the same monorepo for easier development. This allows complete vertical feature development and makes it easier for the frontend developers to access the BFF servers. If you put BFFs in a separate repo, you create distance between the frontend and the BFF, and thus resistance for the frontend engineers to also work on the BFFs. Ideally, you want frontend engineers to do the BFF development, rather than dedicated BFF/backend developers.

Given this, we have the following common options:

  • Express
  • NestJS
  • GraphQL/Apollo

Express

Express is the most commonly used Node.js server framework and a common option, for example, when doing SSR with Angular. It is REST-based and fairly simple to set up.

NestJS

NestJS borrows from Angular's syntax, as it uses decorators to set up the server. In contrast to Express, it is more of an out-of-the-box framework that includes all the basic tooling, whereas Express takes a more minimalistic approach.

GraphQL/Apollo

GraphQL with Apollo is a good solution for BFFs, as it allows the frontend to query exactly what it needs, fixing the problem of over-fetching while removing the need for data transfer objects (DTOs). In GraphQL there is only one endpoint, and the client simply queries the data it needs, which makes it a good option for the request aggregation a BFF performs.
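For instance, a client that only renders a user's name and order totals could ask for exactly those fields and nothing more (the schema and field names here are hypothetical):

```graphql
query Dashboard {
  user(id: "u1") {
    name        # only the fields the view renders
    orders {
      total     # no over-fetching of the full order objects
    }
  }
}
```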

GraphQL is my preferred way to do BFFs, so in this post, we will dig deeper into how to actually set such a BFF up with Apollo hosted on Firebase functions.

GraphQL basics

Let’s get a quick overview of what GraphQL is and how it is different from REST services.

Basically, GraphQL differs from REST in that it only has one endpoint, and the client runs queries against a schema to get data. Each part of the schema is backed by resolvers, which are triggered when that specific part of the schema is queried.

Mutating commands are called mutations; they also have a schema, with resolvers that are triggered when the relevant part of the schema receives a mutation.
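As a sketch, a hypothetical todo schema with one query and one mutation could look like this, with the resolvers written as plain functions (the Todo type and the in-memory store are assumptions for illustration):

```typescript
// Schema: one query to read todos, one mutation to add a todo.
const typeDefs = `
  type Todo {
    id: ID!
    title: String!
  }
  type Query {
    todos: [Todo!]!
  }
  type Mutation {
    addTodo(title: String!): Todo!
  }
`;

// In-memory store standing in for a real database.
const todos: { id: string; title: string }[] = [];

const resolvers = {
  Query: {
    // Triggered when a client queries the "todos" field.
    todos: () => todos,
  },
  Mutation: {
    // Triggered when a client sends the "addTodo" mutation.
    addTodo: (_parent: unknown, args: { title: string }) => {
      const todo = { id: String(todos.length + 1), title: args.title };
      todos.push(todo);
      return todo;
    },
  },
};
```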

Setting up a BFF with GraphQL and Firebase functions

My favorite tech stack for smaller projects is to host it all with Firebase: a Firestore database and a Node GraphQL server hosted with Firebase Functions. Firebase Functions is a serverless technology, so we don’t need to worry about the infrastructure, and we have a basis for scalability.

Overall architecture

The app is created with Nx and consists of an app and a service, both of which can be hosted on Firebase.

We can create a new Nx project with Angular with:

npx create-nx-workspace --preset=angular

Then we can create a service with:

nx generate @nrwl/express:application service

Let’s look at how to set up the service with GraphQL.

Server setup

The GraphQL server is set up as follows:
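A minimal sketch of such a setup, assuming Apollo Server 2 (`apollo-server-express`) and the `firebase-functions` SDK; the `./schema` and `./auth` imports are placeholders for your own schema, resolvers, and token verification:

```typescript
import * as functions from "firebase-functions";
import express from "express";
import { ApolloServer } from "apollo-server-express";
// Placeholders: your schema/resolvers and your own auth helper.
import { typeDefs, resolvers } from "./schema";
import { verifyAccessToken } from "./auth";

const app = express();

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Keep introspection on, so the interactive playground works.
  introspection: true,
  playground: true,
  // Validate the access token and put the decoded token on the
  // context, so it is available to every resolver.
  context: ({ req }) => {
    const token = (req.headers.authorization || "").replace("Bearer ", "");
    return { user: verifyAccessToken(token) };
  },
  // Automatic persisted queries are enabled by default in Apollo
  // Server 2, so clients can send a SHA-256 hash instead of the
  // full query text.
});

server.applyMiddleware({ app, path: "/graphql" });

// Expose the whole Express app as one Firebase HTTPS function.
export const api = functions.https.onRequest(app);
```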

The setup includes access token validation, as well as adding the decoded access token to the context, so it is available to the requests.

Introspection is enabled, so you get an interactive playground to try out your server.

Persisted queries are set up to improve network performance: instead of sending the full query text, the client sends a SHA-256 hash of the query, which keeps requests small even when queries grow large.

The schema is also added here as well as the mutations.

From here, the client uses the Apollo client to perform queries and mutations, which is outside the scope of this post.

Conclusion

In this post, we looked at what a BFF is and how it can benefit your architecture.
Basically, it is most beneficial when you have a microservice architecture and you want to simplify the frontend by only having it communicate with the BFF.

We also saw how to set up a BFF server with GraphQL and Apollo that you can host on Firebase Functions.

Next steps

This post is only giving a general overview of how to work with BFFs. For the complete interactive step-by-step guide check out the next cohort for Angular Architect Accelerator.

Resources

Sam Newman BFF post

Do you want to become an Angular architect? Check out Angular Architect Accelerator.

Hi there!

I’m Christian, a freelance software developer helping people with Angular development. If you like my posts, make sure to follow me on Twitter.
