
7 Key Lessons I Learned While Building Backends-for-Frontends

Crucial takeaways from building production-ready BFFs that every developer should know.

A Backend-for-Frontend (BFF) is a specialized server-side API that serves as an intermediary between the frontend (client-side) applications and various downstream APIs, aggregating and transforming data as needed before delivering it to the frontend.

Why build BFFs? They’re a façade — shielding your frontend from the complexities of dealing directly with diverse (and potentially inconsistent) data sources — making your frontend codebase more focused, more maintainable.

You’ve read Sam Newman’s famous blog post, and a bunch of other resources on BFFs, I’m sure. While those give you a great idea of what the pattern is and why it’s useful, it’s not immediately obvious how best to build a BFF, or what mistakes you’re likely to make along the way.

So, without further ado, here are some gotchas, tips and tricks, and general developer advice about Backends-for-Frontends drawn from my firsthand experience in building them for data-heavy apps. The stuff I wish I’d known when I was just starting out.

Let’s dive right in. Hope these are useful!

1. Understand that you’re not building an API Gateway.

I’ve found that it’s incredibly easy to play it safe and end up building an API gateway instead of a proper BFF.

API gateways are conceptually simple, and they’re fairly attractive. Put an HTTP-based abstraction in front of multiple downstream services, insulating the client(s) from changes when these downstream services change — easy, right? Not exactly.

As your app grows, a pure API Gateway approach inevitably turns into an all-encompassing monolithic API for multiple clients and experiences, and any new feature (on any of your supported clients) will have to ensure compatibility with this one API before shipping anything at all. Plus, this is yet another giant responsibility — and one with muddy ownership to boot. Does the backend team work on this? Do you create a new team altogether? Either way, the frontend teams have to interface with this team every time they need to either consume or modify downstream APIs.

More friction, less fun.

Backends-for-Frontends differ from API gateways in being specifically built for one client/user experience, with one BFF per client/user experience.

Does your product consist of a React desktop app, an Android/iOS app, and an app for Xbox/PlayStation? With the BFF pattern you won’t have three clients talking to one API gateway that takes on multiple responsibilities, but instead:

  • One “backend” purpose built for each one of them, owned by each client team.
  • Each being smaller and less complex than an API gateway, and easier to maintain because there is an inherent separation of concerns that this pattern promotes.
  • Each doing exactly what the UI for its particular client needs — and nothing else.

The idea is simple: since you own both the client and the “server” components, you can always create the perfect “backend”, with a function that, when called, returns exactly the data needed, in exactly the right format. A client + BFF team doesn’t even have to worry about how the downstream resources work.
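
To make that concrete, here is a minimal sketch of what one of these purpose-built endpoints might look like: a hypothetical Express handler for a mobile dashboard view that fans out to two downstream services and returns only the fields the UI actually renders. The service URLs, field names, and response shape are all illustrative.

// A minimal sketch of a view-specific BFF endpoint (TypeScript + Express).
// The downstream URLs, field names, and response shape are hypothetical;
// the point is "exactly the data the UI needs, in exactly the right format".
import express from "express";

const app = express();

app.get("/mobile/dashboard/:userId", async (req, res) => {
  const { userId } = req.params;

  // Fan out to downstream services in parallel.
  const [userRes, ordersRes] = await Promise.all([
    fetch(`https://users.internal/api/users/${userId}`),
    fetch(`https://orders.internal/api/orders?userId=${userId}&limit=3`),
  ]);
  const user: any = await userRes.json();
  const orders: any[] = await ordersRes.json();

  // Return only what the mobile dashboard actually renders.
  res.json({
    displayName: user.firstName,
    avatarUrl: user.avatar?.small,
    recentOrders: orders.map((o) => ({
      id: o.id,
      status: o.status,
      total: `${o.currency} ${o.totalAmount}`,
    })),
  });
});

app.listen(3000);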

2. Consider using a BFF framework.

Building production-ready BFFs requires you to reinvent the wheel for a bunch of parts — request routing and dispatching, API aggregation and orchestration, data transformation/formatting, middleware, caching, logging and error handling, security… and that’s not even considering the actual BFF API design.

There’s no established spec, or even a consensus among the community as to how you actually build and bring together all of these layers.

For this reason, I use WunderGraph, a free and open-source (Apache 2.0 licensed) Backend-for-Frontend framework that is deployable using Docker and saves me the trouble of writing and gluing together all that boilerplate.

If you’re like me:

  1. Love TypeScript (and understand why type safety is necessary),
  2. Have to build data-heavy apps,
  3. Need to bring together dozens of microservices, databases, and auth/payment APIs from SaaS providers

Then WunderGraph is a great fit.

It allows me to compose all these dependencies — whether they’re built with different technologies, use different authentication/authorization workflows, or return data in different formats — into one unified, secure, extensible API. Then I can write either GraphQL or TypeScript operations to aggregate, process, validate, or otherwise get the data I need, served as JSON over RPC.

I don’t need to manually compose and orchestrate all these dependencies. I just define them declaratively as config-as-code, let the WunderGraph SDK generate a unified API for me, and then have typesafe access on both the frontend and the backend (with support for all major frontend frameworks like React, NextJS, Remix, Astro, Svelte, Expo, Vue, etc.).
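
For a flavor of what that config-as-code looks like, here is a rough sketch of a wundergraph.config.ts that declares two hypothetical data sources. The exact helpers and options vary between SDK versions, so treat this as illustrative and defer to the official docs.

// wundergraph.config.ts (illustrative sketch; the exact options depend on
// your @wundergraph/sdk version, so check the docs before copying this)
import { configureWunderGraphApplication, introspect } from "@wundergraph/sdk";

// Declare downstream dependencies declaratively; the hostnames and
// connection string below are placeholders.
const users = introspect.graphql({
  apiNamespace: "users",
  url: "http://users.internal/graphql",
});

const orders = introspect.postgresql({
  apiNamespace: "orders",
  databaseURL: "postgresql://user:pass@localhost:5432/orders",
});

// WunderGraph composes these into one unified, namespaced API; the
// operations you write against it are exposed to the client as JSON over RPC.
configureWunderGraphApplication({
  apis: [users, orders],
});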

You’ll never have to work with anything but TypeScript code, and you’ll have built-in caching, testing/mocking, security, analytics, monitoring, and tracing to boot.

To get started with WunderGraph for building BFFs, check out their docs here.

3. Caching, Auth, and Logging work great in the BFF layer.

The BFF layer is the perfect place to relieve some of the burden on both your client and your backend services, making their code much simpler. It’s usually a good idea to move ancillary concerns like caching, auth, and normalized error handling into the BFF.

Again, this comes back to a BFF having the benefit of knowing its client perfectly. Since we know exactly what data a given client needs (and in what format), which auth techniques it uses, and which caching requirements/strategies apply to it, we can offload these operations to the BFF.

Caching

A BFF knows the exact aggregations a client will need, so we can place a reverse proxy in front of the BFF to store a copy of each view-specific response in its cache and serve it to subsequent clients requesting the same aggregation. We could also precompute expensive data models/aggregates ahead of time.

WunderGraph, for example, automatically hashes all BFF operations, turning them into persisted queries that only respond to client requests carrying valid hashes. The WunderGraph BFF server generates an Entity Tag (ETag) for each response and compares it to the ETag the client sends with each subsequent request; if they match, nothing has changed and the client’s cached version is still valid. This opens up very fast stale-while-revalidate strategies on the client, without having to set expiry times manually or carefully tune them to match the freshest piece of content in a given aggregation.
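
WunderGraph gives you this out of the box, but the underlying mechanism is worth understanding, so here is a rough sketch of ETag-based revalidation in a hand-rolled Express BFF. The endpoint and the buildDashboardAggregate helper are hypothetical.

// A rough sketch of ETag revalidation in a hand-rolled BFF (TypeScript + Express).
// buildDashboardAggregate() is a hypothetical helper that fans out to
// downstream services and assembles a view-specific payload.
import crypto from "node:crypto";
import express from "express";

async function buildDashboardAggregate(userId: string) {
  // Downstream fan-out and aggregation would live here.
  return { userId, items: [] };
}

const app = express();

app.get("/mobile/dashboard/:userId", async (req, res) => {
  const payload = await buildDashboardAggregate(req.params.userId);
  const body = JSON.stringify(payload);

  // Derive a strong ETag from the response content itself.
  const etag = `"${crypto.createHash("sha256").update(body).digest("hex")}"`;

  // If the client's cached copy still matches, skip sending the body.
  if (req.headers["if-none-match"] === etag) {
    res.status(304).end();
    return;
  }

  res.setHeader("ETag", etag);
  res.setHeader("Cache-Control", "no-cache"); // always revalidate, never guess expiry times
  res.type("application/json").send(body);
});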

Auth

Auth often involves integrating with external identity providers, user directories, or Single Sign-On (SSO) systems. This functionality is a perfect fit for the BFF layer, for a bunch of reasons that go beyond just simplifying the frontend/backend codebases:

  1. Each client (or more accurately, user experience) may have unique authentication requirements. By implementing auth in the BFF, you can tailor authentication logic to match the specific needs/standards of each client. This allows for fine-grained, context-aware auth.
  2. It makes much more sense to have auth implemented in the BFF, than on yet another Nginx server further upstream that you’ll have to test, deploy, and maintain independently.
  3. Plus, having auth in a BFF is just another layer of security since a BFF inherently hides all backend architecture/implementation from the client.
  4. If your app needs to support auth using multiple credentials — classic username/password, external OAuth providers like Google/GitHub, 2FA/MFA, etc. — the BFF can integrate multiple identity providers, mapping them to a unified interface for your client. (And with WunderGraph, you can add auth providers like you’d add packages with NPM — see more here.)
  5. The BFF can also implement granular access control based on user roles — commonly known as Role-Based Access Control (RBAC). RBAC implemented in the BFF simplifies maintaining and updating access control rules, since the authorization logic resides in one centralized location (sketched below).
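
Here is a rough illustration of that last point: centralized RBAC as middleware in a hand-rolled Express BFF. The role names, route, and session-verification helper are hypothetical placeholders.

// A minimal sketch of role-based access control in the BFF layer
// (TypeScript + Express). Roles, routes, and verifySession() are hypothetical.
import express from "express";

type Role = "viewer" | "editor" | "admin";

// Hypothetical helper: validate the session/JWT and return the user's roles.
async function verifySession(
  authHeader: string | undefined
): Promise<{ id: string; roles: Role[] } | null> {
  if (!authHeader) return null;
  // ...verify the token against your identity provider here...
  return { id: "user-123", roles: ["viewer"] };
}

// Centralized rule: which roles may call which BFF endpoint.
function requireRole(...allowed: Role[]): express.RequestHandler {
  return async (req, res, next) => {
    const user = await verifySession(req.headers.authorization);
    if (!user) {
      res.status(401).json({ status: "error", code: "UNAUTHENTICATED" });
      return;
    }
    if (!user.roles.some((role) => allowed.includes(role))) {
      res.status(403).json({ status: "error", code: "FORBIDDEN" });
      return;
    }
    (req as any).user = user; // expose identity to handlers and logging
    next();
  };
}

const app = express();

app.get("/reports/summary", requireRole("editor", "admin"), (_req, res) => {
  res.json({ ok: true });
});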

Logging

The BFF essentially acts as a mediator of requests, and given the sheer volume of inbound and outbound traffic it handles, it makes for an excellent place to implement logging. You’ll have centralized logging regardless of which client made the request.

Plus, since so much of the data handled by a BFF is aggregates, logging at this level can actually surface performance-related issues, helping out both frontend and backend devs.

But it’s more than just that. The real value added by logging in the BFF layer is context. BFFs possess valuable contextual information about each request. They can extract crucial details like the user’s identity, the type of frontend application used, the API endpoints accessed, and the parameters sent. You could actually enrich logs with this crucial context, making debugging orders of magnitude easier.

Finally, since the BFF serves as a security barrier between frontend and backend services, logging here would also allow the detection of potentially malicious patterns in incoming requests.
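
Here is a sketch of what that context enrichment can look like as logging middleware, assuming Express and the pino logger; the header used to identify the client platform and the req.user shape are made-up conventions, not standards.

// A minimal sketch of context-enriched request logging in the BFF
// (TypeScript + Express + pino). The x-client-platform header and the
// req.user shape are assumptions for illustration.
import express from "express";
import pino from "pino";

const logger = pino();
const app = express();

app.use((req, res, next) => {
  const startedAt = Date.now();

  // Log once the response is finished, with the context only the BFF has.
  res.on("finish", () => {
    logger.info(
      {
        userId: (req as any).user?.id ?? "anonymous",
        client: req.headers["x-client-platform"] ?? "unknown", // e.g. "ios", "web"
        method: req.method,
        path: req.path,
        query: req.query,
        status: res.statusCode,
        durationMs: Date.now() - startedAt,
      },
      "bff.request"
    );
  });

  next();
});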

4. Normalize Your Errors in the BFF.

BFFs are aggregators of requests on the server layer: they send multiple requests to one or more downstream services, gather the responses asynchronously, stitch them together once everything is ready, and send the result back to the client application.

But these downstream services can fail in wildly different ways, and they may report errors very differently, too. Some might throw a generic HTTP 500 (which you might not want to surface as-is), some return HTTP 200 OK but include error data in the body, and some don’t even return JSON at all, but XML or HTML.

The BFF, being essentially a translation layer between the frontend and domain services, is well equipped to translate and map these disparate errors and error messages — and critically, to do so with normalized error states.

Here’s an example. In a conventional REST API, a request that fails validation would get you back a 4xx or 5xx HTTP status code, but what if one of your domain services is a GraphQL API?

Let’s say this mutation request fails (obviously, because the input is missing a name).

mutation {
  updateUserProfile(input: { name: "" }) {
    id
    name
    email
  }
}

But this will always get you a 200 status code regardless, with the response payload containing specific error information. If you pass on this responsibility to the client, the required error-handling logic (with proper UI/UX feedback) is going to make it bloated and harder to maintain.

{
  "data": {
    "updateUserProfile": null
  },
  "errors": [
    {
      "message": "Field 'updateUserProfile' is missing required arguments: input",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "extensions": {
        "code": "BAD_USER_INPUT"
      }
    }
  ]
}

This is where the BFF comes into play. The BFF can be responsible for handling the GraphQL response from the backend, and then normalizing any potential errors into a consistent format that the client application can interpret and display unambiguously.

{
  "status": "error",
  "code": "BAD_USER_INPUT",
  "message": "Field 'updateUserProfile' is missing required arguments: input"
}

You could now return this as an HTTP 400 Bad Request, with specific information about malformed syntax or missing required data. The BFF acts as the intermediary that normalizes your downstream error responses, providing its client with the necessary information to understand the outcome of its requests, and handling errors in a standardized manner.

And you could do much, much more with it — adding a canonical timeout period, for example. Or adding custom headers whenever necessary.
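
Here is a rough sketch of that translation step: a helper that calls a hypothetical downstream GraphQL service with a canonical timeout, then maps its 200-with-errors responses into the flat shape above along with a sensible HTTP status. The code-to-status table is illustrative.

// A rough sketch of downstream error normalization in the BFF (TypeScript).
// The service URL and the code-to-status mapping are illustrative.
const STATUS_BY_CODE: Record<string, number> = {
  BAD_USER_INPUT: 400,
  UNAUTHENTICATED: 401,
  FORBIDDEN: 403,
  INTERNAL_SERVER_ERROR: 500,
};

export async function callUserService(query: string, variables: unknown) {
  const response = await fetch("http://users.internal/graphql", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query, variables }),
    signal: AbortSignal.timeout(5_000), // canonical timeout for downstream calls
  });

  const payload: any = await response.json();

  // GraphQL returns 200 even on failure; normalize into one error shape.
  if (payload.errors?.length) {
    const { message, extensions } = payload.errors[0];
    const code = extensions?.code ?? "INTERNAL_SERVER_ERROR";
    return {
      httpStatus: STATUS_BY_CODE[code] ?? 500,
      body: { status: "error", code, message },
    };
  }

  return { httpStatus: 200, body: { status: "ok", data: payload.data } };
}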

5. Integration testing gives you the best bang for your buck in the BFF.

BFFs aggregate and orchestrate data from multiple downstream services before passing a final response to the client, which makes them a natural place to test and validate data against an agreed-upon API specification and the format the client needs.

But it’s also a great place to test specific use cases that might be difficult to reproduce with real backend data: error responses, edge cases, and resource-constrained or degraded-service scenarios, for example. Relying solely on real backend data for testing can lead to bottlenecks and inconsistencies, so mocking that data in the BFF lets developers proceed with testing even if the actual backend systems aren’t fully developed or accessible.

But mocking data can come in handy for more than just testing. It also means that you’re going to have a much faster time to market, as frontend teams won’t have to wait on a backend team to deliver the updated API they need. They could just mock the response during development.
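
In a hand-rolled BFF, that can be as simple as putting the downstream call behind an interface and swapping in a mock while the real service is still being built; the OrdersService shape and sample data below are hypothetical stand-ins for whatever contract your teams agree on.

// A minimal sketch of mocking a downstream dependency in the BFF (TypeScript).
// OrdersService and its sample data are hypothetical, standing in for an
// API the backend team has not shipped yet.
interface OrdersService {
  recentOrders(userId: string): Promise<{ id: string; status: string }[]>;
}

// The agreed-upon contract, mocked until the real service exists.
const mockOrdersService: OrdersService = {
  async recentOrders(_userId) {
    return [
      { id: "ord_1", status: "shipped" },
      { id: "ord_2", status: "processing" },
    ];
  },
};

// The real implementation slots in later without touching handlers or tests.
const realOrdersService: OrdersService = {
  async recentOrders(userId) {
    const res = await fetch(`http://orders.internal/api/orders?userId=${userId}`);
    return (await res.json()) as { id: string; status: string }[];
  },
};

// BFF handlers and tests depend on the interface, not on whichever backend exists today.
export const ordersService: OrdersService =
  process.env.USE_MOCKS === "true" ? mockOrdersService : realOrdersService;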

See how WunderGraph’s testing and mocking servers make type-safe testing a cinch.

The frontend and backend teams only have to agree on an API contract together, and if the domain services/business logic are not ready yet, the client teams can just mock out the data on their own BFF layer. A monolithic backend team serving the needs of competing frontend teams will never be the bottleneck.

6. Don’t worry about DRY.

As Sam Newman mentions in his seminal post about the BFF pattern, some duplication is inevitable with BFFs. The more BFFs (and user experiences) you have, the more overlap between their codebases — duplicated code for the aggregation if some user experiences are similar enough, duplicated code for interfacing with common downstream services, and duplicated code when some user experiences have a common auth or caching strategy, for example.

While our first instinct as developers would be to see this duplication as an opportunity to DRY things up, that inevitably leads us back to the inefficient, monolithic general-purpose HTTP abstraction again. So that’s a no.

But leaving in duplication might actually be advantageous. Once again, it boils down to agility and team autonomy. BFFs work best when they are purpose-built and tightly coupled to a user experience. If each client + BFF team has total control over their domain, they can ship faster, take more risks, and try out new things whenever they want, without having to consider the impact of their decisions on other teams.

If you were to merge this duplication back into a shared abstraction, that would no longer be the case. Multiple teams/apps would now depend on a shared service, and you would not be able to move fast, because no matter who owned the shared service/library, they’d frequently have to work around other teams and come up with strategies for breaking changes, latency requirements, and more. You’d just have created another bottleneck.

That’s not to say you should never, ever create a shared service out of duplicated functionality — these could be opportunities for collaboration among teams that could lead to new features and improvements, or shared bugs being found and fixed much faster.

Like everything in software development: observe, understand the tradeoffs, and make an informed decision, rather than prematurely optimizing for abstractions just because that’s what you were taught in school.

7. Documentation is going to be an ongoing chore.

A BFF is purpose-built for a specific client, and so each BFF will need detailed accompanying API documentation that covers all of the BFF’s available endpoints, their corresponding HTTP methods, the expected request and response payloads, aggregate/data models, error handling, input validation, and guidelines for usage.

Documentation needs to be a living resource, maintained and updated regularly as the BFF evolves. Any changes made to the BFF should be reflected promptly in the documentation. The only thing worse than no documentation is bad or outdated documentation, as that only leads to misunderstandings, inefficiencies, and show-stopping bugs.

If you don’t get ahead of documentation for a BFF, you’re going to move fast and break things, sure, until you only break things.

Conclusion

The points discussed here should help you design and build production-ready BFFs that not only meet the demands of modern web applications, but also end up being scalable, robust, and maintainable solutions.

When should you build BFFs? The ideal scenario would be when you have to support multiple client platforms, each with unique needs and constraints. Adopting the BFF pattern could also solve organizational issues with communication, much like GraphQL could, except BFFs have the edge when shifting the data responsibility to the client isn’t an option (bundle size concerns, API consumers needing to learn a new paradigm, security issues, etc.)

Along the way, the best thing you could do to ensure a good developer experience would be to adopt a BFF framework like WunderGraph. Writing a dedicated ‘server’ component in the same codebase as your client, sharing types, and parsing/transforming incoming data on a server layer (with much faster interconnect) without filling up the client bundle? Fantastic DX. Plus, WunderGraph Cloud makes deploying the BFF together with the frontend dead simple.



