I have done several projects in Elixir with great teams. In the beginning, it is usually just about writing some code. But as more and more developers come on board and more and more code is churned, erhm, refactored, the architecture becomes increasingly important.

We tend to start with a single Phoenix app. We then see modules start to grow to unbearable sizes. We need to implement various supporting services and internal subscribers to queues, but it is not apparent where these go. The next natural step would be to employ something like an umbrella project and manage it as multiple microservices. However, that is a heavy beast and makes it harder to reuse components.

The extended contexts framework came to be in collaboration with Aleksander Rendtslev. As a pragmatic trade-off in favour of velocity, we relax the ownership of the persistency layer and the web layer.

Code Bases for High-Velocity Teams

We use this framework with high-velocity teams. This means a couple of things. Firstly, code is never staged. We trust that when the code passes tests, is reviewed, and is approved by the developer, it is ready for production. Secondly, we deploy to production several times a day. Lastly, our time is prioritized towards developing software. In particular, we avoid spending too much time on low-level technical discussions and trust each individual to ask when something is unclear.

This approach puts particular strains on a codebase and the team:

  1. Consensus is hard. It takes time and reduces velocity. The codebase should work with a minimal amount of consensus and support divergent understandings of the product we develop.
  2. There will be dead code. When pivoting features under hard deadlines, it is not feasible to expect a complete cleanup. When dead code is left behind, the codebase should not start to look like trash.
  3. Implementation complexity stays constant over the life of the codebase. This means that implementing a feature takes the same amount of time in two years as it does today.

The Extended Context Framework

The extended context framework is a tradeoff between the Umbrella-type microservice infrastructure and a single Phoenix app. We still divide code into semantic chunks but build them in a way that strongly supports horizontal scaling. The folder structure of an implementation looks like the following.

    /persistency
        schemas/user.ex
        repo.ex
    /web
        /controller
        /graphql
    /context_one
        /actions
            perform_some_actions.ex
        /jobs
            internal_subscriber.ex
        /service
            external_service.ex
        /logic
            parse_some_thing.ex
            validate_something_else.ex
        /workflows
            place_reservation.ex
        module.ex
    /context_two
        ...

At the top level, we share the persistency layer and the web layer between the contexts. We do this because they tend to be very hard to divide into semantic categories – as developers we often need that blog-posts object even though we are working in the discord context. The same is the case with the web interface: it is very shallow and would only introduce unnecessary noise in the contexts.
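
To make the shared persistency layer concrete, here is a rough sketch of a schema living under /persistency that any context can use. The module and field names (MyApp, User, name, email) are invented for illustration and are not part of the original description.

    # persistency/schemas/user.ex: a schema shared by every context.
    defmodule MyApp.Persistency.Schemas.User do
      use Ecto.Schema

      schema "users" do
        field :name, :string
        field :email, :string

        timestamps()
      end
    end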

The Context

A context collects the functionality for a single semantic domain of the application. Because of how contexts are constructed, we prefer fat contexts. The context consists of the following parts:

  • Actions: Actions do a single thing and never fetch data. They should be thought of as mutations to the application state with a semantic layer on top.
  • Jobs: Jobs set up listeners, services, cron jobs, etc. They dispatch everything to workflows, which is why they are slim.
  • Services: Very slim and mockable wrappers for external services. They implement behaviors to allow testing, and mocks to allow disabling a service in non-production environments.
  • Logic: Functions without state changes. Ideally, I would want to call these pure functions, but for pragmatic reasons they are allowed to call date and randomness functions.
  • Workflows: Workflows should be seen as entry points to the code base. They are responsible for fetching data, mutating state using actions, and dispatching external effects using the services. They are called from the interface and from internal and external subscribers.
  • The module.ex file: This file is the context's external interface and should merely contain defdelegates to the code in the other files. In particular, functions that should not be exposed outside of the context should not be added to this file. A small sketch of this structure follows after this list.
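
As a rough sketch of how these pieces fit together, the following shows a hypothetical context with one workflow and its module.ex. All module and function names (MyApp, Reservations, PlaceReservation, and so on) are made up for illustration; they mirror the folder structure above but are assumptions, not part of the original framework description.

    # reservations/workflows/place_reservation.ex: an entry point into the context.
    # It fetches data, mutates state through an action, and dispatches an external
    # effect through a service.
    defmodule MyApp.Reservations.Workflows.PlaceReservation do
      alias MyApp.Persistency.Repo
      alias MyApp.Persistency.Schemas.User
      alias MyApp.Reservations.{Actions, Logic, Services}

      def call(user_id, params) do
        # Fetch data as far up as possible (see the data-fetching principle below).
        user = Repo.get!(User, user_id)

        with {:ok, attrs} <- Logic.ValidateSomethingElse.call(params),
             {:ok, reservation} <- Actions.PerformSomeActions.call(user, attrs),
             :ok <- Services.ExternalService.notify(reservation) do
          {:ok, reservation}
        end
      end
    end

    # reservations/module.ex: the context's external interface.
    # Only functions meant to be used outside the context are delegated here.
    defmodule MyApp.Reservations do
      alias MyApp.Reservations.Workflows

      defdelegate place_reservation(user_id, params), to: Workflows.PlaceReservation, as: :call
    end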

Principles

Data fetching: We prefer to fetch data as far up in the hierarchy as possible. Some data is provided at the outer layer, that is, in resolvers, controllers, listeners, etc.; other data is fetched in the workflow. We do this to avoid loading the same data in multiple places.
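
As a hypothetical illustration of this principle, an action receives structs that were already loaded further up, rather than ids it would have to fetch itself. The module and field names below are invented.

    # reservations/actions/mark_as_paid.ex: a hypothetical action.
    # It receives an already-loaded struct from the workflow and never queries
    # the database itself; the single fetch lives higher up in the hierarchy.
    defmodule MyApp.Reservations.Actions.MarkAsPaid do
      alias MyApp.Persistency.Repo
      alias MyApp.Persistency.Schemas.Reservation

      def call(%Reservation{} = reservation) do
        reservation
        |> Ecto.Changeset.change(status: "paid")
        |> Repo.update()
      end
    end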

Code Changes

As developers, we spend a lot of time changing and tweaking code. It is said that developers spend their time "reading code". But for most commercial projects, reading the code is boring and an activity we want to reduce. Instead, we focus on optimizing for changing the code. This involves the following:

  1. Identify what part(s) of the code need to be changed
  2. Identify potential side effects of this change
  3. Implement the change
  4. Verify that everything still works as we expect

To identify what parts of the code should be changed, we investigate from a known endpoint. However painful this sounds, it is usually the case for developers of commercial software. To assist this process, we architect around two principles: 1) reduce indirections and 2) contain branching.

This is where the workflows come in. They act as the entry point and are directly called from resolvers and controllers.
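
Assuming Absinthe for the GraphQL layer, a resolver in the shared web layer might be as thin as this (module and function names are again hypothetical):

    # web/graphql/resolvers/reservations.ex: a hypothetical Absinthe resolver.
    # The outer layer only unpacks what the interface already knows (the current
    # user) and hands the request to a workflow via the context's public module.
    defmodule MyAppWeb.Graphql.Resolvers.Reservations do
      def place_reservation(_parent, args, %{context: %{current_user: user}}) do
        MyApp.Reservations.place_reservation(user.id, args)
      end
    end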

Reading the workflow allows us to concisely identify the side effects of the change. If the workflow uses another workflow, we need to expand our analysis to that workflow also.

We can then implement the change. This means one of a couple of things: we alter existing code, we add new code, or we do both. In a traditional one-file-per-module approach, we tend to end up with ever-expanding files. The canonical answer to this is that "if a module becomes too big, it is two modules". However, in high-velocity environments this is not feasible. Instead, we use a "one file, one responsibility" approach. This ensures that no single file needs to be split, as files usually stay very small. The tradeoff is that we risk getting a lot of files, though in practice this does not appear to be a problem.
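
As a sketch of "one file, one responsibility", a logic file owns exactly one concern and rarely grows. The parsing rule here is invented purely for illustration.

    # reservations/logic/parse_some_thing.ex: one small responsibility per file.
    defmodule MyApp.Reservations.Logic.ParseSomeThing do
      # Parses a numeric string into an integer; no state changes, no data fetching.
      def call(raw) when is_binary(raw) do
        case Integer.parse(raw) do
          {value, ""} -> {:ok, value}
          _ -> {:error, :invalid_input}
        end
      end
    end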

Lastly, we ensure that the changes do not introduce regressions and that the new features are implemented correctly. We do this by testing. For tests, we mirror the application folder hierarchy. This makes it easy for us to understand the test coverage at a glance. When a file is missing a test file, it is easy to create one and start it off with a single simple test. If tests fail after the change, either the test or the implementation is corrected.
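
With the mirrored hierarchy, the test for the hypothetical logic file above would live at test/my_app/reservations/logic/parse_some_thing_test.exs and can start as a couple of simple ExUnit tests (a sketch under the same invented names):

    defmodule MyApp.Reservations.Logic.ParseSomeThingTest do
      use ExUnit.Case, async: true

      alias MyApp.Reservations.Logic.ParseSomeThing

      test "parses a numeric string" do
        assert {:ok, 42} = ParseSomeThing.call("42")
      end

      test "rejects a non-numeric string" do
        assert {:error, :invalid_input} = ParseSomeThing.call("not a number")
      end
    end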

Final Thoughts

Versatility: As pointed out by my colleague Matti, this architecture is not bound to Elixir. It can be used with most languages. The main point about where it fits is the high-velocity team setting.