
The pain of microservices can be avoided, but not with traditional databases

Much has been written about the problems of microservices. Many argue against them entirely. Yet an architecture that allows teams to develop, deploy, and scale independently is clearly desirable. Microservices often cause more problems than they solve, but the blame for that is misplaced. The complexity comes not from the idea of microservices but from how they’re implemented.

Here are problems microservices commonly cause. As I’ll show, every one is either completely avoidable or can have its severity greatly reduced:

  • Explosion of operational complexity from using so many systems
  • Eventual consistency where it’s not wanted
  • Data duplication across services
  • Debugging difficulty when requests span a tree of services
  • Development and testing pain: mocking services is brittle, and integration testing requires complex infrastructure setup
  • Latency increases from network requests replacing function calls
  • Fault-tolerance complexity from needing retries and backoffs at every service boundary
  • Some features require complex deployment coordination across many services
  • Data needed by one service is isolated inside another team’s database with no good way to access it

The damage these problems inflict on engineering velocity and software quality is profound. This state of affairs shouldn’t be tolerated, yet countless teams accept it because they think there’s no other choice.

AI coding agents don’t solve these problems, as this kind of complexity is exactly what kills the effectiveness of AI agents. Repeating the same architectural decisions or swapping in small variants on individual tools doesn’t fix the issues either. Fixing these issues requires a major rethink of software architecture.

Not confusing good ideas with bad implementations is a theme of this post. A lot of great ideas in software have baggage associated with them that has nothing to do with the ideas themselves. This baggage causes engineers to prematurely reject new solutions built on those ideas, even when those solutions don’t share the baggage. As you’ll soon see, some of the key ideas needed to solve the problems of microservices have been around for decades, but many engineers struggle to distinguish them from specific implementations.

Databases and microservice pain

There are clearly tons of problems with microservices implementations, and it’s easy to think these problems are unavoidable. Splitting an architecture into microservices means adding more pieces, and it’s that infrastructure sprawl that makes everything so painful: databases, caches, web servers, queues, stream processors, batch processors, load balancers, and on and on.

Infrastructure sprawl is a major contributor to most of the issues listed above:

  • Operational complexity: each system has its own deployment, monitoring, backup strategy, and failure modes
  • Debugging: a request might touch five systems that all log differently and have different tools for inspection
  • Development and testing: some systems have good local or embedded modes, others require Docker or external services, and the mix makes setup brittle
  • Latency: each hop between systems adds time, and a single request can accumulate dozens of hops across the service tree
  • Fault-tolerance: every boundary between systems needs retries, timeouts, and backpressure handling
  • Deployment coordination: changing behavior that spans systems requires careful rollout ordering and rollback plans

Backends largely revolve around how data is stored and accessed. But databases are limited by design. They handle one particular indexing/retrieval pattern, and they only handle storage. Though some business logic can be inserted into some databases with user-defined functions, most computation happens outside databases. Synchronous work is usually done by direct calls from an application server, while background work goes through a separate queue system and is handled by workers or a stream processor. Simply put, databases require infrastructure sprawl.

Here are more issues with databases:

  • Data isolation: There’s no first-class way to subscribe to changes as they happen. You can set up Change Data Capture using a tool like Debezium, but that adds to infrastructure sprawl. You get row-level mutations, not domain events, and consumers must reconstruct multi-table transactions themselves. And because CDC is asynchronous, it’s inherently eventually consistent.
  • Testing pain: Very few databases have a simple in-process mode for easy setup/teardown in a test. Postgres, MongoDB, Redis, ElasticSearch, and Cassandra need to run as separate processes, perhaps via Docker. Third-party libraries sometimes provide an embedded version, but they often have subtle differences.
  • Migrations pain: Postgres has arguably the best support for migrations, but even something as simple as changing a column type can take hours or days to backfill. MongoDB, Redis, and ElasticSearch have no first-class support at all. The state of migrations in databases means evolving an application is an expensive, complex engineering effort, especially if zero downtime is required.
  • Fixed data models: Oftentimes multiple databases are needed because no single database satisfies all the access patterns an application needs. Postgres + Redis + ElasticSearch is a common combination. Besides worsening infrastructure sprawl, this creates consistency challenges as there aren’t transactions across databases. This pushes the burden of distributed transactionality to the application layer.

Given all this, it should be clear there’s no way to address the problems of microservices without rethinking data storage. Reducing infrastructure sprawl requires fewer systems handling the combined functionality of storage, synchronous computation, background computation, queuing, and caching. Solving data isolation requires a source of truth that can be streamed and replayed, not just queried for current state. Fixing painful test setup requires tooling with a first-class in-process mode that behaves identically to production. Eliminating migration complexity requires tooling that makes migrations instant regardless of dataset size. And solving cross-database transaction issues requires storage flexible enough to support multiple data models with transactions across them.

Every one of these requirements can be met.

Logs, the misunderstood starting point

Let’s explore how data storage can be rethought to accomplish these goals. This requires rethinking some fundamentals, so it will take a few sections.

My proposition starts with this: instead of writing directly to databases, all events should be written to a log first. Then code reacts to appends to perform downstream effects like datastore reads/writes, API calls, and so on. This is by no means a new proposal – I’m just describing event sourcing.

As I said in the beginning of the post, it’s important not to confuse good ideas with bad implementations. And to say this topic has baggage is an understatement.

To be clear, using Kafka or any other standalone queue system creates new problems that in many cases are worse than the problems being solved:

  • Since processing is disconnected from appends, downstream effects are eventually consistent. Appenders don’t have direct feedback about what happens downstream.
  • A queue system is another thing that needs to be deployed, monitored, and scaled, adding to infrastructure sprawl.
  • Appending to a queue adds significant latency to the total time it takes to process an event.

Event sourcing with a traditional log system can still be worthwhile, but these are serious tradeoffs. Oftentimes they make event sourcing not worth the cost.

What I’m leading toward is a log-first system without these tradeoffs. The eventual consistency problem isn’t inherent to logs – it’s a consequence of the log being separate from processing. If logs and compute are integrated into the same tool, running in the same process with no network hop between them, appends can coordinate with downstream work. The append can return success only when downstream work completes, and it can return information about what happened. This is the same kind of feedback as a direct database write, and it’s what enables event sourcing to be used for interactive, responsive applications.

This eliminates having separate synchronous and asynchronous codepaths for incoming events. Traditionally, you have to either handle processing and database writes for an event immediately or put it on a queue for later. Handling it immediately has problems: each request makes individual calls to downstream services with no opportunity to batch work, traffic spikes can cause cascading failures since there’s no buffer, and a downstream service being down causes the request to fail.

So teams often move to asynchronous processing where the request handler publishes an event to a queue and returns. Background workers batch events, combining database writes and downstream calls. This is far more efficient, and traffic spikes are handled by the queue growing. A downstream service being down doesn’t prevent eventual processing. However, the caller often has no direct way to know when downstream work completes or if it succeeded. The tradeoffs for efficiency and reliability are eventual consistency and increased infrastructure sprawl.

With the integrated log-first approach, there are no tradeoffs. Processing and writes are always batched, and appenders choose whether to wait for downstream processing. If a downstream service is temporarily unavailable, the request is still guaranteed to eventually process. The appender may time out, but the work won’t be lost. You get the best of both worlds – the efficiency of batched processing and coordination with downstream processing – plus a unified way to build both interactive work and background processing.

The benefits of a log-first system directly address several microservices problems I listed earlier. Data isolation is solved: logs can be efficiently streamed by any consumer, including other microservices, without overwhelming the source system. Data duplication becomes unnecessary: since logs are easy and efficient to consume, other services don’t need their own copy. Debugging improves dramatically: logs are an audit trail of what’s happened between and within services, preserving history that would otherwise be lost to database overwrites. And since logs and compute are integrated, infrastructure sprawl is reduced.

This isn’t hypothetical. This is how Rama works. But the ideas matter independent of any particular tool, so in this post I’ll focus on the ideas and only return to Rama at the end to see what else is needed to make this practical.

Since this is a very different way to structure code, let’s ground it with a simple example.

Example of log-first system

Suppose you’re building user registration for a website. In a traditional stack, a request handler would validate the registration, write to a users table, and send a welcome email. In a log-first approach, a request handler instead appends a signup event to a log. Reactive code listening to that log validates the registration, and if valid, updates the users index and sends a welcome email. The result (success or failure) is returned to the request handler, which displays success or failure back to the user.

The signup event could be defined as a regular type in your language. For example, in Java:

public record UserSignUp(String email, String pwdHash, long timestampMillis) {}

There are many ways to design the API for an integrated log + compute tool. To keep the focus on the ideas, here’s pseudocode with a minimal API:

public class RegistrationModule {
  public static boolean handleRegistration(UserSignUp signup) {
    // registerUser and sendWelcomeEmail are not defined here and would
    // contain the business logic for updating the users index and sending
    // the email
    boolean alreadyRegistered = registerUser(signup.email(), signup.pwdHash());
    if(!alreadyRegistered) sendWelcomeEmail(signup.email());
    return !alreadyRegistered;
  }

  public void define(ModuleDefinition module) {
    Log registrations = module.defineLog("registrations");
    registrations.subscribe("registrationHandler",
                            RegistrationModule::handleRegistration);
  }
}

Here a “module” includes both log definitions and reactive code attached to those logs. The log isn’t a separate system you connect to. Logs are part of the module system’s API, defined and deployed alongside your processing code. You can define as many logs and handlers as you wish. A log called “registrations” is defined and then handleRegistration is specified to react to all appends to that log. The return value is returned to the appender.

The code isn’t different from a traditional approach where the logic is in a web server handler. It’s just moved to a reactive handler integrated with the log. The actual logic and incremental updates to datastores are the same.

Appending would look something like this pseudocode:

ClusterManager manager = ClusterManager.open();
LogClient log = manager.clusterLog("com.mycompany.RegistrationModule", "registrations");
Map res = log.append(new UserSignUp("foo@bar.com", "somehash", System.currentTimeMillis()));
boolean success = (boolean) res.get("registrationHandler");

The result is a map because multiple handlers can subscribe to the same log, each returning their own result. So the return is a map from handler name to whatever that handler returned.

If you didn’t care about downstream work and wanted your append to finish when the log append is complete, you could do this:

log.append(new UserSignUp("foo@bar.com", "somehash", System.currentTimeMillis()), AckLevel.APPEND_ACK);

This guarantees the event is appended and will be processed, but that processing happens in the background. There’s no difference between interactive code and background processing. The difference is whether the client cares about waiting.

Impact on microservices

This approach is similar to write-ahead logging in databases, except applied to the whole backend. Instead of the WAL being an internal implementation detail, it’s a first-class part of the system.

Each microservice using this approach becomes simpler internally. There’s no awkward choice of whether to write to the database first or append to a queue first, and what to do when one fails. There are no separate codepaths for sync vs async work. Less infrastructure means fewer failure boundaries, easier testing, easier debugging, and less glue code.

The boundaries between microservices change too, connecting not just with APIs but with logs. Logs contain high-level events like “Alice transfers $500 to Bob” that may have many downstream datastore writes and other effects. Any service can subscribe to another’s events without negotiating database access or setting up CDC pipelines. Each appender chooses whether to wait for processing or let it happen in the background, so you get consistency where you need it and eventual consistency where that’s acceptable.

This approach also enables replay and recomputation. New features can be backfilled from history, and bugs can be corrected by reprocessing from a point in the past.

Not every microservice needs to use this approach. Services built this way can still interact with traditional APIs and external systems. But services that do use it benefit from simpler internals and easier integration with each other. You can adopt it incrementally, starting with new services or where infrastructure complexity hurts most.

Integrating indexed storage

Integrating logs and compute as the foundation of backend services goes a long way, but this isn’t enough to address all the issues I listed. Those can be addressed by also integrating indexed storage into the same system.

But what kind of indexed storage? There are so many databases with different data models: relational, document, key-value, graph, time-series, etc. And I mentioned earlier that needing multiple databases is itself a source of pain, causing infrastructure sprawl and consistency issues since there are no transactions across them. I’ll address data models in the next section. For now, let’s look at what issues are addressed from integration alone.

To be clear, the requirements of integrated indexed storage are no different than databases. You still need strong ACID semantics, the ability to store and query terabytes of data, transactions, incremental replication, and migration support. All of these can be met.

You also need performance at least as good as direct database operations. A reasonable concern is that adding a durable log write before every storage update increases latency. But integration offsets that cost. When logs, compute, and storage are separate systems, every interaction crosses a process/network boundary. When integrated, those boundaries disappear. The log write, business logic, and storage update happen in the same process without serialization or network hops. In practice, this makes the combined operation as good or better than direct database writes.

Integrating indexed storage addresses several more microservices issues. Operational complexity drops by eliminating separately managed databases. Latency improves because there are no network hops between processing and storage. Fault-tolerance is simpler because the module executor handles retries and backpressure rather than you implementing that logic at every system boundary. Development and testing become easier with fewer systems to set up. And deployment coordination becomes easier because a module is a self-contained unit that includes its storage.

Let’s expand the registration module to show how integrated storage could work, continuing with more pseudocode:

public class RegistrationModule {
  public static boolean handleRegistration(ModuleExecutor m, UserSignUp signup) {
    boolean alreadyRegistered = checkExists(m.storage("users"), signup.email());
    if(!alreadyRegistered) {
      addUser(m.storage("users"), signup.email(), signup.pwdHash());
      incrementSignups(m.storage("signupsByDay"), signup.timestampMillis());
      sendWelcomeEmail(signup.email());
    }
    return !alreadyRegistered;
  }

  public void define(ModuleDefinition module) {
    Log registrations = module.defineLog("registrations");
    module.defineStorage("users");
    module.defineStorage("signupsByDay");
    registrations.subscribe("registrationHandler",
                            RegistrationModule::handleRegistration);
  }
}

This expands the registration module with a “users” store to track registered users and a “signupsByDay” store for light analytics. The details of defining data models are omitted until the next section, as are the read/write APIs. A new argument ModuleExecutor gives the handler access to module datastores.

What’s key is that writes to multiple stores happen atomically. In this design, the writes to “users” and “signupsByDay” become visible together without any explicit transaction. With separate databases, this is either impossible or requires significant effort.

Unifying multiple data models

The previous section hand-waved over how data models are defined and what their read/write APIs look like. One way to unify multiple data models would be to implement each separately within the integrated system, with distinct APIs for declaring, reading, and writing each. This would reduce infrastructure sprawl while preserving the specialized capabilities of relational, document, key-value, graph, and time-series models.

But implementing each data model separately means learning different APIs for each and being limited to the data models provided. What if you need a hybrid that’s part document, part graph? Or a structure optimized for an access pattern that doesn’t fit neatly into any of them? Ideally you’d have a smaller set of primitives that compose to form any data model, with a single API for reading and writing to all of them.

The key insight is the difference between data structures and data models. A data model is a high-level abstraction like “relational” or “document” that comes with its own query language and schema system. A data structure is a lower-level building block like a map, list, or set. Data models are just compositions of data structures with specialized query APIs on top.

Consider what a relational table actually is: a map from primary key to row, where a row is a map from field names to values. Secondary indexes are maps from column values to sets of primary keys. A document store is a map from ID to nested maps. A graph database is a map from node ID to node data, plus maps of lists or sets of edges. Once you see data models as compositions of data structures, you can build exactly what you need rather than choosing from a fixed menu.
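To make the correspondence concrete, here’s a tiny in-memory sketch in plain Java of a users table with a secondary index on city. This uses ordinary collections rather than a storage engine, and all names are invented for illustration – the point is only the shape of the data:

```java
import java.util.*;

// A relational "users" table modeled as plain data structures:
// a primary-key map plus a secondary index, which is what a
// relational engine maintains internally anyway.
public class TableAsDataStructures {
  // primary key (userId) -> row (field name -> value)
  static Map<String, Map<String, Object>> users = new HashMap<>();
  // secondary index: city -> set of primary keys
  static Map<String, Set<String>> byCity = new HashMap<>();

  static void insert(String userId, String name, String city) {
    Map<String, Object> row = new HashMap<>();
    row.put("name", name);
    row.put("city", city);
    users.put(userId, row);
    byCity.computeIfAbsent(city, k -> new HashSet<>()).add(userId);
  }

  public static void main(String[] args) {
    insert("u1", "Alice", "NYC");
    insert("u2", "Bob", "NYC");
    insert("u3", "Carol", "SF");
    // "SELECT * FROM users WHERE city = 'NYC'" is just an index lookup:
    System.out.println(byCity.get("NYC").size()); // prints 2
  }
}
```

A document store drops the secondary index and nests the row maps; a graph adds maps of edge sets. The compositions differ, but the building blocks don’t.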

Since this is a different way to think about data storage, let’s look at an example. Suppose you’re building an e-commerce system and need to store orders. An order has metadata, a list of line items each with their own fields, and a shipping address. You need to fetch whole orders, drill into specific fields, and update individual line items.

First, define types for incoming events:

public record LineItem(UUID productId, int quantity, int priceInCents) {}
public record Address(String street, String city, String state, String zip) {}
public record NewOrder(UUID orderId, UUID customerId, List<LineItem> items, Address shipping) {}
public record UpdateLineItemQuantity(UUID orderId, int itemIndex, int newQuantity) {}

Now define the module with a store composing maps and lists to match this structure directly. A real implementation would want something more expressive for how the stores are read and written, with a SQL-like API being one option. But data structure interfaces are a useful way to illustrate the idea since they’re universally familiar:

public class OrderModule {
  public static void handleNewOrder(ModuleExecutor m, NewOrder order) {
    Map orders = m.storage("orders");

    Map record = new HashMap<>();
    record.put("customerId", order.customerId());
    record.put("items", order.items());
    record.put("shipping", order.shipping());
    orders.put(order.orderId(), record);
  }

  public static void handleLineItemUpdate(ModuleExecutor m, UpdateLineItemQuantity update) {
    Map orders = m.storage("orders");
    Map order = (Map) orders.get(update.orderId());
    List items = (List) order.get("items");
    LineItem item = (LineItem) items.get(update.itemIndex());
    items.set(update.itemIndex(),
              new LineItem(item.productId(),
                           update.newQuantity(),
                           item.priceInCents()));
  }

  public void define(ModuleDefinition module) {
    Log newOrders = module.defineLog("newOrders");
    Log lineItemUpdates = module.defineLog("lineItemUpdates");

    module.defineStorage("orders",
      Schema.map(UUID.class,
        Schema.fixedKeys(
          "customerId", UUID.class,
          "shipping", Address.class,
          "items", Schema.list(LineItem.class)
        )));

    newOrders.subscribe("newOrderHandler",
                        OrderModule::handleNewOrder);
    lineItemUpdates.subscribe("lineItemUpdateHandler",
                              OrderModule::handleLineItemUpdate);
  }
}

The schema mirrors exactly how your application thinks about orders. Unlike in-memory collections, these operations go to disk.

Compare this to Postgres. With normalized tables, you’d have orders, line_items, and addresses with foreign keys. Fetching a complete order requires joining three tables and reassembling the object in application code – exactly the indirection ORMs exist to hide. Postgres does offer JSONB, letting you store the whole order as a document. But updates are coarse-grained: changing a single line item’s quantity rewrites the entire document, making frequent partial updates expensive.

With composable data structures, you get the nested document shape your application wants, fine-grained reads fetching only needed fields, fine-grained updates modifying only what changed, and no joins to reconstitute the full object.

This flexibility makes ORMs unnecessary. ORMs bridge the gap between how your application models data and how your database stores it. When you compose data structures into the exact shape needed, there’s no gap to bridge. This alone is a huge reduction in complexity.

You’re also not limited to types a database decides to support. Traditional databases give you strings, numbers, booleans, and maybe JSON or binary blobs. Here you can use any type directly, including ones you define. The NewOrder record, UUID values, and any other types your application uses can be stored as-is. Your storage layer speaks the same language as your application.

One conceptual shift worth noting is the role of normalization. In traditional databases, indexed storage is the source of truth, so normalization matters as redundant data can become inconsistent. But normalized data often isn’t efficient to query, so you denormalize for performance. Now your source of truth has redundancy, and your application keeps it consistent, a burden easy to get wrong. In this model, logs are the source of truth, not indexed stores. Logs are append-only and unindexed, so there’s no redundancy to worry about. The indexed stores are derived views, and you’re free to denormalize them however you want. Instead of carefully normalizing indexed stores to avoid inconsistency, you denormalize freely and rely on the log as the authoritative record.
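As a sketch of what that freedom looks like, continuing the made-up module API from the earlier examples (everything here is pseudocode, including the helper functions), one log of profile edits can feed two redundantly indexed stores. Because the log is the authoritative record, either view can be rebuilt from it at any time:

```java
// Pseudocode, same invented module API as earlier examples. The log is the
// source of truth; both stores are denormalized, derived views of it.
public class ProfileModule {
  public static void handleEdit(ModuleExecutor m, ProfileEdit edit) {
    // The same event updates two redundant views. If a view is ever wrong,
    // or a new one is needed, it can be recomputed by replaying the log.
    updateProfile(m.storage("profilesById"), edit);
    updateCityIndex(m.storage("profilesByCity"), edit);
  }

  public void define(ModuleDefinition module) {
    Log edits = module.defineLog("profileEdits");
    module.defineStorage("profilesById");
    module.defineStorage("profilesByCity");
    edits.subscribe("editHandler", ProfileModule::handleEdit);
  }
}
```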

Clients query these stores remotely. The specifics of the query API matter less than the fact that clients can navigate into the structure and fetch only the data they need. Here’s what this could look like using data structure interfaces:

ClusterManager manager = ClusterManager.open();
StoreClient store = manager.clusterStore("com.mycompany.OrderModule", "orders");

// get whole order object
Map order = store.get(orderId).query();

// get just the shipping address
Address shipping = store.get(orderId).get("shipping").query();

// get the third line item
LineItem item = store.get(orderId).get("items").get(2).query();

An API like this navigates into the structure using familiar Map and List operations, with only the requested data being transferred. As with the module-side API, in practice you’d want something more expressive, with SQL being one option.

How this addresses microservices issues

A developer tool that integrates logs, compute, and indexed storage is a paradigm shift, so it’s a lot to take in. Here’s a summary of how this avoids or significantly mitigates every microservices issue I raised:

  • Operational complexity: one system replaces queues, databases, caches, and compute infrastructure
  • Eventual consistency: integrating logs and compute enables coordination with downstream processing
  • Data duplication: logs are easy to consume, so services don’t need their own copies
  • Debugging difficulty: logs preserve history that database overwrites lose
  • Testing pain: greatly reduced infrastructure sprawl makes test setup easier
  • Latency: no network hops between log, compute, and storage
  • Fault-tolerance complexity: the module executor handles retries and backpressure across much more of the stack
  • Deployment coordination: fewer systems to coordinate since modules are self-contained
  • Data isolation: logs can be efficiently streamed by any consumer without overwhelming the source
  • Cross-datastore transactions: multiple data models share transactions within the integrated system
  • Migration pain: instant migrations eliminate deployment bottlenecks for most schema changes

The practical effect is that much of the work that traditionally consumes engineering time simply disappears:

  • Managing separate queues, caches, databases, compute systems, and CDC pipelines
  • Setting up monitoring and alerting for each component
  • Configuring backups for each component
  • Coding retry and backpressure logic at every boundary
  • Writing glue code between components, like adapters/ORMs, serialization, and routing
  • Debugging inconsistencies between datastores
  • Implementing complex deployments
  • Building zero-downtime schema migrations
  • Figuring out test setups for integration tests

These benefits apply wherever the integrated model is used, even if not everywhere. There are legitimate reasons some services might not use the same approach. Existing services might be stable and not worth rewriting, or an acquired company might come with its own tech stack. The complexity reduction still applies to services that do use this model, and all services can still communicate through logs and APIs. If 50% of your services use this approach and 50% don’t, you’ve still eliminated a lot of the complexity I’ve described. You don’t need organization-wide buy-in to see benefits.

Rama

I’ve deliberately kept the discussion tool-neutral, exploring these ideas from first principles. Whether or not you ever use Rama, I hope you can now see complexity in traditional approaches that was invisible before because it’s so normalized.

The core ideas – log-first architecture, integration with compute and indexed storage, and flexible data models – leave room for different implementations. To my knowledge, Rama is the only tool implementing all these ideas end-to-end. It’s not the only possible implementation, just the only one that exists. So I’ll briefly expand on how Rama specifically addresses the problems I raised.

Key implementation choices

I’ve talked a lot about operational complexity, and with traditional systems scale causes a great deal of infrastructure sprawl. So Rama is horizontally scalable, partitioning both storage and compute across threads and nodes. This greatly affects Rama’s API, which gives first-class control over where code runs.

A scalable system needs operational infrastructure to manage it, so Rama runs as a cluster anywhere from one node to thousands. It has CLI commands for deploying, updating, and scaling modules. And since node failures are a fact of life in distributed systems, Rama incrementally replicates all state across nodes with a configurable replication factor.

I also talked about the pain of testing systems that lack good in-process modes. Rama clusters can be simulated in-process with InProcessCluster, which behaves like a production cluster. This greatly eases writing tests since it eliminates test setup pain for much or all of a backend.
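Sketched in the pseudocode API from the registration example (the InProcessCluster shown here is modeled loosely on Rama’s real class of that name, but this is not its actual API), a test of the whole backend needs no external processes:

```java
// Pseudocode: the module, its log, and its storage all run in-process,
// so this test needs no Docker containers or external services.
try(InProcessCluster ipc = InProcessCluster.create()) {
  ipc.launchModule(new RegistrationModule());
  LogClient log = ipc.clusterLog("com.mycompany.RegistrationModule", "registrations");
  Map res = log.append(new UserSignUp("foo@bar.com", "somehash", System.currentTimeMillis()));
  // The handler's result comes back exactly as it would from a production cluster
  boolean result = (boolean) res.get("registrationHandler");
}
```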

Migrations are another pain point I raised. With traditional databases, even simple schema changes can take hours or days to backfill. Rama provides instant migrations for both logs and indexed storage. The migration function applies on read while data on disk is backfilled in the background. This eliminates deployment pain for most schema changes.
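Sketched in the same pseudocode API (the migrateStorage call and the joinedAt field are invented for illustration and are not Rama’s actual migration interface), a migration could be deployed as a function over stored values:

```java
// Pseudocode: the migration function applies on every read immediately,
// while data on disk is rewritten in the background. Readers never see
// the old format, regardless of dataset size.
module.migrateStorage("users", "timestampsToInstants",
  (Map row) -> {
    // "joinedAt" is a hypothetical field stored as epoch millis
    row.put("joinedAt", Instant.ofEpochMilli((long) row.get("joinedAt")));
    return row;
  });
```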

I described how composable data structures can replace multiple databases. In Rama, indexed stores are called PStates. Under the hood, they use LSM trees through RocksDB, similar to FoundationDB in that the underlying storage is sorted key-value with particular data models specified on top. LSM trees can match the data models and performance of many databases (e.g. relational, document, graph, wide-column), but they don’t replace every database. When a specialized database is needed, Rama has an integration API that makes it easy to use other databases from module code. The integration API also enables Rama modules to consume data from external queues like Kafka.

Finally, I made multiple references to a “module executor” handling retries and backpressure. In Rama, reactive handlers are fault-tolerant with guaranteed processing of every record. There are two processing modes: streaming with very low latency (1-2 millis) and configurable at-least-once or at-most-once guarantees, and microbatching with higher latency (at least 200 millis) but higher throughput and exactly-once semantics. With microbatching, if there’s a failure and it has to retry, the resulting updates to indexed storage will be as if there were no failures at all.

Example of microservices with Rama

Our Twitter-scale Mastodon implementation is an example of microservices with Rama. It splits the application into six modules.

Our implementation also demonstrates how much these ideas simplify development. It’s 40% less code than the official Mastodon backend, which uses Ruby on Rails, Postgres, and Redis, and scales far beyond it.

Conclusion

Microservices promised to solve the problems of monoliths but introduced their own problems. The debate over monoliths versus microservices misses the point. The real question is which complexities are unavoidable and which are artifacts of our tools. The goal should be avoiding complexity, not just managing it.

The canonical description of microservices, from Martin Fowler’s influential post, emphasizes each service choosing its own storage technologies and programming languages. This maximizes independence but also maximizes the surface area for complexity. The ideas in this post offer a different path, a middle ground where services share concepts to gain the benefits of microservices without exploding complexity.

Rama is one implementation of these ideas. It makes particular choices that won’t fit everyone: JVM-based, a learning curve that takes time to climb (this series of blog posts and the tutorial are the best places to start), and it’s not open source (though free to download and use up to two nodes, with licenses for larger clusters). Other implementations of these ideas could make different choices. We’re active on Discord, Clojurians, and the mailing list to help if you’d like to dig in.
