Introducing Rama’s Clojure API: build end-to-end scalable backends in 100x less code

Today we’ve released Rama’s Clojure API, including detailed reference documentation and API docs. Information about how to add it as a dependency in your projects is on this page.

Rama is a new programming platform for building high-performance scalable backends, integrating and generalizing data ingestion, processing, indexing, and querying. It’s so general-purpose that it can build entire backends with extremely diverse computation and storage needs on its own, without any of the impedance mismatches that have plagued backend development for decades. One way to think of Rama is as a “programmable datastore on steroids”, where you mold your datastore to fit your application rather than the other way around. It can build interactive, consumer-facing applications just as easily as it can build complex analytics applications.

In August we revealed Rama for the first time by building and operating a Twitter-scale Mastodon instance that’s 100x less code than Twitter wrote to build the equivalent. We ran the instance with 100M bots posting 3,500 times per second at 403 average fanout. In the blog post, we explored the implementation of this instance in depth and how it makes use of Rama’s unique capabilities. The code for that instance is open source.

This post is a self-contained introduction to Rama and its Clojure API. The Clojure API is actually the native API to Rama, and the Java API we released in August is a thin wrapper around it. I’ll start the exploration with a brief overview of the concepts of Rama, and then I’ll dive into the ins and outs of using the Clojure API.

Overview of Rama

Every backend must balance what information gets precomputed versus what gets computed on the fly at query time. As a developer, you must tackle four things when building a backend: how to receive new data, what indexes to create with what structures, how to process new data, and how to query your indexes to serve application requests.

A typical architecture might look like this:

  • An API server receives new data
  • The API server writes that data, potentially in aggregate form, into one or more databases (e.g. Postgres, ElasticSearch, Cassandra, etc.)
  • The API server handles application requests by querying one or more databases

Sometimes a queue system is inserted between the API server and the backend, with a separate system deployed (e.g. custom workers, Kafka Streams, Storm) to process that queue and update the databases. There are tons of variations of what combinations of tools are used to construct a backend, and there’s further variation in what tools are used to deploy and monitor these systems.

There are many performance, scalability, and fault-tolerance issues to tackle when building systems this way. But the most insidious problem is complexity. Just the sheer number of tools you have to operate creates huge integration and deployment complexity. Plus, each tool you use is narrow and only able to handle certain use cases. With databases, you frequently have to twist your domain model to fit the database’s data model, creating major impedance mismatches at the core of your backend.

Rama can build entire backends end-to-end, integrating and generalizing every aspect of what it takes to do so. Applications built with Rama are scalable, high-performance, and fault-tolerant. Rama has four concepts, corresponding to “data ingestion”, “data processing”, “indexing”, and “querying”: depots, ETLs, PStates, and query topologies.

Rama gives you total flexibility in designing what gets precomputed in your backend versus what gets computed at query time. At a high level, this programming model is event sourcing with materialized views. When programming Rama, you materialize as many views as needed in whatever shapes are most optimal to serve your application’s use cases.

Let’s start with how indexes work in Rama, as this is a key part of how Rama is so flexible and reduces complexity so much. Indexes are called “partitioned states”, which we usually refer to as “PStates”. Unlike a database, which has a strict “data model” (e.g. relational, key/value, document, column-oriented, graph, etc.), PStates are specified in terms of data structures. Each “data model” is really just a specific combination of data structures – key/value is a map, document is a map of maps, column-oriented is a map of sorted maps, and so on. PStates let you express all those data models using the simpler primitive of data structures, and they can express infinitely more “data models” by combining data structures in other ways.

PStates are durable, distributed, and replicated. That is, they’re not in-memory structures, and each partition can be much larger than the memory on a machine. Even nested data structures can be larger than memory, and you can still read and write to them extremely efficiently.

Data comes into Rama through “depots”. A “depot” is a durable, distributed, and replicated log of data. Depots are similar to Apache Kafka except integrated with the rest of Rama.

"ETL"s, extract-transform-load topologies, process incoming data from depots as it arrives to materialize PStates. Most of the time spent programming Rama is spent making ETLs. As you’ll see shortly, the ETL API is extremely expressive. It’s a Turing-complete dataflow-based API that seamlessly distributes computation.

The last concept in Rama is “query”. As will be no surprise to Clojure programmers hearing the description of PStates as arbitrary combinations of data structures, Specter’s paths are the core API for querying them. Rama has an internal fork of Specter that adds powerful reactive capabilities to paths. In addition, you can also make “query topologies”. These are predefined, on-demand, realtime, distributed queries that can query and aggregate data across any number of PStates and any number of the partitions of those PStates. They’re the analogue of “predefined queries” in traditional databases, except they’re programmed with the same dataflow API used for ETLs and are far more capable.

All these concepts are packaged together into a Rama application as a “module”. A module is an arbitrary collection of depots, ETLs, PStates, and query topologies. Modules are launched onto a Rama cluster, and they can later be updated with new code or scaled up/down.

For examples of different ways in which these concepts are combined towards extremely different use cases, you can read about our Mastodon implementation or check out the self-contained, thoroughly commented examples in the rama-demo-gallery project.

Basic example

Let’s now dive into the Clojure API. You can follow along at the REPL by cloning the rama-clojure-starter project and running lein repl.

We’ll start with a basic word count application, and then we’ll build an auction application with timed listings, bids, and notifications of winners and losers. As we go, I’ll explain the various pieces of the Clojure API.

First, let’s require the namespaces needed:

(use 'com.rpl.rama)
(use 'com.rpl.rama.path)
(require '[com.rpl.rama.ops :as ops])
(require '[com.rpl.rama.aggs :as aggs])
(require '[com.rpl.rama.test :as rtest])
(require '[clojure.string :as str])

com.rpl.rama.path is Rama’s internal fork of Specter, and for the most part it’s API-equivalent to the open-source version.

Next, let’s define the word count module:

(defmodule WordCountModule [setup topologies]
  (declare-depot setup *sentences-depot :random)
  (let [s (stream-topology topologies "word-count")]
    (declare-pstate s $$word-counts {String Long})
    (<<sources s
      (source> *sentences-depot :> *sentence)
      (str/split (str/lower-case *sentence) #" " :> *words)
      (ops/explode *words :> *word)
      (|hash *word)
      (+compound $$word-counts {*word (aggs/+count)})
      )))

This is concise, but there’s a lot to unpack here. defmodule defines a module as a regular Clojure function that takes in parameters setup and topologies. setup is used to declare depots and dependencies on depots, PStates, and query topologies in other modules, while topologies is used to declare ETL and query topologies.

This module defines one depot called *sentences-depot. Symbols beginning with * are variables in Rama dataflow code, and the declare-depot macro lets you declare a depot’s name as a symbol – just as it will be referred to later in dataflow code.

The last argument to declare-depot specifies the depot’s partitioning scheme. In this case :random is used, which causes an appended sentence to go to a random depot partition. In cases where you want events for the same entity to go to the same depot partition, so that they’re processed in the order in which they were created, you would use the hash-by partitioner. The declare-depot documentation goes over all the ways you can define depot partitioners.
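For example, a depot for per-user events might be declared like this (a sketch; the depot name *user-events-depot and the :user-id field are hypothetical):

;; events for the same user always land on the same depot partition,
;; so they're processed in the order they were appended relative to each other
(declare-depot setup *user-events-depot (hash-by :user-id))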

Next, the module declares a stream topology with the name "word-count". There are two types of ETL topologies in Rama, streaming and microbatching, with different performance characteristics between them. In this case, using a stream topology means the PState will be fully updated within 1-2 milliseconds from appending a sentence. See the documentation on streaming and microbatching for more details.

Next, the PState $$word-counts is declared with the schema {String Long}. This means each partition of the PState stores a map with String keys and Long values. Each PState can have a completely different schema, and if you declare a subschema with “subindexing”, that nested data structure can efficiently contain more elements than fit into memory. Here are some more examples of schemas:

  • {String {clojure.lang.Keyword Long}}
  • {String {String #{Integer}}}
  • {Long (set-schema String {:subindex? true})}
  • Long
  • {Long (map-schema Long (set-schema Long {:subindex? true}) {:subindex? true})}

As you can see, you can have subindexed structures within subindexed structures, and the top-level schema doesn’t have to be a map. The Long schema specifies that each partition of the PState is a simple Long value. A PState like that is useful for ID generation, for example.
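For instance, here’s a sketch of declaring PStates with a few of these schemas inside a topology (the PState names here are hypothetical):

(declare-pstate s $$follower-counts {String Long})
(declare-pstate s $$followers {String (set-schema String {:subindex? true})})
(declare-pstate s $$next-id Long) ; a per-partition Long, e.g. for ID generation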

Critically, all PStates are durable on disk and replicated incrementally. This makes them suitable for any use case for which databases are currently used.

The last part of the module defines the ETL logic to process depot records and perform PState updates. The <<sources macro defines the ETL logic using Rama’s dataflow API. The dataflow API is different from regular Clojure programming. Whereas a Clojure function is based on “call and response” – you call a function and get a single result back – dataflow is “call and emit”. That is, you call an operation and it emits values to downstream code. Operations can emit one time, many times, or even zero times. They can also emit multiple fields per emit or emit to independent output streams. Dataflow operations also don’t have to emit synchronously – they can emit asynchronously on a completely different partition on a different machine.

For now, let’s focus on this particular example. We’ll look more at the dataflow API and the new programming paradigm it expresses in the next section.

Within a <<sources block, each call to source> subscribes the topology to a depot. Here, the topology subscribes to *sentences-depot and binds new sentences to the variable *sentence. Within dataflow code, symbols beginning with *, %, and $$ are interpreted as variables, while other symbols resolve as normal Clojure values. * variables are values, % variables are anonymous operations, and $$ variables are PStates.

Processing of sentences happens in parallel across all partitions of subscribed depots. The next line, (str/split (str/lower-case *sentence) #" " :> *words), executes regular Clojure functions to compute the list of words in each sentence by using a regex to split on whitespace. The :> keyword distinguishes the input from the output, and in this case the output is bound to the variable *words. As you can see here, you can nest expressions just like you can with regular Clojure. This code is equivalent to:

(str/lower-case *sentence :> *lowercase)
(str/split *lowercase #" " :> *words)

The next line, (ops/explode *words :> *word), calls the built-in explode operation. explode emits each element of the provided list individually. So if *words contained ["hello" "world"], the explode call would emit two times. The subsequent code runs for each emit.
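Here’s a minimal REPL sketch of explode on its own, using the ?<- macro covered in the next section:

(?<-
  (ops/explode ["hello" "world"] :> *word)
  (println *word))

This prints hello and then world, one line per emit.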

The (|hash *word) call is a partitioner. Partitioners work just like any other operation by receiving input and emitting. The difference is they may emit on a completely different thread on a completely different machine. This call says to move the computation according to the hash of the value in *word . This causes the same word to always get processed by the same partition of the module while evenly distributing different words across all partitions.

Partitioners are a great example of how seamless it is to write distributed code with Rama. Because they’re based on the same “call and emit” paradigm as all other operations, code that’s moving around the cluster like this can be read linearly. And because they’re no different from other operations, they compose with other dataflow code just like any operation. As you’ll see in the next section, you can also express conditionals and loops with the dataflow API. So you can trivially do things like a looping computation with partitioners in the body that hops around the cluster with each iteration of the loop.
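As a sketch of that last point (loop<-, <<if, and continue> are covered in the next section; the |shuffle partitioner is an assumption here, and this would live inside a topology since partitioners aren’t available at the plain REPL):

;; each iteration of this loop hops to a random partition before emitting
(loop<- [*i 0 :> *v]
  (|shuffle)
  (<<if (< *i 3)
    (:> *i)
    (continue> (inc *i))))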

The last line of the ETL updates each word’s count in the $$word-counts PState. This write is expressed with “compound aggregation”, which specifies the write in the shape of the data structure being written to. In this case it aggregates a map with the key *word and the value updated with the aggs/+count aggregator. Aggregators automatically take care of initializing non-existent values. So the first time the word “hello” is written to the PState, it knows to start the aggregation at 0 instead of nil. The naming convention for aggregators is to prefix them with +. It’s not necessary, but we find this helps with readability.
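Compound aggregation extends naturally to nested structures. Here’s a hedged sketch (the PState and variables are hypothetical; aggs/+sum is another built-in aggregator):

;; aggregate the total amount per user per day into a map of maps
(+compound $$daily-totals {*user-id {*day (aggs/+sum *amount)}})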

Reads and writes to PStates in an ETL operate on the partition of the PState that is colocated with the ETL event. That is, the PStates don’t exist on separate processes or nodes. This is what we mean when we say Rama colocates computation and storage.

Running the word count module

Rama has a facility called InProcessCluster (“IPC”) that simulates a Rama cluster in process. It works just like a real cluster and is an ideal environment for experimentation and unit testing. Let’s run WordCountModule in this environment.

First, let’s create the cluster:

(def ipc (rtest/create-ipc))

Next, let’s launch the module on it. “Tasks” are Rama’s name for a module’s partitions; the name reflects the fact that each partition performs both computation and storage. Here we run four tasks across two threads:

(rtest/launch-module! ipc WordCountModule {:tasks 4 :threads 2})

Next, let’s fetch clients to the depot and PState of the module:

(def sentences-depot (foreign-depot ipc (get-module-name WordCountModule) "*sentences-depot"))
(def word-counts (foreign-pstate ipc (get-module-name WordCountModule) "$$word-counts"))

The term “foreign” refers to Rama objects that live outside of modules. Now, let’s append some data to the depot:

(foreign-append! sentences-depot "Hello world")
(foreign-append! sentences-depot "hello hello goodbye")
(foreign-append! sentences-depot "Alice says hello")

By default, depot appends block until all colocated stream topologies have finished processing the record. So at this point, we know the $$word-counts PState has been updated. Let’s check the word counts for “hello” and “goodbye”:

(foreign-select-one (keypath "hello") word-counts) ; => 4
(foreign-select-one (keypath "goodbye") word-counts) ; => 1

Queries on PState clients use paths to express the query. These examples are extremely simple since we’re just fetching the values in a map, but you’ll see more complicated queries in the auctions example later in this post.

Lastly, we can shut down the InProcessCluster like this:

(close! ipc)

That’s all there is to it. The way depot and PState clients are fetched and used with IPC is exactly how you would interact with a real cluster.

While these examples used Rama’s blocking API, it’s also important to note there are non-blocking variants of all foreign methods that return CompletableFuture objects. These include foreign-append-async!, foreign-select-one-async, and others.
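For example, here’s a sketch using the async select variant, assuming it takes the same arguments as its blocking counterpart. Since CompletableFuture implements java.util.concurrent.Future, Clojure’s deref works on the result:

;; kick off a non-blocking query, blocking only at the deref
(def hello-count-future
  (foreign-select-one-async (keypath "hello") word-counts))
(println @hello-count-future) ; => 4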

Exploring the dataflow API

Before building the auction application, let’s briefly explore Rama’s dataflow API. As described, operations in this API are based on “call and emit” rather than the “call and response” you’re used to from Clojure and most other languages.

You can explore the dataflow API from the REPL outside the context of modules. The only parts of Rama not available in this context are partitioners, since they don’t make sense in a single-threaded REPL. Let’s start by printing “Hello, world!”:

(use 'com.rpl.rama)

(?<-
  (println "Hello, world!"))

This prints:

Hello, world!

The ?<- macro compiles and executes a block of dataflow code. So far, this is identical to how you would write it in regular Clojure.

Let’s define a custom operation that emits multiple times:

(deframaop foo [*a]
  (:> (inc *a))
  (:> (dec *a)))

(?<-
  (foo 5 :> *v)
  (println *v))

This prints:

6
4

deframaop defines a Rama operation, and when :> is used as an operation it emits to the :> output of the caller. This is also referred to as “invoking the continuation”. So when foo is called, the subsequent code is run for each emit.

Here’s an example of using an operation to filter data:

(deframaop my-filter [*v]
  (<<if *v
    (:>)))

(?<-
  (ops/range> 0 5 :> *v)
  (my-filter (even? *v))
  (println *v))

range> is like Clojure’s range except it emits each value individually rather than returning a sequence. This prints:

0
2
4

my-filter uses a conditional to emit only when the value is true and is equivalent to the built-in operation filter>. Here’s a minimal sketch of the same pipeline using filter> directly:
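(?<-
  (ops/range> 0 5 :> *v)
  (filter> (even? *v))
  (println *v))

<<if is the most common way to write conditional dataflow logic. Here’s another example of using <<if: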

(?<-
  (<<if (= 1 2)
    (println "true branch 1")
    (println "true branch 2")
   (else>)
    (println "else branch 1")
    (println "else branch 2")))

This prints:

else branch 1
else branch 2

<<if is built upon the more primitive if>, which is a Rama operation that emits to the :then> and :else> output streams. Using that primitive, the previous code can be expressed like this:

(?<-
  (if> (= 1 2) :then> <then> :else>)
  (println "else branch 1")
  (println "else branch 2")
  (hook> <then>)
  (println "true branch 1")
  (println "true branch 2"))

This example is demonstrating a few new concepts. First, operations can emit to other output streams besides :> , in this case emitting to :then> and :else> . Second, dataflow code can branch, and you can explicitly manipulate the graph of computation. Symbols surrounded with < and > are called “anchors”, and they label a point in dataflow code. By default, dataflow code attaches to the previous code, but if you use hook> then you can change where the subsequent code attaches.

An interesting thing about if> is that it’s not a special form, unlike if in Clojure and most other programming languages. You can actually pass it around like so:

(deframaop bar [%f *v]
  (%f *v :then>)
  (:>))

(?<-
  (bar if> true)
  (println "A")
  (bar if> false)
  (println "B"))

This prints:

A

“B” does not print since %f emits to the :else> branch in that case, which has no code attached to it. Variables beginning with % are anonymous operations that can be invoked.
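Anonymous fragments can also be defined inline. This sketch assumes the <<ramafn macro for defining an anonymous ramafn within a block of dataflow code:

(?<-
  (<<ramafn %double [*x] (:> (* 2 *x)))
  (%double 5 :> *v)
  (println *v))

This would print 10.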

The general term for an operation in Rama is “fragment”. A fragment can be either a ramaop or a ramafn. A ramafn is an operation that emits exactly one time to :>, and that emit is the last thing it does (like a Clojure function). You can define a ramafn with Rama dataflow like so:

(deframafn myfn [*a *b]
  (:> (+ *a *b 10)))

(myfn 1 2)

This returns:

13

As you can see, a ramafn can be invoked from regular Clojure code as well as from Rama code. If a ramafn definition doesn’t emit or emits multiple times, you’ll get a runtime error. A ramafn executes more efficiently than a ramaop when the Rama compiler knows a callsite is invoking a ramafn rather than a ramaop.

Lastly, let’s take a look at a dataflow loop:

(?<-
  (loop<- [*v 0 :> *i]
    (println "Loop iter")
    (<<if (< *v 5)
      (:> *v)
      (continue> (inc *v))))
  (println "Emitted:" *i))

This prints:

Loop iter
Emitted: 0
Loop iter
Emitted: 1
Loop iter
Emitted: 2
Loop iter
Emitted: 3
Loop iter
Emitted: 4
Loop iter

Dataflow loops are similar to Clojure loops, but they can emit multiple times. The order of the prints also indicates how execution works: when emitting from the loop, the continuation of the loop is invoked immediately. The continue> call doesn’t happen until the continuation finishes executing.

This is only a taste of Rama dataflow, and there’s a lot more to explore. Fragments, as a generalization of functions, are a very potent concept, and much of Rama’s implementation is written in this language. Fragments and dataflow are excellent abstractions for writing parallel, asynchronous, and reactive code, which is why they’re the basis of ETLs and query topologies. It’s also worth noting that Rama dataflow compiles to very efficient bytecode. For more information about the dataflow API, check out this page from the documentation.

Building an auction application

Let’s take a look at a slightly larger example that showcases more of what Rama can do. We’ll build an auction application with timed listings, bids, and notifications of winners and losers. This application will utilize multiple PStates and demonstrate the advantages of colocating computation and storage.

First, let’s do the necessary requires:

(use 'com.rpl.rama)
(use 'com.rpl.rama.path)
(require '[com.rpl.rama.ops :as ops])
(require '[com.rpl.rama.aggs :as aggs])
(require '[com.rpl.rama.test :as rtest])
(import 'com.rpl.rama.helpers.ModuleUniqueIdPState)
(import 'com.rpl.rama.helpers.TopologyScheduler)

This example will utilize two small utilities from the open-source rama-helpers project, ModuleUniqueIdPState and TopologyScheduler. Even though those are written with the Java API, they can be used seamlessly from the Clojure API.

Let’s build up the module step by step, starting with making and viewing listings. Then we’ll add bids and notifications afterward.

Here’s the data type we’ll use to represent a listing:

(defrecord Listing [user-id post expiration-time-millis])

Next, let’s define the depot to receive new listings:

(defmodule AuctionModule [setup topologies]
  (declare-depot setup *listing-depot (hash-by :user-id))

The depot partitioner (hash-by :user-id) controls on which partition processing begins in the ETL. Since the PState for listings will be partitioned by user IDs, setting the depot partitioner this way means no additional partitioning is needed in the ETL logic. This is simpler and more efficient than using a :random depot partitioner like was done in the word count example.

Next, let’s define the topology and needed PStates:

(let [s (stream-topology topologies "auction")
      idgen (ModuleUniqueIdPState. "$$id")]
  (declare-pstate s $$user-listings {Long ; user-id
                                      (map-schema Long ; listing-id
                                                  String ; post
                                                  {:subindex? true})})
  (.declarePState idgen s)

The $$user-listings PState stores every listing made by a user in a submap. The submap is marked as subindexed, which tells Rama to index its elements individually. This allows the submap to be read and written efficiently even if it grows to a huge size (e.g. larger than memory). Since a user can have an arbitrary number of listings, subindexing that map is appropriate.
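As a hedged sketch of what that enables, a client could page through a user’s listings like this (assuming a sorted-map-range-from path navigator analogous to the sorted-set-range-from call used later in this post; user-id is a hypothetical local):

;; read up to 100 of a user's listings, starting from listing ID 0
(foreign-select [(keypath user-id) (sorted-map-range-from 0 {:max-amt 100})]
                user-listings)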

ModuleUniqueIdPState is a small utility from rama-helpers for generating unique 64-bit IDs. It works by declaring a PState tracking a counter on each task and combining that counter with the task ID when generating an ID. The .declarePState call declares the PState it uses.

Lastly, let’s define the ETL that maintains the $$user-listings PState:

(<<sources s
  (source> *listing-depot :> {:keys [*user-id *post] :as *listing})
  (java-macro! (.genId idgen "*listing-id"))
  (local-transform> [(keypath *user-id *listing-id) (termval *post)]
                    $$user-listings)
  )))

java-macro! allows dataflow code generated by the Java API to be used directly in the Clojure API. In this case .genId binds a new variable *listing-id with a newly generated ID. Then, the topology simply writes the listing into the PState under the correct keys.

The complete module definition looks like this:

(defrecord Listing [user-id post expiration-time-millis])

(defmodule AuctionModule [setup topologies]
  (declare-depot setup *listing-depot (hash-by :user-id))

  (let [s (stream-topology topologies "auction")
        idgen (ModuleUniqueIdPState. "$$id")]
    (declare-pstate s $$user-listings {Long ; user ID
                                        (map-schema Long ; listing ID
                                                    String ; post
                                                    {:subindex? true})})
    (.declarePState idgen s)

    (<<sources s
      (source> *listing-depot :> {:keys [*user-id *post] :as *listing})
      (java-macro! (.genId idgen "*listing-id"))
      (local-transform> [(keypath *user-id *listing-id) (termval *post)]
                        $$user-listings)
      )))

Let’s run a quick test to verify it works:

(with-open [ipc (rtest/create-ipc)]
  (rtest/launch-module! ipc AuctionModule {:tasks 4 :threads 2})
  (let [module-name (get-module-name AuctionModule)
        listing-depot (foreign-depot ipc module-name "*listing-depot")
        user-listings (foreign-pstate ipc module-name "$$user-listings")]
    (foreign-append! listing-depot (->Listing 1 "Listing 1" 0))
    (println "Listings:" (foreign-select [(keypath 1) ALL] user-listings))
    ))

All this test code does is add one listing and then print what was added to the $$user-listings PState. This prints:

Listings: [[0 Listing 1]]

Adding bids

Now, let’s add bids to the module. In the next section we’ll finish the module by adding expirations and notifications. Adding bids will require two new records:

(defrecord Bid [bidder-id user-id listing-id amount])
(defrecord ListingPointer [user-id listing-id])

The first record represents a new bid on a listing, and the second record will be used in one of the new PStates.

Next, let’s define the depots for the module:

(defmodule AuctionModule [setup topologies]
  (declare-depot setup *listing-depot (hash-by :user-id))
  (declare-depot setup *bid-depot (hash-by :user-id))

*listing-depot is the same as before, and *bid-depot will receive Bid objects.

Next, let’s define the topology and its PStates:

(let [s (stream-topology topologies "auction")
      idgen (ModuleUniqueIdPState. "$$id")]
  (declare-pstate s $$user-listings {Long ; user ID
                                      (map-schema Long ; listing ID
                                                  String ; post
                                                  {:subindex? true})})
  (declare-pstate s $$listing-bidders {Long ; listing ID
                                        (set-schema Long ; user ID
                                                    {:subindex? true})})
  (declare-pstate s $$listing-top-bid {Long ; listing ID
                                        (fixed-keys-schema {:user-id Long
                                                            :amount Long})})
  (declare-pstate s $$user-bids {Long (map-schema ListingPointer
                                                  Long ; amount
                                                  {:subindex? true})})
  (.declarePState idgen s)

There are three new PStates here. $$listing-bidders tracks everyone who has bid on a listing. Besides being a useful view on its own, it will also be used later for delivering notifications. It’s a map from listing ID to a set of user IDs. $$listing-top-bid tracks who is currently the top bidder for each listing. It’s a map from listing ID to information about the top bidder.

The final PState $$user-bids tracks each bid made by a user. To understand the need for the ListingPointer type, let’s explore how these PStates are partitioned in this implementation.

Listings have their own ID, but their PStates will be partitioned by the ID of the user who made the listing. This keeps the bidder and “top bid” information for a listing colocated with its information in the $$user-listings PState. You don’t have to design the PStates this way, but keeping all information for the same entity on the same partition is generally a good idea since it speeds up queries that want to look at multiple PStates at the same time. So in this design, to look up information about a listing you need both the listing ID and the owning user ID. This is why the $$user-bids PState tracks a ListingPointer rather than just a listing ID.
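As a sketch of how a client would use such a pointer (the :pkey option is explained at the end of this section; pointer and listing-top-bid are hypothetical locals):

;; fetch the current top bid for a listing found in $$user-bids,
;; using the owning user's ID as the partitioning key
(let [{:keys [user-id listing-id]} pointer]
  (foreign-select-one (keypath listing-id) listing-top-bid {:pkey user-id}))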

Next, let’s begin defining the ETL logic:

(<<sources s
  (source> *listing-depot :> {:keys [*user-id *post] :as *listing})
  (java-macro! (.genId idgen "*listing-id"))
  (local-transform> [(keypath *user-id *listing-id) (termval *post)]
                    $$user-listings)

Adding bids doesn’t change anything about processing listings, so this part is exactly the same as the previous section. Next, let’s add the logic to process bids:

(source> *bid-depot :> {:keys [*bidder-id *user-id *listing-id *amount]})
(local-transform> [(keypath *listing-id) NONE-ELEM (termval *bidder-id)]
                  $$listing-bidders)
(local-transform> [(keypath *listing-id)
                   (selected? :amount (nil->val 0) (pred< *amount))
                   (termval {:user-id *bidder-id :amount *amount})]
                  $$listing-top-bid)
(|hash *bidder-id)
(->ListingPointer *user-id *listing-id :> *pointer)
(local-transform> [(keypath *bidder-id *pointer) (termval *amount)] $$user-bids)
)))

This ETL code updates all the bid-related PStates. First, it adds the bidder’s user ID to the $$listing-bidders PState. Then, it updates the $$listing-top-bid PState by checking whether the new bid is greater than the previous top bid. This logic is expressed as part of the path to update the PState.
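In plain Clojure terms, the top-bid transform behaves roughly like this sketch (apply-bid is hypothetical and ignores partitioning and durability):

(defn apply-bid [top-bids listing-id bidder-id amount]
  ;; replace the entry only when the new bid beats the current amount,
  ;; treating a missing entry as 0 (the nil->val 0 in the path)
  (let [current (get-in top-bids [listing-id :amount] 0)]
    (if (< current amount)
      (assoc top-bids listing-id {:user-id bidder-id :amount amount})
      top-bids)))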

A critical property of Rama used here is that only one event runs on a module task at a time. So while this logic is executing, nothing else is running on this task: no other bid events, no foreign PState queries, nothing. So it’s impossible for multiple bids to update the $$listing-top-bid PState at the same time. The colocation of computation and storage in Rama gives you the atomicity and transactional properties needed for this use case.

The last piece of this ETL records the user’s bid in the $$user-bids PState. This PState is partitioned by the bidder’s user ID, so a |hash partition is done first to relocate the computation to the correct task.

Here’s the complete module definition:

(defrecord Listing [user-id post expiration-time-millis])
(defrecord Bid [bidder-id user-id listing-id amount])
(defrecord ListingPointer [user-id listing-id])

(defmodule AuctionModule [setup topologies]
  (declare-depot setup *listing-depot (hash-by :user-id))
  (declare-depot setup *bid-depot (hash-by :user-id))

  (let [s (stream-topology topologies "auction")
        idgen (ModuleUniqueIdPState. "$$id")]
    (declare-pstate s $$user-listings {Long ; user ID
                                        (map-schema Long ; listing ID
                                                    String ; post
                                                    {:subindex? true})})
    (declare-pstate s $$listing-bidders {Long ; listing ID
                                          (set-schema Long ; user ID
                                                      {:subindex? true})})
    (declare-pstate s $$listing-top-bid {Long ; listing ID
                                          (fixed-keys-schema {:user-id Long
                                                              :amount Long})})
    (declare-pstate s $$user-bids {Long (map-schema ListingPointer
                                                    Long ; amount
                                                    {:subindex? true})})
    (.declarePState idgen s)

    (<<sources s
      (source> *listing-depot :> {:keys [*user-id *post] :as *listing})
      (java-macro! (.genId idgen "*listing-id"))
      (local-transform> [(keypath *user-id *listing-id) (termval *post)]
                        $$user-listings)

      (source> *bid-depot :> {:keys [*bidder-id *user-id *listing-id *amount]})
      (local-transform> [(keypath *listing-id) NONE-ELEM (termval *bidder-id)]
                        $$listing-bidders)
      (local-transform> [(keypath *listing-id)
                         (selected? :amount (nil->val 0) (pred< *amount))
                         (termval {:user-id *bidder-id :amount *amount})]
                        $$listing-top-bid)
      (|hash *bidder-id)
      (->ListingPointer *user-id *listing-id :> *pointer)
      (local-transform> [(keypath *bidder-id *pointer) (termval *amount)] $$user-bids)
      )))

Let’s run another quick test to verify this works:

(with-open [ipc (rtest/create-ipc)]
  (rtest/launch-module! ipc AuctionModule {:tasks 4 :threads 2})
  (let [module-name (get-module-name AuctionModule)
        listing-depot (foreign-depot ipc module-name "*listing-depot")
        bid-depot (foreign-depot ipc module-name "*bid-depot")
        user-listings (foreign-pstate ipc module-name "$$user-listings")
        listing-bidders (foreign-pstate ipc module-name "$$listing-bidders")
        listing-top-bid (foreign-pstate ipc module-name "$$listing-top-bid")
        user-bids (foreign-pstate ipc module-name "$$user-bids")

        larry-id 0
        hank-id 1
        artie-id 2
        beverly-id 3

        _ (foreign-append! listing-depot (->Listing larry-id "Listing 1" 0))
        larry1 (foreign-select-one [(keypath larry-id) LAST FIRST] user-listings)]
    (foreign-append! bid-depot (->Bid hank-id larry-id larry1 45))
    (foreign-append! bid-depot (->Bid artie-id larry-id larry1 50))
    (foreign-append! bid-depot (->Bid beverly-id larry-id larry1 48))

    (println "Listing bidders:" (foreign-select [(keypath larry1) ALL]
                                                listing-bidders
                                                {:pkey larry-id}))
    (println "Top bid:" (foreign-select-one (keypath larry1)
                                            listing-top-bid
                                            {:pkey larry-id}))
    (println "Hank's bids:" (foreign-select [(keypath hank-id) ALL] user-bids))
    ))

Running this prints:

Listing bidders: [1 2 3]
Top bid: {:user-id 2, :amount 50}
Hank's bids: [[#user.ListingPointer{:user-id 0, :listing-id 0} 45]]

Something new in this test code is the use of the :pkey option in the foreign select calls. A foreign PState query must determine which partition of the PState to query. It does this with a “partitioning key”. Without the :pkey option, it extracts the partitioning key from the first keypath in the query path. This is convenient since it’s very common for the partitioning key to be the same as the top-level key in the index. That’s not the case for these listing PStates, however. The :pkey option allows you to specify the partitioning key explicitly, which in this case is the user ID who owns the listing.

Adding listing expirations and notifications

Let’s finish the application by adding listing expirations and notifications. Let’s start by defining some helper functions needed by the implementation:

(defn owner-notification [listing-id winner-id amount]
  (if winner-id
    (str "Auction for listing " listing-id
          " finished with winner " winner-id
          " for the amount " amount)
    (str "Auction for listing " listing-id " finished with no winner")
    ))

(defn winner-notification [user-id listing-id amount]
  (str "You won the auction for listing " user-id "/" listing-id " for the amount " amount))

(defn loser-notification [user-id listing-id]
  (str "You lost the auction for listing " user-id "/" listing-id))

(defn sorted-set-last [^java.util.SortedSet set]
  (.last set))

We’ll also need one new record definition:

(defrecord ListingWithId [id listing])

A separate topology will handle notifications, and it will need this record.

Next, let’s once again define the depots for the module:

(defmodule AuctionModule [setup topologies]
  (declare-depot setup *listing-depot (hash-by :user-id))
  (declare-depot setup *bid-depot (hash-by :user-id))
  (declare-depot setup *listing-with-id-depot :disallow)

There’s one new depot here called *listing-with-id-depot. When a Listing is assigned an ID, a ListingWithId object will be added to this depot. This allows the separate notifications topology to consume that data. You can efficiently have as many consumers of a depot as you want, whether in the same module or across multiple modules, so such a depot is useful for other use cases as well. For example, you could implement full-text search on listings by consuming *listing-with-id-depot.

The :disallow depot partitioner prevents records from being appended to this depot with foreign-append!. This depot will instead be appended to by the module as it generates listing IDs.

Next, let’s define the stream topology and its PStates:

(let [s (stream-topology topologies "auction")
      idgen (ModuleUniqueIdPState. "$$id")]
  (declare-pstate s $$user-listings {Long ; user ID
                                      (map-schema Long ; listing ID
                                                  String ; post
                                                  {:subindex? true})})
  (declare-pstate s $$listing-bidders {Long ; listing ID
                                        (set-schema Long ; user ID
                                                    {:subindex? true})})
  (declare-pstate s $$listing-top-bid {Long ; listing ID
                                        (fixed-keys-schema {:user-id Long
                                                            :amount Long})})
  (declare-pstate s $$user-bids {Long (map-schema ListingPointer
                                                  Long ; amount
                                                  {:subindex? true})})
  (.declarePState idgen s)

This is exactly the same as before. Next, let’s begin defining the ETL:

(<<sources s
  (source> *listing-depot :> {:keys [*user-id *post] :as *listing})
  (java-macro! (.genId idgen "*listing-id"))
  (local-transform> [(keypath *user-id *listing-id) (termval *post)]
                    $$user-listings)
  (depot-partition-append! *listing-with-id-depot
                           (->ListingWithId *listing-id *listing)
                           :append-ack)

This adds a call to depot-partition-append! to this portion of the ETL. Unlike foreign-append!, depot appends within topologies go directly to the depot partition colocated with the ETL event. This is consistent with how writes to PStates work. Because there’s no other partitioning here, *listing-with-id-depot will be partitioned exactly the same as *listing-depot.

Next, let’s see the ETL for processing bids:

(source> *bid-depot :> {:keys [*bidder-id *user-id *listing-id *amount]})
(local-select> (keypath *listing-id) $$finished-listings :> *finished?)
(filter> (not *finished?))
(local-transform> [(keypath *listing-id) NONE-ELEM (termval *bidder-id)]
                  $$listing-bidders)
(local-transform> [(keypath *listing-id)
                   (selected? :amount (nil->val 0) (pred< *amount))
                   (termval {:user-id *bidder-id :amount *amount})]
                  $$listing-top-bid)
(|hash *bidder-id)
(->ListingPointer *user-id *listing-id :> *pointer)
(local-transform> [(keypath *bidder-id *pointer) (termval *amount)] $$user-bids)
))

The only change here is the addition of the first two lines of the ETL, which checks whether the auction is still active. The $$finished-listings PState will be defined in the other topology, and it’s a map from listing ID to a boolean flag. If the auction is finished, then the bid is ignored and no PStates are updated.

Next, let’s take a look at the start of the definition of the new ETL for this module. This ETL handles expiring listings and notifications.

(let [mb (microbatch-topology topologies "expirations")      
      scheduler (TopologyScheduler. "$$scheduler")]
  (declare-pstate mb $$finished-listings {Long Boolean})
  (declare-pstate mb $$notifications {Long (vector-schema String {:subindex? true})})
  (.declarePStates scheduler mb)

This defines a microbatch topology. Whereas streaming processes data immediately, microbatching processes a small batch of data across all depot partitions at the same time. Each iteration of microbatching processes the data that accumulated since the last microbatch iteration. Since there’s no per-record overhead, microbatching has even higher throughput than streaming. However, because it’s batch-based, the processing latency of microbatching is a few hundred milliseconds as opposed to the one or two milliseconds of streaming. Microbatching also provides exactly-once processing semantics for updates to PStates, even if a machine explodes in the middle of processing and the microbatch is retried.

Because they’re part of the same module, this new microbatch ETL is colocated with the streaming ETL and all its PStates. They share the same resources and can read each other’s PStates directly.

There are two PStates for this topology. $$finished-listings, as described before, has a flag for each listing ID as to whether the auction is finished or not. $$notifications contains a list of notification strings for each user. Since the number of notifications a user can receive is unbounded, the list is subindexed.

This topology also makes use of another small utility from rama-helpers called TopologyScheduler, which makes it easy to schedule future work in a topology. Because it’s built upon Rama’s primitives of ETLs and PStates, it’s completely fault-tolerant and can sustain very high throughputs of scheduled events.

Here’s the start of the definition for this ETL:

(<<sources mb
  (source> *listing-with-id-depot :> %microbatch)
  (anchor> <root>)
  (%microbatch :> {*listing-id :id
                   {:keys [*user-id *expiration-time-millis]} :listing})
  (vector *user-id *listing-id :> *tuple)
  (java-macro! (.scheduleItem scheduler "*expiration-time-millis" "*tuple"))

The %microbatch variable emitted by a microbatch source represents the entire batch of data for this microbatch iteration. It’s an anonymous operation which, when invoked, emits all the data for the iteration. So if there are 500 individual ListingWithId objects per depot partition in the microbatch, %microbatch will emit 500 times on each partition.

To process a ListingWithId , this ETL simply uses the TopologyScheduler to schedule a tuple containing the user ID and listing ID for later execution at the specified time.

The last piece of the module handles expired listings:

(hook> <root>)
(java-macro!
  (.handleExpirations
    scheduler
    "*tuple"
    "*current-time-millis"
    (java-block<-
      (identity *tuple :> [*user-id *listing-id])
      (local-transform> [(keypath *listing-id) (termval true)]
                        $$finished-listings)
      (local-select> (keypath *listing-id)
                     $$listing-top-bid
                     :> {*winner-id :user-id *amount :amount})
      (local-transform> [(keypath *user-id)
                         AFTER-ELEM
                         (termval (owner-notification *listing-id *winner-id *amount))]
                        $$notifications)
      (loop<- [*next-id -1 :> *bidder-id]
        (yield-if-overtime)
        (local-select> [(keypath *listing-id)
                        (sorted-set-range-from *next-id {:max-amt 1000 :inclusive? false})]
                       $$listing-bidders
                       :> *users)
        (<<atomic
          (:> (ops/explode *users)))
        (<<if (= (count *users) 1000)
          (continue> (sorted-set-last *users))))
      (|hash *bidder-id)
      (<<if (= *bidder-id *winner-id)
        (winner-notification *user-id *listing-id *amount :> *text)
       (else>)
        (loser-notification *user-id *listing-id :> *text))
      (local-transform> [(keypath *bidder-id)
                         AFTER-ELEM
                         (termval *text)]
                        $$notifications)
      ))))))

.handleExpirations on TopologyScheduler inserts code that checks for expired items. Here, it’s attached to the root of the microbatch iteration so it will run once for each microbatch. For each expired item, it binds the item to *tuple and the time of the check to *current-time-millis, and then runs the provided block of code. The java-block<- macro defines a block of code for the Java API in Clojure.

First, the $$finished-listings PState is updated. Just like before, while this event is running no other events can run on this partition. So there are no race conditions with concurrent bids.

Next, it fetches the winning bidder. It then delivers a notification to the owner of the listing that the auction is over. Since listings are partitioned by their owner’s user ID, no partitioning is needed to deliver this notification.

Next, the ETL fetches all bidders from the $$listing-bidders PState. The loop that does this demonstrates an important aspect of Rama. As discussed already, nothing else can run on a task thread while an event is running. This property gives you great power: you can atomically read and write many PStates at once without race conditions. However, as an application developer you need to make sure not to hold a thread for too long, or else you’ll unfairly delay other events such as PState reads and other ETLs. Rama modules should be developed with cooperative multitasking in mind.

Since the number of bidders for a listing can be arbitrarily large, this code paginates through the PState, reading 1,000 bidders each iteration. Each bidder is emitted from the loop separately. Each iteration, the loop calls yield-if-overtime, which yields the thread to other events if too much time has passed (by default 5ms). Because of the power of Rama’s dataflow paradigm, you’re able to write the code linearly even though it’s performing asynchronous operations in the middle of processing.

To finish delivering notifications for each bidder, the dataflow code then uses (|hash *bidder-id) to switch to the task hosting notifications for that bidder. It then updates the $$notifications PState with the appropriate text.

Here’s the code for the complete module, including requires and helper functions:

(use 'com.rpl.rama)
(use 'com.rpl.rama.path)
(require '[com.rpl.rama.ops :as ops])
(require '[com.rpl.rama.aggs :as aggs])
(require '[com.rpl.rama.test :as rtest])
(import 'com.rpl.rama.helpers.ModuleUniqueIdPState)
(import 'com.rpl.rama.helpers.TopologyScheduler)

(defn owner-notification [listing-id winner-id amount]
  (if winner-id
    (str "Auction for listing " listing-id
          " finished with winner " winner-id
          " for the amount " amount)
    (str "Auction for listing " listing-id " finished with no winner")
    ))

(defn winner-notification [user-id listing-id amount]
  (str "You won the auction for listing " user-id "/" listing-id " for the amount " amount))

(defn loser-notification [user-id listing-id]
  (str "You lost the auction for listing " user-id "/" listing-id))

(defn sorted-set-last [^java.util.SortedSet set]
  (.last set))

(defrecord Listing [user-id post expiration-time-millis])
(defrecord Bid [bidder-id user-id listing-id amount])
(defrecord ListingPointer [user-id listing-id])
(defrecord ListingWithId [id listing])

(defmodule AuctionModule [setup topologies]
  (declare-depot setup *listing-depot (hash-by :user-id))
  (declare-depot setup *bid-depot (hash-by :user-id))
  (declare-depot setup *listing-with-id-depot :disallow)

  (let [s (stream-topology topologies "auction")
        idgen (ModuleUniqueIdPState. "$$id")]
    (declare-pstate s $$user-listings {Long ; user ID
                                        (map-schema Long ; listing ID
                                                    String ; post
                                                    {:subindex? true})})
    (declare-pstate s $$listing-bidders {Long ; listing ID
                                          (set-schema Long ; user ID
                                                      {:subindex? true})})
    (declare-pstate s $$listing-top-bid {Long ; listing ID
                                          (fixed-keys-schema {:user-id Long
                                                              :amount Long})})
    (declare-pstate s $$user-bids {Long (map-schema ListingPointer
                                                    Long ; amount
                                                    {:subindex? true})})
    (.declarePState idgen s)

    (<<sources s
      (source> *listing-depot :> {:keys [*user-id *post] :as *listing})
      (java-macro! (.genId idgen "*listing-id"))
      (local-transform> [(keypath *user-id *listing-id) (termval *post)]
                        $$user-listings)
      (depot-partition-append! *listing-with-id-depot
                               (->ListingWithId *listing-id *listing)
                               :append-ack)

      (source> *bid-depot :> {:keys [*bidder-id *user-id *listing-id *amount]})
      (local-select> (keypath *listing-id) $$finished-listings :> *finished?)
      (filter> (not *finished?))
      (local-transform> [(keypath *listing-id) NONE-ELEM (termval *bidder-id)]
                        $$listing-bidders)
      (local-transform> [(keypath *listing-id)
                         (selected? :amount (nil->val 0) (pred< *amount))
                         (termval {:user-id *bidder-id :amount *amount})]
                        $$listing-top-bid)
      (|hash *bidder-id)
      (->ListingPointer *user-id *listing-id :> *pointer)
      (local-transform> [(keypath *bidder-id *pointer) (termval *amount)] $$user-bids)
      ))
  (let [mb (microbatch-topology topologies "expirations")      
        scheduler (TopologyScheduler. "$$scheduler")]
    (declare-pstate mb $$finished-listings {Long Boolean})
    (declare-pstate mb $$notifications {Long (vector-schema String {:subindex? true})})
    (.declarePStates scheduler mb)

    (<<sources mb
      (source> *listing-with-id-depot :> %microbatch)
      (anchor> <root>)
      (%microbatch :> {*listing-id :id
                       {:keys [*user-id *expiration-time-millis]} :listing})
      (vector *user-id *listing-id :> *tuple)
      (java-macro! (.scheduleItem scheduler "*expiration-time-millis" "*tuple"))

      (hook> <root>)
      (java-macro!
        (.handleExpirations
          scheduler
          "*tuple"
          "*current-time-millis"
          (java-block<-
            (identity *tuple :> [*user-id *listing-id])
            (local-transform> [(keypath *listing-id) (termval true)]
                              $$finished-listings)
            (local-select> (keypath *listing-id)
                           $$listing-top-bid
                           :> {*winner-id :user-id *amount :amount})
            (local-transform> [(keypath *user-id)
                               AFTER-ELEM
                               (termval (owner-notification *listing-id *winner-id *amount))]
                              $$notifications)
            (loop<- [*next-id -1 :> *bidder-id]
              (yield-if-overtime)
              (local-select> [(keypath *listing-id)
                              (sorted-set-range-from *next-id {:max-amt 1000 :inclusive? false})]
                             $$listing-bidders
                             :> *users)
              (<<atomic
                (:> (ops/explode *users)))
              (<<if (= (count *users) 1000)
                (continue> (sorted-set-last *users))))
            (|hash *bidder-id)
            (<<if (= *bidder-id *winner-id)
              (winner-notification *user-id *listing-id *amount :> *text)
             (else>)
              (loser-notification *user-id *listing-id :> *text))
            (local-transform> [(keypath *bidder-id)
                               AFTER-ELEM
                               (termval *text)]
                              $$notifications)
            ))))))

Finally, let’s run another quick test to verify it works:

(defn expiration [seconds]
  (+ (System/currentTimeMillis) (* seconds 1000)))

(with-open [ipc (rtest/create-ipc)]
  (rtest/launch-module! ipc AuctionModule {:tasks 4 :threads 2})
  (let [module-name (get-module-name AuctionModule)
        listing-depot (foreign-depot ipc module-name "*listing-depot")
        bid-depot (foreign-depot ipc module-name "*bid-depot")
        user-bids (foreign-pstate ipc module-name "$$user-bids")
        user-listings (foreign-pstate ipc module-name "$$user-listings")
        listing-bidders (foreign-pstate ipc module-name "$$listing-bidders")
        listing-top-bid (foreign-pstate ipc module-name "$$listing-top-bid")
        notifications (foreign-pstate ipc module-name "$$notifications")

        larry-id 0
        hank-id 1
        artie-id 2
        beverly-id 3

        _ (foreign-append! listing-depot (->Listing larry-id "Listing 1" (expiration 5)))
        larry1 (foreign-select-one [(keypath larry-id) LAST FIRST] user-listings)]
    (foreign-append! bid-depot (->Bid hank-id larry-id larry1 45))
    (foreign-append! bid-depot (->Bid artie-id larry-id larry1 50))
    (foreign-append! bid-depot (->Bid beverly-id larry-id larry1 48))

    ;; wait slightly more than the expiration time for the listing to allow notifications
    ;; to be delivered
    (Thread/sleep 6000)
    (println "Larry:" (foreign-select [(keypath larry-id) ALL] notifications))
    (println "Hank:" (foreign-select [(keypath hank-id) ALL] notifications))
    (println "Artie:" (foreign-select [(keypath artie-id) ALL] notifications))
    (println "Beverly:" (foreign-select [(keypath beverly-id) ALL] notifications))
    ))

This prints:

Larry: [Auction for listing 0 finished with winner 2 for the amount 50]
Hank: [You lost the auction for listing 0/0]
Artie: [You won the auction for listing 0/0 for the amount 50]
Beverly: [You lost the auction for listing 0/0]

That’s all there is to it. In just 100 lines of code we’ve built a high-performance auction application that can scale to millions of listings/bids per second, is completely fault-tolerant, and is easy to evolve over time with new features. Since deployment and monitoring are built into Rama, this is production-ready. We didn’t implement account registration or profiles, but those are trivial to add.

Conclusion

Rama derives its power from being based on composable abstractions. PStates are a composable abstraction for storage, enabling any database’s data model (plus infinite more) to be expressed as the composition of data structures. Rama’s dataflow API is a composable abstraction for distributed computation, enabling you to seamlessly combine regular logic with partitioners, yields, and other asynchronous tasks.

Rama can handle all the computation and storage for a backend, but it’s also easy to integrate with existing architectures. Rama’s integration API allows you to use external databases, queues, monitoring systems, or other tools with your modules.

Besides the documentation, we’ve released other resources for learning Rama. rama-demo-gallery contains short, self-contained, thoroughly commented examples of using Rama to build a variety of use cases. But the best way to learn Rama is to try it out yourself using the publicly available build. The REPL is an invaluable environment for experimenting with Rama. If you have any questions, feel free to ask on the rama-user Google group or #rama channel on Clojurians.

Finally, if you’d like to use Rama in production to build new features, scale your existing systems, or simplify your infrastructure, you can apply to our private beta. We’re working closely with each private beta user, not only helping them learn Rama but also actively helping code, optimize, and test.
