async-ring 0.1.0 (Clojars, Sep 9, 2014)

    Namespaces

    async-ring.adapters.http-kit
    This namespace provides an async-ring-compatible http-kit adapter. To use it, simply pass your async ring handler to the function to-httpkit, and use the return value as the http-kit handler.
    
    For example, to run my-async-handler on http-kit, just do:
    
    (http-kit/run-server (to-httpkit my-async-handler) {:port 8080})
    async-ring.adapters.jetty
    This namespace provides an async-ring-compatible Jetty adapter. The adapter supports both normal Ring handlers and async ring handlers. If you want to pass an async ring handler to the adapter, you must first use the function to-jetty to convert it before handing it to the async Jetty adapter.
    
    For example, to run my-async-handler on jetty, just do:
    
    (async-jetty-adapter (to-jetty my-async-handler) {:port 8080})
    async-ring.beauty
    This namespace contains the Beauty concurrent quality of service routing middleware. See README.md for details on how to use Beauty in your application.
    
    async-ring.core
    This namespace provides a core.async API for Ring. It allows you to define and nest synchronous
    and async ring handlers to create efficient, async HTTP servers.
    
    Async handlers are just core.async channels! To use them, put ring request maps into them.
    Each request map must contain 2 additional keys, :async-response and :async-error, which must
    both be channels. For each ring request map you put into the input channel, you will receive
    either a response map via the :async-response channel or a Throwable via the :async-error
    channel.
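    
    As a rough illustration of that contract, here is a minimal sketch (not taken from the project's docs) of driving an async handler by hand; async-ring-app stands in for any async handler channel, such as the ones built in the README below:
    
    (require '[clojure.core.async :as async])
    
    ;; One response channel and one error channel per request (a sketch,
    ;; assuming async-ring-app is an async handler channel as described above).
    (let [response-chan (async/chan)
          error-chan    (async/chan)
          request       {:request-method :get
                         :uri "/"
                         :headers {}
                         :async-response response-chan
                         :async-error error-chan}]
      ;; Put the request into the handler, then take whichever arrives first:
      ;; a response map or a Throwable.
      (async/>!! async-ring-app request)
      (async/alts!! [response-chan error-chan]))
    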
    async-ring.experimental
    This namespace contains experimental async ring functionality.
    
    work-shed is an async middleware that automatically returns short, precomputed responses to clients when the server gets overloaded.
    
    route-concurrently is a simple async routing macro that provides a way to expose concurrency between different routes. Beauty will probably replace it wholesale.
    
    The README below is fetched from the published project artifact. Some relative links may be broken.

    async-ring

    Like Ring, but async.

    To use async-ring in your Leiningen project, add the following dependencies:

    ;; For http-kit support
    [async-ring "0.1.0"]
    [http-kit "2.1.19"]
    
    ;; For jetty support
    [async-ring "0.1.0"]
    [ring/ring-jetty-adapter "1.3.1"]
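
    For readers newer to Leiningen, a minimal project.clj using the http-kit variant might look like the sketch below (the project name, version, and Clojure version are placeholders):

    (defproject my-app "0.1.0-SNAPSHOT"
      :dependencies [[org.clojure/clojure "1.6.0"]
                     [async-ring "0.1.0"]
                     [http-kit "2.1.19"]])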
    

    Motivation

    Ring is a great foundation for building HTTP servers in Clojure. However, Ring leaves unsolved many problems that high-performance and transactional HTTP servers must address:

    • What does the server do when it can’t handle the request rate?
    • How can the server dedicate more or fewer resources to different requests?
    • How can long-running HTTP requests be easily developed, without blocking threads?

    Async Ring attempts to solve these problems by introducing a core.async based API that is backwards compatible with Ring and popular Ring servers, so that you don’t need to rewrite your app to take advantage of these techniques.

    Features

    Forwards and Backwards compatible with Ring

    Async Ring is 100% compatible with normal Ring. Want to use your Ring handler in an Async Ring app? Just use sync->async-handler. What about mounting an Async Ring handler into a normal Ring app? async->sync-handler. Maybe you’d like to use the huge body of existing Ring middleware in your Async Ring app: try sync->async-middleware. Or maybe you’d like to use async middleware with your synchronous Ring app: just use async->sync-middleware.

    Async Ring also includes a complete set of optimized ports of Ring middleware. These ports include the corresponding test suites, so you can be confident in the logic being executed.

    Beauty

    Beauty is a simple concurrent router that lets you reuse your existing Ring routes and handlers, be they Clout or Compojure, Hiccup or Selmer. You just need to pass your existing app to the beauty-router middleware, and then annotate any handlers that you want to run concurrently with prioritization.

    Integration with standard servers

    Async Ring comes with adapters for Jetty 7 and http-kit, so that you don’t even need to change your server code. Just use to-jetty or to-httpkit to mount an async handler onto your existing routing hierarchy.

    Core.async based API

    Async Ring uses a standard pattern in core.async, in which the response channel is tied to the request map. This way, it’s easy to write sophisticated pipelines that route and process the request according to whatever rules are necessary for the job.
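
    To make the pattern concrete, here is a hand-rolled sketch (not an API from the library) of an async handler that follows the contract described in the async-ring.core docs above: a plain channel, plus a go-loop that answers each request on the channels carried in the request map.

    (require '[clojure.core.async :as async :refer (chan go-loop <! >!)])

    ;; A sketch of a custom async handler: requests come in on a channel,
    ;; and each reply goes out on the request's own :async-response or
    ;; :async-error channel.
    (def my-async-handler
      (let [requests (chan 10)]
        (go-loop []
          (when-let [req (<! requests)]
            ;; Build the response (or capture a Throwable), then reply on the
            ;; appropriate channel from the request map.
            (let [result (try
                           {:status 200
                            :headers {"Content-Type" "text/plain"}
                            :body (str "you asked for " (:uri req))}
                           (catch Throwable t t))]
              (if (instance? Throwable result)
                (>! (:async-error req) result)
                (>! (:async-response req) result)))
            (recur)))
        requests))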

    Ports of many Ring middleware

    async-ring.middleware contains ports of all the middleware found in Ring core. Just post an issue to get your favorite middleware ported!

    Usage

    Getting Started

    Let’s first take a look at how to write “Hello World” in Async Ring with Http-Kit:

    (require '[org.httpkit.server :as http-kit])
    (require 'async-ring.adapters.http-kit)
    (require 'async-ring.core)
    
    (def async-ring-app
      (async-ring.core/constant-response
        {:body "all ok" :status 200 :headers {"Content-Type" "text/plain"}}))
    
    (def server (http-kit/run-server (async-ring.adapters.http-kit/to-httpkit async-ring-app)
                                     {:port 8080}))
    

    In this example, we see how to use the constant-response handler, which is the simplest Async Ring handler available. It always returns the same response.

    After we create the app, we use to-httpkit to make it Http-Kit compatible, and then we pass it to the Http-Kit server to start the application.
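
    As a usage note, http-kit’s run-server returns a function that stops the server, so the example above can be shut down by calling it:

    (server)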

    Running traditional Ring apps on Async Ring

    Now, we’ll look at how we can run an existing traditional Ring app on Async Ring with Jetty.

    (require '[compojure.core :refer (defroutes GET)])
    (require '[async-ring.adapters.jetty :as jetty])
    (require 'async-ring.core)
    
    (defroutes traditional-ring-app
      (GET "/" []
        {:body "all ok" :status 200 :headers {"Content-Type" "text/plain"}}))
    
    (def async-ring-app
      (async-ring.core/sync->async-adapter traditional-ring-app
                                           {:parallelism 10
                                            :buffer-size 5}))
    
    (def server (jetty/run-jetty-async (jetty/to-jetty async-ring-app)
                                       {:port 8080
                                        :join? false}))
    

    Here, we first create a traditional Ring app. Then, we add an adapter to make it asynchronous, allowing up to 10 requests to be simultaneously routed and processed, and up to 5 requests to be buffered. Finally, we start the Async Ring app on Jetty.

    Using Ring middleware

    Async Ring has a small but growing library of native ports of Ring middleware. By using a native port of the Ring middleware, you’re able to get the best performance.

    (require '[compojure.core :refer (defroutes GET)])
    (require '[async-ring.middleware :refer (wrap-params)])
    (require '[org.httpkit.server :as http-kit])
    (require 'async-ring.adapters.http-kit)
    (require 'async-ring.core)
    
    (defroutes traditional-ring-app
      (GET "/" [q]
        {:body (str "got " q) :status 200 :headers {"Content-Type" "text/plain"}}))
    
    (def async-ring-app
      (-> (async-ring.core/sync->async-adapter traditional-ring-app
                                               {:parallelism 5
                                                :buffer-size 5})
          (wrap-params {:parallelism 10
                        :buffer-size 100})))
    
    (def server (http-kit/run-server (async-ring.adapters.http-kit/to-httpkit async-ring-app)
                                     {:port 8080}))
    

    Here, we can see a few things. First, it’s easy to compose Async Ring handlers using ->, just like regular Ring. Second, it’s possible to control the buffering and parallelism at each stage of the async pipeline; this lets you make decisions such as devoting extra CPU cores to encoding/decoding middleware, or limiting the number of concurrent requests to a database-backed session store.

    Ported middleware lives in async-ring.middleware. The second argument to the async-ported middleware is always the async options, such as parallelism and the buffer size.

    If you’d like to see your middlewares ported to Async Ring, just file an issue and I’ll do that quickly.

    Using Beauty

    Beauty is a concurrent routing API that adds quality of service (QoS) features to Ring. Quality of service allows you to separate routes that access independent databases, ensuring that slowness in one backend doesn’t slow down other requests. QoS also allows you to dynamically prioritize some requests over others, ensuring that high-priority requests are completed first, regardless of arrival order.

    Let’s first look at a simple example of using Beauty:

    (require '[compojure.core :refer (defroutes GET ANY)])
    (require '[org.httpkit.server :as http-kit])
    (require '[async-ring.beauty :refer (beauty-route beauty-router)])
    (require 'async-ring.adapters.http-kit)
    (require 'async-ring.core)
    
    (defroutes beautified-traditional-ring-app
      (GET "/" []
        (beauty-route :main (handle-root)))
      (GET "/health" []
        (handle-health-check))
      (ANY "/rest/endpoint" []
        (beauty-route :endpoint (handle-endpoint)))
      (ANY "/rest/endpoint/:id" [id]
        (beauty-route :endpoint 8 (handle-endpoint-id id))))
    
    (def server (http-kit/run-server (async-ring.adapters.http-kit/to-httpkit
                                       (beauty-router
                                         beautified-traditional-ring-app
                                         {:main {:parallelism 1}
                                          :endpoint {:parallelism 5
                                                     :buffer-size 100}}))
                                     {:port 8080}))
    

    The first thing we added is the beauty-route annotations to each route that we want to run on a prioritized concurrent pool. Note that you can choose to freely mix which routes are Beauty routes, and which routes are executed single-threaded (/health isn’t executed on a Beauty pool).

    Next, notice that beauty-route takes an argument: the name of the pool that you want to execute the request on. beauty-router handles creating pools with bounded concurrency and a bounded buffer of pending requests. In this example, we are using 2 pools: :main, which has a single worker and only services requests to /, and :endpoint, which services all of the routes under /rest. The :endpoint pool has 5 concurrent workers, and it can queue up to 100 requests before it exhibits backpressure.

    Finally, notice that the final route (/rest/endpoint/:id) uses the priority form of beauty-route: normally, all requests are handled at the standard priority, 5. In some cases, you may know that certain routes are usually faster to execute, or that certain clients have a cookie indicating they need better service. In that case, you can specify the priority for a beauty-routed task. These priorities are used to determine which request will be handled next from the pool’s buffer.

    The Beauty Router should be flexible enough to solve most QoS problems; nevertheless, Pull Requests are welcome to improve the functionality!

    Comparison with Pedestal

    At first glance, Async Ring and Pedestal seem very similar–they’re both frameworks for building asynchronous HTTP applications using a slightly modified Ring API. In this section, we’ll look at some of the differences between Pedestal and Async Ring.

    1. Concurrency mechanism: in Pedestal, you write pure functions for each request lifecycle state transition, and the Pedestal server schedules these functions for you. In Async Ring, you write core.async code, so the control flow of your handler is exactly how you write it. In other words, you can use <! and >! to block wherever you want.
    2. Performance: Pedestal and Async Ring both allow for many more connections than threads, thus enabling many more concurrent connections than Ring.
    3. Composition: in Pedestal, interceptors are placed in a queue for execution. This allows interceptors to know the entire queue of execution as it stands, at the expense of always encoding request processing as a queue. In Async Ring, handlers are identified only by their input channel, so they cannot automatically know what other handlers are in the execution pipeline. On the other hand, Async Ring handlers can express more complex worker-pool and dynamically routed topologies in a single expression, rather than requiring dynamic interceptor middleware.
    4. Chaining behavior: in Pedestal, the interceptor framework handles chaining behavior, which allows for greater programmatic insight and control. In Async Ring, function composition handles chaining behavior, just like in regular Ring.
    5. Compatibility with Ring: in Pedestal, you must either port your Ring middlewares or accept that they cannot be paused or migrate across threads. In Async Ring, all existing Ring middleware is supported; however, you will get better performance by porting middleware.

    Performance

    Performance numbers are currently only preliminary.

    I benchmarked returning the “constant” body via traditional Ring, an http-kit async callback, and Async Ring. All tests were run on http-kit.

                  Traditional Ring   http-kit async   Async Ring
    Mean          1.45 ms            1.542 ms         1.953 ms
    90th %ile     2 ms               2 ms             2 ms

    Thus Async Ring adds roughly 500 microseconds to each call, but doesn’t significantly impact outliers. This is something I’d like to try to improve; however, the latency cost is worth it if you need these other concurrency features.

    License

    Copyright © 2014 David Greenberg

    Distributed under the Eclipse Public License either version 1.0 or (at your option) any later version.