Crate strymon_coordinator

Internal APIs of the Strymon coordinator.

This crate contains the implementation and internals of the Strymon coordinator. It is not intended to be used by end users directly; most users will want to start a new coordinator through the strymon command-line utility instead.

Implementation

The Strymon coordinator maintains a connection to most components of a running Strymon cluster. To handle concurrent requests, its implementation is heavily based on futures: each potentially blocking request is transformed into a future and polled to completion by a tokio-core reactor. (This dependency is likely to be replaced by the LocalPool executor in futures 0.2, as the implementation does not rely on Tokio's asynchronous I/O primitives.)
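The poll-to-completion model can be sketched without any external crates. The crate itself uses futures 0.1 and tokio-core, so the following is only an illustration of the pattern using today's std futures; the names block_on and noop_waker are introduced here and do not appear in the coordinator:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: good enough for an executor that simply
// re-polls in a loop instead of parking the thread until woken.
fn noop_waker() -> Waker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// Polls a single future to completion on the current thread. This is
/// the essence of what the coordinator's reactor does for each
/// in-flight request (a real executor sleeps between polls).
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safe because `fut` is shadowed and never moved again after pinning.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            Poll::Pending => std::thread::yield_now(),
        }
    }
}

fn main() {
    // An `async` block is a future; this one is ready on the first poll.
    let answer = block_on(async { 40 + 2 });
    println!("{}", answer);
}
```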

Part of the coordinator implementation is also the Catalog, a data structure representing the current state of the Strymon cluster.

Exposed network services

The coordinator exposes two strymon_communication::rpc interfaces:

  1. CoordinatorRPC for submitting and managing jobs. Its address must be known to clients in advance. By default, the coordinator will try to expose this service on TCP port 9189.
  2. CatalogRPC for querying the catalog using the Client infrastructure of strymon_job. It is exported on an ephemeral TCP port, which can be obtained through a Subscription or Lookup request on the coordinator interface.
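The two-port scheme (a well-known port plus an ephemeral one that must be looked up) can be illustrated with plain std networking. This sketch does not speak the strymon_communication::rpc protocol; the one-shot reply and the function name serve_and_query are invented for the example:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Binds a service to an ephemeral port, then connects to it as a
// client would after discovering the address.
fn serve_and_query() -> std::io::Result<String> {
    // Port 0 asks the OS to pick an ephemeral port, as the coordinator
    // does for CatalogRPC.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    // In Strymon, this concrete address would be obtained through a
    // Lookup request on the well-known coordinator port (9189 by default).
    let addr = listener.local_addr()?;

    thread::spawn(move || {
        if let Ok((mut stream, _)) = listener.accept() {
            let _ = stream.write_all(b"catalog-ok");
            // The stream is dropped here, signalling EOF to the client.
        }
    });

    let mut conn = TcpStream::connect(addr)?;
    let mut reply = String::new();
    conn.read_to_string(&mut reply)?;
    Ok(reply)
}

fn main() -> std::io::Result<()> {
    println!("{}", serve_and_query()?);
    Ok(())
}
```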

Handling clients and concurrent requests

The coordinator maintains a connection to each of its clients (a client can be a submitter, an executor, or a job). Incoming client requests are handled by the Dispatch type, one instance of which is created per accepted connection.

The Coordinator type implements the bulk of the request handling and contains the state shared by all clients. Its external interface is mirrored through the cloneable CoordinatorRef handle. It is essentially a wrapper around Rc<RefCell<Coordinator>>; however, it also tracks the state created by its client (such as issued publications). This allows the coordinator to automatically remove that state once the associated client disconnects.
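The handle pattern can be sketched as follows. The struct fields and methods below are a hypothetical simplification of the real Coordinator and CoordinatorRef types; the key idea is that dropping the per-client handle garbage-collects the state that client created:

```rust
use std::cell::RefCell;
use std::collections::HashSet;
use std::rc::Rc;

/// Shared coordinator state (simplified to a set of publications).
#[derive(Default)]
struct Coordinator {
    publications: HashSet<String>,
}

/// Cloneable per-client handle: a wrapper around Rc<RefCell<_>> that
/// remembers the publications this client issued, so they can be
/// removed automatically when the client disconnects.
struct CoordinatorRef {
    inner: Rc<RefCell<Coordinator>>,
    created: Vec<String>,
}

impl CoordinatorRef {
    fn new(inner: Rc<RefCell<Coordinator>>) -> Self {
        CoordinatorRef { inner, created: Vec::new() }
    }

    fn publish(&mut self, topic: &str) {
        self.inner.borrow_mut().publications.insert(topic.to_string());
        self.created.push(topic.to_string());
    }
}

impl Drop for CoordinatorRef {
    fn drop(&mut self) {
        // Client disconnected: remove the state it created.
        let mut state = self.inner.borrow_mut();
        for topic in self.created.drain(..) {
            state.publications.remove(&topic);
        }
    }
}

fn main() {
    let shared = Rc::new(RefCell::new(Coordinator::default()));
    {
        let mut client = CoordinatorRef::new(Rc::clone(&shared));
        client.publish("topic-a");
        assert_eq!(shared.borrow().publications.len(), 1);
    } // handle dropped here; its publications are cleaned up
    println!("{}", shared.borrow().publications.len());
}
```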

A client might issue a request that cannot be handled immediately. Such requests (e.g. a blocking subscription request which only resolves once a matching topic is published) are implemented as futures, which are polled to completion by the internal tokio-core reactor.
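A pending-until-published request can be modeled as a hand-written future. The Subscription type and the shared topic set below are invented for this sketch, and a no-op waker stands in for the reactor; a real implementation would register the task's waker so it is re-polled only when a topic is actually published:

```rust
use std::cell::RefCell;
use std::collections::HashSet;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

type Topics = Rc<RefCell<HashSet<String>>>;

/// A blocking subscription modeled as a future: it stays Pending
/// until a matching topic appears in the shared state.
struct Subscription {
    topics: Topics,
    name: String,
}

impl Future for Subscription {
    type Output = String;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<String> {
        if self.topics.borrow().contains(&self.name) {
            Poll::Ready(format!("subscribed to {}", self.name))
        } else {
            Poll::Pending
        }
    }
}

// A waker that does nothing, standing in for the reactor's real waker.
fn noop_waker() -> Waker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let topics: Topics = Rc::new(RefCell::new(HashSet::new()));
    let mut sub = Subscription { topics: Rc::clone(&topics), name: "alerts".into() };
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    // Nothing published yet: the request cannot complete.
    assert!(Pin::new(&mut sub).poll(&mut cx).is_pending());

    // Once a matching topic is published, the next poll resolves it.
    topics.borrow_mut().insert("alerts".to_string());
    match Pin::new(&mut sub).poll(&mut cx) {
        Poll::Ready(msg) => println!("{}", msg),
        Poll::Pending => unreachable!(),
    }
}
```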

Modules

catalog

The catalog contains meta-data about the current state of the system.

dispatch

Client dispatch and request completion logic.

handler

The coordinator request handler.

Structs

Builder

Creates a new coordinator instance.