(Summary, brief discussion of our features)
(We have many thread pools, what and why)
Callbacks are used extensively throughout Elasticsearch because they enable us to write asynchronous and nonblocking code, i.e. code which
doesn't necessarily compute a result straight away but also doesn't block the calling thread waiting for the result to become available.
They support several useful control flows:

- They can be completed immediately on the calling thread.
- They can be completed concurrently on a different thread.
- They can be stored in a data structure and completed later on when the system reaches a particular state.
- Most commonly, they can be passed on to other methods that themselves take a callback.

`ActionListener` is a general-purpose callback interface that is used extensively across the Elasticsearch codebase. `ActionListener` is
used pretty much everywhere that needs to perform some asynchronous and nonblocking computation. The uniformity makes it easier to compose
parts of the system together without needing to build adapters to convert back and forth between different kinds of callback. It also makes
it easier to develop the skills needed to read and understand all the asynchronous code, although this definitely takes practice and is
certainly not easy in an absolute sense. Finally, it has allowed us to build a rich library for working with `ActionListener` instances
themselves, creating new instances out of existing ones and completing them in interesting ways. See for instance:

- `ThreadedActionListener` for forking work elsewhere
- `RefCountingListener` for running work in parallel
- `SubscribableListener` for constructing flexible workflows

Callback-based asynchronous code can easily call regular synchronous code, but synchronous code cannot run callback-based asynchronous code
without blocking the calling thread until the callback is called back. This blocking is at best undesirable (threads are too expensive to
waste with unnecessary blocking) and at worst outright broken (the blocking can lead to deadlock). Unfortunately this means that most of our
code ends up having to be written with callbacks, simply because it's ultimately calling into some other code that takes a callback. The
entry points for all Elasticsearch APIs are callback-based (e.g. REST APIs all start at
`org.elasticsearch.rest.BaseRestHandler#prepareRequest`, and transport APIs all start at
`org.elasticsearch.action.support.TransportAction#doExecute`) and the whole system fundamentally works in terms of an event loop
(a `io.netty.channel.EventLoop`) which processes network events via callbacks.
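
The interface itself is tiny. The sketch below is illustrative rather than real Elasticsearch code: the local `Listener` interface mirrors
the two methods of `org.elasticsearch.action.ActionListener` (which also carries a large library of static helpers), while
`fetchShardCount` and its executor are invented for this example.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CallbackSketch {

    // Stripped-down stand-in for org.elasticsearch.action.ActionListener,
    // which declares these same two completion methods.
    interface Listener<T> {
        void onResponse(T result);   // the computation produced a result
        void onFailure(Exception e); // the computation failed
    }

    private static final ExecutorService EXECUTOR = Executors.newSingleThreadExecutor();

    // Asynchronous and nonblocking: returns to the caller straight away and
    // completes the listener later, from a different thread, once the
    // (pretend) slow work has finished.
    static void fetchShardCount(String index, Listener<Integer> listener) {
        EXECUTOR.execute(() -> {
            try {
                // ... imagine some slow I/O here ...
                listener.onResponse(42);
            } catch (Exception e) {
                listener.onFailure(e); // exactly one of the two methods is called
            }
        });
    }

    public static void main(String[] args) {
        fetchShardCount("my-index", new Listener<>() {
            @Override
            public void onResponse(Integer shards) {
                System.out.println("shards = " + shards);
                EXECUTOR.shutdown();
            }

            @Override
            public void onFailure(Exception e) {
                e.printStackTrace();
                EXECUTOR.shutdown();
            }
        });
        // main() falls through without blocking; the callback fires later.
    }
}
```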

`ActionListener` is not an ad-hoc invention. Formally speaking, it is our implementation of the general concept of a continuation in the
sense of continuation-passing style (CPS): an extra argument to a function which defines how to continue the computation when the result is
available. This is in contrast to direct style, the more usual style of calling methods that return values directly back to the caller so
that the caller can continue executing as normal. There are essentially two ways that a computation can continue in Java (it can return a
value or it can throw an exception), which is why `ActionListener` has both an `onResponse()` and an `onFailure()` method.

CPS is strictly more expressive than direct style: direct code can be mechanically translated into continuation-passing style, but CPS also
enables all sorts of other useful control structures, such as forking work onto separate threads (possibly to be executed in parallel,
perhaps even across multiple nodes) or collecting a list of continuations that all wait for the same condition to be satisfied before
proceeding (e.g. `SubscribableListener`, amongst many others). Some languages have first-class support for continuations (e.g. the `async`
and `await` primitives in C#), allowing the programmer to write code in direct style away from those exotic control structures, but Java
does not. That's why we have to manipulate all the callbacks ourselves.
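
To make the translation concrete, here is a sketch of the same trivial computation in both styles, reusing the illustrative `Listener`
interface from the sketch above (`countDocs` is an invented name):

```java
// Direct style: the two ways to continue are returning a value to the caller
// or throwing an exception up the stack.
static int countDocs(String index) {
    if (index.isEmpty()) {
        throw new IllegalArgumentException("index name is empty");
    }
    return 42;
}

// The same computation mechanically translated into CPS: the extra listener
// argument says how to continue, and return/throw map onto onResponse/onFailure.
// Note the early return after onFailure, so the listener cannot be completed twice.
static void countDocs(String index, Listener<Integer> listener) {
    if (index.isEmpty()) {
        listener.onFailure(new IllegalArgumentException("index name is empty"));
        return;
    }
    listener.onResponse(42);
}
```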

Strictly speaking, CPS requires that a computation only continues by calling the continuation. In Elasticsearch, this means that
asynchronous methods must have a `void` return type and may not throw any exceptions. This is mostly the case in our code as written today,
and is a good guiding principle, but we don't enforce void exceptionless methods and there are some deviations from this rule. In
particular, it's not uncommon to permit some methods to throw an exception, using things like `ActionListener#run` (or an equivalent
`try ... catch ...` block) further up the stack to handle it. Some methods also take (and may complete) an `ActionListener` parameter, but
still return a value separately for other local synchronous work.

This pattern is often used in the transport action layer with the use of the `ChannelActionListener` class, which wraps a
`TransportChannel` produced by the transport layer. `TransportChannel` implementations can hold a reference to a Netty channel with which
to pass the response back to the network caller. Netty has a many-to-one association of network callers to channels, so a call taking a
long time generally won't hog resources: it's cheap. A transport action can take hours to respond and that's alright, barring caller
timeouts.
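
As a sketch of that escape hatch (again using the illustrative `Listener` from above, with invented method names; `ActionListener#run`
encapsulates essentially this `try ... catch ...` shape):

```java
// A deviation from strict CPS: countDocsOrThrow continues by throwing, and a
// caller further up the stack converts the exception into an onFailure call,
// so the listener is still completed exactly once.
static void handleRequest(String index, Listener<Integer> listener) {
    final int count;
    try {
        count = countDocsOrThrow(index);
    } catch (Exception e) {
        listener.onFailure(e);
        return;
    }
    // Completed outside the try block: if onResponse itself throws we must not
    // complete the listener a second time via onFailure.
    listener.onResponse(count);
}

static int countDocsOrThrow(String index) {
    if (index.isEmpty()) {
        throw new IllegalArgumentException("index name is empty");
    }
    return 42;
}
```
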
(TODO: add useful starter references and explanations for a range of Listener classes. Reference the Netty section.)
(including how REST and Transport layers are bound together through the ActionModule)
(long-running actions should be forked off the Netty thread; short operations can stay on it to avoid forking costs)
(Sketch of important classes? Might inform more sections to add for details.)
(A node, NodeB, can coordinate a search across several other nodes when NodeB itself does not have the data, and then return a result to the caller. Explain this coordinating role)
(Quorum, terms, any eligibility limitations)
(Explain joining, and how it happens every time a new master is elected)
(Majority consensus to apply, what happens if a master-eligible node falls behind / is incommunicado.)
(Go over the two kinds of listeners -- `ClusterStateApplier` and `ClusterStateListener`?)
(Sketch ephemeral vs persisted cluster state.)
(what's the format for persisted metadata)
(More Topics: `ReplicationTracker` concepts / highlights.)
(How a primary shard is chosen)
(terms and such)
(How an index write replicates across shards -- `TransportReplicationAction`?)
(What guarantees do we give the user about persistence and readability?)
(rarely use locks)
(What does `Engine` mean in the distrib layer? Distinguish `Engine` vs `Directory` vs Lucene)
(High level explanation of how translog ties in with Lucene)
(contrast Lucene vs ES flush / refresh / fsync)
(internal vs external reader manager refreshes? flush vs refresh)
(Data lives beyond a high level IndexShard instance. Continue to exist until all references to the Store go away, then Lucene data is removed)
(Explain checkpointing and generations, when happens on Lucene flush / fsync)
(Concurrency control for flushing)
(VersionMap)
(copy a sketch of the files Lucene can have here and explain)
(Explain about `SearchIndexInput` -- `IndexWriter`, `IndexReader` -- and the shared blob cache)
(Lucene uses `Directory`, ES extends/overrides the `Directory` class to implement different forms of file storage. Lucene contains a map of where all the data is located in files and offsets, and fetches it from various files. ES doesn't just treat Lucene as a storage engine at the bottom (the end) of the stack. Rather ES has other information that works in parallel with the storage engine.)
(All shards go through a 'recovery' process. Describe high level. `createShard` goes through this code.)
(How is the translog involved in recovery?)
(partial shard recoveries survive server restart? `reestablishRecovery`? How does that work.)
(Frozen, warm, hot, etc.)
(`AllocationService` runs on the master node)
(Discuss different deciders that limit allocation. Sketch / list the different deciders that we have.)
(Significant internal APIs for balancing a cluster)
(How does this command behave with the desired auto balancer.)
(Reactive and proactive autoscaling. Explain that we surface recommendations, how control plane uses it.)
(Sketch / list the different deciders that we have, and then also how we use information from each to make a recommendation.)
(We've got some good package level documentation that should be linked here in the intro)
(copy a sketch of the file system here, with explanation -- good reference)
(Include an overview of the coordination between data and master nodes, which writes what and when)
(Concurrency control: generation numbers, pending generation number, etc.)
(partial snapshots)
(How we identify operations/tasks in the system and report upon them. How we group operations via parent task ID.)
(Brief explanation of the use case for CCR)
(Explain how this works at a high level, and details of any significant components / ideas.)
(Explain that the Distributed team is responsible for the write path, while the Search team owns the read path.)
(Generating document IDs. Same across shard replicas, `_id` field)
(Sequence number: different than ID)
(what limits write concurrency, and how do we minimize)
(explain visibility of writes, and reference the Lucene section for more details (whatever makes more sense explained there))
(this can also happen during shard reallocation, right? This might be a standalone topic, or need another section about it in allocation?...)