Our experience with BATCHING started when we noticed that a single
piece of design had been used to build systems we already knew well.
We then tried to abstract the core of those systems, extracting the
pattern. Once we had identified the pattern, we looked for new
systems where it could be applied to obtain some benefit.
We did so and obtained substantial performance improvements.
For us, this pattern has been a process in which we first learned some
``theory'' from existing systems and then applied what we had
learned back to ``practice.''
In this section we show how existing systems match
the pattern described in the previous sections; hopefully, this will
allow a better understanding of the pattern, as it did in our
case. We also include a brief overview of the two systems where we
applied the pattern ourselves, this time with a priori knowledge of it.
Note that the BATCHING design allows a single implementation of the pattern to handle several
of the applications described below. Since the server is
specified each time a Program runs, the same piece of code
could handle most of them. Nevertheless,
existing systems, built without a priori knowledge of the
pattern, hardly share the common code needed to implement the applications
described below (e.g., gather/scatter is always implemented separately
from message batching facilities, when both are provided).
- Operating System extensions
- Extending an operating system by downloading code into the kernel
(as in SPIN and Choices) can
be considered an instance of this pattern. These systems use
code downloading as the means to extend system functionality. The
mechanism employed is based on defining new programs, which are
expressed in terms of existing services.
In this case the Program is the extension performed; the set
of ConcreteControlStructures depends on the extension
language; and the run method is implemented either by
delegation to the extension interpreter or by the native processor
(when binary code can be downloaded into the system).
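The delegation just described can be sketched as follows. This is an illustrative assumption, not the interface of SPIN or Choices: the `GetPid` and `Log` services, and the dictionary standing in for kernel state, are made up for the example.

```python
# Sketch: a downloaded extension as a Program of Commands whose run
# method delegates each command to an in-kernel interpreter.
# All names here are hypothetical.

class Command:
    def run(self, kernel):
        raise NotImplementedError

class GetPid(Command):           # hypothetical existing kernel service
    def run(self, kernel):
        return kernel["pid"]

class Log(Command):              # hypothetical existing kernel service
    def __init__(self, msg):
        self.msg = msg
    def run(self, kernel):
        kernel["log"].append(self.msg)

class Program:
    """The downloaded extension: a sequence of commands."""
    def __init__(self, commands):
        self.commands = commands
    def run(self, kernel):
        result = None
        for cmd in self.commands:    # delegation to the interpreter
            result = cmd.run(kernel)
        return result

kernel = {"pid": 42, "log": []}
ext = Program([Log("extension loaded"), GetPid()])
result = ext.run(kernel)             # whole extension runs in one crossing
```

The point is that the extension is expressed purely in terms of existing services; the interpreter never needs to know which extension it is running.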
- Agents.
- An agent is a program sent to a different domain; it
usually moves from one domain to another. The aim is to avoid
multiple domain crossings (or network messages) and also to allow
disconnection from the agent's home environment.
Programs built using BATCHING are meant to stay at the server until
termination, and they have no go statement.
However, BATCHING already includes most of the machinery needed to
implement an agent system: a go statement
could be provided by the command language itself.
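A minimal sketch of how such a go statement could turn a Program into a simple agent. The `Domain`, `Go`, and `Note` names, and the interpreter shape, are hypothetical, not part of any agent system cited above:

```python
# Sketch: an interpreter that executes a command list; a Go command
# re-submits the remaining tail of the program in another domain,
# which is exactly the "hop" an agent needs.

class Domain:
    def __init__(self, name):
        self.name = name
        self.trace = []          # record of work done here

class Go:
    """Hypothetical command: move the rest of the program elsewhere."""
    def __init__(self, target):
        self.target = target

class Note:
    """Hypothetical ordinary command: do some work in this domain."""
    def __init__(self, text):
        self.text = text

def run(program, domain):
    """Interpreter: run commands locally; on Go, hop with the tail."""
    for i, cmd in enumerate(program):
        if isinstance(cmd, Go):
            return run(program[i + 1:], cmd.target)   # agent migration
        domain.trace.append((domain.name, cmd.text))
    return domain

home, server = Domain("home"), Domain("server")
last = run([Note("start"), Go(server), Note("remote work")], home)
```

Everything before the Go runs at home; everything after it runs at the server, with no further interaction with the home domain.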
- Gather/Scatter I/O.
- Gather/Scatter input/output is yet another
example where this pattern appears. In gather/scatter I/O a list of
input or output descriptors is sent to an I/O device in a single
operation. Each descriptor specifies a piece of data going to (or
coming from) the device. Data written is gathered from separate
output buffers. Data read is scattered to separate input buffers.
Its major goal is to save data copies.
In this case, the Program is just the descriptor list, where each
descriptor can be supported by a Command. The Program's
run method iterates through the descriptor (i.e., command)
list and performs the requested I/O operations. The services
(i.e., commands) are simply Read and Write.
Note how, by applying this pattern, gather/scatter I/O could be
generalized so that the I/O device involved need not be the
same for all descriptors sent by the user. Moreover, separate Read
and Write operations could be bundled into a single request.
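The generalized form can be sketched as below. The descriptor shapes and the dictionary standing in for a set of devices are assumptions for illustration; real gather/scatter interfaces (e.g., POSIX readv/writev) work on one device at a time:

```python
# Sketch: gather/scatter I/O as a Program whose commands are Read and
# Write descriptors, generalized so each descriptor may name a
# different device. Devices are modeled as named byte lists.

class Write:
    def __init__(self, device, data):
        self.device, self.data = device, data
    def run(self, devices):
        devices[self.device].extend(self.data)   # gather from this buffer
        return len(self.data)

class Read:
    def __init__(self, device, n):
        self.device, self.n = device, n
    def run(self, devices):
        buf = devices[self.device][:self.n]      # scatter into this buffer
        del devices[self.device][:self.n]
        return buf

def run_program(descriptors, devices):
    """One 'I/O call': iterate the descriptor (command) list."""
    return [d.run(devices) for d in descriptors]

devices = {"disk": list(b"hello"), "net": []}
results = run_program([Read("disk", 5), Write("net", b"hi")], devices)
```

A single call both reads from one device and writes to another, which is the generalization suggested above.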
- Message batching.
- Grouping a sequence of messages into a single
low-level protocol data unit is yet another instance of the
pattern. In this case the run method (i.e., the interpreter) is the
packet disassembler. A Program is a bunch of packets
bundled together. Each packet, or each packet header, is a Command
which is interpreted by the packet disassembler. This is
the BATCHING application that most closely resembles
the pattern as described in the previous sections.
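A concrete sketch of this correspondence, assuming a simple length-prefixed framing (the framing is an assumption; real protocols define their own headers):

```python
# Sketch: message batching. The bundled PDU is the Program, each
# length header is a Command, and the disassembler is the interpreter.

import struct

def bundle(messages):
    """Group a sequence of messages into one protocol data unit."""
    return b"".join(struct.pack(">I", len(m)) + m for m in messages)

def disassemble(pdu):
    """The interpreter: walk the headers and recover each payload."""
    msgs, off = [], 0
    while off < len(pdu):
        (n,) = struct.unpack_from(">I", pdu, off)
        msgs.append(pdu[off + 4 : off + 4 + n])
        off += 4 + n
    return msgs

pdu = bundle([b"hello", b"world"])   # one low-level send instead of two
```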
- Deferred calls.
- Deferred calls can be used to support disconnected
operation. Clients build programs while they perform operations on
unreachable servers, i.e., whenever they are in a disconnected state. Upon
reconnection, each program is finally submitted to the target domain for
interpretation. Note that several clients might add code to a
single program to be sent to the server later on.
Each operation is a Command; the list of operations sent to
a server is a Program. The interpreter could be either:
- the piece of code that sends each command in turn once the client
is reconnected, or
- an actual Program interpreter in the server domain, accepting
just a list of commands (a program), to save some network traffic.
Futures or Promises can be used to allow users to
synchronize with server responses.
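The combination can be sketched as follows; the client API and the toy server are hypothetical, and the Future here is a deliberately minimal stand-in for a real futures library:

```python
# Sketch: commands queued while disconnected are submitted as one
# Program on reconnection; a future per command lets the client
# synchronize with each response later.

class Future:
    def __init__(self):
        self._value, self.done = None, False
    def set(self, value):
        self._value, self.done = value, True
    def get(self):
        assert self.done, "response not available yet"
        return self._value

class DeferredClient:
    def __init__(self):
        self.program = []            # pending (operation, future) pairs
    def call(self, op):
        """Record the operation; return a future for its result."""
        f = Future()
        self.program.append((op, f))
        return f
    def reconnect(self, server):
        """Submit the whole program in a single exchange."""
        ops = [op for op, _ in self.program]
        for (_, f), result in zip(self.program, server(ops)):
            f.set(result)
        self.program = []

client = DeferredClient()
f1 = client.call(("add", 2, 3))      # issued while disconnected
f2 = client.call(("mul", 2, 3))
server = lambda ops: [a + b if k == "add" else a * b for k, a, b in ops]
client.reconnect(server)             # one round trip resolves both
```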
- Improving latency in Operating Systems.
- Many user programs
happen to exhibit very simple system call patterns. That is an
opportunity for using BATCHING to save domain crossings and,
therefore, execution time.
As a matter of fact, we have done so by instantiating BATCHING for
two systems, Linux and Off++. In both
systems we obtained speedups of around 25% for a cat program
written with BATCHING.
We implemented two new domain-specific languages (i.e.,
ControlStructures and Command sets) that allowed users to
bundle separate calls into a single one, like in the cat
example of section 1.
The first language we implemented was based on byte-codes. We included
just those commands needed to code loops, conditional branches, and
simple arithmetic. This language was used both on Linux and Off++.
The second language we implemented was a high-level one, designed
specifically for Off++. It includes just the commands needed to
Repeat a given operation n times and to perform a
Sequence of operations.
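The two control structures of the second language could look as follows. The shapes of `Repeat` and `Sequence`, and the `Incr` server command, are assumptions for illustration; the actual Off++ command set is not shown here:

```python
# Sketch: a high-level BATCHING language with exactly two
# ConcreteControlStructures, Repeat and Sequence, composed over
# ordinary server commands.

class Repeat:
    """Run one operation n times."""
    def __init__(self, n, op):
        self.n, self.op = n, op
    def run(self, server):
        out = None
        for _ in range(self.n):
            out = self.op.run(server)
        return out

class Sequence:
    """Run a list of operations in order."""
    def __init__(self, ops):
        self.ops = ops
    def run(self, server):
        out = None
        for op in self.ops:
            out = op.run(server)
        return out

class Incr:                          # hypothetical server command
    def run(self, server):
        server["counter"] += 1
        return server["counter"]

server = {"counter": 0}
Sequence([Incr(), Repeat(3, Incr())]).run(server)   # one call, 4 ops
```

Because the structures nest, a client can express a loop of calls in a single request, which is where the saved domain crossings come from.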
- Heterogeneous resource allocation.
- Most operating systems are structured
as a set of resource unit providers; separate servers provide
resource unit allocation for different resources. In these
systems, users issue multiple requests at a time.
BATCHING can be used to request allocation of multiple heterogeneous
resources in a single call.
Off++ is a system modeled as a set of hardware resource unit
providers; it uses BATCHING to improve the performance of its applications.
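A sketch of the single-call allocation; the resource names, the `Alloc` command, and the dictionary of providers are hypothetical, not Off++'s interface:

```python
# Sketch: allocation requests for different resource providers bundled
# into one Program, so a single call allocates heterogeneous resources.

class Alloc:
    """Hypothetical command: take `units` from one provider."""
    def __init__(self, resource, units):
        self.resource, self.units = resource, units
    def run(self, providers):
        free = providers[self.resource]
        assert free >= self.units, "not enough units"
        providers[self.resource] = free - self.units
        return self.units

def allocate(program, providers):
    """Single call: run every allocation command on the server side."""
    return [cmd.run(providers) for cmd in program]

providers = {"memory_pages": 100, "ports": 10}
granted = allocate([Alloc("memory_pages", 4), Alloc("ports", 1)],
                   providers)
```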
- Transaction processing.
- A transaction is a set
of operations executed atomically and in isolation. A
given transaction can either terminate normally, by committing, or
abnormally, by aborting. Should a transaction abort, its effects
must be undone; otherwise (i.e., when it commits), its results must
be made permanent.
Commitment typically involves multiple disk writes for different
data items. Writes must follow a carefully chosen order to
preserve persistence of results, even when failures occur. One of
the strategies uses a redo algorithm. Such an algorithm does not
modify the persistent data until commitment: it works on a volatile
copy of the data until the transaction commits.
At commit time, a sequence of redo records is written into
a disk log, followed by a commit record. Redo records contain new
values for objects changed by the transaction. Finally, persistent
state for the objects involved is updated. If the system fails before
the commit record is written, the transaction is aborted and its redo
records are ignored. If the system fails after the commit record is
written, the redo records are replayed.
BATCHING can be used both to implement commit and for crash
recovery. Performance of transactional distributed object systems
(e.g., Arjuna) could be improved due to the reduced
number of domain crossings.
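The redo scheme maps onto BATCHING as sketched below: the batch of redo records plus the commit record is the Program, and recovery is its interpreter. The record formats are assumptions for illustration, not those of Arjuna or any particular system:

```python
# Sketch: commit writes a batch of redo records followed by a commit
# record; recovery replays only batches whose commit record made it
# to the log, ignoring any uncommitted tail.

def commit(log, updates):
    """Append redo records, then the commit record (ordered writes)."""
    for obj, value in updates.items():
        log.append(("redo", obj, value))
    log.append(("commit",))

def recover(log, store):
    """Interpreter: apply redo records only if they were committed."""
    pending = {}
    for rec in log:
        if rec[0] == "redo":
            pending[rec[1]] = rec[2]
        elif rec[0] == "commit":
            store.update(pending)    # transaction is durable: apply it
            pending = {}
    return store                     # uncommitted tail is ignored

log, store = [], {"x": 0}
commit(log, {"x": 1})                # committed transaction
log.append(("redo", "x", 99))        # crash before its commit record
recover(log, store)                  # x stays 1; the 99 is discarded
```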
Francisco J Ballesteros