<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<title>TAO Real-Time Architecture</title>
<meta name="GENERATOR" content="Microsoft FrontPage 4.0">
<h3 align="center">TAO Real-Time Architecture</h3>
<p>This page describes and compares the two main ORB designs we considered for
supporting Real-Time CORBA 1.0 in TAO. The first design, codenamed
<i>reactor-per-lane</i> and shown in Figure 1, was chosen for our initial
implementation. The second design, <i>queue-per-lane</i>, may also be
implemented in the future, as part of a project evaluating alternative ORB
architectures for Real-Time CORBA.</p>
<h3 align="center">Design I: Reactor-per-Lane</h3>
In this design, each threadpool lane has its own reactor, acceptor, connector
and connection cache. Both I/O and application-level processing for a request
happen in the same threadpool thread: there are no context switches, and the
ORB does not create any internal threads. Objects registered with any
Real-Time POA that does not have the <i>ThreadpoolPolicy</i> are serviced by
the <i>default threadpool</i>, which is the set of all the application threads
that invoked <CODE>orb->run ()</CODE>.
<p align="center"><img border="0" src="reactor-per-lane.gif" width="407" height="333"></p>
<p align="center">Figure 1: Reactor-per-lane</p>
<p>When a Real-Time POA creates an IOR, it includes one or more of its
threadpool's acceptor endpoints in that IOR according to the following rules.
If the POA's priority model is <i>server declared</i>, we use the acceptor
from the lane whose priority is equal to the priority of the target object. If
the priority model is <i>client propagated</i>, all endpoints from the POA's
threadpool are included in the IOR. Finally, if the
<i>PriorityBandedConnectionPolicy</i> is set, then endpoints from the
threadpool lanes whose priorities fall into one of the specified priority
bands are selected. The endpoints selected according to the rules above, and
their corresponding priorities, are stored in IORs using a special
<i>TaggedComponent</i>, <CODE>TAO_TAG_ENDPOINTS</CODE>. During each
invocation, to obtain a connection to the server, the client-side ORB selects
one of these endpoints based on the effective policies. For example, if the
object has the <i>client propagated</i> priority model, the ORB selects the
endpoint whose priority is equal to the priority of the client thread making
the invocation. If an endpoint of the right priority is not available, either
on the client side during an invocation or on the server side during IOR
creation, the system has not been configured properly, and the ORB throws an
exception. The design and rules described above ensure that when a threadpool
with lanes is used, client requests are processed by a thread of the desired
priority from the very beginning, and priority inversions are minimized.</p>
<p>Some applications need a less rigid, more dynamic environment. They do not
have the advance knowledge, or cannot afford the cost, to configure all of
their resources ahead of time, and they have a greater tolerance for priority
inversions. For such applications, threadpools <i>without</i> lanes are the
way to go. In TAO, threadpools without lanes have different semantics than
their <i>with-lanes</i> counterparts. Pools without lanes have a single
acceptor endpoint used in all IORs, and their threads change priorities on the
fly, as necessary, to service all types of requests.</p>
<h3 align="center">Design II: Queue-per-Lane</h3>
In this design, each threadpool lane has its own request queue. There is a
separate I/O layer, which is shared by all lanes in all threadpools. The I/O
layer has one set of resources - reactor, connector, connection cache, and I/O
thread(s) - for each priority level used in the threadpools. I/O layer threads
perform I/O processing and demultiplexing, and threadpool threads are used for
application-level request processing.
<p align="center"><img border="0" src="queue-per-lane.gif" width="387" height="384"></p>
<p align="center">Figure 2: Queue-per-lane</p>
<p>A global acceptor listens for connections from clients. Once a connection
is established, it is moved to the appropriate reactor during the first
request, once its priority is known. Threads in each lane block on a condition
variable, waiting for requests to show up in their queue. I/O threads read
incoming requests, determine their target POA and its threadpool, and deposit
the requests into the right queue for processing.</p>
<h3 align="center">Design Comparison</h3>
<p><i>Reactor-per-lane</i> advantages:</p>
<ul>
<li><b>Better performance</b><br>
Unlike in <i>queue-per-lane</i>, each request is serviced in a single thread:
there are no context switches, and there are opportunities for stack and TSS
optimizations.</li>
<li><b>No priority inversions during connection establishment</b><br>
In <i>reactor-per-lane</i>, the threads accepting connections are the same
threads that will service the requests coming in on those connections,
<i>i.e.</i>, the priority of the accepting thread is equal to the priority of
requests on the connection. In <i>queue-per-lane</i>, however, because of the
global acceptor, there is no differentiation between high-priority and
low-priority clients until the first request.</li>
<li><b>Control over all threads with the standard threadpool API</b><br>
In <i>reactor-per-lane</i>, the ORB does not create any threads of its own, so
the application programmer has full control over the number and properties of
all the threads through the Real-Time CORBA Threadpool APIs.
<i>Queue-per-lane</i>, on the other hand, has I/O layer threads, so either a
proprietary API has to be added or the application programmer will not have
full control over all the thread resources.</li>
</ul>
<p><i>Queue-per-lane</i> advantages:</p>
<ul>
<li><b>Better feature support and adaptability</b><br>
<i>Queue-per-lane</i> supports ORB-level request buffering, while
<i>reactor-per-lane</i> can only provide buffering in the transport. With its
two-layer structure, <i>queue-per-lane</i> is a more decoupled design than
<i>reactor-per-lane</i>, making it easier to add new features or introduce
changes.</li>
<li><b>Better scalability</b><br>
The reactor, connector and connection cache are <i>per-priority</i> resources
in <i>queue-per-lane</i>, and <i>per-lane</i> resources in
<i>reactor-per-lane</i>. If a server is configured with many threadpools that
have similar lane priorities, <i>queue-per-lane</i> may require significantly
fewer of the above-mentioned resources. It also uses fewer acceptors, and its
IORs are a bit smaller.</li>
<li><b>Easier piece-by-piece integration into the ORB</b><br>
Ease of implementation and integration are important practical considerations
in any project. Because of its two-layer structure, <i>queue-per-lane</i> is
an easier design to implement, integrate and test piece-by-piece.</li>
</ul>