@page PP_Memory_Management Memory Management Rules for TAO's Pluggable Protocol Framework

@section background Background

This document proposes a clearer set of memory management rules
for the pluggable protocols framework.
To understand this proposal, some basic background is required on
how the pluggable protocol framework works and how each
abstraction relates to the other components in the ORB.

The pluggable protocol framework uses the Acceptor and Connector
patterns; unlike ACE, however, it must treat all of them
uniformly, through protocol-neutral interfaces.

The basic abstraction in TAO's pluggable protocol framework is the
<CODE>TAO_Transport</CODE>;
an instance of this class represents a single connection. For
example, the IIOP plugin uses one instance of TAO_Transport for
each TCP connection.

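As an illustration only, the following sketch shows what such a
protocol-neutral transport abstraction might look like. The names
(Example_Transport, Example_IIOP_Transport) and the minimal
send/recv/close interface are hypothetical simplifications, not
TAO's actual classes.

@code
// A simplified model of a protocol-neutral transport abstraction.
// All names here are hypothetical; TAO_Transport has a richer API.
#include <cstddef>
#include <sys/types.h>   // ssize_t
#include <sys/socket.h>  // ::send(), ::recv()
#include <unistd.h>      // ::close()

class Example_Transport
{
public:
  virtual ~Example_Transport () {}

  // Write a buffer to the underlying connection.
  virtual ssize_t send (const char *buf, size_t len) = 0;

  // Read up to <len> bytes from the underlying connection.
  virtual ssize_t recv (char *buf, size_t len) = 0;

  // Close the underlying connection.
  virtual void close () = 0;
};

// One instance of the concrete class represents a single connection,
// just as the IIOP plugin uses one TAO_Transport per TCP connection.
class Example_IIOP_Transport : public Example_Transport
{
public:
  explicit Example_IIOP_Transport (int socket_fd) : fd_ (socket_fd) {}

  virtual ssize_t send (const char *buf, size_t len)
  { return ::send (this->fd_, buf, len, 0); }

  virtual ssize_t recv (char *buf, size_t len)
  { return ::recv (this->fd_, buf, len, 0); }

  virtual void close ()
  { if (this->fd_ != -1) { ::close (this->fd_); this->fd_ = -1; } }

private:
  int fd_;  // the TCP socket for this connection
};
@endcode
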
To integrate this abstraction with the ACE_Reactor framework,
all the protocols implemented so far use
specializations of the ACE_Svc_Handler class.
However, the original design considered the possibility of
implementing protocols without any ACE abstractions; though in
practice this has not happened so far, all changes to the
framework should keep this possibility open.

This is the main source of memory management problems in the
pluggable protocol framework:
a single entity (a connection) is represented by instances of
two separate classes. On one side the ORB uses an instance of the
TAO_Transport abstraction; on the other, the Reactor uses an
instance of an ACE_Svc_Handler.

To complicate matters even further, the ORB caches both passively
accepted and actively established connections.
The actively established connections are cached by the client
side to minimize or amortize the cost of connection establishment.
The passively accepted connections are kept in the same cache
mainly to support bi-directional GIOP; however, they also allow us
to close both accepted and established idle connections using a
single component. This is useful when the ORB shuts down, but it
is crucial in the implementation of connection recycling
strategies, where the total number of connections kept by the ORB
must be bounded.

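The sketch below is a hypothetical illustration of such a cache:
a single component that holds both kinds of connections and can
purge idle ones, whether at ORB shutdown or to keep the total
number of connections bounded. The names and the std::map-based
implementation are assumptions for illustration and do not mirror
TAO's actual cache.

@code
// Hypothetical sketch of a single cache that holds both actively
// established (client side) and passively accepted (server side)
// connections, so idle connections of either kind can be closed by
// one component.  This is not TAO's actual cache implementation.
#include <map>
#include <string>

class Example_Transport;  // some protocol-neutral transport class

struct Example_Cache_Entry
{
  Example_Transport *transport;
  bool busy;      // currently in use by some thread
  bool accepted;  // passively accepted vs. actively established
};

class Example_Transport_Cache
{
public:
  // Cache a connection under its peer endpoint (e.g. "host:port").
  void bind (const std::string &endpoint, const Example_Cache_Entry &e)
  { this->cache_[endpoint] = e; }

  // Reuse an actively established connection, amortizing the cost
  // of connection establishment.
  Example_Transport *find (const std::string &endpoint)
  {
    std::map<std::string, Example_Cache_Entry>::iterator i =
      this->cache_.find (endpoint);
    return i == this->cache_.end () ? 0 : i->second.transport;
  }

  // Close idle connections: used both at ORB shutdown and to keep
  // the total number of cached connections bounded.
  template <typename CloseFunc>
  void purge_idle (CloseFunc close_transport)
  {
    std::map<std::string, Example_Cache_Entry>::iterator i =
      this->cache_.begin ();
    while (i != this->cache_.end ())
      {
        if (!i->second.busy)
          {
            close_transport (i->second.transport);
            this->cache_.erase (i++);
          }
        else
          ++i;
      }
  }

private:
  std::map<std::string, Example_Cache_Entry> cache_;
};
@endcode
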
The design must also support multithreaded clients and servers;
in both cases multiple threads may be using a connection
simultaneously. For example, multiple client threads can be
waiting for replies over the same connection, multiple server
threads can be servicing requests received on the same connection,
or, if bi-directional GIOP is enabled, a mix of both.

Some aspects of the GIOP protocol require special treatment of
connections with pending requests, on both the server and the
client side.
On the server side, connections that have pending requests cannot
be closed (section 15.5.1.1 in the CORBA/IIOP 2.4 specification);
therefore, the ORB needs to know how many requests are pending at
any point in time.
Despite this, it is possible that the underlying connection is
broken, for example, because the client crashed. In such cases,
the ORB should be able to reclaim the OS resources, but the
TAO_Transport must remain valid until the upcall threads finish.
Similarly, the client side should be able to distinguish between
orderly and abortive disconnects; essentially, the ORB needs to
know whether a <CODE>CloseConnection</CODE> message has been
received.

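A minimal sketch of the per-connection bookkeeping this implies is
shown below: a pending-request counter and a flag recording
whether a <CODE>CloseConnection</CODE> message was seen. All names
are hypothetical, and a real implementation would need locking,
since several threads may touch this state concurrently.

@code
// Hypothetical per-connection bookkeeping: a pending-request count
// (a server must not close a connection with requests in progress,
// CORBA/IIOP 2.4 section 15.5.1.1) and a flag recording whether an
// orderly GIOP CloseConnection message was received.  A real
// implementation would protect this state with a lock.
class Example_Connection_State
{
public:
  Example_Connection_State ()
    : pending_requests_ (0),
      close_connection_received_ (false)
  {}

  // Called when an upcall starts / completes on the server side.
  void request_started ()   { ++this->pending_requests_; }
  void request_completed () { --this->pending_requests_; }

  // A connection with pending requests must not be closed.
  bool can_be_closed () const { return this->pending_requests_ == 0; }

  // Called by the client side when a GIOP CloseConnection message
  // arrives, so an orderly shutdown can be told apart from a crash.
  void close_connection_received ()
  { this->close_connection_received_ = true; }

  bool orderly_disconnect () const
  { return this->close_connection_received_; }

private:
  int pending_requests_;
  bool close_connection_received_;
};
@endcode
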
Finally, we must never forget that the ORB can be used in
thread-per-connection mode. In this concurrency model there is no
reactor to detect when input is available on the connection;
though this is normally a global setting, it is possible for a
pluggable protocol to *always* work in thread-per-connection mode
and no other architecture.
Similarly, the ORB can be configured to wait for replies using
read() operations, instead of the more generic wait-on-reactor or
wait-on-leader-follower strategies.
Therefore, we cannot always rely on the Reactor framework to
perform all the memory management for us.

@section requirements Requirements

To summarize, the TAO_Transport class should (see the sketch after
this list):

- Not be deleted until it has been released by all the threads
  and components using it.
- Release as many OS and ORB resources as possible when the ORB
  detects that the connection has been terminated. For example,
  the socket should be closed and the ACE_Svc_Handler, if any,
  destroyed.
- Not be deleted until it is removed from the connection cache.
- Support a mechanism to proactively close the connection.
- Keep track of the number of pending requests in a connection.

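The following sketch illustrates one way these requirements could
be expressed as an interface. The names
(Example_Refcounted_Transport, add_ref, remove_ref,
close_connection) are assumptions for illustration and do not
mirror TAO_Transport; a real implementation would also need a lock
or atomic operations around the reference count.

@code
// Illustrative interface capturing the requirements above.  The
// names are hypothetical and do not mirror TAO_Transport exactly;
// a real implementation needs a lock or atomics on the count.
#include <cstddef>

class Example_Refcounted_Transport
{
public:
  // The creator holds the initial reference.
  Example_Refcounted_Transport () : refcount_ (1), pending_requests_ (0) {}

  // The transport is shared by the threads using it, the connection
  // cache, and the connection handler; it is deleted only when the
  // last reference goes away.
  void add_ref () { ++this->refcount_; }
  void remove_ref ()
  {
    if (--this->refcount_ == 0)
      delete this;
  }

  // Proactively close the connection: release OS resources (socket,
  // service handler) without destroying this object, which may still
  // be referenced by upcall threads or the cache.
  virtual void close_connection () = 0;

  // Track pending requests so the connection is not closed while
  // upcalls are in progress.
  void request_started ()   { ++this->pending_requests_; }
  void request_completed () { --this->pending_requests_; }
  size_t pending_requests () const { return this->pending_requests_; }

protected:
  // Destroyed only via remove_ref().
  virtual ~Example_Refcounted_Transport () {}

private:
  size_t refcount_;
  size_t pending_requests_;
};
@endcode
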
@section rules Memory Management Rules

Instances of TAO_Transport are reference counted;
this is a simple way to share each instance among the threads
using it to send or receive invocations, the cache, and the
potential connection handler using it.
However, the service handler should follow the standard rules for
the Reactor, i.e. the Reactor owns it, and it is destroyed as soon
as the connection is closed.

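For illustration, a small guard object is one common way to apply
this rule in the threads that use the transport: each thread holds
its own reference for the duration of the operation. The template
below is a hypothetical sketch; it only assumes the transport type
exposes add_ref()/remove_ref() style operations.

@code
// Hypothetical guard: each thread holds its own reference to the
// transport while it sends or receives an invocation, so the object
// cannot disappear underneath it even if the cache or another
// component drops its reference in the meantime.
template <typename Transport>
class Example_Transport_Guard
{
public:
  explicit Example_Transport_Guard (Transport *t) : transport_ (t)
  {
    if (this->transport_ != 0)
      this->transport_->add_ref ();
  }

  ~Example_Transport_Guard ()
  {
    if (this->transport_ != 0)
      this->transport_->remove_ref ();
  }

  Transport *get () const { return this->transport_; }

private:
  // Not copyable: a copy would need its own reference.
  Example_Transport_Guard (const Example_Transport_Guard &);
  Example_Transport_Guard &operator= (const Example_Transport_Guard &);

  Transport *transport_;
};

// Usage (with the hypothetical transport from the previous sketch):
//
//   Example_Transport_Guard<Example_Refcounted_Transport> guard (t);
//   guard.get ()->request_started ();
//   // ... send the request, wait for the reply ...
//   guard.get ()->request_completed ();
@endcode
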
The underlying connection can be closed by the remote peer;
in this case, either the Reactor or the thread blocked on
<CODE>read()</CODE> will detect the problem.
Following the normal conventions, the ACE_Svc_Handler would be
closed as soon as this is detected.
The corresponding TAO_Transport must be informed; otherwise, it
could attempt to use a connection that has already been closed.

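A hypothetical sketch of this notification path is shown below;
the handler and transport classes are simplified stand-ins, not
the real ACE_Svc_Handler or TAO_Transport.

@code
// Hypothetical sketch of the close notification: when the Reactor
// (or a thread blocked on read()) detects that the peer closed the
// connection, the handler is closed and must tell the transport, so
// no thread keeps using the dead connection.
class Example_Closable_Transport
{
public:
  Example_Closable_Transport () : closed_ (false) {}

  // Called by the connection handler when the peer disconnects.
  void mark_connection_closed () { this->closed_ = true; }

  // Checked before attempting to send a request or reply.
  bool is_usable () const { return !this->closed_; }

private:
  bool closed_;
};

// Simplified stand-in for the reactor-owned service handler.
class Example_Closable_Handler
{
public:
  explicit Example_Closable_Handler (Example_Closable_Transport *t)
    : transport_ (t)
  {}

  // In the real code this would be driven by the Reactor (e.g. from
  // handle_close()); here it only shows the required notification.
  void peer_has_closed ()
  {
    if (this->transport_ != 0)
      this->transport_->mark_connection_closed ();
    // ... close the socket and let the Reactor destroy the handler ...
  }

private:
  Example_Closable_Transport *transport_;
};
@endcode
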
Finally, if the connection is proactively closed, the
TAO_Transport informs the ACE_Svc_Handler; at this point the
ACE_Svc_Handler commits suicide by removing itself from the
reactor.
Notice that it must still call back the TAO_Transport in this
case, so the TAO_Transport knows that its connection handler is
gone.

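The sketch below models this proactive-close handshake with
simplified, hypothetical classes: the transport asks the handler
to shut down, the handler deregisters itself (a real handler would
remove itself from the ACE_Reactor), and it still calls back the
transport before it is destroyed.

@code
// Hypothetical model of the proactive close handshake: the
// transport asks the handler to go away, the handler deregisters
// itself (a real handler would remove itself from the ACE_Reactor)
// and still calls back into the transport before it is destroyed.
class Example_Proactive_Handler;

class Example_Proactive_Transport
{
public:
  Example_Proactive_Transport () : handler_ (0), closed_ (false) {}

  void set_handler (Example_Proactive_Handler *h) { this->handler_ = h; }

  // Proactive close initiated by the ORB (cache purge, shutdown...).
  void close_connection ();

  // Callback from the handler once it has deregistered itself.
  void handler_gone ()
  {
    this->handler_ = 0;
    this->closed_ = true;
  }

private:
  Example_Proactive_Handler *handler_;
  bool closed_;
};

class Example_Proactive_Handler
{
public:
  explicit Example_Proactive_Handler (Example_Proactive_Transport *t)
    : transport_ (t)
  {}

  // "Commit suicide": deregister from the reactor, then call back
  // the transport before the reactor destroys this handler.
  void shutdown ()
  {
    // ... remove this handler from the reactor here ...
    if (this->transport_ != 0)
      this->transport_->handler_gone ();
  }

private:
  Example_Proactive_Transport *transport_;
};

inline void
Example_Proactive_Transport::close_connection ()
{
  if (this->handler_ != 0)
    this->handler_->shutdown ();
  else
    this->closed_ = true;
}
@endcode
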
@section data Processing Incoming and Outgoing Data

The final aspect to consider is the processing of incoming and
outgoing data. We are still working on this problem, but the
current approach is more complex than it has to be.
The usual path is as follows: the Reactor signals (via
handle_input) that there is some data available on the socket.
The message is forwarded from the ACE_Svc_Handler to the
TAO_Transport, then to several helper classes in the pluggable
protocol framework. Eventually a method is invoked on the
TAO_Transport to read the actual data; this call is forwarded to
the ACE_Svc_Handler (again), and eventually returns.

A much simpler approach would be to read the data in the
handle_input() method itself and forward the data up the stream.

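As a rough illustration of that simpler approach, the hypothetical
handler below reads the data directly in its input callback and
hands it to whatever layer sits above it; none of these names are
TAO's.

@code
// Hypothetical sketch of the simpler approach: the data is read
// directly in the input callback and handed up the stack, instead
// of bouncing back down to the handler to perform the read.
#include <cstddef>
#include <sys/types.h>   // ssize_t
#include <sys/socket.h>  // ::recv()

// Whatever sits above the handler (transport / GIOP layers).
class Example_Upper_Layer
{
public:
  virtual ~Example_Upper_Layer () {}
  // Consume a chunk of raw data read from the connection.
  virtual void process_data (const char *buf, size_t len) = 0;
};

class Example_Input_Handler
{
public:
  Example_Input_Handler (int fd, Example_Upper_Layer *upper)
    : fd_ (fd), upper_ (upper)
  {}

  // Analogous to handle_input(): called when the reactor detects
  // that data is available on the socket; the read happens here.
  int handle_input ()
  {
    char buffer[1024];
    ssize_t n = ::recv (this->fd_, buffer, sizeof buffer, 0);
    if (n <= 0)
      return -1;  // error or orderly close
    this->upper_->process_data (buffer, static_cast<size_t> (n));
    return 0;
  }

private:
  int fd_;
  Example_Upper_Layer *upper_;
};
@endcode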