.. contents:: Table of Contents

Remote Procedure Call Implementation
====================================

Traditionally, the C library abstracts over several functions that interface
with the platform's operating system through system calls. The GPU, however,
does not provide an operating system that can handle target dependent
operations. Instead, we implemented remote procedure calls to interface with
the host's operating system while executing on a GPU.

We implemented remote procedure calls using unified virtual memory to create a
shared communication channel between the two processes. This memory is often
pinned memory that can be accessed asynchronously and atomically by multiple
processes simultaneously. This means that we can simply provide mutual
exclusion on a shared buffer to swap work back and forth between the host
system and the GPU. We can then use this to create a simple client-server
protocol using this shared memory.

This work treats the GPU as a client and the host as a server. The client
initiates communications while the server listens for them. In order to
communicate between the host and the device, we simply maintain a buffer of
memory and two mailboxes. One mailbox is write-only while the other is
read-only. This exposes three primitive operations: using the buffer, giving
away ownership, and waiting for ownership. This is implemented as a half-duplex
transmission channel between the two sides. We decided to assign ownership of
the buffer to the client when the inbox and outbox bits are equal and to the
server when they are not.

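To make the ownership rule concrete, the following sketch shows how each side
could decide whether it owns the buffer. The names are illustrative rather
than the library's actual interface; the only assumption is a single inbox bit
written by the other side and a single outbox bit written by ourselves:

.. code-block:: c++

  #include <cstdint>

  // A port's mailboxes reduced to single bits. The client owns the buffer
  // when the bits match; toggling the outbox hands ownership to the server.
  bool client_owns_buffer(uint32_t inbox, uint32_t outbox) {
    return inbox == outbox;
  }
  bool server_owns_buffer(uint32_t inbox, uint32_t outbox) {
    return inbox != outbox;
  }
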
In order to make this transmission channel thread-safe, we abstract ownership
of the given mailbox pair and buffer around a port, effectively acting as a
lock and an index into the allocated buffer slice. The server and device have
independent locks around the given port. In this scheme, the buffer can be
used to communicate intent and data generically with the server. We then
simply provide multiple copies of this protocol and expose them as multiple
ports.

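As a rough sketch of the locking scheme, claiming a port could look like the
following. This is illustrative only and assumes a simple array of
test-and-set locks, not the actual implementation:

.. code-block:: c++

  #include <atomic>
  #include <cstdint>

  // Scan the ports and claim the first one whose lock is free. Returns the
  // claimed port's index, or 'num_ports' if every port is currently held.
  uint32_t try_claim_port(std::atomic<uint32_t> *locks, uint32_t num_ports) {
    for (uint32_t i = 0; i < num_ports; ++i)
      if (!locks[i].exchange(1, std::memory_order_acquire))
        return i; // The lock was free, so we now own port 'i'.
    return num_ports; // No free port; the caller retries.
  }
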
If this were simply a standard CPU system, this would be sufficient. However,
GPUs have many unique architectural challenges. First, GPU threads execute in
lock-step with each other in groups typically called warps or wavefronts. We
need to target the smallest unit of independent parallelism, so the RPC
interface needs to handle an entire group of threads at once. This is done by
increasing the size of the buffer and adding a thread mask argument so the
server knows which threads are active when it handles the communication.
Second, GPUs generally have no forward progress guarantees. In order to
guarantee we do not encounter deadlocks while executing, it is required that
the number of ports matches the maximum amount of hardware parallelism on the
device. It is also very important that the thread mask remains consistent
while interfacing with the port.

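A hypothetical packet layout illustrates both points: the buffer is widened to
one slot per lane, and a mask records which lanes participate. The struct
below is a sketch for illustration, not the library's actual definition:

.. code-block:: c++

  #include <cstdint>

  // The warp or wavefront width of the device, e.g. 32 on NVIDIA hardware.
  constexpr uint32_t LANE_SIZE = 32;

  // A fixed-size slot of scratch space for a single lane.
  struct Buffer {
    uint64_t data[8];
  };

  // The whole warp shares one packet. The mask tells the server which lanes
  // were active when the communication was initiated.
  struct Packet {
    uint64_t mask;
    Buffer payload[LANE_SIZE];
  };
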
.. image:: ./rpc-diagram.svg

The above diagram outlines the architecture of the RPC interface. For clarity,
the following lists explain the operations performed by the client and the
server, respectively, when initiating a communication.

First, a communication from the perspective of the client:

* The client searches for an available port and claims the lock.
* The client checks that the port is still available to the current device and
  continues if so.
* The client writes its data to the fixed-size packet and toggles its outbox.
* The client waits until its inbox matches its outbox.
* The client reads the data from the fixed-size packet.
* The client closes the port and continues executing.

Now, the same communication from the perspective of the server:

* The server searches for an available port with pending work and claims the
  lock.
* The server checks that the port is still available to the current device.
* The server reads the opcode to perform the expected operation, in this
  case a receive and then send.
* The server reads the data from the fixed-size packet.
* The server writes its data to the fixed-size packet and toggles its outbox.
* The server closes the port and continues searching for ports that need to be
  serviced.

This architecture currently requires that the host periodically checks the RPC
server's buffer for ports with pending work. Note that a port can be closed
without waiting for its submitted work to be completed. This allows us to
model asynchronous operations that do not need to wait until the server has
completed them. If an operation requires more data than the fixed-size buffer,
we simply send multiple packets back and forth in a streaming fashion.

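The streaming idea can be sketched as follows. The real interface for this is
the ``send_n`` and ``recv_n`` pair used in the examples below; this
hand-rolled loop only illustrates what such a routine does internally:

.. code-block:: c++

  #include <cstdint>
  #include <cstring>

  // Send an arbitrarily large payload through the fixed-size packet by
  // splitting it into multiple sends.
  void stream_bytes(rpc::Client::Port &port, const char *src, uint64_t size) {
    // The first packet carries the total size so the receiver can allocate.
    port.send([&](rpc::Buffer *buffer) { buffer->data[0] = size; });
    for (uint64_t offset = 0; offset < size; offset += sizeof(rpc::Buffer)) {
      uint64_t len = size - offset < sizeof(rpc::Buffer) ? size - offset
                                                         : sizeof(rpc::Buffer);
      port.send([&](rpc::Buffer *buffer) {
        std::memcpy(buffer, src + offset, len);
      });
    }
  }
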
Server Library
--------------

The RPC server's basic functionality is provided by the LLVM C library. A
static library called ``libllvmlibc_rpc_server.a`` includes handling for the
basic operations, such as printing or exiting. This has a small API that
handles setting up the unified buffer and an interface to check the opcodes.

Some operations are too divergent to provide generic implementations for, such
as allocating device-accessible memory. For these cases, we provide a callback
registration scheme to add a custom handler for any given opcode through the
port API. More information can be found in the installed header
``<install>/include/llvmlibc_rpc_server.h``.

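For example, registering a handler for a custom opcode might look roughly like
the following sketch. It assumes the header's registration entry point is
``rpc_register_callback`` with the callback receiving the port and the
user-provided data pointer; consult the installed header for the exact
signatures:

.. code-block:: c++

  #include <llvmlibc_rpc_server.h>

  // A handler for a hypothetical user-defined opcode. Inside the callback,
  // the port API is used to receive the client's request and send a reply.
  void custom_handler(rpc_port_t port, void *data) {
    // Application-specific handling of the opcode goes here.
  }

  // Ask the server to invoke the handler whenever a client opens a port with
  // the given opcode.
  rpc_status_t register_custom_opcode(rpc_device_t device, uint16_t opcode) {
    return rpc_register_callback(device, opcode, custom_handler,
                                 /*data=*/nullptr);
  }
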
Client Example
--------------

The Client API is not currently exported by the LLVM C library. This is
primarily due to being written in C++ and relying on internal data structures.
It uses a simple send and receive interface with a fixed-size packet. The
following example uses the RPC interface to call a function pointer on the
server.

This code first opens a port with the given opcode to facilitate the
communication. It then copies over the argument struct to the server using the
``send_n`` interface to stream arbitrary bytes. The next send operation
provides the server with the function pointer that will be executed. The final
receive operation is a no-op and simply forces the client to wait until the
server is done. It can be omitted if asynchronous execution is desired.

.. code-block:: c++

  void rpc_host_call(void *fn, void *data, size_t size) {
    rpc::Client::Port port = rpc::client.open<RPC_HOST_CALL>();
    port.send_n(data, size);
    port.send([=](rpc::Buffer *buffer) {
      buffer->data[0] = reinterpret_cast<uintptr_t>(fn);
    });
    port.recv([](rpc::Buffer *) {});
    port.close();
  }

Server Example
--------------

This example shows the server-side handling of the previous client example.
When the server is checked, if there are any ports with pending work it will
check the opcode and perform the appropriate action. In this case, the action
is to call a function pointer provided by the client.

In this example, the server simply runs forever in a separate thread for
brevity's sake. Because the client is a GPU potentially handling several
threads at once, the server needs to loop over all the active threads on the
GPU. We abstract this into the ``lane_size`` variable, which is simply the
device's warp or wavefront size. The identifier is simply the thread's index
into the current warp or wavefront. We allocate memory to copy the struct data
into, and then call the given function pointer with that copied data. The
final send simply signals completion and uses the implicit thread mask to
delete the temporary data.

.. code-block:: c++

  for (;;) {
    auto port = server.try_open(index);
    if (!port)
      continue;
    switch (port->get_opcode()) {
    case RPC_HOST_CALL: {
      uint64_t sizes[LANE_SIZE];
      void *args[LANE_SIZE];
      port->recv_n(args, sizes, [&](uint64_t size) { return new char[size]; });
      port->recv([&](rpc::Buffer *buffer, uint32_t id) {
        reinterpret_cast<void (*)(void *)>(buffer->data[0])(args[id]);
      });
      port->send([&](rpc::Buffer *, uint32_t id) {
        delete[] reinterpret_cast<uint8_t *>(args[id]);
      });
      break;
    }
    default:
      port->recv([](rpc::Buffer *) {});
      break;
    }
    port->close();
  }

CUDA Server Example
-------------------

The following code shows an example of using the exported RPC interface along
with the C library to manually configure a working server using the CUDA
language. Other runtimes can use the presence of the ``__llvm_libc_rpc_client``
symbol in the GPU executable as an indicator for whether or not the server can
be checked. These details should ideally be handled by the GPU language
runtime, but the following example shows how it can be used by a standard
user.

.. _libc_gpu_cuda_server:

.. code-block:: cuda

  #include <cstdio>
  #include <cstdlib>
  #include <cuda_runtime.h>

  #include <llvmlibc_rpc_server.h>

  [[noreturn]] void handle_error(cudaError_t err) {
    fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));
    exit(EXIT_FAILURE);
  }

  [[noreturn]] void handle_error(rpc_status_t err) {
    fprintf(stderr, "RPC error: %d\n", err);
    exit(EXIT_FAILURE);
  }

  // The handle to the RPC client provided by the C library.
  extern "C" __device__ void *__llvm_libc_rpc_client;

  __global__ void get_client_ptr(void **ptr) { *ptr = __llvm_libc_rpc_client; }

  // Obtain the RPC client's handle from the device. The CUDA language cannot
  // look up the symbol directly like the driver API, so we launch a kernel to
  // read it.
  void *get_rpc_client() {
    void *rpc_client = nullptr;
    void **rpc_client_d = nullptr;

    if (cudaError_t err = cudaMalloc(&rpc_client_d, sizeof(void *)))
      handle_error(err);
    get_client_ptr<<<1, 1>>>(rpc_client_d);
    if (cudaError_t err = cudaDeviceSynchronize())
      handle_error(err);
    if (cudaError_t err = cudaMemcpy(&rpc_client, rpc_client_d, sizeof(void *),
                                     cudaMemcpyDeviceToHost))
      handle_error(err);
    return rpc_client;
  }

  // Routines to allocate mapped memory that both the host and the device can
  // access asynchronously to communicate with each other.
  void *alloc_host(size_t size, void *) {
    void *sharable_ptr;
    if (cudaError_t err = cudaMallocHost(&sharable_ptr, size))
      handle_error(err);
    return sharable_ptr;
  }

  void free_host(void *ptr, void *) {
    if (cudaError_t err = cudaFreeHost(ptr))
      handle_error(err);
  }

  // The device-side overload of the standard C function to call.
  extern "C" __device__ int puts(const char *);

  // Calls the C library function from the GPU C library.
  __global__ void hello() { puts("Hello world!"); }

  int main() {
    // Initialize the RPC server to run on the given device.
    rpc_device_t device;
    if (rpc_status_t err =
            rpc_server_init(&device, RPC_MAXIMUM_PORT_COUNT,
                            /*warp_size=*/32, alloc_host, /*data=*/nullptr))
      handle_error(err);

    // Initialize the RPC client by copying the buffer to the device's handle.
    void *rpc_client = get_rpc_client();
    if (cudaError_t err =
            cudaMemcpy(rpc_client, rpc_get_client_buffer(device),
                       rpc_get_client_size(), cudaMemcpyHostToDevice))
      handle_error(err);

    cudaStream_t stream;
    if (cudaError_t err = cudaStreamCreate(&stream))
      handle_error(err);

    // Execute the kernel.
    hello<<<1, 1, 0, stream>>>();

    // While the kernel is executing, check the RPC server for work to do.
    // Requires non-blocking CUDA kernels but avoids a separate thread.
    while (cudaStreamQuery(stream) == cudaErrorNotReady)
      if (rpc_status_t err = rpc_handle_server(device))
        handle_error(err);

    // Shut down the server running on the given device.
    if (rpc_status_t err =
            rpc_server_shutdown(device, free_host, /*data=*/nullptr))
      handle_error(err);

    return EXIT_SUCCESS;
  }

The above code must be compiled in CUDA's relocatable device code mode and with
the advanced offloading driver to link in the library. Currently this can be
done with the following invocation. Using LTO avoids the overhead normally
associated with relocatable device code linking.

.. code-block:: sh

  $> clang++ -x cuda rpc.cpp --offload-arch=native -fgpu-rdc -lcudart -lcgpu-nvptx \
       -I<install-path>/include -L<install-path>/lib -lllvmlibc_rpc_server \
       -O3 -foffload-lto -o hello
  $> ./hello
  Hello world!

Extensions
----------

We describe which operation the RPC server should take with a 16-bit opcode.
We consider the first 32768 numbers to be reserved while the others are free
to use.

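For instance, a user extension could define its opcodes starting at the first
non-reserved value. The names below are hypothetical:

.. code-block:: c++

  #include <cstdint>

  // Opcodes 0 through 32767 (0x0000 to 0x7FFF) are reserved for the
  // implementation; user-defined opcodes begin at 0x8000.
  enum custom_opcode : uint16_t {
    CUSTOM_OPCODE_BEGIN = 0x8000, // 32768, the first freely usable opcode.
    CUSTOM_HOST_CALLBACK = CUSTOM_OPCODE_BEGIN,
  };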