Graduation Year


Document Type




Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Computer Science and Engineering

Major Professor

Yi-Cheng Tu, Ph.D.

Co-Major Professor

Ming Ji, Ph.D.

Committee Member

Adriana Iamnitchi, Ph.D.

Committee Member

Bo Zeng, Ph.D.

Committee Member

Kandethody Ramachandran, Ph.D.

Committee Member

Lingling Fan, Ph.D.

Committee Member

Srinivas Katkoori, Ph.D.


Keywords

Buffer Management, CUDA, DBMS, Parallel Computing, Resource Allocation


Abstract

The unrivaled computing capabilities of modern GPUs meet the demand for processing massive amounts of data in many application domains. While traditional HPC systems support applications as standalone entities that occupy an entire GPU, we propose a GPU-based DBMS (G-DBMS) that runs multiple tasks concurrently. To that end, system-level mechanisms such as resource allocation and a buffer manager are needed to build a concurrent database query processing system and fully unleash the GPU's computing power; however, CUDA does not provide enough OS-level functionality to support them. Our research therefore focuses on optimizing resource allocation among concurrent queries and on implementing a GPU buffer manager. First, we model single compute-bound kernels on GPUs under NVidia's CUDA framework and provide an in-depth anatomy of NVidia's concurrent kernel execution mechanism (CUDA streams), which is the foundation of resource allocation in CUDA. Second, we study resource allocation among multiple GPU applications to optimize system throughput. In particular, unlike earlier studies on enabling concurrent task support on GPUs, we take a different approach: we control the launch parameters of multiple GPU kernels using compile-time performance modeling as a kernel-level optimization, and we also develop a more general pre-processing model with batch-level control to further enhance performance. Lastly, we develop a novel buffer manager on the GPU, which is non-trivial because GPUs do not support the semaphores that are crucial to implementing a buffer manager. Specifically, we present a buffer manager that caches the output of multiple queries through a GPU Page Map. We develop an intuitive Linked List algorithm and a Random Walk algorithm; the Random Walk algorithm avoids a global lock and dramatically improves the performance of acquiring and releasing pages. Building on these components, we construct a prototype of G-DBMS.