Graduation Year

2018

Document Type

Thesis

Degree

M.S.C.S.

Degree Name

MS in Computer Science (M.S.C.S.)

Degree Granting Department

Computer Science and Engineering

Major Professor

Srinivas Katkoori, Ph.D.

Committee Member

Yicheng Tu, Ph.D.

Committee Member

Sriram Chellappan, Ph.D.

Keywords

Hill Climbing, Simulated Annealing, Parallel Computing, CUDA, Tabu List, Aspiration Criteria, Neighborhood Search, Global Optimum, Local Minima

Abstract

Many combinatorial optimization problems, such as the traveling salesman problem, the quadratic assignment problem, and flow shop scheduling, are computationally intractable. Tabu search based simulated annealing (SA) is a stochastic search algorithm that is widely used to solve combinatorial optimization problems. Because its run time is excessive, there is a strong demand for a parallel version that can be applied to any problem with minimal modification. Existing advanced and/or parallel versions of tabu search are tailored to the problem at hand, so their optimizations do not carry over to other problems. In this work, we propose a parallel version of tabu search based SA on the Graphics Processing Unit (GPU) platform. We propose two variants of the algorithm based on where the tabu list is stored (global vs. local). In the first variant, the list is stored in global memory so that all threads can access it. Multiple random walks through the solution space are carried out in parallel, and each walk avoids the moves made by the rest of the walks because all of them can see the global tabu list, at the expense of longer run time. In the second variant, a tabu list is stored at the block level and is shared only by that block's threads. Groups of random walks run in parallel, and each walk avoids only the moves made by the other walks in its group, since only they share the local tabu list. The second variant is faster in terms of execution time, while the first variant finds the global optimum more often. We present experimental results for six difficult optimization functions with known global optima. Compared to a CPU implementation with a similar workload, the proposed GPU variants are approximately three orders of magnitude faster and often find the global optima.
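The sketch below is an illustration of the two tabu-list placements described in the abstract, not the thesis's actual implementation. All names (saGlobalTabu, saLocalTabu, TABU_SIZE), the encoding of a move as an int, the circular-buffer tabu list, and the omitted cost evaluation and SA acceptance test are assumptions made for the example; tabu-list lookups are deliberately unsynchronized, since a stale read only weakens the tabu filter of a heuristic search.

```cuda
// Sketch: two placements of the tabu list for parallel SA random walks.
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

#define TABU_SIZE 64   // assumed fixed-size circular tabu list

__device__ bool isTabu(const int *list, int move) {
    for (int i = 0; i < TABU_SIZE; ++i)
        if (list[i] == move) return true;  // move was recently made
    return false;
}

// Variant 1: one tabu list in GLOBAL memory, visible to all walks.
// Every walk avoids moves made by every other walk, at the cost of
// global-memory traffic and an atomic on the shared head index.
__global__ void saGlobalTabu(int *globalTabu, unsigned int *head,
                             unsigned long long seed, int iters) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng;
    curand_init(seed, tid, 0, &rng);
    for (int it = 0; it < iters; ++it) {
        int move = curand(&rng) % 1024;          // propose a random move
        if (isTabu(globalTabu, move)) continue;  // tabu for ALL walks
        unsigned int slot = atomicAdd(head, 1u) % TABU_SIZE;
        globalTabu[slot] = move;                 // record for all walks
        // ... evaluate cost and apply the SA acceptance test here ...
    }
}

// Variant 2: one tabu list per block in fast on-chip SHARED memory.
// Only the walks within a block avoid each other's moves; faster,
// but walks in different blocks may repeat moves.
__global__ void saLocalTabu(unsigned long long seed, int iters) {
    __shared__ int localTabu[TABU_SIZE];
    __shared__ unsigned int head;
    if (threadIdx.x == 0) head = 0;
    for (int i = threadIdx.x; i < TABU_SIZE; i += blockDim.x)
        localTabu[i] = -1;                       // empty-slot sentinel
    __syncthreads();

    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng;
    curand_init(seed, tid, 0, &rng);
    for (int it = 0; it < iters; ++it) {
        int move = curand(&rng) % 1024;
        if (isTabu(localTabu, move)) continue;   // tabu within this block only
        unsigned int slot = atomicAdd(&head, 1u) % TABU_SIZE;
        localTabu[slot] = move;
        // ... evaluate cost and apply the SA acceptance test here ...
    }
}

int main() {
    int *dTabu; unsigned int *dHead;
    cudaMalloc(&dTabu, TABU_SIZE * sizeof(int));
    cudaMemset(dTabu, 0xFF, TABU_SIZE * sizeof(int));  // fill with -1
    cudaMalloc(&dHead, sizeof(unsigned int));
    cudaMemset(dHead, 0, sizeof(unsigned int));
    saGlobalTabu<<<8, 128>>>(dTabu, dHead, 42ULL, 1000);
    saLocalTabu<<<8, 128>>>(42ULL, 1000);
    cudaDeviceSynchronize();
    cudaFree(dTabu); cudaFree(dHead);
    printf("done\n");
    return 0;
}
```

The placement choice mirrors the trade-off stated in the abstract: shared memory is fast on-chip storage private to a block, so the per-block list is cheaper to consult but coordinates fewer walks, whereas the single global list coordinates all walks at the price of slower memory accesses.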
