Graduation Year

2022

Document Type

Dissertation

Degree

Ph.D.

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Electrical Engineering

Major Professor

Ismail Uysal, Ph.D.

Committee Member

Nasir Ghani, Ph.D.

Committee Member

Zhuo Lu, Ph.D.

Committee Member

Srinivas Katkoori, Ph.D.

Committee Member

Ozsel Kilinc, Ph.D.

Keywords

Deep Clustering, Deep Learning, Self-organizing Networks, Wireless Communications

Abstract

This research focuses on machine (and deep) learning applications, including clustering, anomaly detection and signal classification, for self-organizing and next generation mobile networks in wireless communications. Specifically, this dissertation addresses three different topics.

First, in the study titled “Performance analysis of neural network topologies and hyperparameters for deep clustering”, we explore the relationship between clustering performance and network complexity. Deep learning found its initial footing in supervised applications such as image and voice recognition, successes which were followed by deep generative models across similar domains. In recent years, researchers have proposed creative learning representations that exploit the unparalleled generalization capabilities of such structures for unsupervised applications, an approach commonly called deep clustering. This paper presents a comprehensive analysis of popular deep clustering architectures, including deep autoencoders and convolutional autoencoders, to study how network topology, hyperparameters and clustering coefficients impact accuracy. Three popular benchmark datasets are used, MNIST, CIFAR10 and SVHN, to ensure data-independent results. In total, 20 different pairings of topologies and clustering coefficients are used for both the standard and convolutional autoencoder architectures across the three datasets, for a joint analysis of 120 unique combinations with sufficient repetitive testing for statistical significance. The results suggest that there is a general optimum when choosing the coding layer (latent dimension) size, which is correlated to an extent with the complexity of the dataset. Moreover, for image datasets, when color makes a meaningful contribution to the identity of the observation, it also improves the subsequent deep clustering performance.
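
To make the approach concrete, the sketch below shows a minimal autoencoder-plus-k-means deep clustering pipeline on MNIST. The layer widths, the latent dimension of 10 and the training settings are illustrative assumptions, not the specific topologies or clustering coefficients evaluated in the study.

```python
# Minimal deep clustering sketch: train a dense autoencoder, then run
# k-means on the latent codes. All sizes below are illustrative choices.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.cluster import KMeans

(x_train, _), _ = keras.datasets.mnist.load_data()
x = x_train.reshape(-1, 784).astype("float32") / 255.0

latent_dim = 10  # hypothetical coding-layer (latent dimension) size

inputs = keras.Input(shape=(784,))
h = layers.Dense(500, activation="relu")(inputs)
h = layers.Dense(500, activation="relu")(h)
z = layers.Dense(latent_dim, name="latent")(h)        # coding layer
h = layers.Dense(500, activation="relu")(z)
h = layers.Dense(500, activation="relu")(h)
outputs = layers.Dense(784, activation="sigmoid")(h)

autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, z)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x, x, epochs=10, batch_size=256, verbose=0)

# Cluster the learned latent representation into 10 groups (one per digit).
codes = encoder.predict(x, verbose=0)
cluster_labels = KMeans(n_clusters=10, n_init=10).fit_predict(codes)
```

Varying latent_dim in a pipeline like this is one way to probe the coding-layer optimum discussed above.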

Second, in the study titled “Anomaly Detection in Self-Organizing Networks: Conventional vs. Contemporary Machine Learning”, we compare conventional and contemporary machine (deep) learning approaches, specifically for anomaly detection in self-organizing networks. While deep learning has gained significant traction, especially in application scenarios where large volumes of data can be collected and processed, more conventional methods may yet offer strong statistical alternatives, especially when paired with proper learning representations. For instance, support vector machines have previously demonstrated state-of-the-art potential in many binary classification applications and can be further exploited with different representations, such as one-class learning and data augmentation. We demonstrate for the first time, on a previously published and publicly available dataset, that conventional machine learning can outperform the previous deep learning state-of-the-art by 15% on average across four different application scenarios. Our results indicate that, when execution time is critical, conventional machine learning provides a strong alternative for 5G self-organizing networks while using significantly fewer trainable parameters.
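
As a simple illustration of the one-class learning representation mentioned above, the sketch below fits a One-Class SVM on synthetic "normal" KPI vectors and flags outliers; the synthetic features and the nu/gamma settings are placeholders, not the dataset or tuned parameters used in the study.

```python
# One-class SVM sketch for anomaly detection: fit on normal samples only,
# then flag deviations. Data and hyperparameters are illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_kpis = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))   # nominal cells
anomalous_kpis = rng.normal(loc=4.0, scale=1.0, size=(50, 8))  # degraded cells

scaler = StandardScaler().fit(normal_kpis)
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(scaler.transform(normal_kpis))

# predict() returns +1 for inliers (normal) and -1 for outliers (anomalies).
test = np.vstack([normal_kpis, anomalous_kpis])
preds = model.predict(scaler.transform(test))
print("flagged anomalies:", int((preds == -1).sum()))
```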

Finally, the third study is titled “Fast, Robust and Light Machine Learning for Signal Classification in Next Generation Mobile Networks”. Next generation mobile networks bring unprecedented opportunities coupled with unique challenges thanks to the integration of multiple families of devices. Fast and robust signal classification and modulation identification become critical to meet the sustained demand for capacity. This paper presents a comparative study of data-centric and conventional approaches to signal identification at different noise levels in a real-world application. We demonstrate that a standard lightweight classifier can detect multiple modulation schemes with and without data compression and outperforms the current state-of-the-art by as much as 6% on average across 15 different noise levels. More importantly, when feature compression is used, detection speed improves by at least 50-fold without a significant loss in accuracy.
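
The sketch below illustrates the kind of lightweight, feature-compressed classifier described above, using PCA-compressed feature vectors and a small random forest on synthetic data; the feature dimensions, compression size and classifier choice are assumptions for illustration, not the study's actual dataset or model.

```python
# Lightweight signal classification sketch with feature compression:
# PCA reduces the feature vectors before a small random forest classifies them.
# The synthetic features stand in for real modulation measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_classes, n_per_class, n_features = 4, 500, 128
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# Compress features before classification to cut inference cost.
pca = PCA(n_components=16).fit(X_tr)
clf = RandomForestClassifier(n_estimators=50, random_state=1)
clf.fit(pca.transform(X_tr), y_tr)

print("accuracy:", accuracy_score(y_te, clf.predict(pca.transform(X_te))))
```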
