Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Computer Science and Engineering

Major Professor

Sriram Chellappan, Ph.D.

Co-Major Professor

Ryan Carney, Ph.D., M.P.H., M.B.A.

Committee Member

Mehran Mozaffari Kermani, Ph.D.

Committee Member

Attila Yavuz, Ph.D.

Committee Member

Stephen Saddow, Ph.D.


Artificial Intelligence, Bee, Bumble Bee, Citizen Science, Computer Vision, Insects, Larvae, Machine Learning, Mimicry, Mosquito


Since the dawn of the Industrial Revolution, humanity has sought to make labor more efficient and automated, and this trend is only continuing in the modern digital age. With the advent of artificial intelligence (AI) techniques in the latter part of the 20th century, the speed and scale at which AI has been leveraged to automate tasks defy human imagination. Many people deeply entrenched in the technology field are genuinely intrigued by, and concerned about, how AI may change many of the ways in which humans have lived for millennia. Only time will provide the answers. This dissertation is concerned with designing, deploying, and validating computer vision algorithms (a branch of AI dealing with image datasets) to address a range of problems in insect classification. The broader impacts of this dissertation lie in agriculture, species evolution, and public health.

Bees are vital in agriculture and ecology, and food cultivation depends on them to a great extent. Ensuring a healthy environment for bees and conserving endangered bee species are therefore top priorities, and conservation in turn requires rapid identification of bees in nature. To help entomologists in this regard, we design, deploy, and validate convolutional neural network (CNN) models to classify bees from image datasets. First, we collected 6,332 original Research-Grade images of bees and other insects from the iNaturalist platform. We trained and evaluated a VGG16-based CNN model to classify bee images against other insect images, achieving more than 91% accuracy. We then trained a ResNet-101-based CNN model to classify between bumble bee and other bee images, achieving almost 89% classification accuracy. We used cutting-edge Class Activation Maps (CAMs) to test our models’ capability to distinguish insect body pixels from background pixels.
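The CAM technique mentioned above can be sketched in plain NumPy: a class activation map is the channel-weighted sum of the final convolutional feature maps, highlighting which spatial locations drove a class decision. The function below is an illustrative toy, not the dissertation's implementation, and the feature maps and weights are synthetic:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a Class Activation Map as the weighted sum of the
    final convolutional feature maps, one weight per channel.

    feature_maps: (H, W, C) activations from the last conv layer
    class_weights: (C,) weights connecting each channel to the class logit
    """
    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)        # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1] for overlaying
    return cam

# toy example: 4x4 spatial grid with 3 channels
fmaps = np.random.default_rng(0).random((4, 4, 3))
weights = np.array([0.5, -0.2, 0.8])
heatmap = class_activation_map(fmaps, weights)
print(heatmap.shape)  # (4, 4)
```

Upsampled to the input resolution and overlaid on the photo, such a heatmap shows whether the network attends to the insect's body or to the background.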

We also collected 360 mimic insect images to evaluate how capable our models are at classifying between bees and bee mimics from a morphological perspective. We discovered that bee mimics were the most capable of fooling our models compared to non-mimic insects. Additionally, we employed the t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm on the mimic insect images. t-SNE is an unsupervised, non-linear technique for dimensionality reduction, primarily used for visualizing high-dimensional data; we employed it to understand the clustering of the various mimic groups. We found that t-SNE generated 12 distinct clusters corresponding to the 12 phylogenetic groups of mimic insects. As a next step, we again took advantage of CNN algorithms to identify one of the most endangered bees in nature today - namely, the Rusty-patched bumble bee Bombus affinis. We collected 200 rusty-patched bumble bee images and 200 other bee images, again from the iNaturalist platform. The other bee images spanned 20 species from six distinct genera. We used mirror and rotate techniques to augment the dataset to 3,200 images, and additionally generated a grayscale version of each of these 3,200 images. First, we trained and evaluated an EfficientNetV2B0-based classifier model to identify rusty-patched bumble bees from other bee images, developing this model separately for color and grayscale images. On the test dataset, the models achieved around 90% and 92% accuracy, respectively. From the CAM results, we noticed a natural marker - a “V”-shaped black spot on the thorax - that was consistently highlighted by our classification algorithms. Motivated by this, we tried an innovative method, which we call anatomically inspired classification.
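The t-SNE step above can be sketched with scikit-learn. The feature vectors below are random stand-ins for the image features of the mimic insects, and the dimensions are illustrative:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
# stand-in for high-dimensional CNN feature vectors of mimic-insect images
features = rng.normal(size=(60, 128))

# embed into 2-D for visualization; perplexity must be < number of samples
embedding = TSNE(n_components=2, perplexity=10,
                 random_state=42).fit_transform(features)
print(embedding.shape)  # (60, 2)
```

Plotting the two embedding columns, colored by phylogenetic group, is how cluster structure such as the 12 mimic groups becomes visible.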
We manually cropped the thorax portion from the images and trained another EfficientNetV2B0-based classifier on the same classes, for both color and grayscale images. This time we saw a solid boost in classification performance, achieving around 95% and 94% accuracy for color and grayscale images, respectively. Next, we annotated the thorax portion on 400 images and trained an object detection and localization model to automatically detect the thorax in bee images. Using that model, we automatically cropped all the thorax images and trained a new set of CNN networks to classify between rusty-patched bumble bees and other bees. Here, too, we found better classification accuracy than with full-body classification. This demonstrates that learning from nature, and being able to look at the right anatomical components for classification, indeed brings improved results. Later in this dissertation, we move to mosquitoes - another insect important in nature from a public health perspective. Specifically, we employ CNN models to identify the sex of mosquito larvae - an important problem in certain mosquito control techniques. Within the broad ambit of mosquitoes, our focus is on one of the deadliest mosquitoes on the planet today - the Anopheles stephensi mosquito, which is proving to be an unstoppable vector of malaria in Asia and, more recently, Africa. Our partners in this study are researchers across multiple disciplines from organizations such as the CDC. For this study, we used Anopheles stephensi larvae images to identify sex, as the 6th abdominal segment of males includes two black spots representing the gonads. We collected 560 images from insectaries at the CDC and USF, covering the L3 (3rd instar) and L4 (4th instar) stages. We attempted this solution in two phases.
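The detect-then-crop step can be sketched as a simple bounding-box slice once the detector has localized the thorax. The `crop_box` helper and the coordinates below are hypothetical stand-ins for the detector's output, not the dissertation's code:

```python
import numpy as np

def crop_box(image, box):
    """Crop a detected region from an image.
    image: (H, W, C) array; box: (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

img = np.zeros((224, 224, 3), dtype=np.uint8)
img[60:120, 80:160] = 255                   # pretend this region is the thorax
thorax = crop_box(img, (80, 60, 160, 120))  # box from the (hypothetical) detector
print(thorax.shape)  # (60, 80, 3)
```

The cropped patches, resized to the classifier's input resolution, then replace the full-body images in training.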
In the first phase, we collected 362 images, augmented the data with mirror and rotate techniques, then trained a VGG16-based CNN model to classify between male and female images. We achieved around 90% accuracy on the validation dataset, but on the test dataset the accuracy was lower, at 74%. We applied CAM to the larvae images and found that the model was not focusing much on the gonads at the 6th abdominal segment; instead, it was focusing more on the thorax and nearby hairs. In the second phase, we collected additional larva images from the lab at USF, increasing our dataset to 560 images, trained an Xception-based network, and augmented images with eight different algorithms. Training in three steps, we achieved a validation accuracy of 96% and a test accuracy of around 86%. We again employed the CAM algorithm, and this time too the model focused on the thorax and nearby hairs. Finally, we integrated our Anopheles stephensi larvae sex classifier model into a web-based portal, using a Firebase app and a Linode GPU instance, to be used by the general public and mosquito control personnel around the world. A beta version of the system is now open for anyone to use.
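One common way to obtain several variants per image with mirror and rotate operations is the eight-element dihedral group (four 90-degree rotations of the image and of its mirror). This is only an illustrative sketch; the eight augmentation algorithms used in the second phase may differ:

```python
import numpy as np

def dihedral_augment(image):
    """Return the eight mirror/rotate variants of an image:
    the four 90-degree rotations of the original and of its mirror."""
    variants = []
    for base in (image, np.fliplr(image)):
        for k in range(4):
            variants.append(np.rot90(base, k))
    return variants

img = np.arange(16).reshape(4, 4)   # toy "image" with no symmetry
augmented = dihedral_augment(img)
print(len(augmented))  # 8
```

Because these transforms preserve labels (a mirrored male larva is still male), they multiply the effective dataset size without new collection effort.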

We believe that this dissertation demonstrates innovative uses of AI for insect classification. The novel aspects of this study include investigating the capability of bee mimics to fool AI algorithms, creating anatomically inspired algorithms for classifying an endangered bee, and classifying the sex of a highly invasive and deadly mosquito at the larval stage. We hope that these results will help drive forward AI applications in insect biology.