Graduation Year


Document Type

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Computer Science and Engineering

Major Professor

Alfredo Weitzenfeld, Ph.D.

Committee Member

Marvin Andujar, Ph.D.

Committee Member

Yu Sun, Ph.D.

Committee Member

Susana Lai-Yuen, Ph.D.

Committee Member

David Diamond, Ph.D.


Keywords

Bio-inspired Learning, Computational Model, Hippocampus, Reinforcement Learning, Spatial Cognition


Place cells are among the most widely studied neurons thought to play a vital role in spatial cognition. Extensive studies show that their activity in the rodent hippocampus is highly correlated with the animal’s spatial location, forming “place fields” that are smaller near the dorsal pole and larger near the ventral pole. Despite these advances, it remains unclear how this multi-scale representation enables navigation in complex environments.

In this dissertation, we analyze the place cell representation from a computational point of view, evaluating how multi-scale place fields impact navigation in large and cluttered environments. The objectives are to assess how the brain might exploit its multi-scale architecture and to extend brain-inspired models for controlling autonomous robots.

To achieve our goal, we present a multi-scale spatial cognition reinforcement learning model based on the differences between the dorsal and ventral hippocampus. We use the model to assess several place cell distribution methods in cluttered environments, showing how obstacles affect different scales. As a result, we propose distribution methods that can outperform single-scale representations by simultaneously optimizing the number of neurons used, the length of the learned paths, and the time required to learn them. Furthermore, we propose and implement methods for automatically adapting the representation to each environment, allowing autonomous systems to exploit the multi-scale representation.
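As a minimal illustration of the multi-scale encoding discussed above (a sketch only, not the dissertation’s model; the Gaussian field shape, cell counts, and scales below are assumptions), a 2-D position can be encoded by two place cell populations, one with small “dorsal-like” fields and one with large “ventral-like” fields, whose joint activity forms the state vector a reinforcement learning agent would learn over:

```python
import numpy as np

def place_cell_activity(pos, centers, scale):
    """Gaussian place-field activation for a population of cells of one scale."""
    d2 = np.sum((centers - pos) ** 2, axis=1)      # squared distance to each field center
    return np.exp(-d2 / (2 * scale ** 2))          # activity in [0, 1]

# Illustrative two-scale population tiling a 1 m x 1 m arena:
# many small "dorsal-like" fields, fewer large "ventral-like" fields.
rng = np.random.default_rng(0)
small_centers = rng.uniform(0, 1, size=(64, 2))
large_centers = rng.uniform(0, 1, size=(16, 2))

pos = np.array([0.5, 0.5])
state = np.concatenate([
    place_cell_activity(pos, small_centers, scale=0.05),
    place_cell_activity(pos, large_centers, scale=0.25),
])
print(state.shape)  # (80,)
```

Large fields give broad, overlapping coverage (fast generalization), while small fields resolve fine detail near obstacles, which is why the two scales trade off path quality against learning speed.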

Because reinforcement learning can require many trials to converge, we also present our work on hippocampal replay models. In particular, we extend a computational model to illustrate how hippocampal replay might reduce the number of trials required to learn a task by pre-exposing the agent to the environment, a phenomenon known as latent learning.
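The latent-learning effect can be caricatured with a Dyna-style sketch (an assumption on our part; the dissertation’s replay model is more elaborate): during pre-exposure the agent merely accumulates a buffer of experienced transitions, and offline replay of that buffer propagates value back to the start without any further trials in the environment. The corridor task, update rule, and constants below are illustrative.

```python
import numpy as np

# Hypothetical 1-D corridor: states 0..9, reward only on reaching state 9.
N, GOAL, ALPHA, GAMMA = 10, 9, 0.5, 0.9

def step(s, a):
    """Deterministic transition: action 0 = left, 1 = right."""
    s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    return s2, 1.0 if s2 == GOAL else 0.0

# Pre-exposure: the agent has experienced every transition at least once,
# summarized here as one stored sample per state-action pair (no learning yet).
memory = []
for s in range(N):
    for a in (0, 1):
        s2, r = step(s, a)
        memory.append((s, a, r, s2))

# Replay: sweep the stored experience offline with Q-learning updates,
# treating the goal as terminal, so value spreads back to the start state.
Q = np.zeros((N, 2))
for _ in range(50):
    for s, a, r, s2 in memory:
        target = r if s2 == GOAL else r + GAMMA * Q[s2].max()
        Q[s, a] += ALPHA * (target - Q[s, a])

print(Q[0].argmax())  # 1: the greedy action at the start heads right, toward the goal
```

Without replay, the same updates would only occur on real trials, so value would creep back one visited state per rewarded episode; replaying the buffer performs that propagation offline.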