Graduation Year
2005
Document Type
Thesis
Degree
M.S.C.S.
Degree Granting Department
Computer Science
Major Professor
Nagarajan Ranganathan, Ph.D.
Committee Member
Don Hilbelink, Ph.D.
Committee Member
Sudeep Sarkar, Ph.D.
Keywords
Virtual reality, Visible Human, 3D texture mapping, Gigabyte volume exploration, Direct volume rendering
Abstract
The use of virtual reality (VR) for visualization can revolutionize medical training by simulating real-world medical training procedures through an intuitive and engaging user interface. Existing virtual reality based visualization systems for human anatomy rely on 3D surface and volumetric models, or on simulation systems built from model libraries. The visual impact and the facilitation of learning in such systems are inadequate. This thesis research aims to eliminate these inadequacies by developing a non-immersive virtual reality framework for the storage, access, and navigation of real human cadaveric data. Based on this framework, a real-time software system called the Virtual Cadaver Navigation System (VCNS) is developed that can be used as an aid for teaching human anatomy.
The hardware components of the system include a mannequin, an examination probe similar to a medical ultrasound probe, and a personal computer. The examination probe is moved over the mannequin to obtain a virtual tomographic slice from the real cadaveric 3-D volume data. A 3-D binary space partitioning tree structure is defined to organize the entire volumetric data by subdividing it into small blocks of predefined size, called bricks, each of which is assigned a unique address for identification. As the examination probe is moved over the mannequin, the set of bricks intersecting the corresponding tomographic slice is determined by traversing the tree structure, and only the selected bricks are accessed from main memory and brought into the texture memory on the graphics accelerator card for visualization. The texture memory on the graphics card and the main memory are divided into slots whose size is a multiple of the brick size, and a tagging scheme that relates brick addresses, texture memory slots, and main memory blocks is developed.
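To illustrate this organization, the following C++ sketch (not taken from the thesis) shows a 3-D BSP tree whose leaves are bricks, together with a traversal that collects the bricks cut by an arbitrary slice plane. The brick addressing scheme, the node layout, and the plane-box test are assumptions made for the example.

```cpp
// Illustrative sketch (not the thesis code): a 3-D BSP tree over a bricked
// volume, traversed to collect the bricks intersected by a slice plane.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

struct Box {                       // axis-aligned extent of a node, in voxels
    Vec3 lo, hi;
};

struct Plane {                     // slice plane: dot(n, p) = d
    Vec3 n; float d;
};

// A brick address uniquely identifies the brick's grid position in the volume.
using BrickAddr = std::uint32_t;

struct BSPNode {
    Box box;
    int splitAxis = -1;            // -1 marks a leaf (a single brick)
    float splitPos = 0.0f;
    BrickAddr addr = 0;            // valid only for leaves
    BSPNode* child[2] = {nullptr, nullptr};
};

// Does the plane pass through the box? Compare the signed distances of the
// two extreme corners along the plane normal.
static bool planeIntersectsBox(const Plane& p, const Box& b) {
    Vec3 nearC{ p.n.x >= 0 ? b.lo.x : b.hi.x,
                p.n.y >= 0 ? b.lo.y : b.hi.y,
                p.n.z >= 0 ? b.lo.z : b.hi.z };
    Vec3 farC { p.n.x >= 0 ? b.hi.x : b.lo.x,
                p.n.y >= 0 ? b.hi.y : b.lo.y,
                p.n.z >= 0 ? b.hi.z : b.lo.z };
    float dNear = p.n.x * nearC.x + p.n.y * nearC.y + p.n.z * nearC.z - p.d;
    float dFar  = p.n.x * farC.x  + p.n.y * farC.y  + p.n.z * farC.z  - p.d;
    return dNear <= 0.0f && dFar >= 0.0f;
}

// Collect the addresses of all bricks (leaves) cut by the slice plane.
static void collectBricks(const BSPNode* node, const Plane& slice,
                          std::vector<BrickAddr>& out) {
    if (!node || !planeIntersectsBox(slice, node->box)) return;
    if (node->splitAxis < 0) {     // leaf: this brick must be made resident
        out.push_back(node->addr);
        return;
    }
    collectBricks(node->child[0], slice, out);
    collectBricks(node->child[1], slice, out);
}
```

Presumably the tree is built once, by recursively halving the volume until each leaf covers exactly one brick, so that a slice query visits only the subtrees its plane actually crosses.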
Based on spatial, temporal, and sequential locality of reference, only the currently required bricks and some of their neighboring bricks are loaded from main memory into texture memory, in order to maintain the high frame rates required for real-time visualization. This framework, consisting of the data organization and the access mechanism, is critical to achieving the interactive frame rates required for real-time visualization.
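The tagging and locality-driven loading described above might look roughly like the sketch below, which is an assumption rather than the thesis implementation: a tag table maps brick addresses to fixed-size texture-memory slots, evicts the least recently used brick when all slots are occupied, and prefetches neighboring bricks into any slots still free.

```cpp
// Illustrative sketch (not the thesis code): a tag table relating brick
// addresses to texture-memory slots, with LRU replacement and neighbor
// prefetch. The slot count and the prefetch policy are assumptions.
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>
#include <vector>

using BrickAddr = std::uint32_t;

class BrickCache {
public:
    explicit BrickCache(std::size_t numSlots) : numSlots_(numSlots) {}

    // Ensure the brick is resident in texture memory; return its slot index.
    std::size_t require(BrickAddr addr) {
        auto it = tag_.find(addr);
        if (it != tag_.end()) {            // hit: refresh LRU position
            lru_.splice(lru_.begin(), lru_, it->second.lruPos);
            return it->second.slot;
        }
        std::size_t slot;
        if (tag_.size() < numSlots_) {
            slot = tag_.size();            // a free slot is still available
        } else {                           // evict the least recently used brick
            BrickAddr victim = lru_.back();
            lru_.pop_back();
            slot = tag_[victim].slot;
            tag_.erase(victim);
        }
        uploadToTexture(addr, slot);       // copy brick: main memory -> texture
        lru_.push_front(addr);
        tag_[addr] = {slot, lru_.begin()};
        return slot;
    }

    // Exploit spatial locality: stage bricks adjacent to the current slice so
    // the next probe movement finds them already resident.
    void prefetch(const std::vector<BrickAddr>& neighbors) {
        for (BrickAddr a : neighbors)
            if (!tag_.count(a) && tag_.size() < numSlots_) require(a);
    }

private:
    struct Entry { std::size_t slot; std::list<BrickAddr>::iterator lruPos; };

    void uploadToTexture(BrickAddr, std::size_t) {
        // Placeholder: a real system would upload the brick's voxels into the
        // 3-D texture at the slot's offset (e.g. a glTexSubImage3D-style call).
    }

    std::size_t numSlots_;
    std::unordered_map<BrickAddr, Entry> tag_;
    std::list<BrickAddr> lru_;
};
```

Prefetching only into free slots, as done here, avoids evicting bricks needed for the slice currently being rendered; other policies are of course possible.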
The input data to the system consists of non-segmented voxel data and data that has been segmented and labeled based on tissue classification. The software system includes a labeling tool that displays the specific tissue information at the location of the mouse cursor. This facility is useful both for teaching anatomy and for self-learning. Thus, the proposed VCNS supports efficient navigation through the human body for learning anatomy and provides knowledge of the spatial locations of, and the interrelationships among, the various organs of the body. A prototype software system has been developed that achieves a throughput of 30 frames per second; it has been tested with an 18-gigabyte human cadaveric dataset obtained from the National Library of Medicine, on a personal computer with 64 megabytes of texture memory and 512 megabytes of main memory.
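The labeling lookup can be pictured as a mapping from a cursor position on the displayed slice back to a voxel in the segmented label volume. The sketch below assumes a dense array of per-voxel label ids and an id-to-name table; both are illustrative stand-ins rather than the thesis's actual data structures.

```cpp
// Illustrative sketch (not the thesis code): look up the tissue name for the
// voxel under the cursor, given a segmented, labeled volume.
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct LabelVolume {
    int dimX, dimY, dimZ;
    std::vector<std::uint8_t> labels;                      // one label id per voxel
    std::unordered_map<std::uint8_t, std::string> names;   // label id -> tissue name

    std::string tissueAt(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= dimX || y >= dimY || z >= dimZ)
            return "outside volume";
        std::uint8_t id = labels[(std::size_t(z) * dimY + y) * dimX + x];
        auto it = names.find(id);
        return it != names.end() ? it->second : "unlabeled";
    }
};
```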
Scholar Commons Citation
Lothe, Abhijit V., "Virtual Cadaver Navigation System: Using Virtual Reality For Learning Human Anatomy" (2005). USF Tampa Graduate Theses and Dissertations.
https://digitalcommons.usf.edu/etd/748