Lynn Abbott, Marc Abrams, Chris Beattie, Yvan Beliveau, David Bevan, Jack Carroll, Bob Fields, Deborah Hix, Diana Farkas, Ed Fox, Hayden Griffin, Susan Broker-Gross, Richard Gandour, Ron Kriz**, Al Loos, Scott Midkiff, John Moore, Joan Moore, Cal Ribbens, Mary Beth Rosson, Wayne Scales, Robert Schubert, Cliff Shaffer, Toni Trani, Mike Vorster, Layne Watson, Robert Williges. (This is a dynamic list so please let me know if I missed you. Names will be added as paragraphs become available. Send your contribution to Ron Kriz at email@example.com.)
** Principal Contact: R.D. Kriz, (540) 231-4386, firstname.lastname@example.org
This proposal is the result of several years of activities exploring the use of visual tools on campus and, most recently, a visit by key faculty and university administrators from Virginia Tech with NCSA staff, following an invitation from NCSA to create an NCSA-Virginia Tech partnership. Recent programs in supercomputing, human-computer interfaces, and visualization at Virginia Tech have evolved to a point where state-of-the-art equipment can enhance our present resources and create a visual computing environment on campus. This proposal is consistent with Virginia Tech's commitment to enhancing its teaching and research mission through the use of supercomputing, human-computer interfaces, and visualization in various research programs, in the undergraduate and graduate curriculum, and through extension to Virginia as a land-grant university.
The HCI Center, the Laboratory for Scientific Visual Analysis, and key faculty in the Colleges of Architecture, Arts and Sciences, and Engineering are interested in the creation of a CAVE resource that will enhance existing educational and research programs on campus. The CAVE is a 3D visual computing environment that recreates space and allows the educator or researcher to interact with and visualize complex shapes in an interactive 3D environment. We propose that Virginia Tech cost-share the purchase of an SGI Power Onyx with at least three VR boards and a small supporting workstation laboratory for those who do not have desktop workstations. Our long-term goal is to extend this type of resource as part of the proposed Advanced Communication and Information Technology Center (ACITC).
After many discussions with faculty throughout the University, we discovered that CAVE technology has broad-based support and will be used as a multidisciplinary tool. Several professors across campus are as interested in using CAVE technology for the classroom as they are for research. Hence this proposal fits nicely with the University's teaching and research mission. We are particularly fortunate to have an HCI group, which will be a significant team component. Depending on the level of outside funding and matching University funds, the HCI group will facilitate the creation of general 3D interfaces that will benefit educational and research programs equally.
While 3D visualization technology is still in its infancy, hardware capabilities have progressed to the point where useful, practical work can be done. However, there exist significant barriers to using 3D visualization, primarily in usability and interfaces. Our position is that, to break through these barriers, research is needed on the following enabling technologies: (If you want to make a contribution here to develop an interface tool, contact Ron Kriz at email@example.com).
Listed above are HCI and visualization researchers with a variety of backgrounds who will focus on developing general-purpose 3D interfaces. Listed below are some specific projects that will use the interface tools listed above or use other turnkey visualization tools that would take advantage of a CAVE environment: (If you want to make a contribution here to a specific application, contact Ron Kriz at firstname.lastname@example.org).
Another goal is to create a visual computing environment by linking the proposed CAVE with existing parallel computers and the HCI Interactive Accessibility facilities on campus. To facilitate the link between existing numerical computing and the proposed visual computing environments, the University began the process of upgrading to an ATM campus network in spring 1995. This network can provide 155-Mbps links (OC-3) between parallel computers as well as individual workstations. We will also study the use of MPI and PVM to link parallel computers with the CAVE on campus. Future plans call for a state-wide ATM network that can encourage alliance interaction with industry, government, and other academic institutions in Virginia.
Other faculty in the College of Business, the College of Agriculture, and the College of Veterinary Medicine are also interested in participating. I will be meeting with key faculty from these colleges this week. Based on these discussions, we anticipate broadening support for this proposal.
Below we list the paragraphs for each of the topics listed above which will be linked as hypertext to the final two page document.
Given the availability of suitable hardware, the primary barrier to effective 3D visualization is a well-designed user interface. Developing user interfaces to portray large quantities of information is rapidly becoming one of the most challenging areas of computer systems. Across all sectors of our society -- private, industrial, military, government -- the need to manipulate the ever-increasing volume of data is growing rapidly. The primary objective of our research is to dramatically improve the methods used for manipulating and visualizing vast quantities of data, especially in 3D environments. VR-based systems such as the CAVE are strikingly different from the current de facto standard GUIs (graphical user interfaces) that run on desktop workstations. But very few new techniques have been developed that let users, who develop on desktop workstations, interact with CAVEs and other 3D virtual environments in a way that is most appropriate for those environments. In fact, all too often in a virtual environment there is a fundamental mismatch between what a device is suited for and what it is actually being used for. Clearly the CAVE is a superior 3D visualization environment when compared to traditional desktop workstations, but desktop workstations are where engineers and scientists first visualize their data; hence linking these two environments would be of obvious benefit. Both input and output need to be studied, and techniques for enhanced user control in these environments developed and evaluated. New input technologies such as eye gaze, foot-based input, and of course speech may offer improved interaction for both the desktop and CAVE environments. For output, such effects as haptic (tactile/pressure) and auditory feedback need to be studied. These new techniques for user control require a close coupling of input and output in order to simulate the real world (as much as is desirable and/or possible).
Human-computer interface (HCI) work in this project will be user centered. This means that the work proposed in this section will be based not on what software or interaction techniques are already available or are easiest to code, as in section 2, but rather on what tasks users want and need to perform in order to access, manipulate, and understand large amounts of data, particularly in virtual environments such as the CAVE. These user-based needs will then serve as requirements for creating supporting software for novel interaction techniques that offer an appropriate match of technique and device to user goals and tasks. We expect the results of this study to also provide insight into how to improve the desktop workstation environment. The full potential of promising interactive technologies can be realized only when users can easily communicate with such systems. The result of our proposed HCI research will be measurably (and documentably) improved user interfaces for visualization and control of multi-dimensional information, in terms of both functionality and usability for the user.
We propose to build an interface between 3D visual tools that work on both desktop workstations and a 3D CAVE hardware environment. Our 3D visualization software interface would be designed to allow researchers to fully develop applications on their desktop workstations and then to automatically convert their workstation application for use in the CAVE. Emphasis is placed on developing applications on the desktop workstation rather than entirely from within the CAVE. Since CAVE technology is expensive, it is difficult to justify developing a specific application entirely within the CAVE.
Researchers are more likely to use 3D CAVE hardware if they can continue to develop on their desktop workstations. This puts an emphasis on working with existing, widely used, turnkey software. We propose to work with AVS, MSI-BioSym, VNI, SGI, and other companies that make 3D volume visualization tools. We will create this interface both as an independent post-processor and, when possible, as an integral part of the source code. We anticipate that while designing CAVE hardware and software interfaces we will encounter new ideas that will also enhance the 3D visualization techniques used on desktop workstations. For this we anticipate the purchase of some stereoscopic workstation displays.
We expect educators and researchers from a variety of disciplines on campus to participate in both designing and testing our workstation-CAVE interface tool, hence giving the HCI evaluation team the opportunity to study a much wider variety of 3D visualization applications. We plan to make our resulting software widely available to the general research community, DoD agencies and their industrial partners.
The fundamental mathematical structure of many data spaces is generally multidimensional, and the goal is to fathom the structures through 3D views of that space. The central problem is data navigation: either the data set is too massive to exhaustively visualize, or there are multiple data sets to be compared.
Many scientific disciplines produce and use large, multidimensional databases. One typical example is the General Social Survey (GSS), widely used in the social sciences. Collected nearly every year for about 20 years, the GSS contains hundreds of individual questions, each administered to hundreds of individuals, regarding preferences and demographic statistics. Another example is the US Census, issued as hundreds of statistics on each census tract. Large-scale computations such as the multidisciplinary design optimization of an aircraft or weather forecasting produce far too much data to exhaustively visualize.
In collections of text, multimedia, and hypermedia, there also are high dimensional representations required (e.g., vector spaces, feature spaces). Indeed, thousands of dimensions are the norm, and powerful methods like Latent Semantic Indexing only reduce the number to hundreds of dimensions. Here the added challenge is that many dimensions are interconnected or related in complex ways, through links, synchronization, and content-based associations.
Most scientific visualization software today is designed to visualize numerical data, or data that has as its domain a set whose members are totally ordered, such as integers or real numbers. Yet many databases such as the GSS or the US Census include non-numeric, or categorical data: a partial order may exist among the domain values, but a total ordering does not. Genome sequences and individuals in a demographic study are categorical data. So are the values in many computer and network traces, such as traces of which program module is currently in execution in a program used in the design of software for parallel computers. The trouble with categorical data is that there is no obvious way to visualize them, in contrast to numerical data, whose total ordering provides an obvious mapping to two or more Cartesian dimensions. A particular problem is visualizing categorical time series. The body of time series methods for numerical data, such as Fourier transforms, correlation graphs, and statistical methods such as ARIMA are defined in terms of arithmetic operations, which are meaningless for categorical data.
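Because arithmetic is meaningless for categorical traces, analysis must rest on order-free operations. As a minimal sketch (the trace values and function names below are hypothetical illustrations, not part of any existing tool), run-length encoding and transition counting summarize a categorical time series without assuming any ordering of the states:

```python
from collections import Counter

def run_length_encode(trace):
    """Compress a categorical trace into (state, duration) runs."""
    runs = []
    for state in trace:
        if runs and runs[-1][0] == state:
            runs[-1] = (state, runs[-1][1] + 1)
        else:
            runs.append((state, 1))
    return runs

def transition_counts(trace):
    """Count state-to-state transitions; meaningful with no ordering at all."""
    counts = Counter()
    for a, b in zip(trace, trace[1:]):
        counts[(a, b)] += 1
    return counts

# A trace of which program module is executing, as in the parallel-software
# design example above (invented data).
trace = ["init", "compute", "compute", "io", "compute", "compute", "io"]
runs = run_length_encode(trace)
transitions = transition_counts(trace)
```

Both summaries map naturally onto visual encodings -- bar lengths for run durations, arc thickness for transition frequency -- without imposing a spurious Cartesian axis on the categorical domain.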
Just as a statistician would never make an inference based on a sample size of one, visualization systems will ultimately have to handle multiple data sets, representing multiple simulation runs, or multiple trace files, or multiple observations of a system. Doing so requires handling massive amounts of data, thereby exacerbating the scale problem in visualization tools: how to design a tool that can simultaneously analyze an arbitrary number of traces of arbitrary size, but not exceed the limited input bandwidth of a human's senses. There are four fundamental solution methods to allow analysis of multiple data sets, while addressing the scale problem:
Real time navigation: Visualization today is mostly concerned with representing one data set, and the hardware reflects that fact--one fast workstation or a synchronous multiprocessor with a few processors. True data navigation of large and multidimensional data sets will require massive computational power, which translates to parallel supercomputers with tens to hundreds of processors (like an Intel Paragon or a Cray T3E, rather than a Power Challenge). We propose development and implementation of parallel visualization algorithms, because they are a crucial enabling technology. This is especially important in handling real-time network traffic analysis, where thousands of streams are simultaneously active on our campus.
Extending visualizations: Techniques for visual analysis and browsing of such multidimensional databases are beginning to emerge. Integrated environments allowing multiple views of the data can allow users to gain new insights into their data. Such views include maps (for spatial distribution), graphs for range distribution, and spreadsheets for viewing multiple criteria at a time. Scatter plots and special techniques such as parallel coordinates attempt to let users visually spot correlations and other relevant relationships. Few of these techniques have been extended to 3D visualization environments and the new visualization technology such as desktop and CAVE VR.
Machine interpretation: A methodology that helps identify "interesting" things to look at, such as outliers or steep gradients, or that intelligently condenses insignificant data, or that clusters similar data, can help in navigation. Our past work on a system to analyze time series data, called Chitra, analyzes not one but a set, or ensemble, of data sets. Chitra commands can manipulate the ensemble as a unit, or statistically analyze, model, and visualize selected members individually or as an ensemble. Chitra provides transforms to simplify the data and models that reduce trace data to a parsimonious, dynamic characterization of system behavior. We propose building on our work with Chitra to incorporate data navigation and techniques from the emerging field of "data mining." The approach used in Chitra is to visualize some traces, test all traces, transform all, and model all. Transforms are used to control state-space explosion in the resultant model; examples of transforms include state aggregation by pattern aggregation and state aggregation by filtering in the frequency domain. Among its statistical tests, Chitra allows partitioning of an ensemble. Normally the user will partition the ensemble into mutually exclusive and exhaustive sub-ensembles, each of which is homogeneous, and then visualize one trace in each sub-ensemble, which by definition is ``representative.'' Chitra can also generate a separate model of each sub-ensemble.
Exploiting more human input bandwidth: A significant bottleneck in visualization of multidimensional databases has naturally been the limits of the 2D computer monitor. 3D visualization hardware allows for the development of new visualization techniques that take advantage of 3D stereoscopic displays, or true 3D support through virtual reality. We propose studying VR interfaces and visualization techniques for multidimensional databases. Of particular interest are collections of information about network traffic, where sonification approaches might be helpful to add dimensions, allowing understanding of media type, media size, communication origin and destination, topics, etc.
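The ensemble partitioning described under machine interpretation above can be conveyed with a small sketch. This is not Chitra's actual algorithm; it is a hypothetical stand-in that groups traces whose summary statistic agrees within a tolerance, yielding mutually exclusive and exhaustive sub-ensembles, and then picks one representative trace per group:

```python
def partition_ensemble(ensemble, key, tol):
    """Group traces whose summary statistic (key) agrees within tol.
    Returns mutually exclusive, exhaustive sub-ensembles (greedy grouping
    against the first trace in each group -- illustrative only)."""
    groups = []
    for trace in ensemble:
        k = key(trace)
        for g in groups:
            if abs(key(g[0]) - k) <= tol:
                g.append(trace)
                break
        else:
            groups.append([trace])
    return groups

mean = lambda t: sum(t) / len(t)

# Four invented traces: two cluster near a mean of ~1.5, two near ~10.
ensemble = [[1, 2, 1], [9, 10, 11], [2, 1, 2], [10, 9, 10]]
subs = partition_ensemble(ensemble, mean, tol=2.0)
representatives = [g[0] for g in subs]  # one "representative" trace per group
```

In a real system the homogeneity test would be a statistical test rather than a tolerance on a single statistic, but the reduction is the same: visualize one representative per homogeneous sub-ensemble instead of every trace.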
The primary goal of this research is to provide a fundamental understanding of the major independent and dependent variables that improve human-to-human communication in computer-conferencing environments using 3D visualization. Computer conferencing can be enhanced by 3D visualization in order to facilitate electronic presence in human-to-human communication. It can also help the learning of individuals and groups. A host of human-computer interface parameters and alternative human sensory communication variables need to be considered simultaneously in the design of these computer-augmented systems. The large number of resulting variables precludes empirical evaluation in one large experimental design. These variables need to be investigated using sequential research techniques in order to build an integrated database from the results of several studies that specifies the functional relationships among these variables. A five-year research effort is proposed involving three overlapping phases of investigation. Phase 1 includes the development of the 3D computer-conferencing environment that will operate using the 3D CAVE visualization tools and the development of appropriate evaluation metrics. Phase 2 deals with sequential research planning, selection of appropriate independent variables affecting human communication, selection of dependent variables affecting communication performance, and conducting a series of sequential experiments. Phase 3 is concerned with combining all the data into an integrated database that can be queried by an empirical model based on polynomial regression to optimize the 3D computer-conferencing system.
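The kind of empirical model proposed for Phase 3 can be sketched as a second-order polynomial regression fit by least squares. The factor names and response data below are hypothetical placeholders, not results from any experiment:

```python
import numpy as np

# Hypothetical factors from a sequential experiment series:
# x1, x2 = two interface variables; y = a measured communication score.
x1 = np.array([0.0, 0.5, 1.0, 0.0, 0.5, 1.0, 0.0, 0.5, 1.0])
x2 = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 0.5, 1.0, 1.0, 1.0])
y = 2.0 + 3.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2   # noiseless, for illustration

# Second-order polynomial design matrix: 1, x1, x2, x1^2, x2^2, x1*x2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Query the fitted model at any factor setting.
predict = lambda a, b: np.array([1, a, b, a * a, b * b, a * b]) @ coef
```

Once fitted over many studies, such a response surface can be searched for the factor settings that maximize the predicted communication score, which is the optimization step the paragraph describes.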
1. Automatic Interpretation of Computed Tomography Images, A.L. Abbott
Computed tomography has extended outside the medical imaging arena; there are now many applications that can benefit from high-speed image processing algorithms, such as detecting explosives in luggage and detecting cracks in structural materials, to mention only a few industrial uses. Using high-speed neural networks, we propose to design a system that automatically detects defects or structures of interest in a 3D visualization environment. Particular attention will be devoted to studying the use of voxel volume visualization tools that show potential for real-time industrial applications. Current research shows that the neural-net approach can take advantage of parallel computing architectures, and hence of the recent parallel computing systems on campus: an Intel Paragon, an IBM SP-2, and an SGI Power Challenge.
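The neural-network classification idea can be shown in miniature. The sketch below trains a single-layer network (logistic regression) by gradient descent to separate "defective" from "normal" voxel patches using two synthetic image features; the features, data, and labels are invented for illustration, and the proposed system would of course use real CT volumes and larger networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: (mean intensity, intensity variance) of voxel
# patches; defective patches (label 1) are darker and noisier -- an
# assumption made purely for this example.
normal = np.column_stack([rng.normal(1.0, 0.05, 200), rng.normal(0.1, 0.02, 200)])
defect = np.column_stack([rng.normal(0.6, 0.05, 200), rng.normal(0.4, 0.02, 200)])
X = np.vstack([normal, defect])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Single-layer network trained by batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
    w -= 1.0 * (X.T @ (p - y)) / len(y)
    b -= 1.0 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = np.mean((p > 0.5) == (y == 1))
```

The per-patch independence of this computation is what makes the approach amenable to the parallel architectures mentioned above: patches can be classified concurrently across processors.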
2. Performance Analysis of Parallel Computers and Communications Networks, M. Abrams
......... Paragraph Pending..................
3. General Modeling Simulation of Air Flow and Contamination Distribution, Y.C. Beliveau
There is a need to match sensor data in real-time to the air circulation patterns to determine pollutant transfer in enclosed systems such as the biosphere project. CAVE resources will provide the researcher the ability to interact with simulation models in real-time and determine air quality parameters not just for special projects such as the biosphere, but also for other industrial, commercial and residential facilities.
4. Visualization and Animation of 3D CAD Building Construction Models, Y.C. Beliveau
There is a need to better understand design data for the building and construction of facilities. With the proposed CAVE resources it will be possible for the user to animate and visualize CAD model simulations in which the user can interactively travel through and become part of the CAD models. There is a further need to link in object-oriented capability for the different user databases that are needed to support tasks and operations throughout the life cycle of the construction of a facility. This CAVE interface would be used for both building and construction research and educational programs on campus.
5. Molecular Modeling in Biochemistry, D.R. Bevan
The CAVE facility will greatly increase our capabilities in molecular modeling, both in research and teaching. Dr. Bevan's current research projects in which molecular modeling is applied include: 1. structural analysis and visualization of proteins, 2. simulation of distance distributions in peptides, 3. solvation energies and their relationship to membrane permeability, and 4. isomerization of proline-containing peptide bonds.
The aspects of these projects that involve visualization of structures will be greatly enhanced by the ability to view the molecules in a CAVE. A CAVE will also provide the environment to gain a better "feel" for molecular interactions through the tactile responses that are possible. Dr. Bevan also teaches courses in which the CAVE will be used, and he assists other biochemistry faculty who want to apply structure visualization in teaching. For example, Dr. Bevan has taught several times a graduate-level course entitled "Molecular Modeling of Proteins and Nucleic Acids." Students learn the theory of molecular modeling, but much of the course is taught from a more practical perspective in that students are required to do several computer-based problems and projects. The students in the molecular modeling course will all experience the CAVE for visualization, and those with the inclination will apply it in their class projects.
The Department of Biochemistry has the second or third largest undergraduate enrollment in the country. Dr. Bevan will use the CAVE to assist these students in visualizing protein and nucleic acid structures as part of their undergraduate training. Many years ago when we first began using computer visualization of structures in our classes, the students were very enthusiastic because they gained a much better understanding of structural features. The CAVE will raise this educational experience to a level that far surpasses what we are currently able to provide.
6. Composite Strength and Reliability, W.A. Curtin
The strength and reliability of most fiber-reinforced composites depend on the strengths of the fibers, which are brittle materials and must thus be described statistically. Composite performance then depends on the statistical distribution of fiber strengths and the manner in which load is transferred from broken to unbroken fibers as some fibers break under loading. The resulting strength of the composite is also statistical, and depends on the composite volume. Predicting and understanding the evolution of damage in a fiber composite is an important but difficult problem, and only limited analytical progress has been made to date.
Our new research in this area is focused on numerically intensive simulation studies of fiber composite failure. The fiber composite is mapped onto a discrete anisotropic lattice of elastic elements which represent the elastic fibers and the slip between fibers and matrix; each fiber element is assigned a strength selected from the desired strength distribution, and the anisotropy of the lattice controls the load transfer. The inhomogeneous distribution of stresses throughout the composite is calculated numerically using the 3D Green functions appropriate to the anisotropic elastic lattice. This requires the inversion of a matrix which is on the order of the composite volume. Moreover, one inversion must be made for each damage state of the system as the fiber damage evolves, until failure occurs, which then represents one single composite strength value at the selected composite size, fiber distribution, and load transfer. The entire procedure must be repeated many times to generate statistical distributions, and on increasingly larger system sizes to garner information on size scaling. Since laboratory composites typically contain 100,000 fibers in a cross-section, and actual components are many times larger, volume scaling is a critical issue and requires simulations on systems as large as conceivable. Visualization of these results is also important; 3D visualization tools have already been developed in GL to aid in the interpretation of simulation results. Visualization of results will be developed first on workstations, and where particularly complex 3D structures are involved our research can benefit from access to CAVE resources.
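The statistical character of this failure process can be conveyed with a drastically simplified sketch. The example below uses equal (global) load sharing in place of the anisotropic lattice Green function load transfer described above, with Weibull-distributed fiber strengths; all parameters are illustrative:

```python
import random

def bundle_strength(n_fibers, shape=5.0, rng=random.Random(42)):
    """Monte Carlo strength of a fiber bundle under equal (global) load
    sharing -- a simplification of the local load transfer in the lattice
    model. Each fiber draws a Weibull strength; the bundle strength is the
    peak load the surviving fibers can carry as the weakest ones fail."""
    strengths = sorted(rng.weibullvariate(1.0, shape) for _ in range(n_fibers))
    # When stress reaches strengths[k], the k weaker fibers have failed and
    # the remaining n - k fibers carry the load.
    return max(s * (n_fibers - k) / n_fibers
               for k, s in enumerate(strengths))

# Repeating the simulation yields the statistical strength distribution
# the text describes; each call is one composite strength sample.
samples = [bundle_strength(1000) for _ in range(20)]
mean_strength = sum(samples) / len(samples)
```

Even this toy model reproduces the key point of the paragraph: a single run yields one strength value, so the whole procedure must be repeated many times, and at increasing sizes, to characterize the distribution.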
The numerical intensity requires the use of advanced workstations for program development and initial studies. Current work is being carried out on an SGI Indigo 2. Individual runs on composites containing only 100 fibers required about 1 hour of CPU time. For larger systems, 2500 fibers, the time per run increases rapidly, as does the required memory. Access to multiple machines of the same type through a transparent network could allow for some parallel operation, which would increase speed by factors of 20-40. Access to larger machines will permit larger system sizes to be investigated far more systematically than is presently possible.
7. Molecular Modeling in Material Science, D. Farkas
This project involves a basic study of the structure of various defects in materials at an atomistic level. The project uses descriptions of the interactions between the various atoms that are based on both experimental data and first-principles quantum mechanical calculations. The technique we presently use for the interatomic forces is the embedded atom method (EAM) and a modified version of the method that we have developed. Once this description is obtained, the equilibrium atomic configuration around any defect can be calculated through various energy minimization schemes, molecular dynamics, or Monte Carlo techniques. The detailed information on the atomistic configuration of the defective region is then linked to the properties of the material that are controlled by such defects. The Atomistic Simulation Laboratory has been involved in atomistic computer simulation of materials behavior for more than 10 years. In recent years the increased speed of available computing facilities has allowed us to undertake large-scale simulations. It is now possible to conduct simulations involving millions of atoms and study the structures of defective solids and their relation to material properties.
These large scale simulations pose new challenges in the ways in which the results can be visualized and analyzed. Conventional two dimensional scientific visualization packages usually can not handle the large number of atoms involved in these massive simulations. Furthermore, the new computer architectures and increasing speeds allow the study of defects that are truly three dimensional in nature. This means that the results have to be visualized necessarily in three dimensions. An example of this type of simulation is the work on the study of cracks in intermetallic materials that is currently under way in the Laboratory. This work is now possible in three dimensions using realistic crystal structures for the materials considered and also realistic descriptions of the energetic interaction among the atoms. Molecular statics and molecular dynamics studies of crack behavior require complex visualization schemes in order to translate the three dimensional structure into various two dimensional plotting schemes. CAVE visualization will allow this research to take a qualitative step forward since the structure of the crack and surrounding area can be seen in a direct way.
During a recent visit to NCSA's CAVE, a sample simulation was visualized using applications already in place. These preliminary results indicate that the benefits of CAVE visualization for fracture mechanics studies at an atomistic level will indeed lead to qualitatively new possibilities. Similar benefits are expected in the studies of various other defects underway in our laboratory, such as the structure of stepped surfaces, grain boundaries, and interfaces in various materials. Our current work, sponsored by NSF and ONR, represents a large effort in the area, with a group of two postdoctoral research fellows and six graduate students.
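The energy minimization step at the heart of molecular statics can be sketched in miniature. The example below relaxes a single atom pair under a Lennard-Jones potential (a stand-in for the EAM potentials actually used in this work) by steepest descent; real simulations minimize over millions of atomic coordinates:

```python
def lj_energy(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy -- illustrative substitute for EAM."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_force(r, eps=1.0, sigma=1.0):
    """-dE/dr: positive values push the atoms apart."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

# Molecular statics by steepest descent: step along the force until the
# pair reaches its equilibrium spacing, where the force vanishes.
r = 1.5
for _ in range(2000):
    r += 0.01 * lj_force(r)

# The LJ pair equilibrium separation is 2**(1/6) * sigma.
```

In the defect studies described above the same idea applies with an EAM energy functional and a high-dimensional minimizer, and the relaxed configuration around the defect is what the CAVE would display.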
8. Polymer Synthesis, NSF-STC, H.W. Gibson
Rotaxane is a compound which consists of a cyclic molecule which is threaded by a linear molecule; there is no covalent bond between the two species. The name derives from the Latin words for wheel and axle. Polymeric analogs of these small compounds can consist of very long linear species and have a very large number of cyclic molecules threaded along the backbone. Alternatively the cyclic species may be incorporated into the polymeric backbone or included as a group pendant from the backbone and can be threaded by either low molecular weight or high molecular weight linear species. The synthesis of this new class of materials has been pioneered in our group by using simulation-visualization resources on campus. The proposed CAVE environment linked with other computational resources on campus would constitute a significant improvement on the present simulation-visualization resources.
It is well known in polymer science that the molecular shapes of species with the same chemical constitution play a very critical role in the determination of physical properties. For example the viscosity in solution and melt states, which depends upon the interaction of the macromolecule with the surroundings, is very sensitive to shape. Solid state properties, such as the glass transition temperature, the melting point, indeed, even the presence of crystalline order, and mechanical behavior, which depend upon packing in a matrix, are also responsive to molecular shape. From a general experimental perspective then it is well appreciated that changes in molecular geometry or topology play a significant role in physical properties in both solution and solid states of polymeric materials.
Polyrotaxanes challenge our fundamental understanding of these relationships. The details of the 3-dimensional organization of these systems and the response of the structures to changes in environment are poorly understood at a molecular level, so our ability to predict the physical properties is limited. The objective of this project is to develop a sufficient understanding of the structures and interactions in polyrotaxanes to be able to predict physical properties of importance in the solution, molten, and solid states. These calculations involve extensive computation because there are many atoms, many rotatable bonds, and a very large number of conformational states. The ability to visualize simulation results is used to develop predictive capabilities; hence this research would particularly benefit from access to a 3D visualization CAVE environment.
All of these simulation-visualization activities are computationally intensive because of the sizes of the polyrotaxanes and the necessity of examining a large number of structures as a function of simulated thermal and mechanical stresses in order to predict bulk properties with any confidence.
We currently employ a variety of commercial programs to attack this problem at various levels. In particular we use Molecular Simulations' Polygraf and Cerius with AVS. With these 3D simulation-visualization tools we will develop first on our desktop workstations; the same tools can then take advantage of the unique CAVE environment, where there is an advantage to being immersed in particularly complex 3D structures.
9. Micromechanical Modeling of Composite Textiles, O.H. Griffin
Distributed computing resources for simulation and visualization are both required for development, solution, and post-processing of the detailed micromechanics models used for analysis of textile composite materials. Most of the pre- and post-processing of these models can be accomplished on an Indigo II machine. The solution phase of small models with less than 50K degrees of freedom (d.o.f.) may also be performed on an Indigo II. For models larger than 200K d.o.f., a scalable midsize computer is required.
Scalable computing resources and visualization of results in a CAVE environment will enable the analysis of detailed micromechanical models of textile-based composites. While laminated composites have served the traditional aerospace industry over the past 20 years, textile-based composites are a recent development in the area of lightweight structural materials. Textile composites (TC) have the advantage over laminated composites of significantly greater damage tolerance and resistance to delamination. Currently, the major disadvantage of TC is the inability to examine the details of the internal response of these materials under load. Although more costly than simple strength-of-materials models, the present analysis, based on detailed finite element models of the representative volume element (RVE) of a textile, allows prediction of the load, mode, and location of failure within the RVE. Through these models, not only is gross characterization possible, but internal details of displacement, strain, stress, and failure parameters can be studied.
Preliminary analyses on an IBM RS6000 have proven useful in the study of simple textiles such as plain weaves. Since the RVE of a plain weave contains only 2 yarns, it could be modeled with 45K d.o.f. Solution of a linear elastic analysis required approximately 600Mb of disk and 100 minutes of c.p.u. time. The largest proposed models are two-dimensionally braided textiles, having ten to fifteen yarns in the RVE and requiring in excess of 200K d.o.f. for fully converged solutions. These models will incorporate an anisotropic progressive failure methodology requiring multiple load steps and multiple iterations within each load step. This combination of the finite element method and progressive failure methodology should allow the numerical simulation of the response of a textile composite through final failure. Computer requirements for such a procedure should be between one and two orders of magnitude greater than required for the simple plain weave model. Software requirements: IDEAS, ABAQUS, AVS. This research would be significantly enhanced if these existing desktop workstation tools could also be used in a CAVE environment, particularly for complex RVE geometries and corresponding strain energy density distributions.
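The resource estimate above follows from simple arithmetic. A minimal sketch, using only the plain-weave baseline numbers and the one-to-two-orders-of-magnitude factor quoted above (the factor itself accounts for the larger model plus the progressive failure iterations):

```python
# Back-of-envelope scaling of the plain-weave baseline to the braided models.
base_dof = 45_000        # plain-weave RVE, linear elastic analysis
base_cpu_min = 100       # CPU time of the baseline solution (minutes)
target_dof = 200_000     # fully converged braided-textile models

# The proposal estimates one to two orders of magnitude more compute,
# driven by the larger model plus the multiple load steps and iterations
# of the progressive failure methodology.
low_factor, high_factor = 10, 100
print(f"Estimated CPU time: {base_cpu_min * low_factor} "
      f"to {base_cpu_min * high_factor} minutes")
print(f"d.o.f. growth alone: {target_dof / base_dof:.1f}x")
```

The d.o.f. ratio alone is under 5x, which is why the nonlinear failure methodology, not model size, dominates the projected cost.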
10. Wave Propagation Models: Simulation-Visualization, R.D. Kriz
Recent advances in Scanning Acoustic Microscopy (SAM) have allowed researchers to evaluate the distribution of elastic properties in heterogeneous anisotropic composites. From SAM images material researchers can better predict the tolerance of these materials to crack growth. Unfortunately much of the information in these SAM images lacks a physical interpretation. The use of a numerical simulation with visualization can assist the material researcher in the analysis and interpretation of SAM experimental results. Because each material system has a unique structure, the corresponding numerical Finite Element Model (FEM) simulation requires a large refined mesh to model these irregular and unique structures. With such large meshes fewer approximations are made and more realistic results can be obtained. With scalable metacenter resources, parametric studies in realtime with appropriate simulation-visualization techniques can be used by material researchers to physically interpret SAM images and to suggest new nondestructive evaluation (NDE) test methods.
Most of our efforts to date have been focused on developing an efficient 3-D FEM code for simulating wave propagation in simple homogeneous anisotropic materials. With this 3-D model we can simulate the propagation of acoustic energy in a representative heterogeneous unit-cell of a fiber-polymer-interphase composite material. Results from such a study can have significant impact on how new advanced fiber-reinforced composites are fabricated for optimal design performance. With scalable computing on campus linked via a high-speed network to a CAVE environment we will create an interactive realtime visual computing environment that will allow the design engineer to experiment with design parameters interactively in the CAVE. All visual processing will be accomplished in "realtime" during the actual model simulation so that parametric studies can also be accomplished in "realtime".
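The 3-D FEM code itself is not reproduced here; as an illustration of the explicit time-stepping such a wave simulation performs, a minimal 1-D finite-difference stand-in (an assumed example for illustration only, not the authors' code):

```python
import numpy as np

# Minimal 1-D wave equation u_tt = c^2 u_xx, explicit leapfrog scheme.
# Purely an illustrative stand-in for the 3-D FEM wave propagation code.
nx, nt = 200, 400
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                       # CFL number 0.5 -> stable

x = np.arange(nx) * dx
u_prev = np.exp(-0.01 * (x - nx * dx / 2) ** 2)   # Gaussian pulse
u = u_prev.copy()                        # zero initial velocity

r2 = (c * dt / dx) ** 2
for _ in range(nt):
    u_next = np.zeros_like(u)            # fixed (zero) boundaries
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print("max |u| after propagation:", np.abs(u).max())
```

Each time step updates every node from its neighbors; in 3-D the same update touches millions of nodes per step, which is why the refined meshes described above demand scalable computing.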
11. Resin Transfer Mold Process Simulation-Visualization, A.C. Loos
The objective of this investigation is to develop a comprehensive 3-D process simulation model for complex composite structures and to optimize the Resin Transfer Molding (RTM) fabrication of aircraft stiffened structures. This effort will directly impact the process cycle development in the NASA Advanced Composite Technology (ACT) program. A joint research program between NASA Langley, Virginia Polytechnic Institute and State University, The College of William and Mary, Northrop Corp. and Douglas Aircraft Company is now underway to develop a science-based understanding of the RTM process. The use of this program will result in new low-cost manufacturing methods for damage-tolerant structures.
The RTM simulation code includes numerical modules for the prediction of the resin free-surface movement in the porous textile preform, a module for predicting the convective and conductive heat transfer in the polymer resin during the process, a module describing the permeability of the porous preform, and a module to predict the compaction behavior of the textile preform and tooling assembly during the processing cycle. The main section of the 3-D simulation code consists of a set of five highly coupled non-linear finite element solvers which produce the resin pressure, resin temperature, fiber/tooling temperature, and deformation fields for the problem.
Presently the simulation code uses a series of sparse solvers provided with the Cray Y-MP at NASA Langley, including several from the SITRSOL numerical library from Cray Research Inc. To reduce the memory requirements of the comprehensive simulation, assembly of the global stiffness matrices is accomplished using the sparse column format. Efforts are underway to port the RTM code to a symmetric multiprocessing architecture, which will allow the current simulations to be run on a midsize workstation platform. Scalability will allow the user to submit RTM simulations of varying size to the metacenter computers; if more memory or CPU cycles are needed, additional resources will be automatically allocated from the server machines across the network. As the problem sizes grow, the use of 2-, 4-, or 8-way memory interleaving will also be necessary. Access to a scalable system will demonstrate a more general-purpose computational paradigm to a broader audience.
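The sparse column format mentioned above (compressed sparse column, CSC) stores only the nonzero entries of the global stiffness matrix, column by column. A minimal sketch of the layout and a matrix-vector product over it (illustrative only, not the RTM code):

```python
# Compressed sparse column (CSC) storage of a small symmetric "stiffness"
# matrix. Only nonzeros are stored, cutting memory for large FE models.
#
#     K = [[4, 1, 0],
#          [1, 4, 1],
#          [0, 1, 4]]
data    = [4, 1, 1, 4, 1, 1, 4]   # nonzero values, column by column
row_idx = [0, 1, 0, 1, 2, 1, 2]   # row index of each stored value
col_ptr = [0, 2, 5, 7]            # data[col_ptr[j]:col_ptr[j+1]] is column j

def csc_matvec(data, row_idx, col_ptr, x):
    """Compute y = K @ x directly from the CSC arrays."""
    y = [0.0] * (len(col_ptr) - 1)
    for j in range(len(col_ptr) - 1):          # walk each column
        for k in range(col_ptr[j], col_ptr[j + 1]):
            y[row_idx[k]] += data[k] * x[j]
    return y

print(csc_matvec(data, row_idx, col_ptr, [1.0, 1.0, 1.0]))  # → [5.0, 6.0, 5.0]
```

For a banded stiffness matrix from a 3-D mesh, this stores O(n) values instead of O(n^2), which is the memory saving the passage describes.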
With scalable systems it would be possible to simulate and visualize results in "realtime" in a CAVE environment for some of the more complex 3D structures. Visualization of 3D simulation results would be developed on desktop workstations first and converted to a CAVE environment when appropriate.
12. Networks for Distributed VR Computing Environments & Telepresence, S.F. Midkiff
This effort will investigate the network services needed to effectively support (i) demanding virtual reality applications where the VR environment (CAVE) is separated from computational resources (remote workstations and parallel processors), and (ii) distributed VR environments and telepresence (remote virtual presence) applications. These applications are extremely demanding of network services as they require low overhead (especially for distributed computing), real-time delivery (for interactivity and animations), highly variable bandwidth (due to varying tasks and compression), and efficient multicast support (especially for distributed VR and telepresence). While high data rate networks, such as ATM over OC-3 and OC-12 links, can provide the required bandwidth, quality of service (QoS) guarantees for these application requirements are hard to achieve with any efficiency, and providing appropriate QoS is an open research issue. Existing network and transport protocols, especially the widely used TCP/IP, are deficient in their support for these requirements. In addition, the switched structure of ATM creates problems for multicast operation that must be resolved.
The proposed infrastructure will provide an excellent "laboratory" for investigating these issues. The infrastructure will provide the essential mix of high data rate networking (ATM), VR environment (CAVE), computing resources (remote workstations and parallel machines), and demanding applications (as described elsewhere in this proposal). With these elements, it is possible to understand application-network interaction through measurement and characterization, investigate protocols to meet application needs, and evaluate new network protocols and services.
13. Visualization of 3D Flows in Turbomachinery Blade Rows, J. Moore and J.G. Moore
The CAVE facility would allow 3D viewing of both turbomachinery hardware and the 3D flows within it. This would enhance understanding of these complex flows. The CAVE 3D viewing capability would be useful both for research in turbomachinery design and in courses such as
Modern CFD codes, such as MEFP developed by the authors, allow the calculation of the turbulent three-dimensional flows found in turbomachinery blade rows. Properties of the flow such as
14. Ionospheric Processes, W.A. Scales
Due to the profound effects of the earth's ionosphere and magnetosphere on radio waves, and therefore on communication systems using radio waves, the structure and dynamics of these regions of the earth's upper atmosphere have been extensively studied since the 1930's. With the advent of unmanned space vehicles to provide high-resolution observational measurements and high-powered computers for simulation and visualization of theoretical models, many of the critical unsolved and controversial issues are within grasp. The work at Virginia Tech involves studying three areas of upper atmospheric science: (1) creation and evolution of artificially produced ionic disturbances, (2) utilizing nonlinear processes produced by 'heating' the ionosphere with high-powered radio waves, and (3) studying microscopic magnetospheric processes during highly active geophysical periods.
Large-scale simulation and visualization are currently being extensively used to study each of these problems. Several M.S. and Ph.D. students are involved in this work. The goals are the determination of the likely geophysical conditions for irregularity development, the spatial and temporal scales of the irregularities and ultimately an assessment of the effects of these irregularities on communication systems.
Due to the wide range of space and time scales involved in ionospheric and magnetospheric processes (eight orders of magnitude!), sophisticated numerical models must be developed that are very computationally intensive. State-of-the-art numerical methods are utilized to simulate theoretical models which exhibit highly nonlinear, turbulent and chaotic behavior. The simulation models and computer codes are developed in house at Virginia Tech and include fluid dynamic, magnetohydrodynamic, Particle-in-Cell, and hybrid fluid-particle models. Indeed, many of these models may serve as benchmarks for the computational ability of high performance computers. Also, since the trend is to study these processes in 2 and 3 spatial dimensions, visualization has become essential for interpreting the output of these numerical models. In the past, this research has made extensive use of supercomputers at Cornell University and Los Alamos National Laboratory.
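The particle codes mentioned above push charged particles through electromagnetic fields at each time step. As a hedged illustration of that kernel (an assumed toy example, not one of the in-house codes), a minimal Boris pusher for one particle gyrating in a uniform magnetic field:

```python
import math

# Minimal Boris particle pusher: one charged particle gyrating in a
# uniform magnetic field B = (0, 0, Bz), with no electric field.
# Illustrative only -- the actual codes are full PIC/hybrid models
# pushing millions of particles against self-consistent fields.
q_over_m, Bz, dt = 1.0, 1.0, 0.05
v = [1.0, 0.0, 0.0]                     # initial velocity
x = [0.0, 0.0, 0.0]                     # initial position

t = 0.5 * q_over_m * Bz * dt            # half-step rotation parameter
s = 2 * t / (1 + t * t)
for _ in range(1000):
    # Boris rotation of (vx, vy) about the z-directed field
    vpx = v[0] + v[1] * t
    vpy = v[1] - v[0] * t
    v = [v[0] + vpy * s, v[1] - vpx * s, v[2]]
    x = [x[i] + v[i] * dt for i in range(3)]

speed = math.hypot(v[0], v[1])
print("speed after 1000 steps:", speed)  # Boris rotation conserves |v|
```

The Boris scheme is an exact rotation, so kinetic energy is conserved to machine precision over arbitrarily long runs, one reason particle codes of this family are trusted for the long time scales these studies require.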
The computer facilities at Virginia Tech are presently inadequate to support this research to its fullest capacity. This work will greatly benefit from the use of scalable architectures over a metacenter link with NCSA. Extensive use of visualization, both on desktop workstations and in a CAVE environment, will be essential for making physical interpretations of the processes being studied.
15. Acoustic Simulation in Building, R.P. Schubert
......... Paragraph Pending...........
16. Airport Transportation Systems, A.A. Trani
For the past three years the Transportation Systems Laboratory (TSL) has conducted research with the Federal Aviation Administration (FAA) in the optimization of aircraft landing operations at congested airports. The computer simulation-optimization models developed by the Virginia Tech TSL have been implemented in large scale aviation simulation models to predict airport capacity and delay.
A current effort is to "port" these computer simulation-optimization models to a real-time environment to be used by Air Traffic Controllers stationed at control towers to improve the tactical and strategic planning of aircraft arrivals and departures. It is expected that, running in real time, the algorithms developed will reduce fuel consumption for the airlines and alleviate some of the workload imposed on ATC controllers.
A possible research alternative for ATC simulations is the use of the CAVE to visualize aircraft operations and to provide a surrogate simulation environment in which we could investigate the best interfaces and information presentation for local and ground traffic controllers. The use of real-time visualization and simulation with a supercomputer would offer obvious benefits, as the proposed tactical ATC control system could be fine-tuned off-line with no risks involved. A visualization tool that offers high visual fidelity coupled with large computing power would be an excellent resource for studying the complex interactions of air traffic man-machine systems before their implementation.
17. Graphical Display of Quantitative Business Data, M.C. Vorster
Business data, unlike most scientific data, are inherently multidimensional and are not ordered or continuous. With appropriate visual tools we can not only replace tables of statistical numbers with more informative graphics, but, with well-designed graphics in a CAVE environment, we can create instruments for reasoning about quantitative business data. Although it is possible to design some generic graphical tools to explore business data in general, this research will study how to link desktop 3D graphical tools with graphical tools unique to a CAVE format for the analysis of data used to organize, plan, and implement a large construction project.
With the advent of computers, the construction industry has shifted from drafting boards and pencils to CAD and scheduling software, and as a result has created voluminous computer-generated project reports exchanged among the various players involved in a construction project. A successful project depends on effectively communicating information among all of these players. With access to a CAVE environment we can explore alternative graphical tools that could prove useful in communicating critical information among all players, and suggest how these graphical tools could be extended back to the desktop environment, which is the more common tool in the office and on site.
18. Experimental Spatial Dynamics Modeling, R.L. West
The concept of experimental spatial dynamics modeling (ESDM) embodies the experimental sampling of both surface shape and surface velocity of a structure under test by scanning lasers interfaced to an engineering workstation for data acquisition, modeling and scientific visualization. The goal is to derive statistically qualified spatial dynamics models of structures from in-the-field measurements. The resulting shape and dynamic response are experimentally derived spatial models that allow analysis, post-processing and visualization of the dynamics over the structure. These models are the experimental counterparts to the structure's existing finite element model.
The objective of the ESDM scalability project is to demonstrate the use of distributed scalable computing resources, including supercomputer resources on campus, for simulation and visualization in experimental structural dynamics. This research would particularly benefit from "realtime" simulation-visualization of 3D model predictions in a CAVE environment. The proposed demonstration plan follows in three phases:
* Research in ESDM is currently conducted by prototyping problem-solving approaches and model formulations in Mathematica, and by graphics simulation using SGI's Inventor, on laboratory SGI and HP workstations. The laboratory workstation computing level is predominantly used for model exploration, validation, data acquisition, signal processing, and visualization.
* The process of reconstructing both the shape and dynamic response models from several thousand samples of shape and velocity is computationally intensive. Upon validation of a solution concept, critical computational elements in the problem formulation are coded as C++ objects and applied to the acquired data. A distributed scalable computing resource is directly suited to the execution of these prototype codes to model and post-process the large datasets.
* Reconciling the experimental data with computational models and model updating under a probabilistic framework requires computational resources at the supercomputer level. At this level the computational model must accommodate several thousand samples taken over a range of frequencies. Additionally, model order studies must also be carried out to obtain estimates of the "goodness" of the updated model. With CAVE resources many of these parametric studies could be done more efficiently in realtime with specially designed 3D visualization CAVE tools.
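The reconstruction of statistically qualified models from thousands of laser samples, described in the phases above, can be illustrated with a toy least-squares fit (an assumed example; the actual ESDM formulations are far richer than a polynomial fit):

```python
import numpy as np

# Least-squares reconstruction of a smooth 1-D "shape" from noisy samples,
# a toy stand-in for fitting spatial models to thousands of laser samples.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2000)                   # sample locations
true_shape = 0.5 * x**2 - 0.3 * x + 0.1           # underlying surface
samples = true_shape + 0.01 * rng.standard_normal(x.size)

# Fit a quadratic model to the noisy data by linear least squares.
A = np.vander(x, 3)                               # columns [x^2, x, 1]
coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
print("recovered coefficients:", coeffs)          # close to [0.5, -0.3, 0.1]
```

The residuals of such a fit are what supports the statistical qualification mentioned above; with thousands of samples per frequency and many frequencies, the full problem reaches the supercomputer scale the passage describes.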
This scalable approach to research oriented problem solving has worked well and is directly applicable to the concept of leveraging distributed scalable computing resources. Several elements of the overall ESDM concept have been realized and interfaced to "virtual" and working instruments on existing SGI and HP workstations and, in turn, used on industry problems. This project also targets the practical issues of utilizing the simulation-visualization computing resources for solving industry problems on a scale from laboratory workstations through university compute resources to the supercomputing level at NCSA.
This effort involves the creation and use of the 3D tools described in the previous sections. This will include:
* Improved 3D visualization user interfaces that will allow the researcher to first develop 3D tools on the desktop workstation and then transform them into a CAVE environment when necessary.
* Development of a 3D computer conferencing environment, as well as a database to design 3D computer conferencing systems.
Lynn Abbott: Associate Professor, Electrical Engineering, VPI&SU. Ph.D. University of Illinois, 1990. Areas of interest: computer vision, pattern recognition, high-performance computer architectures, multisensor fusion in x-ray images.
Marc Abrams: Associate Professor, Computer Science, VPI&SU; Ph.D., Computer Science, University of Maryland at College Park, 1986. Five years of work developing Chitra, a system to visualize, analyze, and model categorical and time series data sets. Areas of interest: visualization of multidimensional and categorical data; performance modeling of communication networks and of parallel and distributed systems.
Yvan Beliveau: Professor, Architecture and Urban Studies, VPI&SU; Ph.D. Civil Engineering, Purdue University. Department Chairman of Building Construction. Dr. Beliveau is the co-founder and Chairman of the Board of a hi-tech laser-based company, Spatial Positioning Systems, Inc. (SPSi), which specializes in automation and performance improvement for the construction industry. This work has captured the imagination of several corporations and other entities (including Bechtel Corp., Intergraph Corp., ASCE-CERF, NSF, Du Pont, Amoco, Jacobus Technology Inc., and the US Army Corps of Engineers).
David R. Bevan: Associate Professor, Biochemistry, VPI&SU; Ph.D. Biochemistry, Northwestern University, 1978. Research interests include protein structure and function, molecular modeling/computational chemistry, quantitative structure-activity relationships.
William A. Curtin, Jr.: Associate Professor, Engineering Science and Mechanics and Materials Science and Engineering, VPI&SU. Ph.D. Theoretical Physics, Cornell University, 1986. Research in a wide variety of areas in materials science as related to energy storage, electronic properties, and structural properties, including metals, ceramics, and polymers.
Diana Farkas: Professor, Materials Science and Engineering, VPI&SU; Ph.D., University of Delaware, 1980. Areas of interest: atomistic simulation and visualization of defect structure in metals and intermetallic alloys, diffusion and mechanical behavior in alloys.
Edward A. Fox: Professor, Computer Science, VPI&SU; Ph.D. 1983 and M.S. 1981, Computer Science, Cornell University; B.S., E.E., MIT 1972. Eight years as vice chair and then chair of ACM SIGIR; founder of ACM conference series in multimedia and digital libraries; past editor-in-chief of ACM Press Database and Electronic Products. Director of NSF-funded Information Access Laboratory.
Harry W. Gibson: Professor, Chemistry, VPI&SU. Ph.D., Chemistry, 1966. Areas of interest: determination of the relationship between molecular structure and physico-chemical properties to enable rational design of polymers with useful properties; chemical modification of polymers, polymers and oligomers with chemically reactive functionality, polymeric reagents and catalysts, conducting polymers, polymers with novel organized structures, and heterocyclic polymers.
O. Hayden Griffin: Professor, Engineering Science and Mechanics and Associate Dean for Academic Affairs for the College of Engineering, VPI&SU. Ph.D. Engineering Mechanics, 1980. Areas of interest: numerical methods, mechanics of composite materials, and smart materials and structures.
Deborah Hix: Research Computer Scientist, Computer Science, VPI&SU and Naval Research Laboratory in Washington DC. Ph.D. Computer Science and Applications, VPI&SU, 1985. Principal investigator in Human-Computer Interaction research group at Virginia Tech, in existence since 1979 and one of the pioneering groups in HCI research and development. Areas of interest: methodologies for user interface development, multimedia applications, novel interaction techniques.
Ronald D. Kriz: Associate Professor, Engineering Science and Mechanics and Materials Science and Engineering, VPI&SU; Ph.D. Engineering Mechanics, VPI&SU, 1979. Director of the Laboratory for Scientific Visual Analysis and has been teaching a course on Scientific Visual Data Analysis and Multimedia with an emphasis on scientific and engineering applications since 1991. Area of interest: materials research, solid mechanics, structural design, and nondestructive testing with 18 years experience mainly with composite materials.
Alfred C. Loos: Professor, Engineering Science and Mechanics and Material Science and Engineering, VPI&SU. Ph.D., Mechanical Engineering, University of Michigan, 1982. Areas of interest: composite materials processing, environmental effects on organic matrix composites, and mechanics of composite materials with a focus on development and verification of a simulation model for resin transfer molding through the use of large-scale computers and visualization of results.
Scott F. Midkiff: Associate Professor of Electrical Engineering, VPI&SU, Ph.D. Electrical Engineering, Duke University, 1985. Research and teaching in computer networks. Investigates wireless networks, high data rate networks and protocols, multimedia networks, experimental performance evaluation of network applications and protocols, and, through SUCCEED (an NSF Engineering Education Coalition), network applications for collaboration and distance learning.
John Moore: Professor, Mechanical Engineering, VPI&SU. Ph.D. ------------. Areas of interest: 3-D flow calculations for rocket pumps, endwall flow in axial compressor rotors, Reynolds stress measurements, modeling and visualization, turbulence modeling for transition calculations.
Wayne A. Scales: Assistant Professor, Electrical Engineering, VPI&SU. Ph.D. Electrical Engineering and Applied Physics, Cornell University, 1989. Areas of interest: modeling structure and dynamics of the earth's upper atmosphere with emphasis on the ionosphere and magnetosphere, including theoretical studies of plasma waves and instabilities as well as large-scale numerical simulation and computer visualization of nonlinear microscopic and macroscopic physical processes.
Robert P. Schubert: Professor, Architecture and Urban Studies, VPI&SU; Masters of Architecture, VPI&SU, 1976. Assistant Dean for Research in College of Architecture and Urban Studies. Areas of interest: building physical environmental factors and solar energy.
Cliff Shaffer: Associate Professor, Computer Science, VPI&SU; Ph.D., Computer Science, University of Maryland at College Park, 1986. He has been a member of the Department of Computer Science at Virginia Tech since 1987. Dr. Shaffer's primary research interests are in the areas of spatial data structures, computer graphics and computer-aided education.
A.A. (Tony) Trani: Assistant Professor, Civil Engineering, VPI&SU. Ph.D., . Areas of interest: air transportation, airport engineering, mass transit systems, intercity transportation modes, high-speed ground transportation, transportation systems analysis.
Michael C. Vorster: Professor, Civil Engineering, VPI&SU; Ph.D. University of Stellenbosch, 1981. Associate Dean of Research in the College of Engineering. His 12 years of industry experience have helped develop his expertise in construction scheduling, schedule impact analysis and dispute resolution, construction equipment and methods, and equipment economics, life, repair, rebuild and maintenance policies. Areas of interest: dispute review boards, mentor programs for small disadvantaged businesses, construction equipment methods and economics, and most recently business data visualization.
Layne T. Watson: Professor, Computer Science and Mathematics, VPI&SU; Ph.D., 1974, Mathematics, University of Michigan, Ann Arbor; B.A., 1969, Psychology, University of Evansville. Areas of interest: numerical analysis, nonlinear programming, mathematical software, solid mechanics, fluid mechanics, image processing, scientific computation. Associate Editor of SIAM J. on Optimization and ORSA J. on Computing.
Robert C. Williges: Professor of Industrial and Systems Engineering, Psychology, and Computer Science at VPI&SU; Director of Human Factors Engineering Center, VPI&SU; M.A., Ph.D. degrees in engineering psychology from The Ohio State University, A.B. degree in psychology from Wittenberg University. Over 25 years of experience managing and directing human factors engineering research dealing with topics including human-computer interaction and human factors research methodology using sequential experimentation and empirical model building. Fellow of the Human Factors and Ergonomics Society and the American Psychological Association; past editor of Human Factors.
Other key researchers will be listed here as information becomes available