We do this by looking at how several VE efforts have reported their model acquisition process. Consider the work done for the walkthrough project at the University of North Carolina (Airey et al.). In the presentation of that paper, one of the problems discussed was that of "getting the required data out of a CAD program written for other purposes."

In particular, data related to the actual physics of the building were not present, and partitioning information useful to the real-time display of the model had to be added by hand.

RB2 is a software development platform for designing and implementing real-time VEs.

Development under RB2 is rapid and interactive, with behavior constraints and interactions that can be edited in real time. RB2 has a considerable following in organizations that do not have sufficient resources to develop their own in-house VE expertise. RB2 is a turnkey system, whose geometric and physics file formats are proprietary. As a result, project researchers have developed an open format for storing these three-dimensional models (Zyda et al.).

Computer-aided design systems with retrofitted physics are beginning to be developed.

Many applications call for VEs that are replicas of real ones. Rather than building such models by hand, it is advantageous to use visual or other sensors to acquire them automatically.

Automatic acquisition of complex environment models (such as factory environments) is currently not practical but is a timely research issue. Meanwhile, automatic or nearly automatic acquisition of geometric models is practical now in some cases, and partially automated interactive acquisition should be feasible in the near term (Ohya et al.). The most promising short-term approaches involve active sensing techniques. Scanning laser rangefinders and light-stripe methods are both capable of producing range images that encode the position and shape of surfaces visible from the point of measurement.

These active techniques offer the strong advantage that three-dimensional measurements may be made directly, without the indirect inferences that passively acquired images require. Active techniques do, however, suffer from some drawbacks. Surfaces that are nonreflective, or shiny surfaces viewed obliquely, may not return enough light for range measurements to be made. Noise is enough of a problem that data must generally be cleaned up by hand.

A more basic problem is that a single range image contains information only about surfaces that were visible from a particular viewpoint. To build a complete map of an environment, many such views may be required, and the problem of combining them into a coherent whole is still unsolved. Among passive techniques, stereoscopic and motion-based methods, relying on images taken from varying viewpoints, are currently most practical.

However, unlike active sensing methods, these rely on point-to-point matching of images in order to recover distance by triangulation. Many stereo algorithms have been developed, but none is yet robust enough to compete with active methods. Methods that rely on information gleaned from static monocular views—edges, shading, texture, etc.—are further still from practical use.

For many purposes, far more is required of an environment model than just a map of objects' surface geometry. If the user is to interact with the environment by picking things up and manipulating them, information about objects' structure, composition, attachment to other objects, and behavior is also needed.

Unfortunately, current vision techniques do not even begin to address these deeper issues.

The term augmented reality has come to refer to the use of transparent head-mounted displays that superimpose synthetic elements on a view of the real surroundings.

Unlike conventional heads-up displays in which the added elements bear no direct relation to the background, the synthetic objects in augmented reality are supposed to appear as part of the real environment. That is, as nearly as possible, they should interact with the observer and with real objects, as if they too were real. At one extreme, creating a full augmented-reality illusion requires a complete model of the real environment as well as the synthetic elements.

For instance, to place a synthetic object on a real table and make it appear to stay on the table as the observer moves through the environment, we would need to know just where the table sits in space and how the observer is moving. For full realism, enough information about scene illumination and surface properties to cast synthetic shadows onto real objects would be needed.

Furthermore, we would need enough information about three-dimensional scene structure to allow real objects to hide or be hidden by synthetic ones, as appropriate. Naturally, all of this would have to happen in real time. This sort of mix of the real and synthetic has already been achieved in motion picture special effects, most notably Industrial Light and Magic's effects in films such as The Abyss and Terminator 2.

Some of these effects were produced by rendering three-dimensional models and creating a composite of the resulting images with live-action frames, as would be required in augmented reality.

However, the process was extremely slow and laborious, requiring manual intervention at every step. After scenes were shot, models of camera and object motions were extracted manually, using frame-by-frame manual measurement along with considerable trial and error. Even small geometric errors were prone to destroy the illusion, making the synthetic objects appear to float outside the live scene.

Automatic generation of augmented-reality effects is still a research problem in all but the least demanding cases. The two major issues are (1) accurate measurement of observer motions and (2) acquisition and maintenance of scene models.

The prospects for automatic solutions to the latter were discussed above. If the environment is to remain static, it would be feasible to build scene models off-line using interactive techniques.

Although VE displays provide direct measurements of observer movement, these are unlikely to be accurate enough to support high-quality augmented reality, at least when real and synthetic objects are in close proximity, because even very small errors could induce perceptible relative motions, disrupting the illusion. Perhaps the most promising course would be to use direct motion measurements for gross positioning and local image-based matching methods to lock real and synthetic elements together.

In order to give solidity to VEs and situate the user firmly in them, virtual objects, including the user's image, need to behave like real ones. At a minimum, solid objects should not pass through each other, and things should move as expected when pushed, pulled, or grasped.

Analysis of objects' behavior at the scale of everyday observation lies in the domain of classical mechanics, which is a mature discipline. However, mechanics texts and courses are generally geared toward providing insight into objects' behavior, whereas to support VEs the behavior itself is of paramount importance, with insight strictly optional.

Thus classical treatments may provide the required mathematical underpinnings but do not directly address the problem at hand. Simulations of classical mechanics are extensively used as aids in engineering design and analysis. Although these traditional simulations compute much the same behavior a VE requires, the way they are used is entirely different. In engineering practice, simulation is a long, drawn-out, and highly intellectualized activity. The engineer typically spends much time with pencil and paper developing mathematical models for the system under study.

These are then transferred to the simulation software, often with much tweaking and parameter selection. Only then can the simulation actually be run.

As a design evolves, the initial equations must be modified and reentered and the simulation rerun. In strong contrast, a mechanical simulation for VEs must run reliably, seamlessly, automatically, and in real time. Within the scope of the world being modeled, any situation that could possibly arise must be handled correctly, without missing a beat. In the last few years, researchers in computer graphics have begun to address the unique challenges posed by this kind of simulation, under the heading of physically based modeling.

Below we summarize the main existing technology and outstanding issues in this area.

Solid Object Modeling

Solid objects' inability to pass through each other is an aspect of the physical world that we depend on constantly in everyday life: when we place a cup on a table, we expect it to rest stably on the table, not float above or pass through it.

In reaching and grasping, we rely on solid hand-object contact as an aid, as do roboticists, who make extensive use of force control and compliant motion. Of course, we also rely on contact with the ground to stand and locomote. The problem of preventing interpenetration has three main parts. First, collisions must be detected. Second, objects' velocities must be adjusted in response to collisions. Finally, if the collision response does not cause the objects to separate immediately, contact forces must be calculated and applied until separation finally occurs.

Collision detection is most frequently handled by checking for object overlaps each time position is updated. If overlap is found, a collision is signaled, the state of the system is backed up to the moment of collision, and a collision response is computed and applied. The bulk of the work lies in the geometric problem of determining whether any pair of objects overlap.
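To make the detect-and-back-up cycle concrete, here is a minimal sketch in Python. The sphere-shaped bodies, the bisection scheme, and the placeholder response are all illustrative inventions, not drawn from any of the systems cited here.

```python
import numpy as np

class Body:
    """Minimal moving sphere: position, velocity, radius (unit mass)."""
    def __init__(self, pos, vel, radius):
        self.pos = np.array(pos, dtype=float)
        self.vel = np.array(vel, dtype=float)
        self.radius = radius

def colliding(a, b):
    # A pair counts as colliding only if the spheres overlap AND are
    # approaching; separating pairs are left alone to avoid re-triggering.
    d = b.pos - a.pos
    return (np.linalg.norm(d) < a.radius + b.radius
            and np.dot(b.vel - a.vel, d) < 0)

def step(bodies, dt, depth=0):
    """Advance all bodies by dt. If a collision appears, restore the saved
    state and bisect toward the moment of contact before responding."""
    saved = [(b.pos.copy(), b.vel.copy()) for b in bodies]
    for b in bodies:
        b.pos += b.vel * dt                       # explicit Euler update
    hits = [(a, b) for i, a in enumerate(bodies)
            for b in bodies[i + 1:] if colliding(a, b)]
    if hits and depth < 20:
        for b, (p, v) in zip(bodies, saved):      # back up the state
            b.pos, b.vel = p, v
        step(bodies, dt / 2, depth + 1)           # first half-interval
        step(bodies, dt / 2, depth + 1)           # finish the step
    elif hits:
        for a, b in hits:                         # placeholder response:
            a.vel, b.vel = b.vel.copy(), a.vel.copy()   # exact only for
                                                  # equal masses, head-on

pair = [Body([0, 0, 0], [1, 0, 0], 0.5), Body([3, 0, 0], [-1, 0, 0], 0.5)]
for _ in range(30):
    step(pair, 0.1)
print(pair[0].vel, pair[1].vel)   # velocities exchanged after impact
```

Production systems replace the bisection with root finding on the distance function and the placeholder swap with a physically derived impulse, sketched further below.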

The overlap-testing problem has received attention in robotics, in mechanical CAD, and in computer graphics. Brute-force overlap detection for convex polyhedra is a straightforward matter of testing each vertex of every object against each face of every other object. More efficient schemes use bounding volumes or spatial subdivision to avoid as many tests as possible.
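The brute-force test can be written directly: a vertex lies inside a convex polyhedron exactly when it is behind every face plane. The sketch below, with an invented cube helper, shows the idea; note that testing vertices alone can miss edge-edge penetrations, one reason practical systems go further.

```python
import numpy as np

def vertex_inside(vertex, faces):
    """A point is inside a convex polyhedron if it lies behind every face
    plane. Each face is given as (point_on_plane, outward_normal)."""
    return all(np.dot(vertex - p, n) <= 0.0 for p, n in faces)

def brute_force_overlap(verts_a, faces_a, verts_b, faces_b):
    """Test each vertex of either polyhedron against every face of the
    other, exactly as described above."""
    return (any(vertex_inside(v, faces_b) for v in verts_a) or
            any(vertex_inside(v, faces_a) for v in verts_b))

def cube(center):
    """Axis-aligned unit cube centered at `center`, as vertices + faces."""
    c, half = np.array(center, float), 0.5
    verts = [c + half * np.array([sx, sy, sz])
             for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    faces = [(c + half * s * ax, s * ax)
             for ax in np.eye(3) for s in (-1, 1)]
    return verts, faces

va, fa = cube([0, 0, 0])
vb, fb = cube([0.6, 0, 0])
print(brute_force_overlap(va, fa, vb, fb))   # True: the cubes overlap
```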

Good general methods for objects with curved surfaces do not yet exist. Furthermore, because overlap is checked only at discrete time steps, rapidly moving objects, e.g., projectiles, can pass entirely through thin obstacles between checks without a collision ever being detected. Needless to say, large errors can result. Guaranteed methods, which are not subject to this problem, have been described by Lin and Canny for the case of convex polyhedra with constant linear and angular velocity. Collision response involves applying an impulse that produces an instantaneous change in velocity and prevents interpenetration. The basics of collision response are well treated in classical mechanics and do not pose any great difficulties for implementation.
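For two point masses, the standard frictionless impulse takes one line to compute. The sketch below illustrates the calculation; rotation is omitted for brevity, and classical treatments add the angular terms.

```python
import numpy as np

def collision_impulse(va, vb, ma, mb, n, restitution=0.5):
    """Frictionless impulse between two point masses along the unit contact
    normal n (pointing from body A toward body B)."""
    v_rel = np.dot(va - vb, n)          # approach speed along the normal
    if v_rel <= 0.0:
        return va, vb                   # already separating: no impulse
    j = (1.0 + restitution) * v_rel / (1.0 / ma + 1.0 / mb)
    return va - (j / ma) * n, vb + (j / mb) * n

# Example: equal masses, head-on, perfectly elastic -> velocities swap.
va, vb = collision_impulse(np.array([1.0, 0, 0]), np.array([-1.0, 0, 0]),
                           1.0, 1.0, np.array([1.0, 0, 0]), restitution=1.0)
print(va, vb)   # [-1. 0. 0.] [1. 0. 0.]
```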

Problems do arise in developing accurate collision models for particular materials, but many VE applications will not require this degree of realism. To handle continuous multibody contact, it is necessary to calculate the constraint forces that are exchanged at the points of contact and to identify the instants at which contacts are broken.

Determining which contacts are breaking is a particularly difficult problem, turning out, as shown by Baraff, to require combinatorial search (Baraff and Witkin; Baraff). Fortunately, Baraff also developed reasonably efficient methods that work well in practice. Many virtual world systems exhibit rigid body motion with collision detection and response (Hahn; Moore and Wilhelms; Baraff; Baraff and Witkin; Zyda et al.).

Baraff's system also handles multibody continuous contact and frictional forces for curved surfaces. These systems provide many of the essential elements required to support VEs.

Constraints and Articulated Objects

In addition to simple objects such as rigid bodies, we should be able to handle objects with moving parts—doors that open and close, knobs and switches that turn, etc.

In principle, the ability to simulate simple objects such as rigid bodies, together with the ability to prevent interpenetration, could suffice to model most such compound objects. For instance, a working desk drawer could be constructed by modeling the geometry of a tongue sliding in a groove, or a door by modeling in detail the rigid parts of the hinge. In practice, it is far more efficient to employ direct geometric constraints to summarize the effects of this kind of detailed interaction.

For instance, a sliding tongue and groove would be idealized as a pair of coincident lines, one on each object, and a hinge would be represented as an ideal revolute joint.
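In code, such idealized constraints are simply functions of the bodies' configurations that must be held at zero. The 2D sketch below writes down both of the constraints just mentioned; the function names and parameterization are invented for illustration.

```python
import numpy as np

def world_point(pos, ang, r):
    """Body-frame offset r expressed in world coordinates (2D)."""
    c, s = np.cos(ang), np.sin(ang)
    return pos + np.array([c * r[0] - s * r[1], s * r[0] + c * r[1]])

def hinge_C(qa, qb, ra, rb):
    """Ideal revolute joint: the attachment point ra fixed in body A must
    coincide with rb fixed in body B. qa = (position, angle) of body A,
    likewise qb. The constraint is satisfied when C = 0."""
    return world_point(*qa, ra) - world_point(*qb, rb)

def tongue_groove_C(qa, qb, ra, groove_point, groove_dir):
    """Sliding tongue in a groove, idealized as a point of body A held on
    a line fixed in body B: the perpendicular distance must vanish."""
    p = world_point(*qa, ra)
    g0 = world_point(*qb, groove_point)
    d = world_point(*qb, groove_dir) - world_point(*qb, np.zeros(2))
    n = np.array([-d[1], d[0]])           # normal to the groove direction
    return np.array([np.dot(p - g0, n)])  # scalar: zero while on the line
```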

The simulation and analysis of articulated bodies—jointed assemblies of rigid parts—have been treated extensively, particularly in robotics. Building on the work of Lathrop, Schroeder demonstrated that it is nevertheless feasible to build a "virtual erector set" based on recursive formulations (Schroeder and Zeltzer).

Another approach to simulating constrained systems of objects builds on the classical method of Lagrange multipliers, in which a linear system is solved at each time step to yield a set of constraint forces.

This approach offers several advantages: first, it is general, allowing essentially arbitrary holonomic constraints to be applied to essentially arbitrary (not necessarily rigid) bodies. Second, it lends itself to on-the-fly construction and modification, an important consideration for VEs. Finally, the constraint matrices that form the linear system are typically sparse, reflecting the fact that everything is not usually connected directly to everything else.
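A minimal sketch of the per-step solve, using SciPy's sparse machinery (the SciPy calls are standard; the surrounding function and the toy example are invented for illustration):

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import spsolve

def constraint_forces(inv_mass, J, Jdot, qdot, f_ext):
    """One step of the Lagrange multiplier method: choose lambda so that
    the constraint accelerations vanish,
        (J M^-1 J^T) lambda = -Jdot qdot - J M^-1 f_ext,
    and return the constraint force J^T lambda. inv_mass is the diagonal
    of M^-1; J is the (typically sparse) constraint Jacobian."""
    Js = csr_matrix(J)
    Minv = diags(inv_mass)
    A = (Js @ Minv @ Js.T).tocsr()            # stays sparse throughout
    b = -Jdot @ qdot - Js @ (inv_mass * f_ext)
    lam = spsolve(A, b)
    return Js.T @ lam

# Toy example: a unit mass at q = (x, y) constrained to the line y = 0
# (so J = [0, 1]) under gravity; the solver returns the normal force.
inv_mass = np.ones(2)
J = np.array([[0.0, 1.0]])
Jdot = np.zeros((1, 2))
print(constraint_forces(inv_mass, J, Jdot, np.zeros(2),
                        np.array([0.0, -9.8])))   # [0.  9.8]
```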

Using numerical methods that exploit this sparsity can yield performance that competes with recursive methods (Witkin et al.).

Nonrigid Objects

A vast body of work treats the use of finite element methods to simulate continuum dynamics.

Most of this work is probably of limited relevance to the construction of conventional VEs, simply because such environments will not require fine-grained nonrigid modeling, with the possible exception of virtual surgery. However, interactive continuum analysis for science and engineering may become an important specialized application of VEs once the computational horsepower is available to support it.

Highly simplified models for flexible-body dynamics are presented by Witkin and Welch, by Pentland and Williams, and by Baraff and Witkin. The general idea of these models is to use only a few global parameters to represent the shape of the whole object, formulating the dynamic equations in terms of these variables. These simplified models capture only the gross deformations of the object but in return provide very high performance.
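A toy version of the idea, with the whole shape summarized by three axis scales driven by a damped spring; the class, parameters, and dynamics are all invented for illustration and are far simpler than the published models:

```python
import numpy as np

class GlobalDeformable:
    """Object shape summarized by three global scale parameters s,
    which relax toward the rest shape under damped spring dynamics."""
    def __init__(self, rest_points, stiffness=50.0, damping=4.0):
        self.rest = np.asarray(rest_points, float)   # (n, 3) rest shape
        self.s = np.ones(3)                          # global scales
        self.s_dot = np.zeros(3)
        self.k, self.c = stiffness, damping

    def step(self, dt, force=np.zeros(3)):
        # Spring back toward s = 1, plus an external forcing term.
        s_ddot = -self.k * (self.s - 1.0) - self.c * self.s_dot + force
        self.s_dot += dt * s_ddot
        self.s += dt * self.s_dot

    def points(self):
        return self.rest * self.s    # broadcast: scale every point

jelly = GlobalDeformable([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
jelly.step(0.01, force=np.array([0.0, 0.0, -30.0]))   # poke along z
for _ in range(100):
    jelly.step(0.01)                                  # let it ring down
print(jelly.points().shape)   # (4, 3): same topology, deformed shape
```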

Such models are probably the most appropriate choice for VEs that require simple nonrigid behavior. A related line of work uses simulated flexible materials as a sculpting medium. Flexible thin sheets are employed by Celniker and Gossard and by Welch and Witkin; Szeliski and Tonnesen use clouds of oriented particles to form smooth surfaces.

Motivated by the obvious need, in both computer graphics and engineering, for realistic, physically based environments that support various levels of object detail and interaction depending on the application, Metaxas and Terzopoulos developed a general framework for shape and nonrigid motion synthesis, which can also handle rigid bodies as a special case (Metaxas; Metaxas and Terzopoulos; Terzopoulos and Metaxas). The framework features a new class of dynamic deformable part models.

These models have both global deformation parameters that represent the gross shape of an object in terms of a few parameters and local deformation parameters that represent an object's details through the use of sophisticated finite element techniques.

Global deformations are defined by fully nonlinear parametric equations. Hence the models are more general than the linearly deformable ones of Witkin and Welch and the quadratically deformable ones of Pentland and Williams. By augmenting the underlying Lagrangian equations of motion with very fast dynamic constraint techniques based on Baumgarte's method, Metaxas adds the capability to compose articulated models from deformable parts (Metaxas; Metaxas and Terzopoulos); the special case of this technique for rigid objects is the one used by Barzel and Barr. Moreover, Metaxas also develops fast algorithms for the computation of impact forces that occur during collisions of complex flexible multibody objects with the simulated physical environment.

Issues to be Addressed

Most of the essential pieces required to imbue VEs with physical behavior have already been demonstrated. Some—notably snap-together constraints and interactive surface modeling—have been demonstrated in fully interactive systems, and others—notably the handling of collision and contact—are only now beginning to appear in interactive systems (recent work by David Baraff at Carnegie Mellon University involves an interactive two-dimensional rigid body simulation).

The most immediate challenge at hand is one of integrating the existing technology into a working system, along with other elements of VE construction software. Many performance-related issues are still to be addressed, for example, doing efficient collision detection in large-scale environments (systems with very large numbers of players or parts) and further accelerating constrained dynamics solutions. In addition, the running time of many of the standard methods is highly variable. For example, the ratio of compute time to real time can vary by orders of magnitude in the simulation of noninterpenetrating bodies, slowing even further when complex contact situations arise.
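One plausible coping strategy, sketched below with invented names and thresholds, is to track each frame's cost and trade physical detail for time whenever the budget is threatened:

```python
import time

class StubWorld:
    """Stand-in for a simulator whose cost rises with contact complexity."""
    def step(self, detail):
        time.sleep(0.002 * detail)      # pretend higher detail costs more

class AdaptivePhysics:
    """Drop the physics level of detail when a frame overruns its budget,
    and earn detail back when there is slack. Thresholds are illustrative."""
    def __init__(self, budget_s=1.0 / 30.0):
        self.budget, self.lod = budget_s, 3      # 3 = full contact handling

    def frame(self, world):
        start = time.perf_counter()
        world.step(detail=self.lod)
        elapsed = time.perf_counter() - start
        if elapsed > self.budget and self.lod > 0:
            self.lod -= 1                        # degrade gracefully
        elif elapsed < 0.5 * self.budget and self.lod < 3:
            self.lod += 1                        # recover detail

sim = AdaptivePhysics()
world = StubWorld()
for _ in range(10):
    sim.frame(world)
print(sim.lod)   # settles at a level the budget can afford
```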

Maintaining a constant frame rate will require the development of new methods that degrade gracefully in such situations.

The need for simulated autonomous agents arises in many VE application areas, such as training, education, and entertainment, in which such agents could play the role of adversaries, trainers, partners, or simply supernumeraries to add richness and believability.

Although fully credible simulated humans are the stuff of science fiction, simple agents will often suffice. The construction of simulated autonomous agents draws on a number of technologies, including robotics, computer animation, artificial intelligence, and optimization.

Motion Control

Placing an autonomous agent in a virtual physical environment is essentially like placing a robot in a real environment: the agent's body is a physical object that must be controlled to achieve coordinated motion. Fortunately, controlling a virtual agent is much easier than controlling a real one, since many simplifications and idealizations can be made. For example, the agent can be given access to full and perfect information about the state of the world, and many troubling mechanical effects need not arise.

Closed-loop controllers were used to animate virtual agents by McKenna and Zeltzer and by Miller. More recently, Raibert and Hodgins adapted their controller for a real legged robot to the creation of animation. Rather than hand-crafting controllers, Witkin and Kass solve numerically for optimal goal-directed motion, an approach that has since been elaborated by Van de Panne et al.
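The closed-loop idea is simple to state in code. The sketch below is a generic proportional-derivative (PD) servo for a single joint, not a reconstruction of any of the controllers cited; the gains and the unit-inertia joint are invented for illustration.

```python
def pd_torque(theta, theta_dot, theta_target, kp=40.0, kd=8.0):
    """PD servo for one joint of a virtual agent. Unlike a real robot, the
    simulator can supply exact joint state, so no estimator is needed."""
    return kp * (theta_target - theta) - kd * theta_dot

# Drive a unit-inertia joint toward 1.0 rad with semi-implicit Euler.
theta, theta_dot, dt = 0.0, 0.0, 0.01
for _ in range(500):
    theta_dot += dt * pd_torque(theta, theta_dot, 1.0)
    theta += dt * theta_dot
print(round(theta, 3))   # close to 1.0: the joint has settled on target
```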

Human Figure Simulation

In many applications, a VE system must be able to display accurate models of human figures, possibly including a model of the user. Consider training systems, for example. Out-the-window views generated by high-end flight simulators hardly ever need to include images of human figures. But there are many situations in which personnel must cooperate and interact with other crew members. Carrier flight deck operations, small squad training, or antiterrorist tactics, for example, require precise coordination of the actions of many individuals for safe and successful execution.

VE systems to support training of this kind will need to model the participants themselves. We call a computer model of a human figure that can move and function in a VE a virtual actor. If the movement of a virtual actor is slaved to the motions of a human using cameras, instrumented clothing, or some other means of body tracking, we call it a guided virtual actor, or simply a guided actor.

Autonomous actors operate under program control and are capable of independent and adaptive behavior, such that they can interact with human participants in the VE, as well as with simulated objects and events. In addition to responding to the typed or spoken utterances of human participants, a virtual actor should be capable of interpreting simple task protocols that describe, for example, maintenance and repair operations. Given a set of one or more motor goals, the actor should then be able to select and sequence the motor skills needed to achieve them.

Beyond the added realism that the presence of virtual actors can provide in those situations in which the participants would normally expect to see other human figures, autonomous actors can perform two important functions in VE applications. First, autonomous actors can augment or replace human participants. This will allow individuals to work or train in group settings without requiring additional personnel. Second, autonomous actors can serve as surrogate instructors.

VE systems for training, education, and operations rehearsal will incorporate various instructional features, including knowledge-based systems for intelligent computer-aided instruction (ICAI) (Ford). The required degree of autonomy and realism of simulated human figures will vary, of course, from application to application.

However, at the present time, rigorous techniques do not exist for determining these requirements. It should also be noted that autonomous agents need not be literal representations of human beings but may represent various abstractions. For example, the SIMNET system provides for semiautonomous forces that may represent groups of dismounted infantry or single or multiple vehicles that are capable of reacting to simulated events in accordance with some chosen military doctrine.

In the remainder of this section, we confine our discussion to simulated human figures, i.e., virtual actors. In the course of everyday activity, we touch and manipulate objects, make contact with various surfaces, and make contact with other humans, either directly or through intermediate objects. There are other ways, of course, in which two or more humans may coordinate their motions that do not involve direct contact, for example, crew members on a carrier flight deck who communicate by voice and hand signals. In the computer graphics community, there is a long history of human figure modeling, but this work has, for the most part, considered kinematic modeling of uncoupled motion exclusively.

With today's graphics workstations, kinematic models of reasonably complex figures (say, 30 to 40 degrees of freedom) can be animated in real or near-real time; dynamic simulations cannot. We need to understand in which applications kinematic models are sufficient, and in which applications the realism of dynamic simulation is required.

Action Selection

In order to implement autonomous actors that can function independently in a virtual world without the need for interactive control by a human operator, we require some mechanism for selecting and sequencing motor skills appropriate to the actor's behavioral goals and the states of objects—including other actors—in the VE.

That is, it is not sufficient to construct a set of behaviors, such as walking, reaching, grasping, and so on. In order to move and function with other actors in a virtual world that is changing over time, an autonomous actor must link perception of objects and events with action. We call this process motor planning. Brooks has developed and implemented a motor planning mechanism he calls the subsumption architecture.

This work is in large part a reaction against conventional notions of planning in artificial intelligence. Brooks argues for a representationless paradigm in which the behavior of a robot is modulated entirely by interaction between perception of the physical environment and the robot's task-achieving behavior modules.
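The flavor of a subsumption-style controller can be captured in a few lines; the layers and percept names below are invented for illustration, not taken from Brooks' robots.

```python
# Behavior layers are checked from highest priority down; the first layer
# whose sensory trigger fires suppresses (subsumes) everything below it.

def avoid(percepts):
    if percepts.get("obstacle_near"):
        return "turn-away"

def flee(percepts):
    if percepts.get("threat"):
        return "run"

def wander(percepts):
    return "walk-forward"          # default layer: always produces output

LAYERS = [avoid, flee, wander]     # ordered from highest to lowest priority

def act(percepts):
    for layer in LAYERS:
        command = layer(percepts)
        if command is not None:    # this layer subsumes the ones below
            return command

print(act({"threat": True}))                          # run
print(act({"obstacle_near": True, "threat": True}))   # turn-away
```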

Esakov and Badler report on the architecture of a simulation-animation system that can handle temporal constraints for task sequencing, rule sets, and resource allocation. No on-line planning was implemented; task descriptions were initially in the form of predefined animation task keywords. A high-level task expansion planner (Geib) creates task-actions that are interpreted by an object-specific reasoner to execute animation behaviors.

Recent work by Badler et al. continues this line of work. Magnenat-Thalmann and Thalmann and Rijpkema and Girard have reported some work with automated grasping, but their systems seem to be focused on key-frame-like animation for making animated movies, rather than on real-time interaction with virtual actors. Their systems use limited natural language for describing body configurations; however, this has only limited use in describing interactions with objects in the environment.

Ridsdale describes the Director's Apprentice, which is intended to interpret film scripts by using a rule-base of facts and relations about cinematic directing.

This work was primarily concerned with positioning characters in relation to each other and the synthetic camera, but it did not address the representation and control of autonomous agents. In later work, Ridsdale describes a method of teaching skills to an actor using connectionist learning models (Ridsdale).

Maes has developed and implemented an action selection algorithm for goal-oriented, situated robotic agents.

Her work is an independent formalization of ideas discussed in earlier work by Zeltzer, with an important extension that accounts for the continuous flow of activation energy among a network of motor skills. Routine, stereotypical behavior is a function of an agent's currently active drives, goals, and motor skills. As a virtual actor moves through and operates in an environment, motor skills are triggered by presented stimuli, and the agent's propensities for executing some behaviors and not others are continually adjusted.

The collection of skills and the patterns of excitation and inhibition determine an agent's repertoire of behaviors and flexibility in adapting to changing circumstances.
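A toy activation network conveys the mechanism; the skills, link weights, and threshold below are invented, and the published algorithm includes additional activation flows (from goals and protected goals) omitted here.

```python
# Stimuli inject activation into matching skills, activation spreads along
# excitatory and inhibitory links, and the most active skill above
# threshold is executed (then reset).

SKILLS = ["wander", "approach", "grasp"]
LINKS = {("approach", "grasp"): 0.3,       # approach excites grasp
         ("grasp", "wander"): -0.4}        # grasp inhibits wandering
activation = {s: 0.0 for s in SKILLS}

def tick(stimuli, threshold=0.5, decay=0.9):
    for s in SKILLS:                       # stimuli feed matching skills
        activation[s] = activation[s] * decay + stimuli.get(s, 0.0)
    for (src, dst), w in LINKS.items():    # spread activation along links
        activation[dst] += w * activation[src]
    best = max(SKILLS, key=lambda s: activation[s])
    if activation[best] >= threshold:
        activation[best] = 0.0             # reset after execution
        return best

print(tick({"approach": 0.8}))             # approach
print(tick({"approach": 0.8}))             # approach again, grasp charging
```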

One of the key aspects of a virtual world is the population of that world. We define population as the number of active entities within the world. An active entity is anything in the world that is capable of exhibiting a behavior. By this definition, a human-controlled player is an active entity, a tree that can be blown up is midway between an active and a static entity, and an inert object like a rock is a static entity. Recently, the term computer-generated forces (CGF) has been coined to group all entities that are under computer control into a single category.

The controlling mechanisms of the expert systems and autonomous players are briefly discussed below. The expert system is capable of executing a basic behavior when a stimulus is applied to an entity. Within NPSNET, it controls those entities that populate the world when there are too few human or networked entities to make a scenario interesting.

These added entities are called noise entities. The noise entity expert system has four basic behaviors: zig-zag paths, environment limitation, edge-of-the-world response, and fight or flight. These behaviors are grouped by the stimuli that cause them to be triggered.

The zig-zag behavior uses an internal timer to initiate the behavior. Environment limitation and edge of the world response are both dependent on the location of the entity in the database as the source of stimuli. The fight or flight behavior is triggered by external stimuli.
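Such a stimulus-to-behavior mapping is essentially a prioritized rule set. The sketch below mirrors the four behaviors and their triggers described above; the field names, thresholds, and helper functions are invented for illustration and are not NPSNET code.

```python
import random

def inside(pos, bounds):
    (xmin, ymin), (xmax, ymax) = bounds
    return xmin <= pos[0] <= xmax and ymin <= pos[1] <= ymax

def blocked_by_terrain(pos):
    return False          # stand-in for a query of impassable terrain

def noise_entity_behavior(entity, now, world_bounds, threat_visible):
    """Four behaviors keyed by their stimuli: external threat (fight or
    flight), entity location (edge of the world, environment limitation),
    and an internal timer (zig-zag)."""
    if threat_visible:
        return "fight" if entity["strength"] > 0.5 else "flight"
    if not inside(entity["pos"], world_bounds):
        return "turn-back"                       # edge-of-the-world response
    if blocked_by_terrain(entity["pos"]):
        return "detour"                          # environment limitation
    if now - entity["last_turn"] > entity["zigzag_period"]:
        entity["last_turn"] = now
        return random.choice(["veer-left", "veer-right"])   # zig-zag path
    return "continue"

tank = {"pos": (10.0, 20.0), "strength": 0.8, "last_turn": 0.0,
        "zigzag_period": 5.0}
print(noise_entity_behavior(tank, now=6.0,
                            world_bounds=((0, 0), (100, 100)),
                            threat_visible=False))   # veer-left or veer-right
```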

The purpose of an autonomous force is to present an unattended, capable, and intelligent opponent to the human player at the simulator. In NPSNET, the autonomous force is broken down into two components: an observer module that models the observation capabilities of combat forces and a decision module that models decision making, planning, and command and control in a combat force.

The autonomous force system employs battlefield information, tactical principles, and knowledge about enemy forces to make tactical decisions directed toward the satisfaction of its overall mission objectives. It then uses these decisions in a reactive planning approach to develop an executable plan for its movements and actions on the battlefield. Its decisions include distribution of multiple goals among multiple assets, route planning, and target engagement.

The autonomous force represented in this system consists of a company of tanks. The system allows for cooperation between like elements as well as collaboration between individuals working on different aspects of a task. The observer module, described by Bhargava and Branley, acts as the eyes and ears of the autonomous force. In the absence of real sensors, the observation module uses probabilistic models and inference rules to generate the belief system of the autonomous force.

It accounts for battlefield conditions, as well as the capabilities and knowledge of individual autonomous forces, to determine whether and with how much accuracy various events on the simulated battlefield can be observed. The system converts factual knowledge about the simulated environment into the probabilistic beliefs of the autonomous force.

It does so by combining the agent's observations with evidence derived from its knowledge base and inference procedures.

If one considers three-dimensional VEs as the ideal interface to a spatially organized database, then hypermedia integration is a key technological component.

Hypermedia consists of nonsequential media grouped into nodes that are linked to other nodes. If we embed such a node in a building in a virtual world, the node can be accessed, and audio or compressed video containing vital information on the layout, design, and purpose of the building can be displayed, along with historical information.

Such nodes will also allow us to search all other nodes and find related objects elsewhere in the virtual world. We also envision hypernavigation, which involves the use of nodes as markers that can be traveled between, either over the virtual terrain at accelerated speeds or over the hypermedia links that connect the nodes.
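A minimal data structure makes these ideas concrete; the class and the depth-first link walk below are illustrative inventions, not drawn from any of the systems described.

```python
from dataclasses import dataclass, field

@dataclass
class HypermediaNode:
    """A media node embedded in the virtual world: anchored at a terrain
    position, holding media payloads, and linked to related nodes."""
    name: str
    position: tuple                 # (x, y, z) anchor in the world
    media: dict = field(default_factory=dict)   # e.g. {"audio": path}
    links: list = field(default_factory=list)   # related HypermediaNodes

def hypernavigate(start, goal_name, visited=None):
    """Travel over hypermedia links rather than terrain: a depth-first
    walk from node to node until the named node is found."""
    visited = visited or set()
    if start.name == goal_name:
        return start
    visited.add(start.name)
    for nxt in start.links:
        if nxt.name not in visited:
            found = hypernavigate(nxt, goal_name, visited)
            if found:
                return found

lobby = HypermediaNode("lobby", (0, 0, 0), {"audio": "welcome.wav"})
annex = HypermediaNode("annex", (120, 40, 0))
lobby.links.append(annex)
print(hypernavigate(lobby, "annex").position)   # (120, 40, 0)
```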

Think of rabbit holes or portals to information populating the virtual world. Hypermedia authoring is another growing area of interest. In authoring mode, the computer places nodes in the VE as a game is played. After the game, the player can travel along these nodes (which exist not only in space but also in time, appearing and disappearing as time passes) and watch a given player's performance in the game.

Authoring is especially useful in training and analysis because of this ability to play back the engagement from a specific point of view.

Some examples of the uses of hypermedia in virtual worlds are presented in the following paragraphs. Hyper-NPSNET combines virtual world technology with hypermedia technology by embedding hypermedia nodes in the terrain of the virtual world. Currently, hypertext is implemented as nonsequential text grouped into nodes that are linked to other text nodes; in Hyper-NPSNET, a node instead holds captured video of the world being represented geometrically.

Thus it provides information not easily represented or communicated by geometry. In another application, the University of Geneva has a project under way entitled "A Multimedia Testbed" (de Mey and Gibbs), an object-oriented test bed for prototyping distributed multimedia applications.

The test application of that software is a virtual museum, a three-dimensional environment through which the visitor can navigate.

In all likelihood, the main short-term research and development effort and commercial payoff in the VE field will involve the refinement of hardware and software related to the representation, simulation, and rendering of visually oriented synthetic environments. This is a natural and logical extension of proven technology and benefits seen in such areas as general simulation, computer-aided design and manufacturing, and scientific visualization.

Nevertheless, the development of multimodal synthetic environments is an extremely important and challenging endeavor. Independent of the fundamental psychophysical issues and device design and development issues, multimodal interactions place severe and often unique burdens on the computational elements of synthetic environments.

These burdens may, in time, be handled by extensions of current techniques used to handle graphical information. They may, however, require completely new approaches in the design of hardware and software to support the representation, simulation, and rendering of worlds in which visual, auditory, and haptic events are modeled. In either case, the generation of multimodal synthetic environments requires that we carefully examine our current assumptions concerning VE architectural requirements and design constraints.

In general, multimodal VEs require that object representation and simulation techniques now represent and support the generation of the information required for auditory signal generation and haptic feedback (i.e., force and touch display).

Both of these modalities require material and geometric (i.e., surface and volume) information. Consequently, volumetric approaches may become more attractive at all three levels of information handling (i.e., representation, simulation, and rendering). Not only may volumetric approaches facilitate the representation of the information needed for objects in multimodal VEs, but they may also lend themselves to local interaction models of physics that are elegant and straightforward to implement (Toffoli). In addition, hardware to support this form of physical simulation is starting to become available on such machines as the CAM-8 and the FX-1 from Exa Corporation.
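To illustrate what "local interaction" means computationally, the sketch below relaxes a scalar field on a voxel grid using only nearest-neighbor information, a deliberately simplified, invented stand-in for the cellular-automata style of physics such hardware accelerates.

```python
import numpy as np

def local_step(field):
    """One update: each interior voxel moves to its 6-neighbor mean (a
    discrete diffusion step). Every cell depends on its neighbors only,
    which is what makes such models a natural fit for CA hardware."""
    f = field
    out = f.copy()
    out[1:-1, 1:-1, 1:-1] = (f[:-2, 1:-1, 1:-1] + f[2:, 1:-1, 1:-1] +
                             f[1:-1, :-2, 1:-1] + f[1:-1, 2:, 1:-1] +
                             f[1:-1, 1:-1, :-2] + f[1:-1, 1:-1, 2:]) / 6.0
    return out

grid = np.zeros((16, 16, 16))
grid[8, 8, 8] = 1.0            # a point disturbance in the volume
for _ in range(10):
    grid = local_step(grid)
print(grid[8, 8, 8] > grid[8, 8, 12])   # True: the disturbance spreads locally
```

Because every cell is updated by the same purely local rule, such models parallelize trivially, which is precisely the property that cellular-automata machines exploit.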
