Web 3D Formats
Table of Contents
1 Introduction
2 Exporting VRML from 3ds Max with VRML Helpers
2.1 Animate the Coke can
2.2 Add VRML Helpers to the Scene
2.2.1 VRML Helpers — TouchSensor and NavInfo
2.3 Export to VRML97
3 Rendering with VRML and X3D
3.1 Web 3D Format Workflow
4 Convert VRML97 based 3D model to X3D
4.1 Test your VRML first
4.1.1 Convert with the Instantreality Online Transcoder
4.1.2 Using AOPT to Create Optimized X3DOM Content
4.2 Convert VRML97 direct to X3DOM for rendering in HTML5
4.3 Convert X3D to X3DOM for rendering in HTML5
5 Integrating VRML, X3D and X3DOM models into a 3D App
5.1 Write X3D into the HTML5 DOM
5.2 Embedding a VRML/X3D plugin into the web page
Dr Martin White
Mobile Web 3D Applications
Web 3D Formats
© University of Sussex
Introduction
Lab 3 has been designed to introduce you to the process of converting your 3D models from 3ds Max into a Web 3D format, e.g. X3D, that will allow you to display your 3D model and interact with it on the Internet, i.e. in a responsive and mobile-first Web 3D application (your 3D App). To do this you will need to convert your 3ds Max model into a compatible 3D format so that you can integrate your 3D models into your 3D App.
There are several ways to allow 3D to be rendered in your 3D App, but we will largely focus on exploiting a technology referred to as X3DOM — it is well worth browsing through the online catalogue of examples that you can access from the carousel! We can think of X3DOM as an open source JavaScript framework used to create 3D scenes in web pages. Or, quoting directly from the X3DOM website, X3DOM allows us to:
“Integrate 3D content seamlessly into your webpage — the scene is directly written into the HTML markup. No Plugins needed. Simply include a JavaScript file”.
By scene, we mean your 3ds Max models, etc. This is quite powerful stuff, IMHO, because as implied, if the 3D model is written directly into the web page, it must have some form of a tag (or node) structure, and if so these tags or nodes can be directly manipulated with JavaScript. Indeed, as the quote says, X3DOM has a JavaScript file developed to manage this embedding of, what is actually, an X3D model in the HTML5 DOM.
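As a minimal sketch of that idea (the x3dom.org script and stylesheet URLs, and the placeholder scene content, are illustrative — check the X3DOM website for the current release), a page might look like:

```html
<!DOCTYPE html>
<html>
<head>
  <title>X3DOM sketch</title>
  <!-- Include the X3DOM JavaScript file and stylesheet (paths illustrative) -->
  <script src="https://www.x3dom.org/download/x3dom.js"></script>
  <link rel="stylesheet" href="https://www.x3dom.org/download/x3dom.css">
</head>
<body>
  <!-- The X3D scene is written directly into the HTML markup: no plugin -->
  <x3d width="400px" height="300px">
    <scene>
      <shape>
        <appearance><material diffuseColor="1 0 0"></material></appearance>
        <cylinder height="2" radius="0.6"></cylinder>
      </shape>
    </scene>
  </x3d>
</body>
</html>
```

Because `<x3d>`, `<scene>` and `<shape>` are ordinary DOM nodes, they can be inspected and manipulated with JavaScript like any other HTML tag — which is exactly the property we will exploit later.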
This is really good, but we will also utilize some popular web-based VRML/X3D players (some of which can also be used as web browser plugins) for testing your 3D models first, and after that we will exploit X3DOM as the Web 3D format of choice. We will initially focus on how to use VRML (Virtual Reality Modelling Language) and its successor X3D (championed by the Web3D Consortium) because, generally speaking, 3ds Max does not export directly to X3DOM — well, as you will learn, you can get a plugin for 3ds Max that exports X3D, but it only works with very early versions of 3ds Max.
So, we will use VRML first, to test whether our 3ds Max models can be used in a web browser, simply because 3ds Max only has a VRML97 Exporter by default, but also because 3ds Max has some interesting VRML Helpers, which may be useful for in-scene interaction with your 3D model.
It is worth noting that some other 3D authoring packages do export to X3D, and you can in fact install InstantExport into older versions of 3ds Max up to 2014. This X3D exporter is provided by instantreality.
So, in this tutorial you can try 3 methods to arrive at an X3DOM format.
Use the 3ds Max VRML97 Exporter, then convert to X3D or X3DOM using the instantreality online transcoder tool
· This is the way I usually do it, for convenience, but sometimes the Instantreality.org server is down.
Use the 3ds Max VRML97 Exporter, then convert to X3D or X3DOM using the instantreality command line (aopt) tool.
· The command line tool should be installed in the labs as part of the instantreality framework, which also includes the instantplayer. You can also download this framework and install at home: http://www.instantreality.org/downloads/. It is a good idea to install this on your home PC or laptop.
Try to install the InstantExport plugin, to convert to X3D or X3DOM, as mentioned above.
· I did use this method up to 3ds Max 2013, but since upgrading to 2017 and later versions I have not tried it. So, currently I don't use this method because the first two methods work ok. Feel free to give it a go; the download site is: http://www.instantreality.org/downloads/dailybuild/?dir=/InstantExport — if someone tries it and it works, spread the news. If it does work, then it's ok to use it, but be aware that it is not installed in 3ds Max 2017 in the labs.
There are also other possibilities for inserting 3D into a responsive mobile first web page, way too many to consider. For example, the Unity gaming engine has a web player (no longer supported though), and you could also code at the WebGL level: https://get.webgl.org/. However, these methods are beyond the scope of this module. Instead, this module will exploit X3DOM, which is effectively an abstraction on top of WebGL anyway.
In the context of this laboratory we will focus on integrating 3D into your 3D App using X3DOM with its associated CSS3 and JavaScript libraries for embedding or writing X3D code directly into the HTML5 DOM (Document Object Model) — you will appreciate that this method is very effective, and will ultimately be the key method you will use for your assignment.
Primarily, for this Lab 3, we will focus on exporting your 3 models created in Lab 2 (i.e. your Coke can, Sprite bottle and Dr Pepper cup) as VRML97 objects from 3ds Max. So, we will export to VRML97 and then test the VRML model in a VRML player such as the Cortona3D Viewer, instantplayer or BSContact player — I don't think the BSContact player is installed on the lab image, and although you might try downloading and installing it to your local drive, my guess is ITS will have shut the door on that one by now. Nevertheless, you can install it on your own machine.
Another X3D viewer you can use is Xj3D, which is built into an X3D editor called X3D-Edit. X3D-Edit should also be installed in the lab image, but I tend not to use it — it would probably be very good if you were developing X3D models from scratch via code, rather than developing them in 3ds Max and converting to X3D. We will only need to make some simple modifications to the X3D code, and 9 times out of 10 we can probably avoid even this, so a lightweight text editor will do, particularly if it recognizes XML tags.
We will then, in this lab, investigate converting your models to X3D and again test these in the Instantreality player or BSContact player (note Cortona3D Viewer does not support X3D). Along the way, we will investigate adding VRML helpers to provide some interaction with the 3D model.
A point to note is that X3DOM does not, at the time of writing, directly support some of the VRML Helpers that are available in 3ds Max, in particular the Touch Sensor, so we need to use JavaScript (e.g. with an HTML onclick button) to trigger any touch-based interactions, such as animations that would have been triggered by a VRML TouchSensor node. However, that is cool — it is what we want to do anyway, i.e. we want to manipulate X3D nodes with JavaScript to get useful interactions in your 3D App.
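A rough sketch of that substitution is below. The element id `animTimer` is an assumption — use whatever name your exported TimeSensor actually gets — and the interpolators, ROUTEs and geometry are omitted; the key idea is that in X3DOM an animation cycle is typically (re)started by setting the TimeSensor's startTime to the current time:

```html
<button onclick="triggerAnimation()">Animate the can</button>

<x3d width="400px" height="300px">
  <scene>
    <!-- The exported animation will contain a TimeSensor node like this;
         interpolators, ROUTEs and geometry are omitted for brevity -->
    <timeSensor id="animTimer" cycleInterval="3"></timeSensor>
  </scene>
</x3d>

<script>
// Instead of a VRML TouchSensor, an ordinary HTML button starts the
// animation by setting the TimeSensor's startTime to "now" (in seconds).
function triggerAnimation() {
  var timer = document.getElementById('animTimer'); // assumed id
  timer.setAttribute('startTime', Date.now() / 1000);
}
</script>
```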
Finally, we will convert your VRML into X3DOM (and inline X3D) to demonstrate how a plugin approach, such as the Cortona3D Viewer and BSContact players, can be eliminated from the 3D App as inferior in many ways — nevertheless, plugins remain useful for testing the 3ds Max exported 3D model (VRML).
As you get towards the end of this Lab 3 you should be able to appreciate the advantages of integrating an inline X3D model into the HTML5 DOM against that of embedding a VRML97 model into the HTML5 web page using a plugin — the difference is striking! Then, we will finish off by implementing a very simple Bootstrap based template to integrate your three X3D models into the HTML5 DOM — the makings of a 3D App, no less!
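To give a flavour of where the lab ends up, such a Bootstrap grid might be sketched as below. The .x3d file names and column classes are placeholders (adapt them to your own exports and Bootstrap version), and x3dom.js/x3dom.css are assumed to be included in the page head:

```html
<!-- One Bootstrap row, one column per model; file names are placeholders -->
<div class="container">
  <div class="row">
    <div class="col-md-4">
      <x3d width="100%" height="300px">
        <scene><inline url="coke.x3d"></inline></scene>
      </x3d>
    </div>
    <div class="col-md-4">
      <x3d width="100%" height="300px">
        <scene><inline url="sprite.x3d"></inline></scene>
      </x3d>
    </div>
    <div class="col-md-4">
      <x3d width="100%" height="300px">
        <scene><inline url="drpepper.x3d"></inline></scene>
      </x3d>
    </div>
  </div>
</div>
```

Each `<inline>` pulls a separate exported X3D file into the page, so the three models stay in their own files while still living inside one responsive HTML5 DOM.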
Let’s start by looking at how to export your 3D models from 3ds Max with the VRML97 Exporter.
Exporting VRML from 3ds Max with VRML Helpers
In this tutorial, we explain how to export your 3ds Max model into VRML using the VRML97 Exporter in 3ds Max. First, we'll use your Coke can model; you can then apply the same principles to your other models, e.g. the Sprite bottle, Dr Pepper cup, and any models you create for your assignment. We will also cover the basic principles behind interacting with a 3D model through VRML helpers, which are used, for example, to link actor objects or actions to model animations. There are several useful VRML helpers you may want to use to add some kind of interaction to your 3D model while it is rendered in your 3D App (as opposed to web page interactions that manipulate your 3D model). Further, several of these VRML helpers convert to X3D well, while some do not — obviously, don't use those that do not convert well.
This tutorial also covers the setup of an actor camera that will be used in a 3D viewing application (such as the Cortona3D Viewer, BS Contact or X3DOM) to navigate the virtual environment and better visualize your object in the context of a 3D scene (rather than just a single object). For example, you could imagine walking around several objects triggering animations by proximity or touch. This tutorial only covers a few basic VRML components and requires a basic understanding of 3ds Max, which you already gained in Lab 2.
To explore VRML Helpers before we export to VRML, you need a 3ds Max model that you can animate in some simple way — an animation being something interesting that can be triggered through, say, a touch sensor — so make sure you have finished at least the Coke can, and ideally the Sprite bottle and Dr Pepper cup models as well. We will animate the Coke can in a simple way using a 3ds Max key frame animation. We will then attach a VRML touch sensor to the scene to allow us to trigger the animation.
Tip! Don't get wrapped up in animation — this is not a 3D Animation module. However, being able to trigger, from a web page button, some simple animations such as rotating your 3D model is a good interaction feature for the assignment. Any more detailed animations will depend on your 3D model and may or may not be worth your effort.
Animate the Coke can
Remember, this lab tutorial is not meant to focus on key frame animation as such, we are simply using animation to illustrate the use of some of the VRML helpers!
So, let's figure out how to create the simple animation first.
1. Open up the Coke can model you created in 3ds Max in Week 2. At this stage, you should have a textured Coke can; if you haven't finished it yet, then you are getting behind and will need to catch up quickly! Alternatively, feel free to adapt this tutorial and use your Sprite bottle or Dr Pepper cup if you have finished those. I'll use my coke_final.max model in this tutorial first.
2. Set up your Coke can in the viewports so you can see your Coke can ok, see Figure 1, and then save it as a new file for your animation. This ensures you still have a copy of your final lab 2 Coke can 3D model. You will also need to ensure you have cameras set up that can see the animation.
Figure 1: Coke can animation to be triggered by the VRML Helper: Touch Sensor
3. Using an appropriate viewport so that you can see the animation move from the left back to 0,0,0, move the Coke can to the left a short distance. We will animate the Coke can so that it returns to its original position X,Y,Z = 0,0,0. So for example, if you are looking at the front viewport (in my case), move the can to the left as shown in Figure 3. It doesn't matter how far; in my case I moved it -3000 units in the X direction. You can move the Coke can to the left in several ways, e.g. use Select and Move after clicking on the can and observe the X value change to -3000, or simply set X to -3000, see Figure 2. Zoom to get a reasonable viewport. Your values may be different depending on the Units you set up when you modelled your 3D model.
Figure 2: X set to -3000
Also, looking ahead, I have changed some of my camera parameters. My scene happens to have 3 target cameras, all roughly looking at the Coke can when it is located at 0,0,0. So, I have set two of the target camera targets to 0,0,0. I have then set the third camera target about midway between the new position of the Coke can and 0,0,0 and changed its lens to get a wider view (35 mm lens). This camera will see the whole animation, hopefully, see Figure 3.
Figure 3: Coke can moved to the left as seen from the front viewport, with cameras adjusted
4. Now we will return it to 0,0,0 using a key frame animation. Select the auto-key, which will make the (front) viewport outside frame turn RED, and the key frame tool highlight in RED, see Figure 4.
Figure 4: Selecting Auto-key
5. Press the Set Keys icon (just to the left of the Auto Key — it's the Key symbol) to sample the Coke can's initial position, see Figure 5, then move the time slider, with auto-key still on, to the time destination of your first animation movement, say 10 frames, see Figure 6. For your first animation movement, do something simple like raising the Coke can in a hop to about halfway back to its original position of 0,0,0. You can do this with the Select and Move button — in general, keep it simple with a combination of translate and rotate actions to create your moving animation.
Initial animation position
Figure 5: Initial animation position sample with the Set Key
Figure 6: First animation up to 10 frames
6. Repeat the process every 5 or 10 frames, creating some animation until about frame 90.
7. Turn auto-key off (it is important to stop recording so you don’t mess up your animation), then play your animation — you can play the recorded animation by scrubbing the animation slider back and forth, see Figure 7 and Figure 8, which shows the can at frame 75/100 of the animation that I did (yours will be different depending on how you made the can animate). Or you can play continuously by selecting the Play Animation button.
Figure 7: Play the recorded animation
8. You can play around with the animation. For example, I recorded the animation for 90 frames, and did an animation movement every 5 frames. On the animation tool, you can see the little red, green and blue striped squares representing key frames in the animation where you recorded the animation as you made it. You can re-time the animation by moving these key frames along the timeline. Have a go at stretching the animation to say 100 frames, or compressing it to say 35 frames, delete a key frame or two to see what happens. I just stretched out the animation at the start and end, see Figure 8.
Figure 8: Modified animation
9. You can now have a further play with different animations (it's worth repeating this exercise to get a feel for setting up amusing animations of the Coke can); you'll need to delete the old animation by selecting its key frames and deleting them before starting again. Or, save this one and start a new one.
Tip! When you build your 3D App for the assignment you could consider setting up some simple animations to use as media objects along with photorealistic renderings of your models. You would build these media objects into your overall 3D App architecture.
10. Also, you can play with the animation curves using the Edit Curve editor, to adjust the animation, see Figure 9. Select the Curve Editor and have a play.
Figure 9: Using the Curve Editor to adjust a key frame animation
11. Looking at the Curve Editor, we can see the position of the object (the Coke can) at the start of the animation: the Coke can is at -3000 along the x-axis, and zero on the y and z axes. Clearly the red curve is the x-axis, so adjusting this curve modifies the x-axis components of your animation. In this particular example, if you follow the red curve you will see that at about 50 frames the Coke can is back at zero on the x-axis, continues into positive x, and then returns to zero in x, which is exactly what I animated, see Figure 10. Study the other curves to better understand what is happening; explore to discover and learn more.
Figure 10: Isolating the X Position animation curve.
If you click on a particular point you can fine tune the animation parameters.
Figure 11: Adjust the animation parameters, such as time and position of the X components
12. Close the Curve Editor if you have been using it. Next, we will attach a touch sensor to trigger this animation.
Add VRML Helpers to the Scene
Now that you have an animation, you need to use the VRML97 Helpers to create a touch sensor to trigger that animation.
It is interesting at this point to consider the other VRML Helpers, there are 12 you can use, some may be useful, but it depends on your final 3D App functionality:
· Anchor
· Click an object to jump to other areas in your scene (other camera viewpoints), to other HTML pages, or to other VRML worlds
· Background
· Set the Sky Color, Ground Color, and Images rollouts.
This is very useful to match the background colour of your VRML or X3D plug-in with your web site background colour! Alternatively, you can write the VRML code to do this — Google it!
· Fog
· Specify the colour and range of fog in your VRML world
· ProxSensor
· Triggers an animation when the viewer is in the region of the Proximity Sensor
· TimeSensor
· Adds time-based animation controls, such as the start and end frames for a particular object’s animation, and looping
· AudioClip
· Specify the name and characteristics of an audio file that can be used by the Sound helper
· Billboard
· Creates geometry that is camera-aligned in the VRML97 browser; the objects always align to the viewpoint in the VRML browser
· NavInfo
· Set up how to navigate around the VRML97 world, e.g. walk or fly; also used to specify whether a headlight should be used
· Sound
· Lets you place 3D (spatial) or ambient sounds in a scene; the sound may be located at a point and emit sound in a spherical or ellipsoid pattern
· TouchSensor
· An animation is triggered on touching an object
Some useful interaction scenarios could be developed using a combination of some of these sensors, for example, a proximity sensor (ProxSensor) could be used to trigger a sound, which uses an AudioClip, along with animation, e.g. spinning the 3D model round. The Anchor could be used to jump to another viewpoint (e.g. between cameras) in the scene or even to an alternative web page outside the VRML scene, for example jump from the Coke 3D model to the Dr Pepper web page.
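Under the hood, the exporter wires such helpers together with VRML ROUTE statements. As a rough hand-written sketch of the TouchSensor pattern (the node names and values are illustrative, not what the exporter will actually emit):

```vrml
#VRML V2.0 utf8
DEF CAN Transform {
  translation -3000 0 0
  children [
    DEF TOUCH TouchSensor { }   # touching the can fires touchTime
    Shape { geometry Cylinder { height 120 radius 33 } }
  ]
}
DEF TIMER TimeSensor { cycleInterval 3 }
DEF MOVER PositionInterpolator {
  key      [ 0, 0.5, 1 ]
  keyValue [ -3000 0 0, -1500 500 0, 0 0 0 ]
}
ROUTE TOUCH.touchTime        TO TIMER.set_startTime
ROUTE TIMER.fraction_changed TO MOVER.set_fraction
ROUTE MOVER.value_changed    TO CAN.set_translation
```

The click starts the TimeSensor, the TimeSensor drives the interpolator, and the interpolator moves the Transform — the same event chain the 3ds Max helpers generate for you.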
VRML Helpers — TouchSensor and NavInfo
Let’s add some VRML Helpers to your Coke can scene:
1. In the right-hand side menu, click on the measuring angle icon labelled “helpers” and from the dropdown box select ‘VRML97’, see Figure 12. A list of primitives appears from which we will be using only ‘NavInfo’ and ‘TouchSensor’.
Figure 12: Selecting the VRML Helper: Touch Sensor to trigger the Coke can animation
2. Create a touch sensor and position it next to the Coke can. It is not important where you position it as it only serves as a link between a trigger object (that triggers an animation) and the animated object. It makes it easier to see which sensor is attached to what object if you put them next to that object, see Figure 13.
The touch sensor has two parameters:
· Trigger Object: the object that you click on to trigger an animation — this could be the Coke can itself, or another object in the scene.
· Target Object: the object that performs its animation when the trigger object is clicked.
To activate the animation, the trigger and target objects can also be the same. However, you could model a small object and place it next to the Coke can (for example, create a simple ‘Touch to Animate’ sign using one of the primitive objects) and use that as your trigger object.