3D graphics authoring for embedded systems designers: Part 2: Deploying 3D objects
The texture map for the cube is shown in Figure 2. As you can see, every side of the cube is represented in the texture map. The texture coordinates generated by 3ds Max when the OBJ was exported map each region of the texture to a face on the cube. As Figure 1 shows, the cube is made up of 12 triangles, and each triangle is represented in the OBJ file and the Processing code as a face.
Figure 2: Cube01 Texture Map
Face 9 is represented in the OBJ file as “f 3/17 2/18 8/19”. The first three numbers in the vertex() function represent the vertex’s x, y and z coordinates in 3D model space. Remember that each face, or triangle, has three vertices, so three vertex() functions must be called for every triangle that makes up the 3D object. Numbers on the left side of the ‘/’ symbol are indices into the vertex table; numbers on the right side are indices into the texture coordinate table. The first vertex is indexed with a “3”, so look at the third entry in the vertex table and you will find 10.000000, 0.000000 and -10.000000. These three numbers are the x, y and z coordinates in 3D model space for the first vertex of our triangle. The first texture coordinate index is “17”, so look at the seventeenth entry in the texture coordinate table in the OBJ file and you will find 0.3370, 0.3347 and 0.0000. Texture coordinates are typically made up of two numbers, a U and a V, as explained in the previous article. The third number (a ‘W’, the 0.0000 in this case) is not needed for our purposes and is ignored. Now we have all the data required for our first vertex entry:
vertex(10.000000, 0.000000, -10.000000, 0.3370, 0.3347); // 3/17
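To see exactly where those numbers came from, the corresponding records in the OBJ file would look something like this (the third “v” record and the seventeenth “vt” record, with the values cited above):

v 10.000000 0.000000 -10.000000
vt 0.3370 0.3347 0.0000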
The vertex() call is repeated two more times to complete the triangle:
// Face 9 - f 3/17 2/18 8/19
vertex(10.000000, 0.000000, -10.000000, 0.3370, 0.3347); // 3/17
vertex(-10.000000, 0.000000, -10.000000, 0.6652, 0.3347); // 2/18
vertex(-10.000000, 20.000000, -10.000000, 0.6652, 0.6662); // 8/19
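To put these calls in context, here is a minimal sketch skeleton of my own (not the article’s full listing): only Face 9 is shown, the mouse-rotation mapping is one plausible choice, and the remaining eleven faces would be emitted the same way, in OBJ order.

PImage tex;

void setup() {
  size(400, 400, P3D);
  tex = loadImage("cube.png");   // texture map from the sketch directory
  textureMode(NORMAL);           // UVs in 0..1, matching the OBJ's vt values
}

void draw() {
  background(0);
  translate(width / 2, height / 2, 0);
  rotateY(map(mouseX, 0, width, -PI, PI));   // mouse rotates the cube
  rotateX(map(mouseY, 0, height, -PI, PI));
  scale(8);                                  // the cube is only 20 units across
  noStroke();
  beginShape(TRIANGLES);
  texture(tex);
  // Face 9 - f 3/17 2/18 8/19
  vertex(10.000000, 0.000000, -10.000000, 0.3370, 0.3347);   // 3/17
  vertex(-10.000000, 0.000000, -10.000000, 0.6652, 0.3347);  // 2/18
  vertex(-10.000000, 20.000000, -10.000000, 0.6652, 0.6662); // 8/19
  // ...vertex() calls for the other eleven faces go here...
  endShape();
}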
Please note that you must start with the first face entry in the .OBJ file and process the faces sequentially, or your 3D object will not display correctly; I used Face 9 above only as a “spot” example. Obviously, a converter utility could be written to turn an OBJ file into a list of vertex() function calls (a rough sketch of one follows), and I might write one and put it and the resulting source code up at http://ricktewell.wordpress.com.
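Here is a quick illustration of my own of what such a converter might look like as a Processing sketch. It is a sketch, not a robust tool: it handles only the “v”, “vt” and triangular “f v/vt” records used in this example, and it assumes the OBJ file is named “cube.obj” and sits where loadStrings() can find it.

void setup() {
  ArrayList<float[]> verts = new ArrayList<float[]>();  // vertex table
  ArrayList<float[]> uvs   = new ArrayList<float[]>();  // texture coordinate table
  for (String line : loadStrings("cube.obj")) {
    String[] t = splitTokens(line);
    if (t.length == 0) continue;
    if (t[0].equals("v")) {
      verts.add(new float[] { float(t[1]), float(t[2]), float(t[3]) });
    } else if (t[0].equals("vt")) {
      uvs.add(new float[] { float(t[1]), float(t[2]) });  // ignore the optional W
    } else if (t[0].equals("f")) {
      println("// " + line);
      for (int i = 1; i <= 3; i++) {
        String[] idx = split(t[i], '/');
        float[] v  = verts.get(int(idx[0]) - 1);  // OBJ indices are 1-based
        float[] uv = uvs.get(int(idx[1]) - 1);
        println("vertex(" + v[0] + ", " + v[1] + ", " + v[2] + ", "
                + uv[0] + ", " + uv[1] + ");");
      }
    }
  }
  exit();
}

Run it once, then paste the printed vertex() calls into the drawing sketch.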
Now that you understand the code, you can run it. Be sure to copy the texture map “cube.png” into your Processing sketch’s project directory. To create this directory, save your Processing sketch using File->Save As, then copy “cube.png” into the sketch directory. Under Windows, this directory is found under My Documents\Processing\Sketch_Name. Now press the “Run” button and you should see your 3D cube. By moving the mouse you can rotate the cube on multiple axes (Figure 3).
Figure 3: Cube01
At this point, you might be thinking “Great! I can show a 3D object on a PC or Mac…what does this have to do with embedded systems?” The answer is that if you understand the process outlined above, you know nearly everything you need to know in order to display a 3D object on any embedded system. Yes, the code will be different, but the process is the same. Let’s review the steps:
1. Create the 3D object using a 3D authoring tool
2. Export the 3D object using the OBJ format
3. Convert the data in the OBJ format into something that can be understood by the 3D environment you are using
4. Supply the 3D system with the vertices, the texture coordinates and the texture bitmap so it can display the object
By way of a simple example: in desktop OpenGL, instead of the vertex() function we used in the Processing environment, you might use glVertex3fv() and glTexCoord2f(). OpenGL ES drops that immediate-mode style entirely, so there you would instead hand the same data to the renderer in bulk, through vertex arrays set up with glVertexPointer() and glTexCoordPointer() and drawn with glDrawArrays(). I say “might” because there are other mechanisms in OpenGL and OpenGL ES that can accomplish the same thing more efficiently. The basic purpose, however, is identical in every case: you are passing the geometry and the texture coordinates to the 3D renderer along with the texture bitmap.
3D graphics can be complex and in many ways daunting. If you are a beginner, I strongly encourage you to spend some time using the Processing environment to learn the fundamentals of 3D graphics. There is so much to learn there without the added burden of fighting the development tools: compilers, debuggers, RTOS environments, licensing, and so on. Spend time just learning the basics of 3D programming. The Processing environment fully supports OpenGL as well, so you can move on to lighting, fogging, and even the wild and wacky world of shaders. 3D is becoming an essential part of graphics programming, including GUI development. It is my hope that this article sparks an interest in you to add your mark to 3D graphics on embedded devices.
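As a small taste of that next step, Processing’s built-in lights() call adds simple default lighting to any P3D sketch. This is a minimal illustration of my own, using a plain box() rather than our textured cube:

void setup() {
  size(400, 400, P3D);
}

void draw() {
  background(0);
  lights();                      // enable Processing's default lighting
  translate(width / 2, height / 2);
  rotateY(frameCount * 0.01);    // slow automatic spin
  rotateX(frameCount * 0.007);
  noStroke();
  box(100);                      // a lit, untextured cube
}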
3D graphics deployment tools
There are a variety of methods used to deploy 3D objects on embedded device screens, including low-level 3D APIs (OpenGL ES, Direct3D Mobile), native device low-level 3D APIs (Fujitsu V03), high-level 3D APIs (JSR 184, JSR 297), a wide variety of 3D “engines” (SIO2, Unreal, Bork 3D), and various higher-level graphical user interface (GUI) design tools (Unity, CGI Studio). Any one of these methods could be the subject of a long article, and a discussion of all of them is beyond the scope of this one. I will mention only that a common method of deploying 3D objects is to write code using OpenGL ES (the ES stands for “embedded systems”). The two primary versions of OpenGL ES are 1.1 and 2.0. There is a vast difference between the two versions, and they are not compatible with each other: OpenGL ES 1.1 is a subset of the full OpenGL 1.5 specification, while OpenGL ES 2.0 is a subset of the full OpenGL 2.0 specification. OpenGL ES introduces you to a complex world of lighting, fogging, matrices, display lists, mipmaps, vertex buffer objects and, with OpenGL ES 2.0, the complicated world of “shaders.” To learn more about OpenGL ES programming, visit http://nehe.gamedev.net/, operated by GameDev.net.
Rick Tewell is the Director of Engineering for Fujitsu Semiconductor. He has a long history in embedded electronics and is an embedded graphics specialist with more than two decades of experience. He also served in a variety of posts with Ligos Corp., Sequoia Advanced Technologies and Columbia Data Products prior to joining Fujitsu Semiconductor.

