The basics of 3D authoring for embedded systems designers – Part 1

In this first of a two-part series, Rick Tewell of Fujitsu Semiconductor describes the process of 3D graphics design, with tips and techniques for creating high-quality models suitable for use in a variety of embedded designs.

3D objects (buttons, menus, drop-down lists and other widgets) not only greatly enhance the look and feel of a user interface, they convey information and have little impact on memory usage. It's no wonder 3D user interfaces are showing up just about anywhere an LCD screen exists: in consumer, industrial, medical, and automotive products ranging from vending machines to automotive dashboards. It’s generally expected that 3D interfaces will be the norm by 2014 or 2015.

Enhanced 3D user interfaces are made possible by the powerful 3D graphics engines found in most higher-end graphics SoCs today. In some cases, the 3D capabilities rival those of the iPad 2. To be prepared, the embedded software engineer needs to master the art of 3D design. Fortunately, the process is straightforward, and working in 3D adds only a small amount of programming time.
The first step in creating 3D graphical interfaces is creating 3D models. This article outlines that process, with tips and techniques for creating high-quality models suitable for use in embedded applications.

The Basics of 3D Modeling
A 3D model is a construct in three dimensions (height, width, and depth) that contains details about the outline or “form” of the model. The model also contains information about the color of the object, how it should respond to light, and how a high-quality decal (called a texture) should be applied to the object. 3D models are significantly different from 2D objects.

3D models are created using a 3D authoring software package, such as:

  • Blender (free, open source)
  • Caligari trueSpace (commercial, free as of this writing)
  • Google SketchUp (free and pro [paid] versions)
  • Autodesk 3ds Max (commercial)
  • Autodesk Maya (commercial)
  • Maxon Cinema 4D (commercial)

Unfortunately, simply acquiring a 3D authoring package does not a 3D artist make. If you can’t draw anything more sophisticated than a stick figure, then you aren’t going to start creating incredible 3D models without learning traditional art methods. However, this does not mean that 3D authoring packages are useful only for artists. You can buy existing 3D models and change them as needed for your specific application – simply search online for 3D models. However, to modify an existing 3D model, you need to understand the steps involved in 3D modeling: creating the model’s form (the “polygonal mesh”), texture mapping, and texture creation.

3D Polygonal Mesh
A 3D model is made up of individual polygons in the form of a mesh. These polygons are typically triangles because mathematically it is easier to deal with triangles than with any other type of polygon.

As an example, imagine that you want to model a car in 3D. Once modeled in your 3D authoring tool, you will have a “wire frame” that is the form of the car made up entirely of a mesh of triangles, each with its own set of three vertices expressed as points in 3D space (Figure 1). Each vertex of every triangle has its own X, Y, and Z coordinate, representing a specific location relative to an origin point in Euclidean space. So a 3D model’s polygonal mesh will be made up of a list of three coordinates (X, Y, and Z) for every vertex of every triangle.
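The storage scheme described above can be sketched in a few lines of Python. The two-triangle "quad" and the function names are illustrative only, not from any particular tool:

```python
# A minimal sketch of a polygonal mesh stored exactly as described above:
# every triangle carries its own three (X, Y, Z) vertices.

quad_mesh = [
    # triangle 1: three vertices, each an (X, Y, Z) point in 3D space
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)],
    # triangle 2, completing a flat square
    [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)],
]

def polygon_count(mesh):
    """The model's 'polygon count' is simply its number of triangles."""
    return len(mesh)

def coordinate_count(mesh):
    """Total coordinates stored: 3 coordinates for each of the
    3 vertices of every triangle."""
    return sum(3 * len(tri) for tri in mesh)
```

For this two-triangle mesh, `polygon_count` is 2 and `coordinate_count` is 18, which is why polygon count drives both memory use and rendering time.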


Figure 1

The number of triangles making up the model is referred to as its “polygon count”; generally, the more triangles, the better the model will look to the viewer. To maximize performance in an embedded system, you need to use the smallest polygon count possible while still maintaining a good visual appearance – a real balancing act. The usual technique is to start with a high polygon count model and reduce the count either through a process within your 3D modeling tool or by using a helper tool like Simplygon, VizUp, or PolyTrans.

Unfortunately, it isn't as simple as creating a high polygon count object that looks fantastic and using it with your 3D graphics controller, because there are limits to the number of polygons embedded 3D graphics controllers can handle at an acceptable display rate. Higher polygon 3D objects render more slowly than lower polygon 3D objects.

Also, there is a big difference between the 3D graphics controller in your mobile phone or tablet and the one in your desktop PC. The 3D graphics controllers in mobile phones and tablets are a block within a single IC; they draw little power and share memory with the host processor. Desktop 3D graphics controllers are usually standalone graphics ICs with their own cooling and power systems and their own high-bandwidth memory architectures with dedicated memory.

If you want high-performing 3D graphics for embedded devices, you need to create high-quality, lower-polygon-count 3D models. Figure 2 shows an example of high-, mid-, and low-polygon counts of the same 3D model.


Figure 2: High-, mid-, and low-polygon count models

Texture Mapping
If you have ever built a plastic model, you are probably familiar with the process of applying decals to the model after it has been glued (or snapped) together. Texture mapping is the process of applying a decal to every triangle in the model’s polygonal mesh. This decal is a single raster pixel image for the entire model. The texture can be a highly detailed, photographic-quality image that can make your 3D model look real. The texture map is a set of two coordinates that represents offsets into the texture image for every vertex of every triangle that makes up the 3D model. If your 3D model is composed of 2,400 vertices, you will have 2,400 corresponding texture coordinates. Texture coordinates are referred to as “U” and “V” instead of X and Y to avoid confusion with the X and Y coordinates of the model’s polygonal mesh, so texture mapping is commonly called UV mapping. There are other texture-mapping techniques, but this article will focus on UV mapping.
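The bookkeeping this implies is simple to sketch: a pair of parallel lists, one (U, V) entry per mesh vertex. The positions and UV values below are illustrative only:

```python
# Parallel lists, as described above: one (U, V) texture coordinate
# for every mesh vertex (values are illustrative).

mesh_vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0)]
uv_coords     = [(0.0, 0.0),      (1.0, 0.0),      (0.5, 1.0)]

def uv_pairs_needed(vertices):
    """A model with N vertices needs exactly N texture coordinates,
    so 2,400 vertices means 2,400 (U, V) pairs."""
    return len(vertices)
```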

Texture coordinates are specified between the values of 0.0 and 1.0, where (0.0, 0.0) is the lower left corner of a texture and (1.0, 1.0) is the upper right corner (Figure 3). If you specify a value above 1.0 for a coordinate, the texture will tile (repeat). If you specify a value lower than 0.0, the texture will “mirror.”
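The tile and mirror behavior can be modeled with two small helpers. This is a sketch of the standard wrap modes in software, not any specific controller's API:

```python
def wrap_tile(t):
    """Tile (repeat): coordinates outside 0.0-1.0 wrap around, so
    1.25 samples the same texel column as 0.25."""
    return t % 1.0

def wrap_mirror(t):
    """Mirror: every other repetition of the texture is flipped,
    so 1.25 maps back to 0.75."""
    period = abs(t) % 2.0
    return period if period <= 1.0 else 2.0 - period
```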


Figure 3: Texture coordinates and their effects on model

A texture for a UV map is a pixel-based image of a specific size that is always a power of 2 (e.g., 128×128, 256×256, or 512×512). Rectangular textures are also allowed as long as each dimension is a power of 2 (e.g., 128×512 or 256×64). Within the 3D authoring tool, the texture map is created by selecting which parts of the geometric mesh of the model will correspond to a specific area of the texture. This is a manual process; you need to fit the selected pieces of your model’s geometric mesh into the texture area in as optimized a manner as possible (Figure 4).
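The power-of-2 rule is easy to check with the classic bit trick. The function names here are mine, not from any tool:

```python
def is_power_of_two(n):
    """True for 1, 2, 4, 8, ...: a power of two has a single set bit,
    so n & (n - 1) clears it to zero."""
    return n > 0 and (n & (n - 1)) == 0

def valid_texture_size(width, height):
    """Each dimension must independently be a power of 2; they need not
    match, so 128x512 is as valid as 256x256."""
    return is_power_of_two(width) and is_power_of_two(height)
```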


Figure 4

Tricks are often used to optimize the use of a texture area. For example, you can have a single image of a tire in a texture correspond to all four tires instead of having four separate images represent the tires. Or you can have a single image in your texture represent both sides of a car since they are almost always symmetrical (Figure 5). By carefully analyzing your model, you will probably find many opportunities to optimize the UV map and pack as much visual information as possible into the texture area.
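In code, the reuse trick is nothing more than several parts of the model pointing at the same region of the texture. The region and part names below are hypothetical:

```python
# One quarter of the texture holds a single tire image, described as a
# quad of (U, V) corners...
tire_region = [(0.0, 0.0), (0.25, 0.0), (0.25, 0.25), (0.0, 0.25)]

# ...and all four wheels of the model reference that same region, so
# the texture stores the tire image exactly once.
wheel_uvs = {
    "front_left":  tire_region,
    "front_right": tire_region,
    "rear_left":   tire_region,
    "rear_right":  tire_region,
}

def unique_regions(uv_table):
    """How many distinct texture regions the mapping actually uses."""
    return len({id(region) for region in uv_table.values()})
```

Four wheels, one stored image: the saved texture area is free for other detail.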


Figure 5

As stated previously, creating a UV map is a manual process; 3D modeling software does not do this for you. You have to decide how to optimize the use of the texture area at the chosen size. When you acquire a 3D model (either free or purchased), the UV map will almost always be done for you. If, however, you change the geometry in some way, the UV map will have to be regenerated (or updated) since the vertex count will change.

Since UV maps are normalized to values between 0.0 and 1.0 (as opposed to pixel coordinates), it is a simple matter to create larger or smaller textures without having to change the UV map. For example, you can have a higher quality texture of 512×512 pixels and scale down to a lower quality texture of 64×64 pixels without changing the UVs. It is customary to start with a high-quality texture-map image (e.g., 1024×1024 pixels) and reduce the size using a tool like Photoshop or Gimp (both of which have good filtering capabilities when an image is reduced in size). When you have completed your UV map, the 3D modeling tool will allow you to export a UV reference image that you will use when you create the actual texture (Figure 5).
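The resolution independence of normalized UVs is easy to demonstrate. This sketch assumes square textures for brevity:

```python
def texel_address(u, v, size):
    """Texel column and row addressed by a normalized (u, v) pair in a
    square texture of the given size."""
    return int(u * size), int(v * size)

# The same UV pair lands on the proportionally identical spot whether
# the texture is 512x512 or 64x64 -- no change to the UV map required.
hi_res = texel_address(0.5, 0.25, 512)
lo_res = texel_address(0.5, 0.25, 64)
```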

Just as with geometry, use the smallest size texture you can get away with and still have a good-looking 3D model. Keep the target screen size and application in mind. Displaying a 3D model on a 320×240 screen is a lot different than displaying a model on a 1024×768 screen. If you have a small screen size and the viewer isn’t going to zoom the model up dramatically, you might be able to successfully deploy a texture of 32×32. Larger textures require more time to process and map to the polygons in the 3D model than smaller ones. The best rule of thumb is to use the smallest texture you can get away with, but create a large, high-quality texture to begin with and scale down from there.

By now, it should be obvious that textures rarely map 1:1 to the display’s resolution. There will always be some magnification, where a single texture pixel (a “texel”) covers more than one pixel on the display, or some minification, where more than one texel covers a single pixel. This means that we are either zooming into or out of the texture. As is the case with standard raster images, if you want to zoom into or out of an image, you must apply filtering to improve the appearance of the image after the zoom. Generally, the 3D graphics controller will automatically apply the filtering when displaying the 3D model. There are many instances, however, where it is possible to create a series of different size textures and prefilter them using the filtering algorithms inside industry-standard raster graphics tools like Photoshop and Gimp.
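What the controller's filtering hardware does during magnification can be sketched in software as classic bilinear filtering. This is a simplified single-channel model of the idea, not how any particular GPU implements it:

```python
def bilinear_sample(texture, u, v):
    """Sample a grayscale texture (a 2D list of floats) at a fractional
    position by blending the four nearest texels, weighted by distance."""
    h, w = len(texture), len(texture[0])
    x = u * (w - 1)               # fractional texel column
    y = v * (h - 1)               # fractional texel row
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0       # blend weights
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

Sampling halfway between a black texel (0.0) and a white texel (1.0) yields 0.5; that in-between value is what smooths a texture when it is magnified across many screen pixels.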

Depending on the scaling value of a 3D model in a scene, a more optimally sized texture can be selected for the model when it is displayed. If you zoom into the 3D model, you might select a larger texture to provide more detail. If you zoom out of a 3D model, you can select a smaller texture because the detail is not needed or warranted. This process is called “mipmapping” and can be handled automatically by the 3D library. However, you can often get far better results by creating your own prefiltered scaled textures and supplying them to the mipmapping handler (Figure 6).
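The mip chain itself is just a sequence of halved sizes, and level selection follows from how many texels cover one screen pixel. This is a sketch of the idea; real libraries compute the ratio per pixel from screen-space derivatives:

```python
import math

def mip_chain(size):
    """Sizes in a full mipmap chain for a square power-of-2 texture:
    each level is half the previous, down to 1x1."""
    sizes = [size]
    while sizes[-1] > 1:
        sizes.append(sizes[-1] // 2)
    return sizes

def mip_level(texels_per_pixel):
    """When roughly 2**k texels would cover one screen pixel, level k
    of the chain (half the size, k times over) is the best fit."""
    return max(0, int(math.log2(max(texels_per_pixel, 1.0))))
```

Supplying your own prefiltered images for each size in the chain, rather than letting the library generate them, is the optimization described above.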


Figure 6

Texture Creation
As discussed previously, textures are the “decals” that are mapped to the polygons to make the 3D model look realistic. Textures can be created from photographs of real objects or can be created from scratch using 2D drawing software. Let’s say you have created the 3D mesh and UV map for a model car (Figure 7a). If you didn’t buy or download the 3D model, one way to get textures is to take digital photographs of the model car from the top, bottom, sides, front, and back, which will give you all the reference photographs you need to apply the images to your UV map.

Next, load the photographs into an image editor that has the ability to cut irregularly sized images out of the photographs, and paste them into the UV map reference image (Figure 7b). In some cases, you will need to stretch the image to cover the polygonal mesh portion you are texturing. You will need to be able to “blend” the pixels to the edge of the polygonal mesh to create smooth edge transitions. Photoshop and Gimp are ideal tools for this. Repeat this process for every polygonal mesh section contained within your texture map until the map is completed. For example, cut out the picture of a tire from the photograph and scale it to cover the tire section of the UV map. Then cut out the side of the car from the photograph and stretch and scale it to cover the side section of the UV map (Figure 7c).


Figure 7

Once you have done this and you render the 3D model to the display, you will see the finished 3D model looking exactly the way you want it (Figure 8). It still amazes me that a strange-looking collection of chopped-up images contained in a texture can end up looking like a finished and polished 3D model.


Figure 8

Textures, as we have discussed, are simply pixels that make up an image. These images can be saved as .JPG, .PNG, or any number of other standard formats. The images have a specific color depth; they can be 4-, 8-, 16-, or 24-bit color or any other color depth you choose. They can contain alpha values (or not) and can be compressed. Ultimately, however, the texture must be converted into a format that the 3D graphics controller understands, which is the frame buffer’s native color-depth format. For now, I recommend saving textures as 24-bit color .PNG files with alpha. This format can be easily converted into any other format necessary for virtually any 3D graphics controller.
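That final conversion step often means packing 24-bit color into a 16-bit frame-buffer layout such as RGB565, which is common on embedded controllers. The sketch below shows the packing for one pixel; your controller's native format may differ:

```python
def rgba8888_to_rgb565(r, g, b, a=255):
    """Pack one 8-bit-per-channel pixel into 16-bit RGB565:
    5 bits red, 6 bits green, 5 bits blue (alpha is discarded).
    Shifting right keeps each channel's most significant bits."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
```

Green gets the extra bit because the eye is most sensitive to green; white (255, 255, 255) packs to 0xFFFF and black to 0x0000.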

Creating 3D models is a three-step process of creating the 3D polygonal mesh, the texture map, and the texture. Mastering this process is both an art and a science. But the rewards are well worth it, for 3D objects enable the embedded user interface to convey a lot of information in little space, significantly enhancing the user experience.


In Part 2 of The basics of 3D authoring for embedded systems designers, the author takes you through the process of displaying 3D models on an embedded device that supports 3D graphics.

Rick Tewell is the Director of Engineering for Fujitsu Semiconductor. He has a long history in embedded electronics and is an embedded graphics specialist with more than two decades of experience. He also served in a variety of posts with Ligos Corp., Sequoia Advanced Technologies and Columbia Data Products prior to joining Fujitsu Semiconductor.
