Tuesday, June 30, 2009

OpenGL ES 2.0 Programming Guide has 3Gs Sample Code

The authors of the OpenGL ES 2.0 Programming Guide have created Xcode projects for all the code provided in the book. This makes the book a lot more interesting from the perspective of an iPhone developer. Anyone read this book and have an opinion on it?

If you download the sample code for the iPhone 3Gs, make sure you grab the build instructions, as it's a little more complex than just opening Xcode and pressing ⌘B.

Beginning iPhone 3 Development

Well, I guess it's okay to talk about this since it's on Amazon: Dave and I have completely updated and revised Beginning iPhone Development for the 3.0 SDK.



Now, the question some people likely have is, "should I buy the new version?" Let me be honest with you: For many of you, the answer is definitely "no". While Dave and I certainly would appreciate you buying the new edition, there's not a lot of truly new material, and we don't want anyone buying the book under false pretenses. We've incorporated the errata and clarified some of the conceptual material based on feedback from readers and on having had another year of working with the iPhone SDK. We have also tweaked the code to match Apple's current coding style (moving the IBOutlet keyword to the property, for example), and in several cases modified the code to use SDK 3 features, such as using the faster single-step autorotation instead of the old 2-step autorotation.

But, as far as completely new material, we don't include much. There's really only some discussion of table view cell styles and, in the persistence chapter, a new introduction to Core Data. Table view styles are cool, but they're easy enough to understand and certainly don't justify a new purchase for readers who have mastered table views already.

Core Data is probably something a lot of you are interested in, but the coverage in Beginning iPhone 3 Development is pretty introductory-level. We took that same persistence application that we wrote using archiving, property lists, and SQLite, and re-wrote it one more time using Core Data. The application is simple enough that we really don't cover most of the more difficult aspects of Core Data (but hold that thought).

I don't want to discourage you from buying the revised book, but Dave and I both feel strongly that we don't want readers to feel duped into buying it, and unfortunately, the description of the book on Amazon right now is incorrect. The way publishing works is that publishers often put new book descriptions into the computer system that Amazon pulls from long before the book is actually available, sort of as a placeholder. In fact, sometimes publishers put the description into the system before the book is even written. The description now on Amazon for Beginning iPhone 3 Development is still a holdover from that placeholder and talks about some material that is not actually covered in the updated book, like in-app purchase, push notifications, and MapKit. Unfortunately, we have to wait for the corrected description to ripple through the various computer systems until it shows up on Amazon.

But...

Dave and I do have another project in the works that does cover Core Data in much more detail - several chapters in fact. It also covers many of the new SDK 3.0 features like GameKit, MapKit, Push Notifications, and In-App Purchase in great detail. Plus, we cover some more intermediate and advanced topics such as networking and concurrency. The new book has not officially been announced, but the title of the book will be More iPhone 3 Development. We don't have an availability date yet, but we are furiously working on the book as we speak and will get it done as quickly as we can without cutting any corners.

The reason we have decided not to cover many of the new 3.0 APIs in the updated version of Beginning iPhone Development is precisely because we didn't want people to feel like they had to buy a book they already owned in order to get the new material. We felt that the persistence chapter needed to mention Core Data, but otherwise, we wanted to put all the new material in a second book to avoid forcing people to buy the first book again. At the same time, we felt that the release of the new SDK warranted an update to the first book so that new readers wouldn't be confused by the differences. It also gave us a chance to incorporate errata and clean up a few things.

I hope that's all clear, and I apologize for any confusion the Amazon description may have caused.

Friday, June 26, 2009

YouTube: 400% Daily Increase in Mobile Uploads since 3Gs

The YouTube Blog is reporting that there has been a 400% increase per day in mobile uploads to their service since the iPhone 3Gs became available. If you read my earlier post, you know that video upload was the first new feature to really grab my interest (and unexpectedly so), so I feel a little vindicated and less weird.

via Daring Fireball

Wednesday, June 24, 2009

Blender Export Script

I've revised the Blender Objective-C Export Script. It was a relatively minor change - I was calculating the vertex normals based on the face normals, which was unnecessary work since Blender keeps track of vertex normals as well. On larger models, this version should perform better than the original version since it doesn't have to calculate vertex normals by looping through all vertices on all faces looking for matches.

Empty OpenGL ES Application Project Template Updated for 3.0

I have updated my OpenGL ES Xcode project to work with SDK 3.0. You can find the new version right here.

This new version is actually a rewrite from the ground up, and it fixes several issues that the old one had, including the problem where the controller wasn't receiving touch events and, of course, the fact that it didn't work on 3.0. It also includes more OpenGL-related convenience functions, and a class that makes it easier to do texture mapping.

Using 3D Models from Blender in OpenGL ES

One question I get asked a lot is "how do I load 3D models into OpenGL ES?"

Of course, the answer to that isn't a simple one. If you followed my earlier posts on importing Wavefront OBJ files, you're probably aware of that already. There are many file formats, and none of them are ideal for loading into a resource-constrained device like the iPhone.

Apple recommends that 3D objects be stored in header files as static arrays. This obviates the need to do any loading or transforming or conversion at all. But… there's not really any easy way to create those header files from within 3D software packages.
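Just so you can picture it, here's a minimal sketch of what such a header might look like for a trivial one-triangle model. The array names and values here are hypothetical placeholders (a real export would contain far more entries), but the idea is simply plain static arrays that get compiled directly into your binary:

#import <OpenGLES/ES1/gl.h>

#define kModelVertexCount 3

// One x, y, z triplet per vertex.
static const GLfloat ModelVertices[kModelVertexCount * 3] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};

// One normal per vertex.
static const GLfloat ModelNormals[kModelVertexCount * 3] = {
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
    0.0f, 0.0f, 1.0f,
};

// One u, v texture coordinate pair per vertex.
static const GLfloat ModelTexCoords[kModelVertexCount * 2] = {
    0.0f, 0.0f,
    1.0f, 0.0f,
    0.5f, 1.0f,
};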

Well, there is one way now. The open source 3D program called Blender has a very extensible architecture, making it relatively easy to write custom export modules. Blender's scripting architecture is based on Python, a language that I'm not particularly familiar with, but I hacked out something that works. So, let's say that you've got an object in Blender:

[Screenshot: the object in Blender]


And you want to load it into a program you're writing for your iPhone:

[Screenshot: the same object rendered on the iPhone]
Note: I know the screenshots don't actually match - I cropped the texture after exporting from Blender to make it more iPhone-sized. I didn't want to resize it for fear that the texture would get too small to see.


All you have to do is take this script and install it into the scripts folder in Blender.

There's a catch, however. The Mac OS X version of Blender doesn't follow the same rules as the versions for other platforms. If you look up how to install a script in Blender, it will tell you to add the script to ~/.blender/scripts/. That directory doesn't get created on the Mac, and if you manually create it, you will lose all the delivered scripts.

Instead, you have to actually install the script into the Blender.app bundle. To make things even more gnarly, the scripts are stored in an invisible folder inside of the Blender bundle. The easiest way to copy the script is to use Terminal.app and the Unix cp command. The scripts folder is located at:
/path/to/Blender.app/Contents/MacOS/.blender/scripts
So, if Blender is installed in your Applications folder inside of a folder called Blender, you could use the following command to copy the unzipped script into Blender:
cp objc.py /Applications/Blender/Blender.app/Contents/MacOS/.blender/scripts/

Every time you upgrade your Blender install, you'll have to reinstall this script. The next time you start Blender after copying the script, you will find an entry in the Export menu called Objective-C Header (.h). This is intended to be used with texture-mapped objects, and it only exports the active object, not all selected objects or all objects. Because of the way OpenGL ES uses texture coordinates, this script will create an inefficient version of non-texture-mapped objects, so I am going to create a separate version for those.

I've also created a sample project that shows how to use the exported header file. It's actually quite easy. You just pass the arrays from the header into the various OpenGL calls, like this:

    glVertexPointer(3, GL_FLOAT, 0, CubeVertices);
    glNormalPointer(GL_FLOAT, 0, CubeNormals);
    glTexCoordPointer(2, GL_FLOAT, 0, CubeTexCoords);

You can then either use glDrawArrays() or glDrawElements() as you see fit. There are indices provided for glDrawElements(), but since the object is exported as triangles, glDrawArrays() works just fine too.
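In case it helps, here's a sketch of what the surrounding draw code might look like. The kCubeVertexCount and CubeIndices names are hypothetical stand-ins for whatever count and index arrays your exported header actually defines:

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, CubeVertices);
    glNormalPointer(GL_FLOAT, 0, CubeNormals);
    glTexCoordPointer(2, GL_FLOAT, 0, CubeTexCoords);

    // Either draw the arrays directly as triangles...
    glDrawArrays(GL_TRIANGLES, 0, kCubeVertexCount);

    // ...or draw using the provided indices:
    // glDrawElements(GL_TRIANGLES, kCubeIndexCount, GL_UNSIGNED_SHORT, CubeIndices);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);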

Before running the script, make sure you put the object into edit mode and select Mesh->Faces->Convert Quads to Triangles. This script will not convert to triangles for you, and OpenGL ES requires triangles. You also need to load and bind the texture you used in Blender, or one that you created based on Blender's exported UV template.

As I said before, I am not a very experienced Python programmer, so if you want to suggest improvements, I'm happy to hear those suggestions. You will not hurt my feelings one little bit. This is what I call a "brute force" script - it gets the job done, but perhaps not with the elegance it should have.

If you want to play around with the simple Blender project I used in the test Xcode project, you can find that right here (right-click and save to disk).

I hope to create a future version that interleaves the data as suggested by Apple, but for now, I was just thrilled to get something working.

Monday, June 22, 2009

Mobile Orchard Two Day iPhone Training Class

Dan Grigsby, the founder of Mobile Orchard, just wrote to let me know that he has started offering two-day iPhone programming workshops targeted at developers coming from other platforms like Java and .Net. The workshops are intensive and very hands-on. Over the course of the workshop, Dan walks you through developing seven complete applications and covers many of the core iPhone technologies, including several that are new with the 3.0 SDK such as Core Data.

There are upcoming workshops in the San Francisco Bay Area and in the Seattle area, with more to come in the future.

Dan has been nice enough to offer a $200 discount on the workshop to my readers. You can get this discount by entering the discount code "jeff". Dan has also authored a great series of tutorials that you might want to check out.

A Weekend with the 3Gs

Okay, I've now had my iPhone 3Gs for a few days. My wife's out of town, so I'm on single-parent duty right now, which means I haven't gotten a chance to play with the phone as a developer yet, but I did get quite a bit of time with the phone as a user. I'm still feeling really positive about my purchase. On the surface, the revision from the 3G to the 3Gs doesn't seem that dramatic - same form factor, same operating system. But, in terms of user experience, the difference is huge.

The feature that has impressed me the most is not one that I thought it would be. The ability to take videos is huge, and I didn't really realize how huge until I had a 3Gs in my pocket. The quality of the video is pretty darn decent, and the tap-to-focus works really well. It also adjusts the light-metering based on where you tap, so if you tap to focus on something closer to the camera, for example, it will adjust the light so that the object you focus on can be seen better.

Man, I wish I had this twelve years ago. As a parent, the ability to take short video clips is really great. Pictures are wonderful, but being able to remember what your kids sounded like and to see their mannerisms is something else entirely. You forget things, even things you wouldn't think you'd forget. Having short reminders in the form of videos is priceless.

We bought a camcorder when my oldest, now twelve, was born, and were diligent about taking video when she was young. As she got older, and the other kids were born, and life got busier, we got less and less good about taking video because it was a hassle. After a few years, we stopped taking video almost altogether, except on vacation sometimes. The camera was big and bulky and importing the video into the computer was a painful, multi-step process. Now, my phone doubles as a video camera, so it's always with me, and importing the video is as easy as plugging it into my computer and pressing a single button.

This weekend, I took several movies and pictures of my boys as we were out and about, and was able to immediately send them to my wife and daughters who are out of town, as well as to my parents who live in Florida. It's pretty amazing how quickly the iPhone 3Gs compresses and sends movies to YouTube over the 3G connection. The quality of the video does suffer a bit, however, when you send it to YouTube. There is a noticeable degradation when you compare the YouTube video to the original pulled from the phone through iPhoto. I couldn't seem to find any way to tell it to use less compression, which I'm guessing is done on purpose to preserve bandwidth. It would be nice to have the option to use less compression, at least when sending over wi-fi.

The ability to zoom and a little higher resolution for still images would be great, but I'm being pretty demanding there. Cell phones have had video for a while, but none have had video like this. I think the video camera will be my most-used feature of this phone, since it will let me share, almost immediately, what's going on with relatives who live far away.

The speed increase is noticeable for me. I upgraded from a first-generation Edge-based iPhone to a 3G phone, which is quite a nice improvement in connection speed. Coverage is quite good around here (it wasn't a year ago), and I'm very happy with the speeds I'm seeing. Even watching YouTube videos is quite tolerable over 3G and Mobile Safari seems lightning fast other than the short latency delay at the start. I'm sure I'll be jaded and wanting more speed in a few months, but for now, I'm really happy with the internet speed I get on this phone.

I haven't pulled down any games that really utilize the capabilities of the new graphics chip and additional memory, nor have I done any OpenGL ES 2.0 work on my own yet, so my impressions of that will have to wait for another day, but I'm sure it's going to floor me. Everything on this phone is snappy, and the demos we saw at WWDC of what can be done with OpenGL ES 2.0 are pretty phenomenal.

Voice control seems to work well. It's not a feature I really care about much, but I've tried about a half dozen commands and they were all interpreted correctly the first time. The last phone I had with voice control was much dodgier in that respect, so if voice control is something that matters to you, I think you'll like the 3Gs.

The oleophobic screen coating, which is supposed to resist fingerprints and smudges, works pretty well also. It's not perfect - it is possible to smudge the screen, especially if your fingers are really dirty. I found out first hand that having wet clay on your fingers, for example, will smudge the screen. But under ordinary use, you should see very few fingerprints, and the screen cleans easily on your shirt or other soft cloth.

Walking directions with the magnetometer are great, and that's quite a boon in a strange city. I can't tell you how many times I've been somewhere traveling and had to walk a block (or two) in order to get my orientation. Knowing which way you're heading is a really great feature and it's implemented almost flawlessly.

I would love to tell you all about MMS and tethering, but um... y'know... AT&T. sigh.

In fact, AT&T is almost my only complaint about this phone. My other complaint is one that I know is simply a limitation of current technology, but the battery life on this thing isn't as good as my first-generation iPhone's. I know that the 3G radio sucks additional power and all that, but it's still frustrating. Especially when I'm traveling, I use my phone a lot. I use it to check e-mail, tweet, make calls, and pass the time on long flights playing games, watching movies, and listening to music. Standby time and music-listening time seem to be at least as good as my old phone, but if I'm actively using the phone with the screen on and with 3G service, the battery goes frustratingly fast. An external battery pack is probably a good investment if you go long periods away from your computer or power outlet.

I saw this morning on TUAW that Apple sold a million iPhone 3Gs's over the weekend. I'm not surprised, as there were still long lines yesterday (Father's day), on the phone's third day of sale. I don't think the sales are going to stop, either. I think it's going to continue to sell very strongly based on strong word-of-mouth. Everyone I've talked to who got one is thrilled with the phone. Granted, many of my friends are, like me, borderline fan-boys, but even taking that into account, it's still an impressive phone.

If you're on the fence, I say go for it if you can afford it. It's a great update that addresses most of my complaints with the original. If you're a developer, especially if you're a game developer or developing anything with a lot of visual impact, I'd say the 3Gs is not only compelling, but necessary. If you want to create cool looking programs, you want to be able to leverage the power of OpenGL ES 2.0 and the new graphics chip and extra memory that the 3Gs provides.

Saturday, June 20, 2009

Willpower Fail

After posting yesterday that I was going to be responsible and wait a few weeks before getting an iPhone 3Gs, I ended up running out and getting one this morning when I found out the nearest Apple store had some still in stock. I got a 32 gig black model. I haven't used it much, but first impressions are pretty fabulous. Video looks good, touch-to-focus works really well, the camera does much better in low light, and sound quality seems to be better. But the biggest difference you notice is how snappy everything is. Apps launch faster, and the keyboard responds quicker and never hiccups.

I'll post my full impressions after I've had a day or two to play with it, but I'm really excited to see what people do with this new hardware. I think we're going to see some awesome games and other programs before long.

Friday, June 19, 2009

OpenGL ES 2 Shaders

Looking for a good starting place for creating shaders for the new OpenGL ES 2.0 available on the iPhone 3Gs?

The orange book has an online home that's well worth looking at. The Orange Book (the current version of which actually has more purple than orange) is the official book on the shading language. If you're thinking of writing shaders, this is the first resource you should get. The official site has example shaders and links plus errata, so you might want to bookmark it.

You should also check out this page, which has lots of shader resources, including a zip full of shaders that you can use.

Lastly, in the Developer folder on your hard drive, in /Applications/Graphics Tools, there's a program called OpenGL Shader Builder that you might want to check out.

Upgrading etc.

Sorry for the quiet week. Been suffering from the post-WWDC blues, a combination of being massively behind on my workload and massively tired.

I have updated several of my iPhone projects that stopped working after upgrading to 3.0, including a completely new version of the OpenGL ES Xcode template. The only problem is, I did all the updating on a pre-release version of Xcode under Snow Leopard. I need to find some time to install the release version of SDK 3 and "downgrade" the Xcode projects before I can post them without violating the NDA.

The next two installments of the OpenGL ES from the Ground Up series are, tentatively, drawing text and hit testing, although I may push those off and insert an OpenGL ES 2.0-specific posting in there first, because I know a lot of people are interested in the ES 2.0 stuff. Perhaps I'll write an introduction to shaders as my next one. In any case, I'm unlikely to get another OpenGL ES posting done until I've got a few more chapters under my belt, so probably two weeks or so.

In completely unrelated news, the inimitable Wil Shipley tweeted about an interesting blog post by somebody from the Microsoft camp today.

I can only assume this is the latest in their recent ham-fisted campaign to win back marketshare for their products. Other parts of it include a page of outright silly assertions labeled as "facts" and a $10,000 bribe to use IE8.

In this blog post, Guy Claperton insinuates that the twitterverse burst into flames over problems with the iPhone OS 3.0 upgrade. Now, it may be a little immodest to say this, but I think I probably have my finger a little closer to the pulse of the iPhone community than a "freelance journalist" who specializes in small business and is on Microsoft's payroll, and I didn't witness anything close to a meltdown from the 3.0 rollout. There were a few complaints here and there, sure, but that's to be expected with a major OS upgrade. And make no mistake, this was a major upgrade, delivered only a year after 2.0, which is pretty impressive. How long did Vista take, again?

Guy then goes on to insinuate that Microsoft users are smarter because they don't upgrade immediately.

How's that again? Really? I don't think I've seen a more blatant form of apologism out of anybody ever, including Microsoft. Right, people didn't upgrade to Vista because they were responsible and cautious.

Bullshit.

People didn't upgrade to Vista because it was a huge flaming pile of dog shit, and instead of fixing it, Microsoft spent millions justifying and defending it, essentially telling their customers they were wrong in the process. Advertising can do a lot, but it can't force people to buy dog shit that is currently on fire. Corporations didn't upgrade to Vista because it was expensive, required hardware upgrades, in many cases also required software upgrades, and offered no compelling new features.

Guy writes off "twitter going crazy for a few hours" to the "fan mentality".

Wow. Yeah, Microsoft doesn't want fans. Seriously, do Microsoft bloggers believe the stuff they write, or are they given an agenda from Marketing and then have to craft something that fits it, like some eighth-grade homework assignment, only in hell?

The fact is, fans are an indication that a company is doing something right. Every company has their fan-boys (and girls) - people who love that company regardless of what they do. But a critical mass of true fans - people who love a company's products so much they wait in line for them - is what companies strive for. It's an indication of success, and there's just no way to spin that to make it look bad.

The 3.0 SDK offered some very neat and very desired new functionality. People wanted the upgrade. Plus, the upgrade was free except for iPod Touch users, for whom it cost a small amount (about $10, if I remember correctly). A few upgrade glitches do not a Vista make, and it's hard to imagine that iPhone OS 3.0 could be labelled anything other than a success.

In related news, I am NOT standing in line for a 3Gs today, as much as I'd like to be. I'm being responsible and waiting for my next royalty check before upgrading my phone. This has been a crazy month for finances, between WWDC and a problem with my wife's car that set us back well over $1k. On top of that, our clothes dryer blew, our dishwasher blew, and a huge silver maple in our backyard fell down. Because of the latter, we had to hire first an arborist to assess the situation, then a tree removal company to remove the fallen tree from our neighbor's yard and to take down a few other standing trees that posed a similar threat of falling. Plus, my wife is taking my daughters on a weeklong trip to Florida to celebrate their birthdays.

So, as you're playing with your new iPhone 3Gs with all its wonderful features, extra memory, and extra speed, please think of me sitting at my desk working with my first generation iPhone.

Saturday, June 13, 2009

An Amazing WWDC

Well, WWDC ended yesterday. I'm a little sad about that, though I am looking forward to getting home to my wife and kids. I miss them quite a lot. I find it hard to believe that I used to travel as much as I did. I'm not sure how I handled doing that as long as I did.

Right now, I'm waiting at my hotel in San Francisco until it's time to go to the airport. My brain's not yet quite awake enough to write code, so I thought I'd finally get around to doing a short post about the week. I had the best intentions of doing some status updates during the week, but time was a scarce commodity this week, and many of the nights I got less than four hours sleep (something that used to be much easier for me).

I have very few complaints about this year's show. There were a few sessions that were, in my mind, mislabeled a bit as to their target audience (high-level sessions being marked as "Expert", for example). The name badges were also printed in 14-point font so that they couldn't be easily read. But those were tiny blips in an absolutely great week. I doubt that many Apple folks read my blog, but for any of you who might stumble across it, thank you all so much for this past week. I know you all kill yourselves for the months leading up to WWDC. I certainly appreciate the effort you put in, and so does most everybody I talked to this week.

Even if the content hadn't been so great, this week would still have been one amazing week. Thanks to the extraordinary power of the intertubes, most especially the power of Twitter, I was able to meet an awful lot of what my wife calls my "imaginary friends" - people that I have interacted with only on the internet. I also met an awful lot of new people.

This week was very special for me personally because it represents the completion of a very long-held goal of becoming a part of the Mac (and now iPhone) developer community. For years, Cocoa was my hobby, done as my work and personal obligations allowed. While I had written for MacTech and participated on and off in the Cocoa-Dev mailing list for years, I was always on the sidelines and never felt like I was truly a part of the community. I think I can safely say that I have achieved my goal, and it feels good.

It has struck me for years that in the Mac development community there is a very low asshole ratio. Almost everybody is nice. I mean, direct competitors not only get along, they often even consider each other friends. People are relatively unselfish and are happy to help others, regardless of skill level. The iPhone dev community seems to have picked up on this trend, something that I'm very glad about.

And as weird as it sounds, though the people here this week came from many different places and different backgrounds, there was a sense of camaraderie and friendship that's hard to explain or imagine.

Though most of what I learned this last week is under NDA, I can tell you that some (not all, but some) of the information will fall out of NDA on June 17th, which is only a few days away. At the very least, I plan to post my updated OpenGL ES template for 3.0 (already done, it just can't be posted until the 3.0 GM is released to the public) as well as updated versions of the projects that accompany the OpenGL ES articles. I've also got two more OpenGL ES articles in the pipeline, though it may be quite some time before I can get them done. I'm very behind on some writing projects thanks to this week and have to give that work priority.

Let's see... this is a little bit of a rambly posting, and I apologize. It takes a few days for my brain to recover after WWDC. I didn't take many pictures this week, but I'll finish off by sharing just a couple of things I did shoot.

The first truly odd thing of the week was the fact that a porn producer targeting the iPhone platform paid a bunch of girls to circle Moscone West wearing bikinis while advertising a website and a soi-disant "launch party". For the benefit of those who weren't there, or who got in line early, here is some video (don't play if watching bikini girls yelling "iPorn" is not appropriate where you are):



I'm hardly a prude and do not typically object to seeing attractive young women in very little clothing, but I do have to say this whole thing struck me as odd and somewhat out of place. I can understand the thought process behind this - there were going to be 6,000 geeks in one place, the vast majority of whom were young men. But... it just seemed like a bad fit. And though this is probably an indication that I'm getting old, my first thought was to honestly feel bad for the young women. It was so not bikini weather that morning. They were, however, as one commenter on YouTube put it, "troopers" about the whole situation. I can't imagine it's easy to be excited about freezing your ass off for money.

Here's another odd, yet wonderful thing from the week for me: I got to meet one of Apple's co-founders: Woz.



For those of you who haven't met us, I'm on the left, Dave Mark is on the right. And though I look drunk in that shot, it was taken just as we arrived and I hadn't actually started drinking yet. Oh, well.

Woz (in the middle of course) was just as nice as I've always heard. Having learned to program on a machine that he designed and for which he wrote much of the operating system, it was really something to get to meet him.

Well, time to pack so I can make the long journey back to my home. Safe travels to everyone on their way home.

Sunday, June 7, 2009

OpenGL ES from the Ground Up Part 8: Interleaving Vertex Data

Technote 2230 makes many suggestions for improving the performance of your iPhone apps that use OpenGL ES. You're now far enough along in your understanding of OpenGL ES that you should read it. No, really. Go read it, I'll wait.

Okay, done? Under the section titled Optimizing Vertex Data, there's a somewhat cryptic recommendation to "submit strip-ordered indexed triangles with per vertex data interleaved". When Apple makes a recommendation, they usually have a good reason for it, so let's look at how we comply with this one.

First of all, let's look at what it means. Let's break it down:

Strip Ordered: In other words, if your model has adjacent triangles, submit them as triangle strips rather than submitting each triangle individually. We've talked about using triangle-strips in earlier installments, so you already know a little about doing that. It's not always possible to use triangle strips, but for a good many objects you will be able to, and whenever you can, you should because using triangle strips greatly decreases the amount of vertex data you have to push into OpenGL ES every frame.

Indexed: This is also nothing new. We've been using vertex indices for a while now. Our spinning icosahedron uses them to create twenty faces with only twelve vertices. glDrawElements() draws based on indices rather than vertices.

Heck, we're doing great so far, aren't we? So far, we seem to be doing all the right things! Let's look at the last part of the recommendation, however:

with per vertex data interleaved: Okay, hmm.. What the hell does that mean?

Okay, time to test out your memory. Do you remember, in several of the past installments, when we talked about functions like glVertexPointer(), glNormalPointer(), glColorPointer(), or glTexCoordPointer()? In earlier installments, I told you not to worry about the parameter called stride and to just set it to 0.

Well, now you can start worrying about stride, because that's the key to interleaving your per vertex data.

Per Vertex Data

So, you might be wondering what "per vertex data" is and how you would interleave it.

You remember, of course, that in OpenGL ES we always pass geometry in using a vertex array, which is an array containing sets of three GLfloats that define the points that make up our objects. Along with that, we also sometimes specify other data. For example, if we use lighting and need vertex normals, we have to specify one normal per vertex in our normal array. If we use texture coordinates, we have to make sure that our texture coordinate array has one set of texture coordinates per vertex. And if we use a color array, we have to specify one color per vertex. Do you notice how I keep saying "per vertex"? Well, these types of data are what Apple is referring to when they say "per vertex data" in that Technote. It's anything that you pass as an array into OpenGL ES that supplies any kind of data that applies to the vertices in the vertex array.

Interleaving

Up until now in this series, we've created one array to hold the vertex data, and additional separate arrays to hold the normal data, color data, and/or texture coordinate data, like so:

separatearrays.png


What we're going to learn how to do today is to smush all this data together into a single contiguous chunk of memory:

interleaved.png


Don't worry if you can't read the code in that illustration. When it becomes important, I'll give you the code listing again; that illustration is just to show that we're going to have all of our vertex data in a single glob of memory. What that does is put all the data describing a single vertex together in one place in memory, which allows OpenGL faster access to the information about each vertex. In today's installment, we're going to interleave vertices, normals, and color data, though the exact same technique would work for texture coordinates, or for just interleaving vertices and normals. In fact, in the accompanying Xcode project, there are data structures defined to handle all three of those interleaving scenarios.

Defining a Vertex Node

In order for this to work, we need a new data structure. In order to interleave vertices, normals, and color data, we need a structure that looks like this:

typedef struct {
    Vertex3D vertex;
    Vector3D normal;
    Color3D color;
} ColoredVertexData3D;


Pretty straightforward, huh? You just create a struct with each piece of per-vertex data that we're using.

Next, of course, we need to populate our vertex data, so we need to combine those three static const arrays into a single one. Here's what the same icosahedron data looks like specified using a static array of our new datatype:

static const ColoredVertexData3D vertexData[] = {
    {
        {0, -0.525731, 0.850651},          // Vertex |
        {0.000000, -0.417775, 0.675974},   // Normal | Vertex 0
        {1.0, 0.0, 0.0, 1.0}               // Color  |
    },
    {
        {0.850651, 0, 0.525731},           // Vertex |
        {0.675973, 0.000000, 0.417775},    // Normal | Vertex 1
        {1.0, 0.5, 0.0, 1.0}               // Color  |
    },
    {
        {0.850651, 0, -0.525731},          // Vertex |
        {0.675973, -0.000000, -0.417775},  // Normal | Vertex 2
        {1.0, 1.0, 0.0, 1.0}               // Color  |
    },
    {
        {-0.850651, 0, -0.525731},         // Vertex |
        {-0.675973, 0.000000, -0.417775},  // Normal | Vertex 3
        {0.5, 1.0, 0.0, 1.0}               // Color  |
    },
    {
        {-0.850651, 0, 0.525731},          // Vertex |
        {-0.675973, -0.000000, 0.417775},  // Normal | Vertex 4
        {0.0, 1.0, 0.0, 1.0}               // Color  |
    },
    {
        {-0.525731, 0.850651, 0},          // Vertex |
        {-0.417775, 0.675974, 0.000000},   // Normal | Vertex 5
        {0.0, 1.0, 0.5, 1.0}               // Color  |
    },
    {
        {0.525731, 0.850651, 0},           // Vertex |
        {0.417775, 0.675973, -0.000000},   // Normal | Vertex 6
        {0.0, 1.0, 1.0, 1.0}               // Color  |
    },
    {
        {0.525731, -0.850651, 0},          // Vertex |
        {0.417775, -0.675974, 0.000000},   // Normal | Vertex 7
        {0.0, 0.5, 1.0, 1.0}               // Color  |
    },
    {
        {-0.525731, -0.850651, 0},         // Vertex |
        {-0.417775, -0.675974, 0.000000},  // Normal | Vertex 8
        {0.0, 0.0, 1.0, 1.0}               // Color  |
    },
    {
        {0, -0.525731, -0.850651},         // Vertex |
        {0.000000, -0.417775, -0.675973},  // Normal | Vertex 9
        {0.5, 0.0, 1.0, 1.0}               // Color  |
    },
    {
        {0, 0.525731, -0.850651},          // Vertex |
        {0.000000, 0.417775, -0.675974},   // Normal | Vertex 10
        {1.0, 0.0, 1.0, 1.0}               // Color  |
    },
    {
        {0, 0.525731, 0.850651},           // Vertex |
        {0.000000, 0.417775, 0.675973},    // Normal | Vertex 11
        {1.0, 0.0, 0.5, 1.0}               // Color  |
    }
};


Here is how we pass the information into OpenGL. Instead of passing in the pointer to the appropriate array, we pass the address of the appropriate member of the first vertex in the array, and provide the size of that struct as the stride argument.

    glVertexPointer(3, GL_FLOAT, sizeof(ColoredVertexData3D), &vertexData[0].vertex);
    glColorPointer(4, GL_FLOAT, sizeof(ColoredVertexData3D), &vertexData[0].color);
    glNormalPointer(GL_FLOAT, sizeof(ColoredVertexData3D), &vertexData[0].normal);


The last parameter in each of those calls points to the data corresponding to the first vertex. So, for example, &vertexData[0].color points to the color information for the first vertex. The stride parameter identifies how many bytes of data need to be skipped before the same type of data for the next vertex can be found. That might make a little more sense if you look at this diagram (sorry, it's wide, you may have to expand your browser to see all of this one):

stridediagram.png
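Putting the whole thing together, the draw code for the interleaved version looks something like the sketch below. It reuses the icosahedronFaces index array from the earlier installments; the 60 is just twenty triangles times three indices each:

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    // One contiguous block of memory; the stride tells OpenGL ES how far to
    // jump from one vertex's data to the same kind of data for the next vertex.
    glVertexPointer(3, GL_FLOAT, sizeof(ColoredVertexData3D), &vertexData[0].vertex);
    glColorPointer(4, GL_FLOAT, sizeof(ColoredVertexData3D), &vertexData[0].color);
    glNormalPointer(GL_FLOAT, sizeof(ColoredVertexData3D), &vertexData[0].normal);

    glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_BYTE, icosahedronFaces);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);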


What could be easier, right? If you don't feel like typing it all in, you can download the interleaved version of the spinning icosahedron. I've also updated my OpenGL ES Xcode Template with these new data structures.

We're still not using triangle strips, but merging triangles into triangle strips is going to have to be a subject for a future installment, because it's time to go meet some people at WWDC.

Yet Another Post for WWDC First-Timers

Brent Simmons of NetNewsWire fame has a fine blog posting full of information for WWDC first-timers. I'm not a coffee drinker, so I can't really comment on his statements with regard to that particular beverage, but in general, if you have a caffeinated beverage of choice, you probably want to arrange to get some and bring it with you. There are beverages provided, but not necessarily when you'll need them or in sufficient quantity to ensure you'll get one.

I do find it odd that Brent recommends Denny's for late-night after-booze dining but lambasts the Moscone food as "awful". The food at Moscone West is far from stellar, but it's considerably better than the nearby Denny's. Then again, we all have different standards when we're sober than when we're three sheets to the wind, and I've ended up there on more than one occasion myself, so...

Personally, I never found the Moscone food to be that bad and lunchtime can be a great time for visiting the labs without having to give up a session.

Saturday, June 6, 2009

T-Minus Sixteen Hours

Okay, I'm all checked in for my early morning flight tomorrow and I am greatly looking forward to WWDC this year.

If you see me, please stop and say "hi!". I'll be very easy to find on Tuesday, as I'll be the guy wearing an amazing Three Wolf T-Shirt :) The other days, well, I'll tweet my location periodically. I rarely leave Moscone West during the days and am going to try and get to as many of the after-hour parties and gatherings as I can.

I have not yet decided what I'm going to do about the keynote. Most likely I will play it by ear. I don't think I want to kill myself to get over to Moscone West really early in the morning, but I'll still be on Eastern time, so I may just be up anyway. That's what happened last year - I found myself wide awake at 4:00am without much to do. If I'm awake early, I'll be among the crazy ones. If I'm not, then I'll be one of the lazy ones in the overflow room. Either is fine with me.

I am not making predictions this year. With the exception of the obvious ones (like new iPhone hardware this year), I don't have a very good track record at guessing what will come out of Cupertino, so I'm just going to mostly stay out of the fray.

But I will go so far as to say that I don't think we're going to see a tablet or a non-AT&T phone. I think both are possibilities, and neither would send me reeling in disbelief if announced, but I'm betting against either being announced on Monday. The time doesn't feel right.

Friday, June 5, 2009

Mike Ash's Blog

I've been following the Friday Q&A blog postings from Mike Ash (of Rogue Amoeba) for quite some time now, but I don't think I've actually posted about them before. I have been remiss - if you're interested in becoming a power Objective-C programmer, you should be subscribing to Mike's RSS feed and reading his posts religiously. He gets into the guts of the runtime and covers a lot of stuff that you should know but that can be difficult to learn from the official documentation.

Thursday, June 4, 2009

OpenGL ES From the Ground Up, Part 7: Transformations and Matrices

Okay, this is the posting that I've been dreading. Conceptually speaking, today's topic is the most difficult part of 3D programming, and it's one I've struggled with.

At this point, you should understand 3D geometry and the cartesian coordinate system. You should understand that objects in OpenGL's virtual world are built out of triangles made up of vertices, and that each vertex defines a specific point in three-dimensional space, and you know how to use that information to do basic drawing using OpenGL ES on the iPhone. If not, you should probably go back and reread the first six installments in this series before tackling this monstrosity.

In order for the objects in your virtual world to be at all useful for interactive programs like games, there has to be a way to change the position of objects in relation to each other and in relation to the viewer. There has to be a way to not only move, but also rotate and scale objects. There also has to be a way to translate that virtual three-dimensional world onto a two-dimensional computer screen. All of these are accomplished using something called transformations. The underlying mechanism that enables transformations is the matrix (plural: matrices, or matrixes if you prefer).

Although you can do a fair amount in OpenGL without ever really understanding matrices and the mathematics of the matrix, it is a really good idea to have at least a basic understanding of the mechanism.

Built-In Transformations and the Identity Matrix


You've already seen some of OpenGL's stock transformations. One call that you've seen in every application we've written is glLoadIdentity(), which we've called at the beginning of the drawView: method to reset the state of the world.

You've also seen glRotatef(), which we used to make our icosahedron spin, and glTranslatef() which was used to move objects around in the virtual world.

Let's look at the call to glLoadIdentity() first. This call loads the identity matrix. We'll talk about this special matrix later, but loading the identity matrix basically resets the virtual world. It gets rid of any transformations that have been previously performed. It is standard practice to call glLoadIdentity() at the beginning of your drawing method so that your transformations have predictable results because you always know your starting point - the origin.

To give you an idea of what would happen if you didn't call glLoadIdentity(), grab the Xcode project from Part 4, comment out the call to glLoadIdentity() in drawView: and run the application. Go ahead and do it, I'll wait. What happens?

The icosahedron, which used to just slowly spin in place, scoots away from us, doesn't it? Like Mighty Mouse, it flies away and then up into the sky, exit stage right.1

That happens because we were using two transformations in that project. The vertices of our icosahedron were defined around the origin, so we used a translate transformation to move it three units away from the viewer so the whole thing could be seen. The second transformation we used was a rotation transformation to spin the icosahedron in place. When the call to glLoadIdentity() was still in place, we started fresh each frame, back at the origin looking straight down the Z-axis. So when we translated 3 units away from the viewer, the icosahedron always ended up at the same location of z == -3.0. Similarly, the rotation value, which we constantly increase based on the amount of time elapsed, caused the icosahedron to spin at an even pace, because each frame's rotation was applied to a fresh, un-rotated world.

Without the call to glLoadIdentity(), the first time through, the icosahedron is translated three units away from us and rotates a small amount. The next frame (a fraction of a second later), the icosahedron moves back another three units, and the value of rot is added to the amount the icosahedron was already rotated. This happens each frame, meaning the icosahedron moves away from us three units every frame, and the speed of rotation increases every frame.

It would be possible to not call glLoadIdentity() and compensate for this behavior, but we can't predict or compensate for transformations done in other code, so the best bet is to start from a known position - the origin, with no scaling or rotation - which is why we always call glLoadIdentity().

The Stock Transformations

In addition to glTranslatef() and glRotatef(), there is also glScalef(), which will cause the size of objects drawn to be increased or decreased. There are some other transformation functions available in OpenGL ES, but these three (combined with glLoadIdentity()) are the ones that you'll use the most. The other ones are used primarily in the process of converting the three-dimensional virtual world into a two-dimensional representation, a process known as projection. We'll touch on projection a little in this article, but in most scenarios, you don't have to be directly involved with that process other than setting up your viewport.
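To make the sequence concrete, a typical frame using nothing but the stock calls looks something like this (a sketch, not code from any particular project; remember that the order of the calls matters, because each one multiplies the current matrix):

    glLoadIdentity();                    // start from a known state: the origin, no rotation or scaling
    glTranslatef(0.0f, 0.0f, -3.0f);     // move three units away from the viewer
    glRotatef(rot, 1.0f, 1.0f, 1.0f);    // then rotate
    glScalef(0.5f, 0.5f, 0.5f);          // then draw everything at half size
    // ... submit vertex data and draw ...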

The stock transformations can get you a long way. You could conceivably create an entire game using just these four calls to manipulate your geometry. There are times when you might want to take transformations into your own hands, however. One reason you might want to handle the transformations yourself is that these stock transformations have to be called sequentially as separate function calls, each resulting in a somewhat computationally costly matrix multiplication (something we'll talk more about later). If you do the transformations yourself rather than using the transformations provided by OpenGL, you can often combine multiple transformations into a single matrix, reducing the number of matrix multiplication operations that have to be performed every frame.

It's also possible to eke out better performance by doing your own matrices because you can vectorize your matrix multiplication calls. As far as I can tell, the iPhone's behavior in this regard is not documented, but as a general rule, OpenGL ES will hardware accelerate multiplication between a vector or vertex and a matrix, but not between two transformation matrices. By vectorizing matrix multiplication, you can actually get better performance than you can by letting OpenGL do the matrix multiplication. This won't give you a huge performance boost, as there are generally far fewer matrix-by-matrix multiplications than vector/vertex-by-matrix multiplications, but in complex 3D programs, every little bit of extra performance can help.

Enter the Matrix


Obviously, I have to make a reference to the movie "The Matrix", since we're going to spend the next few thousand words talking about matrices. It's sort of a geek requirement, so let me get it out of the way:
Unfortunately, nobody can be told what The Matrix is.
Only, it's not true in this case; matrices are really not that big of a deal. A matrix is just a two-dimensional array of values. That's it. Nothing mystical here. Here's a simple example of a matrix:

simplematrix.png


That's a 3x3 matrix, because it has three columns and three rows. Vectors and Vertices can actually be represented in a 1x3 matrix (remember this for later, it's kind of important):

vertexmatrix.png


A vertex could also be represented by a 3x1 array instead of a 1x3 array, but for our purposes, we're going to represent them using the 1x3 format (you'll see why later). Even a single data element is technically a 1x1 matrix, although that's not a very useful matrix.

You know what else can be represented in an array? Coordinate systems. Watch this, it's kind of cool. You remember vectors, right? Vectors are imaginary lines running from the origin to a point in space. Now, remember that the Cartesian coordinate system has three axes:

cartesian.png


So, what would a normalized vector that runs along the X axis look like? Remember: a normalized vector has a length of one, so a normalized vector that runs up the X axis would look like this:

xaxisvector.png


Notice that we're representing the vector as a 3x1 matrix rather than a 1x3 matrix as we did with the vertex. Again, it doesn't actually matter, as long as we use the opposite for vertices and these vectors. All three of the values in this vector apply to the same axis. I know, it probably doesn't make sense yet, but bear with me; this will clear up in a second. A vector that runs up the Y axis would look like this:

yaxisvector.png


And one that runs up the Z axis looks like this:

zaxisvector.png


Now, if we put these three vector matrices together in the same order as they are represented in a vertex (x then y then z), it would look like this:

3x3identity.png


That's a special matrix called the identity matrix. Sound familiar? When you call glLoadIdentity(), you are loading that matrix right there2. Here's why this is a special matrix. Matrices can be multiplied together, and multiplying matrices is how you combine them. If you multiply any matrix by the identity matrix, the result is the original matrix. Just like multiplying a number by one. You can always calculate the identity matrix for any given size matrix by setting all the values to 0.0 except where the row and column number are the same, in which case you set the value to 1.0.
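Written as code, that rule looks something like this little sketch:

// Fill an n x n matrix (stored as a flat array of n * n GLfloats, row by row)
// with the identity: 1.0 wherever the row and column number are the same,
// 0.0 everywhere else.
void SetIdentityMatrix(GLfloat *matrix, int n)
{
    for (int row = 0; row < n; row++)
        for (int col = 0; col < n; col++)
            matrix[(row * n) + col] = (row == col) ? 1.0f : 0.0f;
}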

Matrix Multiplication


Matrix multiplication is the key to combining matrices. If you have one matrix that defines a translate and another that defines a rotate, and you multiply them together, you get a single matrix that defines both a rotate and a translate. Let's look at a simple example of matrix multiplication. Imagine these two matrices:

simplemultiply.png


The result of a matrix multiplication is another matrix with the same number of rows as the matrix on the left side of the equation and the same number of columns as the matrix on the right side (so multiplying two 3x3 matrices gives you another 3x3 matrix). Matrix multiplication is not commutative. The order matters. The result of multiplying matrix a by matrix b is not necessarily the same as the result of multiplying matrix b by matrix a (although it could be in some situations).

Here's another thing about matrix multiplication: Not every pair of matrices can be multiplied together. They don't have to be the same size, but the matrix on the right side of the equation has to have the same number of rows as the number of columns that the matrix on the left side of the equation has. So, you can multiply a 3x3 matrix with another 3x3 matrix, or you can multiply a 1x3 matrix with a 3x6 matrix, but you can't multiply a 2x4 matrix with, say, another 2x4 matrix because the number of columns in a 2x4 matrix is not the same as the number of rows in a 2x4 matrix.

To figure the result of a matrix multiplication, we make an empty matrix of the same size as the matrix on the left side of the equation:

empty3x3matrix.png


Now, for each spot in this matrix, we take the corresponding row from the left-hand matrix and the corresponding column from the right hand matrix. So, for the top left position in the result matrix, we take the top row of the left side of the equation and the first column of the right side of the equation, like so:

multbreakdown.png


Then we multiply the first value in the row from the left-hand matrix by the first value in the right-hand column, multiply the second value in the left-hand row by the second value in the right-hand column, multiply the third value in the left-hand row by the third value in the right-hand column, then add them all together. So, it would be:

firstspotcalc.png


If you repeat this process for every spot in the result matrix, then you get the result of the matrix multiplication:

simplematrixfinal.png


And look at that. We multiplied a matrix (the blue one) by the identity matrix (the red one) and the result is exactly the same as the original matrix. If you think about it, it makes sense since the identity matrix represents our coordinate system with no transformations. This also works with vertices. We can multiply a vertex by a matrix, and the same thing happens:

vertexmult.png
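Here is that same row-times-column process written out as brute-force C for 3x3 matrices (just a sketch to make the rule concrete; it's not code from the accompanying project):

// result = a x b, where a is the matrix on the left side of the equation
// and b is the matrix on the right side.
void MultiplyMatrix3x3(const GLfloat a[3][3], const GLfloat b[3][3], GLfloat result[3][3])
{
    for (int row = 0; row < 3; row++) {
        for (int col = 0; col < 3; col++) {
            // Take the row from the left-hand matrix and the column from the
            // right-hand matrix, multiply the pairs, and add the products together.
            result[row][col] = (a[row][0] * b[0][col])
                             + (a[row][1] * b[1][col])
                             + (a[row][2] * b[2][col]);
        }
    }
}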


Now, let's say that we wanted to rotate an object. What we do is define a matrix that describes a coordinate system that is rotated. In a sense, we actually rotate the world, and then draw the object into it. Let's say we want to spin an object along the Z axis. To do that, the Z axis is going to remain unchanged, but the X axis and the Y axis need to change. Now, this is a little hard to imagine, and it's not really vital that you understand the underlying math, but to define a coordinate system rotated on the Z axis, we would adjust the X and Y vectors in our 3x3 matrix; in other words, we have to make changes to the first and second columns.

vectoraxes.png



So, the X value of the X-axis vector and the Y value of the Y-axis vector need to be adjusted by the cosine of the rotation angle. The cosine, remember, is the ratio of the side adjacent to the angle to the hypotenuse in a right triangle. We also need to adjust the Y value of the X-axis vector by an amount equal to minus the sine of the angle and the X value of the Y-axis vector by the sine of the angle. The sine of an angle is the ratio of the side opposite the angle to the hypotenuse. That's hard to follow; it might be easier to understand expressed as a matrix:

zrotation.png


Now, if you take every vertex in every object in your world and multiply it by this matrix, you get the new location of that vertex in the rotated world. Once you've applied this matrix to every vertex in your object, you have an object that has been rotated along the Z-axis by n degrees.

If that doesn't make sense, it's okay. You really don't need to understand the math to use matrices. These are all solved problems, and you can find the matrices for any transformation using google. In fact, you can find most of them in the OpenGL man pages. So don't beat yourself up if you're not fully understanding why that matrix results in a Z-axis rotation.
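If you'd like to see that rotation in code anyway, here's a sketch that builds the 3x3 Z-axis rotation matrix exactly as described above, with each axis vector stored as a column. Be aware that conventions differ: if you treat your vertices as columns instead of rows, you'll want the transpose of this matrix.

#include <math.h>

// Build a 3x3 matrix describing the coordinate system rotated by 'radians'
// around the Z axis. matrix[row][column]; each column is one axis vector.
void SetZRotationMatrix(GLfloat matrix[3][3], GLfloat radians)
{
    GLfloat c = cosf(radians);
    GLfloat s = sinf(radians);

    matrix[0][0] = c;     matrix[0][1] = s;     matrix[0][2] = 0.0f;   // X values of the X, Y, and Z axis vectors
    matrix[1][0] = -s;    matrix[1][1] = c;     matrix[1][2] = 0.0f;   // Y values
    matrix[2][0] = 0.0f;  matrix[2][1] = 0.0f;  matrix[2][2] = 1.0f;   // Z values
}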

A 3x3 matrix can describe the world rotated at any angle on any axis. However, we actually need a fourth row and column in order to be able to represent all the transformations we might need. We need a fourth column to hold translation information, and a fourth row, which is needed for the perspective transformation. I don't want to get into the math underlying the perspective transformation because it requires understanding homogeneous coordinates and projective space, and it's not really important to becoming a good OpenGL programmer. In order to multiply a vertex by a 4x4 matrix, we just pad it with an extra value, usually referred to as W, which should be set to 1. After the multiplication is complete, ignore the value of W. We're not actually going to look at vertex-by-matrix multiplication in this installment because OpenGL already hardware accelerates that, so there's usually no need to handle it manually, but it's a good idea to understand the basic process.

OpenGL ES's Matrices


OpenGL ES maintains two separate matrices, both of which are 4x4 matrices of GLfloats. One of these matrices, called the modelview matrix, is the one you'll be interacting with most of the time. This is the one that you use to apply transformations to the virtual world: to rotate, translate, or scale objects in your virtual world, you make changes to the modelview matrix.

The other matrix is used in creating the two-dimensional representation of the world based on the viewport you set up. This second matrix is called the projection matrix. The vast majority of the time, you won't touch the projection matrix.

Only one of these two matrices is active at a time, and all matrix-related calls, including those to glLoadIdentity(), glRotatef(), glTranslatef(), and glScalef(), affect the active matrix. When you call glLoadIdentity(), you set the active matrix to the identity matrix. When you call the other three, OpenGL ES creates an appropriate translate, scale, or rotate matrix and multiplies the active matrix by that matrix, replacing the contents of the active matrix with the result of that multiplication.
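To illustrate what that means (this is just a conceptual sketch with made-up values, not code from the template), a call like glTranslatef() is equivalent to building the corresponding matrix yourself and multiplying it into the active matrix with glMultMatrixf():

    glTranslatef(0.0, 2.0, -3.0);    // OpenGL ES builds the translation matrix and multiplies for you

    // ...is conceptually the same as building the matrix by hand and multiplying it in.
    // (Column-major order: the translation values live in elements 12, 13, and 14.)
    GLfloat translation[16] = {
        1.0, 0.0, 0.0, 0.0,
        0.0, 1.0, 0.0, 0.0,
        0.0, 0.0, 1.0, 0.0,
        0.0, 2.0, -3.0, 1.0
    };
    glMultMatrixf(translation);      // active matrix = active matrix × translation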

For most practical purposes, you'll just set the modelview matrix as the active matrix early on and then leave it that way. In fact, if you look at my OpenGL ES template, you'll see that I do that in the setupView: method, with this line of code:

    glMatrixMode(GL_MODELVIEW);


OpenGL ES's matrices are defined as an array of 16 GLfloats, like this:

        GLfloat     matrix[16];

They could also be represented as two-dimensional C arrays like this:

        GLfloat     matrix[4][4];


Both of those declarations result in the same amount of memory being allocated, so it's a matter of personal preference which you use, though the former seems to be more common.
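One detail worth keeping in mind, since the functions we write below depend on it: OpenGL ES stores those sixteen values in column-major order, so each group of four consecutive elements is one column of the matrix. As a quick reference (this comment block is mine, not from the template):

    // How the sixteen elements map onto the 4x4 matrix (column-major order):
    //
    //     matrix[0]   matrix[4]   matrix[8]    matrix[12]
    //     matrix[1]   matrix[5]   matrix[9]    matrix[13]
    //     matrix[2]   matrix[6]   matrix[10]   matrix[14]
    //     matrix[3]   matrix[7]   matrix[11]   matrix[15]
    //
    // The diagonal is elements 0, 5, 10, and 15, and the translation values
    // end up in elements 12, 13, and 14 -- which is why the functions later in
    // this article assign those particular indices.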

Let's Play


Okay, at this point, I'm sure you've had enough theory and want to see some of this in action, so create a new project using my OpenGL ES template, and replace the drawView: and setupView: methods with the versions below:

- (void)drawView:(GLView*)view;
{

static GLfloat rot = 0.0;
static GLfloat scale = 1.0;
static GLfloat yPos = 0.0;
static BOOL scaleIncreasing = YES;

// This is the same result as using Vertex3D, just faster to type and
// can be made const this way
static const Vertex3D vertices[]= {
{0, -0.525731, 0.850651}, // vertices[0]
{0.850651, 0, 0.525731}, // vertices[1]
{0.850651, 0, -0.525731}, // vertices[2]
{-0.850651, 0, -0.525731}, // vertices[3]
{-0.850651, 0, 0.525731}, // vertices[4]
{-0.525731, 0.850651, 0}, // vertices[5]
{0.525731, 0.850651, 0}, // vertices[6]
{0.525731, -0.850651, 0}, // vertices[7]
{-0.525731, -0.850651, 0}, // vertices[8]
{0, -0.525731, -0.850651}, // vertices[9]
{0, 0.525731, -0.850651}, // vertices[10]
{0, 0.525731, 0.850651} // vertices[11]
}
;

static const Color3D colors[] = {
{1.0, 0.0, 0.0, 1.0},
{1.0, 0.5, 0.0, 1.0},
{1.0, 1.0, 0.0, 1.0},
{0.5, 1.0, 0.0, 1.0},
{0.0, 1.0, 0.0, 1.0},
{0.0, 1.0, 0.5, 1.0},
{0.0, 1.0, 1.0, 1.0},
{0.0, 0.5, 1.0, 1.0},
{0.0, 0.0, 1.0, 1.0},
{0.5, 0.0, 1.0, 1.0},
{1.0, 0.0, 1.0, 1.0},
{1.0, 0.0, 0.5, 1.0}
}
;

static const GLubyte icosahedronFaces[] = {
1, 2, 6,
1, 7, 2,
3, 4, 5,
4, 3, 8,
6, 5, 11,
5, 6, 10,
9, 10, 2,
10, 9, 3,
7, 8, 9,
8, 7, 0,
11, 0, 1,
0, 11, 4,
6, 2, 10,
1, 6, 11,
3, 5, 10,
5, 4, 11,
2, 7, 9,
7, 1, 0,
3, 9, 8,
4, 8, 0,
}
;

static const Vector3D normals[] = {
{0.000000, -0.417775, 0.675974},
{0.675973, 0.000000, 0.417775},
{0.675973, -0.000000, -0.417775},
{-0.675973, 0.000000, -0.417775},
{-0.675973, -0.000000, 0.417775},
{-0.417775, 0.675974, 0.000000},
{0.417775, 0.675973, -0.000000},
{0.417775, -0.675974, 0.000000},
{-0.417775, -0.675974, 0.000000},
{0.000000, -0.417775, -0.675973},
{0.000000, 0.417775, -0.675974},
{0.000000, 0.417775, 0.675973},
}
;


glLoadIdentity();
glTranslatef(0.0f,yPos,-3);
glRotatef(rot,1.0f,1.0f,1.0f);
glScalef(scale, scale, scale);

glClearColor(0.0, 0.0, 0.05, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_COLOR_MATERIAL);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glNormalPointer(GL_FLOAT, 0, normals);
glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_BYTE, icosahedronFaces);

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisable(GL_COLOR_MATERIAL);
static NSTimeInterval lastDrawTime;
if (lastDrawTime)
{
NSTimeInterval timeSinceLastDraw = [NSDate timeIntervalSinceReferenceDate] - lastDrawTime;
rot+=50 * timeSinceLastDraw;

if (scaleIncreasing)
{
scale += timeSinceLastDraw;
yPos += timeSinceLastDraw;
if (scale > 2.0)
scaleIncreasing = NO;
}

else
{
scale -= timeSinceLastDraw;
yPos -= timeSinceLastDraw;
if (scale < 1.0)
scaleIncreasing = YES;

}

}

lastDrawTime = [NSDate timeIntervalSinceReferenceDate];
}

-(void)setupView:(GLView*)view
{
const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0;
GLfloat size;
glEnable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
CGRect rect = view.bounds;
glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size /
(rect.size.width / rect.size.height), zNear, zFar);
glViewport(0, 0, rect.size.width, rect.size.height);
glMatrixMode(GL_MODELVIEW);

// Enable lighting
glEnable(GL_LIGHTING);

// Turn the first light on
glEnable(GL_LIGHT0);

// Define the ambient component of the first light
static const Color3D light0Ambient[] = {{0.3, 0.3, 0.3, 1.0}};
glLightfv(GL_LIGHT0, GL_AMBIENT, (const GLfloat *)light0Ambient);

// Define the diffuse component of the first light
static const Color3D light0Diffuse[] = {{0.4, 0.4, 0.4, 1.0}};
glLightfv(GL_LIGHT0, GL_DIFFUSE, (const GLfloat *)light0Diffuse);

// Define the specular component of the first light
static const Color3D light0Specular[] = {{0.7, 0.7, 0.7, 1.0}};
glLightfv(GL_LIGHT0, GL_SPECULAR, (const GLfloat *)light0Specular);

// Define the position of the first light
// const GLfloat light0Position[] = {10.0, 10.0, 10.0};
static const Vertex3D light0Position[] = {{10.0, 10.0, 10.0}};
glLightfv(GL_LIGHT0, GL_POSITION, (const GLfloat *)light0Position);

// Calculate light vector so it points at the object
static const Vertex3D objectPoint[] = {{0.0, 0.0, -3.0}};
const Vertex3D lightVector = Vector3DMakeWithStartAndEndPoints(light0Position[0], objectPoint[0]);
glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, (GLfloat *)&lightVector);

// Define a cutoff angle. A cutoff of 25° defines a 50° cone of light, since the cutoff
// is the number of degrees to each side of an imaginary line drawn from the light's
// position along the vector supplied in GL_SPOT_DIRECTION above
glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 25.0);

glLoadIdentity();
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
}

This creates a simple program with our friend, the icosahedron. It rotates, as it did before, but it also moves up and down along the Y axis using a translate transform, and it increases and decreases in size using a scale transform. This uses all the common modelview transforms: we load the identity matrix, then do a translate, a rotate, and a scale using the stock OpenGL ES transform functions.

Let's replace each of the stock functions with our own matrices. Before proceeding, build and run the application so you know what the correct behavior for our application looks like.

Defining a Matrix


Let's define our own type to hold a matrix. This is just to make our code easier to read:

typedef GLfloat Matrix3D[16];


Our Own Identity Matrix


For our first trick, let's create our own identity matrix. The identity matrix for a 4x4 matrix needs to look like this:

4x4identity.png


Here's a simple function to populate an existing Matrix3D with the identity matrix.

static inline void Matrix3DSetIdentity(Matrix3D matrix)
{
matrix[0] = matrix[5] = matrix[10] = matrix[15] = 1.0;
matrix[1] = matrix[2] = matrix[3] = matrix[4] = 0.0;
matrix[6] = matrix[7] = matrix[8] = matrix[9] = 0.0;
matrix[11] = matrix[12] = matrix[13] = matrix[14] = 0.0;
}

Now, this probably looks wrong at first glance. It looks like we're passing Matrix3D by value, which wouldn't work. However, we're using a typedef'd array3, and thanks to C's array-to-pointer conversion, the array argument decays to a pointer to its first element rather than being copied, so the function modifies the caller's array directly and we don't have to pass pointers explicitly.
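If you want to see that behavior in isolation, here's a tiny, hypothetical example (not part of the project) showing that a function taking a Matrix3D parameter really does modify the caller's array:

static inline void ZeroFirstElement(Matrix3D matrix)
{
    matrix[0] = 0.0;    // writes into the caller's array; no & or * needed
}

// Usage (inside any function):
//     Matrix3D m;
//     Matrix3DSetIdentity(m);
//     ZeroFirstElement(m);    // m[0] is now 0.0, because the parameter decayed to a GLfloat *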

I'm using inline functions to eliminate the overhead of a function call. This involves a trade-off (primarily increased code size), and all the code in this article will work just as well as regular C functions; to use it that way, just remove the static inline keywords and put the functions in a .c or .m file instead of a .h file. Note that the static keyword is correct (and a good idea) in C and Objective-C programs: the GCC Manual recommends making C inline functions like this one static because doing so allows the compiler to discard the generated assembly for unused inline functions. If you're using C++ or Objective-C++, however, you should probably leave static off, since there it can affect linkage behavior and offers no real benefit.

Okay, let's use this new function to replace the call to glLoadIdentity(). Delete the call to glLoadIdentity() and replace it with the following code:

    static Matrix3D    identityMatrix;
Matrix3DSetIdentity(identityMatrix);
glLoadMatrixf(identityMatrix);

So, we declare a Matrix3D, populate it with the identity matrix, and then we load that matrix using glLoadMatrixf(), which replaces the active matrix (in our case, the modelview matrix) with the identity matrix. That's exactly the same thing as calling glLoadIdentity(). Exactly. Build and run the program now, and it should look exactly the same as before. No difference.

Now you really know what glLoadIdentity() does, since you've done it manually. Let's continue.

Matrix Multiplication


Before we can implement any more transformations, we need to write a function to multiply two matrices. Remember, multiplying matrices is how we combine two matrices into a single matrix. We could write a generic matrix multiplication function that would work with any size of array and that used for loops to do the calculation, but let's just do it without the loops. Loops have a tiny bit of overhead, and unrolling them in code that gets called a lot can make a difference. Since OpenGL ES's matrices are always 4x4, the fastest multiplication is to just write out each calculation. Here's our matrix multiplication:

static inline void Matrix3DMultiply(Matrix3D m1, Matrix3D m2, Matrix3D result)
{
result[0] = m1[0] * m2[0] + m1[4] * m2[1] + m1[8] * m2[2] + m1[12] * m2[3];
result[1] = m1[1] * m2[0] + m1[5] * m2[1] + m1[9] * m2[2] + m1[13] * m2[3];
result[2] = m1[2] * m2[0] + m1[6] * m2[1] + m1[10] * m2[2] + m1[14] * m2[3];
result[3] = m1[3] * m2[0] + m1[7] * m2[1] + m1[11] * m2[2] + m1[15] * m2[3];

result[4] = m1[0] * m2[4] + m1[4] * m2[5] + m1[8] * m2[6] + m1[12] * m2[7];
result[5] = m1[1] * m2[4] + m1[5] * m2[5] + m1[9] * m2[6] + m1[13] * m2[7];
result[6] = m1[2] * m2[4] + m1[6] * m2[5] + m1[10] * m2[6] + m1[14] * m2[7];
result[7] = m1[3] * m2[4] + m1[7] * m2[5] + m1[11] * m2[6] + m1[15] * m2[7];

result[8] = m1[0] * m2[8] + m1[4] * m2[9] + m1[8] * m2[10] + m1[12] * m2[11];
result[9] = m1[1] * m2[8] + m1[5] * m2[9] + m1[9] * m2[10] + m1[13] * m2[11];
result[10] = m1[2] * m2[8] + m1[6] * m2[9] + m1[10] * m2[10] + m1[14] * m2[11];
result[11] = m1[3] * m2[8] + m1[7] * m2[9] + m1[11] * m2[10] + m1[15] * m2[11];

result[12] = m1[0] * m2[12] + m1[4] * m2[13] + m1[8] * m2[14] + m1[12] * m2[15];
result[13] = m1[1] * m2[12] + m1[5] * m2[13] + m1[9] * m2[14] + m1[13] * m2[15];
result[14] = m1[2] * m2[12] + m1[6] * m2[13] + m1[10] * m2[14] + m1[14] * m2[15];
result[15] = m1[3] * m2[12] + m1[7] * m2[13] + m1[11] * m2[14] + m1[15] * m2[15];
}


Again, this function doesn't allocate any memory; it just populates an existing array (result) by multiplying the other two. The result array must not be one of the two matrices being multiplied, however, because that would yield incorrect results: values would be overwritten before they're used again in the calculation.
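If you ever do need to multiply a matrix in place, a simple wrapper (a hypothetical helper of mine, not part of the template) that routes the calculation through a temporary matrix avoids the aliasing problem, at the cost of an extra copy:

#include <string.h>    // for memcpy()

static inline void Matrix3DMultiplyInPlace(Matrix3D m1, Matrix3D m2)
{
    Matrix3D temp;
    Matrix3DMultiply(m1, m2, temp);      // compute into scratch storage first
    memcpy(m1, temp, sizeof(Matrix3D));  // then overwrite m1 with the result
}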

But, wait… this can actually be faster, at least when you run the program on the iPhone instead of the simulator. The iPhone's processor includes a vector floating-point (VFP) unit that can do this kind of floating-point math much faster than doing each calculation individually on the CPU. Taking advantage of the VFP unit, however, requires writing ARMv6 assembly, because there are no C libraries for accessing it.

Fortunately, somebody's already figured out how to do a matrix multiply using the vector hardware. The VFP Math Library contains a lot of vectorized functionality, and it's released under a fairly permissive license. So, I took the VFP Math Library's vectorized matrix multiply and incorporated it into my function, so that the vectorized version is used when it's run on the device and the regular version is used when it's run in the simulator (note that I've included the original comments with ownership and licensing information and identified that the code has been modified in order to comply with the VFP Math Library license):

/* 
These define the vectorized version of the
matrix multiply function and are based on the Matrix4Mul method from
the vfp-math-library. This code has been modified, but is still subject to
the original license terms and ownership as follows:

VFP math library for the iPhone / iPod touch

Copyright (c) 2007-2008 Wolfgang Engel and Matthias Grundmann
http://code.google.com/p/vfpmathlibrary/

This software is provided 'as-is', without any express or implied warranty.
In no event will the authors be held liable for any damages arising
from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it freely,
subject to the following restrictions:

1. The origin of this software must not be misrepresented; you must
not claim that you wrote the original software. If you use this
software in a product, an acknowledgment in the product documentation
would be appreciated but is not required.

2. Altered source versions must be plainly marked as such, and must
not be misrepresented as being the original software.

3. This notice may not be removed or altered from any source distribution.
*/

#if TARGET_OS_IPHONE && !TARGET_IPHONE_SIMULATOR
#define VFP_CLOBBER_S0_S31 "s0", "s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8", \
"s9", "s10", "s11", "s12", "s13", "s14", "s15", "s16", \
"s17", "s18", "s19", "s20", "s21", "s22", "s23", "s24", \
"s25", "s26", "s27", "s28", "s29", "s30", "s31"

#define VFP_VECTOR_LENGTH(VEC_LENGTH) "fmrx r0, fpscr \n\t" \
"bic r0, r0, #0x00370000 \n\t" \
"orr r0, r0, #0x000" #VEC_LENGTH "0000 \n\t" \
"fmxr fpscr, r0 \n\t"

#define VFP_VECTOR_LENGTH_ZERO "fmrx r0, fpscr \n\t" \
"bic r0, r0, #0x00370000 \n\t" \
"fmxr fpscr, r0 \n\t"

#endif
static inline void Matrix3DMultiply(Matrix3D m1, Matrix3D m2, Matrix3D result)
{
#if TARGET_OS_IPHONE && !TARGET_IPHONE_SIMULATOR
__asm__ __volatile__ ( VFP_VECTOR_LENGTH(3)

// Interleaving loads and adds/muls for faster calculation.
// Let A:=src_ptr_1, B:=src_ptr_2, then
// function computes A*B as (B^T * A^T)^T.

// Load the whole matrix into memory.
"fldmias %2, {s8-s23} \n\t"
// Load first column to scalar bank.
"fldmias %1!, {s0-s3} \n\t"
// First column times matrix.
"fmuls s24, s8, s0 \n\t"
"fmacs s24, s12, s1 \n\t"

// Load second column to scalar bank.
"fldmias %1!, {s4-s7} \n\t"

"fmacs s24, s16, s2 \n\t"
"fmacs s24, s20, s3 \n\t"
// Save first column.
"fstmias %0!, {s24-s27} \n\t"

// Second column times matrix.
"fmuls s28, s8, s4 \n\t"
"fmacs s28, s12, s5 \n\t"

// Load third column to scalar bank.
"fldmias %1!, {s0-s3} \n\t"

"fmacs s28, s16, s6 \n\t"
"fmacs s28, s20, s7 \n\t"
// Save second column.
"fstmias %0!, {s28-s31} \n\t"

// Third column times matrix.
"fmuls s24, s8, s0 \n\t"
"fmacs s24, s12, s1 \n\t"

// Load fourth column to scalar bank.
"fldmias %1, {s4-s7} \n\t"

"fmacs s24, s16, s2 \n\t"
"fmacs s24, s20, s3 \n\t"
// Save third column.
"fstmias %0!, {s24-s27} \n\t"

// Fourth column times matrix.
"fmuls s28, s8, s4 \n\t"
"fmacs s28, s12, s5 \n\t"
"fmacs s28, s16, s6 \n\t"
"fmacs s28, s20, s7 \n\t"
// Save fourth column.
"fstmias %0!, {s28-s31} \n\t"

VFP_VECTOR_LENGTH_ZERO
: "=r" (result), "=r" (m2)
: "r" (m1), "0" (result), "1" (m2)
: "r0", "cc", "memory", VFP_CLOBBER_S0_S31
);
#else
result[0] = m1[0] * m2[0] + m1[4] * m2[1] + m1[8] * m2[2] + m1[12] * m2[3];
result[1] = m1[1] * m2[0] + m1[5] * m2[1] + m1[9] * m2[2] + m1[13] * m2[3];
result[2] = m1[2] * m2[0] + m1[6] * m2[1] + m1[10] * m2[2] + m1[14] * m2[3];
result[3] = m1[3] * m2[0] + m1[7] * m2[1] + m1[11] * m2[2] + m1[15] * m2[3];

result[4] = m1[0] * m2[4] + m1[4] * m2[5] + m1[8] * m2[6] + m1[12] * m2[7];
result[5] = m1[1] * m2[4] + m1[5] * m2[5] + m1[9] * m2[6] + m1[13] * m2[7];
result[6] = m1[2] * m2[4] + m1[6] * m2[5] + m1[10] * m2[6] + m1[14] * m2[7];
result[7] = m1[3] * m2[4] + m1[7] * m2[5] + m1[11] * m2[6] + m1[15] * m2[7];

result[8] = m1[0] * m2[8] + m1[4] * m2[9] + m1[8] * m2[10] + m1[12] * m2[11];
result[9] = m1[1] * m2[8] + m1[5] * m2[9] + m1[9] * m2[10] + m1[13] * m2[11];
result[10] = m1[2] * m2[8] + m1[6] * m2[9] + m1[10] * m2[10] + m1[14] * m2[11];
result[11] = m1[3] * m2[8] + m1[7] * m2[9] + m1[11] * m2[10] + m1[15] * m2[11];

result[12] = m1[0] * m2[12] + m1[4] * m2[13] + m1[8] * m2[14] + m1[12] * m2[15];
result[13] = m1[1] * m2[12] + m1[5] * m2[13] + m1[9] * m2[14] + m1[13] * m2[15];
result[14] = m1[2] * m2[12] + m1[6] * m2[13] + m1[10] * m2[14] + m1[14] * m2[15];
result[15] = m1[3] * m2[12] + m1[7] * m2[13] + m1[11] * m2[14] + m1[15] * m2[15];
#endif
}

Now that we have the ability to multiply matrices together, we can combine multiple matrices. Since our matrix multiply is hardware accelerated and OpenGL ES does not hardware accelerate matrix-by-matrix multiplication, our version should actually be a tiny bit faster than using the stock transformations4. Let's add the translate transformation now.

Our Own Translate


If you recall from earlier, one of the reasons we need a 4x4 matrix instead of a 3x3 matrix is that we need an extra column for translation information. Indeed, this is what a translation matrix looks like:

translatematrix.png


We can turn that into a function like this:

static inline void Matrix3DSetTranslation(Matrix3D matrix, GLfloat xTranslate, GLfloat yTranslate, GLfloat zTranslate)
{
matrix[0] = matrix[5] = matrix[10] = matrix[15] = 1.0;
matrix[1] = matrix[2] = matrix[3] = matrix[4] = 0.0;
matrix[6] = matrix[7] = matrix[8] = matrix[9] = 0.0;
matrix[11] = 0.0;
matrix[12] = xTranslate;
matrix[13] = yTranslate;
matrix[14] = zTranslate;
}

Now, how do we incorporate that into our drawView: method? Well, we can delete the call to glTranslatef() and replace it with code that declares another matrix, populates it with the appropriate translation values, multiplies that matrix by the existing matrix, and then loads the result into OpenGL, right?

    static Matrix3D    identityMatrix;
Matrix3DSetIdentity(identityMatrix);
static Matrix3D translateMatrix;
Matrix3DSetTranslation(translateMatrix, 0.0, yPos, -3.0);
static Matrix3D resultMatrix;
Matrix3DMultiply(identityMatrix, translateMatrix, resultMatrix);
glLoadMatrixf(resultMatrix);


Well, yeah, that'll work, but it's doing unnecessary work. Remember, if you multiply any matrix by the identity matrix, the result is the original matrix. So when working with our own matrices, we no longer need to load the identity matrix first if we're applying any other transformation. Instead, we can just create the translation matrix and load that:

    static Matrix3D    translateMatrix;
Matrix3DSetTranslation(translateMatrix, 0.0, yPos, -3.0);
glLoadMatrixf(translateMatrix);


Since we don't have to load the identity matrix, we save ourselves a tiny bit of work each time through this method. Also notice that I'm declaring the Matrix3Ds as static. We don't want to constantly allocate and deallocate memory: we know we're going to need this matrix several times a second for as long as the program is running, so by declaring it static, we let it stick around and be reused rather than paying the overhead of constant allocation and deallocation.

Our Own Scaling Transformation


A matrix to change the size of objects looks like this:

scalematrix.png


A value of 1.0 for x, y, or z indicates that there is no change in scale in that direction. A 1.0 for all three would result in (you guessed it) the identity matrix. If you pass a 2.0, it will double the size of the object along that axis. We can turn the scaling matrix into an OpenGL ES matrix like this:

static inline void Matrix3DSetScaling(Matrix3D matrix, GLfloat xScale, GLfloat yScale, GLfloat zScale)
{
matrix[1] = matrix[2] = matrix[3] = matrix[4] = 0.0;
matrix[6] = matrix[7] = matrix[8] = matrix[9] = 0.0;
matrix[11] = matrix[12] = matrix[13] = matrix[14] = 0.0;
matrix[0] = xScale;
matrix[5] = yScale;
matrix[10] = zScale;
matrix[15] = 1.0;
}

Now we are going to have to multiply matrices, because we want to apply more than one transformation. To apply both a translation and a scaling ourselves, we need to multiply those two matrices together. Delete the call to glScalef() and the previous code we wrote, and replace it with this:

    static Matrix3D    translateMatrix;
Matrix3DSetTranslation(translateMatrix, 0.0, yPos, -3.0);
static Matrix3D scaleMatrix;
Matrix3DSetScaling(scaleMatrix, scale, scale, scale);
static Matrix3D resultMatrix;
Matrix3DMultiply(translateMatrix, scaleMatrix, resultMatrix);
glLoadMatrixf(resultMatrix);

We create a matrix and populate it with the appropriate translation values. Then we create a scaling matrix and populate it with the appropriate values. Then we multiply those two together and load the result into the modelview matrix. Now for the tough one: rotation.

Our Own Rotation


Rotation is a little tougher. We can create matrices for rotation around each of the axes. We already know what Z-axis rotation looks like:

zrotation.png


X-axis rotation looks similar:

xaxisrotation.png


And so does Y-axis rotation:

yaxisrotation.png


These three rotations can be turned into OpenGL matrices with these functions:

static inline void Matrix3DSetXRotationUsingRadians(Matrix3D matrix, GLfloat radians)
{
matrix[0] = matrix[15] = 1.0;
matrix[1] = matrix[2] = matrix[3] = matrix[4] = 0.0;
matrix[7] = matrix[8] = 0.0;
matrix[11] = matrix[12] = matrix[13] = matrix[14] = 0.0;

matrix[5] = cosf(radians);
matrix[6] = -fastSinf(radians);
matrix[9] = -matrix[6];
matrix[10] = matrix[5];
}

static inline void Matrix3DSetXRotationUsingDegrees(Matrix3D matrix, GLfloat degrees)
{
Matrix3DSetXRotationUsingRadians(matrix, degrees * M_PI / 180.0);
}

static inline void Matrix3DSetYRotationUsingRadians(Matrix3D matrix, GLfloat radians)
{
matrix[0] = cosf(radians);
matrix[2] = fastSinf(radians);
matrix[8] = -matrix[2];
matrix[10] = matrix[0];
matrix[1] = matrix[3] = matrix[4] = matrix[6] = matrix[7] = 0.0;
matrix[9] = matrix[11] = matrix[12] = matrix[13] = matrix[14] = 0.0;
matrix[5] = matrix[15] = 1.0;
}

static inline void Matrix3DSetYRotationUsingDegrees(Matrix3D matrix, GLfloat degrees)
{
Matrix3DSetYRotationUsingRadians(matrix, degrees * M_PI / 180.0);
}

static inline void Matrix3DSetZRotationUsingRadians(Matrix3D matrix, GLfloat radians)
{
matrix[0] = cosf(radians);
matrix[1] = fastSinf(radians);
matrix[4] = -matrix[1];
matrix[5] = matrix[0];
matrix[2] = matrix[3] = matrix[6] = matrix[7] = matrix[8] = 0.0;
matrix[9] = matrix[11] = matrix[12] = matrix[13] = matrix[14] = 0.0;
matrix[10] = matrix[15] = 1.0;
}

static inline void Matrix3DSetZRotationUsingDegrees(Matrix3D matrix, GLfloat degrees)
{
Matrix3DSetZRotationUsingRadians(matrix, degrees * M_PI / 180.0);
}


There are two functions for each axis of rotation: one that takes radians and one that takes degrees. These three rotations are what are called Euler angles. The problem with Euler angles is that we have to apply rotations around multiple axes sequentially (see the sketch below), and when we set rotation on all three axes, we can end up experiencing a phenomenon called gimbal lock, which results in the loss of rotation around one axis. To avoid this problem, we need to create a single matrix that can handle rotation around multiple axes at once. In addition to eliminating the gimbal lock problem, this also saves processing overhead when rotations are needed around more than one axis.
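For example (illustrative only; we won't use this in the project), combining separate X and Y rotations with the Euler-angle functions above requires an explicit matrix multiplication, and the order of the operands changes the result:

    static Matrix3D xRotationMatrix;
    static Matrix3D yRotationMatrix;
    static Matrix3D combinedRotationMatrix;
    Matrix3DSetXRotationUsingDegrees(xRotationMatrix, 30.0);
    Matrix3DSetYRotationUsingDegrees(yRotationMatrix, 45.0);
    // This applies the Y rotation first, then the X rotation; swapping the
    // first two arguments would give a different orientation.
    Matrix3DMultiply(xRotationMatrix, yRotationMatrix, combinedRotationMatrix);
    glLoadMatrixf(combinedRotationMatrix);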

Now, honestly, I don't pretend to fully understand the math behind this one. I've read doctoral theses on the subject (it's related to the matrix representation of quaternions), but the math just doesn't fully sink in, so you and I are just going to take it on faith that this multi-rotation matrix works (it does). The matrix takes a single angle, designated N, and a vector expressed as three floating-point values; it rotates by the angle N around the axis that vector defines:

multirotatematrix.png


This matrix requires that the vector passed in be a unit vector (also known as a normalized vector), so the function normalizes it before populating the matrix. Expressed in code, it looks like this:

static inline void Matrix3DSetRotationByRadians(Matrix3D matrix, GLfloat angle, GLfloat x, GLfloat y, GLfloat z)
{
GLfloat mag = sqrtf((x*x) + (y*y) + (z*z));
if (mag == 0.0)
{
x = 1.0;
y = 0.0;
z = 0.0;
}

else if (mag != 1.0)
{
x /= mag;
y /= mag;
z /= mag;
}


GLfloat c = cosf(angle);
GLfloat s = fastSinf(angle);
matrix[3] = matrix[7] = matrix[11] = matrix[12] = matrix[13] = matrix[14] = 0.0;
matrix[15] = 1.0;


matrix[0] = (x*x)*(1-c) + c;
matrix[1] = (y*x)*(1-c) + (z*s);
matrix[2] = (x*z)*(1-c) - (y*s);
matrix[4] = (x*y)*(1-c)-(z*s);
matrix[5] = (y*y)*(1-c)+c;
matrix[6] = (y*z)*(1-c)+(x*s);
matrix[8] = (x*z)*(1-c)+(y*s);
matrix[9] = (y*z)*(1-c)-(x*s);
matrix[10] = (z*z)*(1-c)+c;

}

static inline void Matrix3DSetRotationByDegrees(Matrix3D matrix, GLfloat angle, GLfloat x, GLfloat y, GLfloat z)
{
Matrix3DSetRotationByRadians(matrix, angle * M_PI / 180.0, x, y, z);
}


This multi-rotation version works exactly the same way as glRotatef().

Now that we've replaced all three of the stock functions, here's the new drawView: method using only our own matrices and no stock transformations. The new matrix code is the block right after the normals array:

- (void)drawView:(GLView*)view;
{

static GLfloat rot = 0.0;
static GLfloat scale = 1.0;
static GLfloat yPos = 0.0;
static BOOL scaleIncreasing = YES;

// This is the same result as using Vertex3D, just faster to type and
// can be made const this way
static const Vertex3D vertices[]= {
{0, -0.525731, 0.850651}, // vertices[0]
{0.850651, 0, 0.525731}, // vertices[1]
{0.850651, 0, -0.525731}, // vertices[2]
{-0.850651, 0, -0.525731}, // vertices[3]
{-0.850651, 0, 0.525731}, // vertices[4]
{-0.525731, 0.850651, 0}, // vertices[5]
{0.525731, 0.850651, 0}, // vertices[6]
{0.525731, -0.850651, 0}, // vertices[7]
{-0.525731, -0.850651, 0}, // vertices[8]
{0, -0.525731, -0.850651}, // vertices[9]
{0, 0.525731, -0.850651}, // vertices[10]
{0, 0.525731, 0.850651} // vertices[11]
}
;

static const Color3D colors[] = {
{1.0, 0.0, 0.0, 1.0},
{1.0, 0.5, 0.0, 1.0},
{1.0, 1.0, 0.0, 1.0},
{0.5, 1.0, 0.0, 1.0},
{0.0, 1.0, 0.0, 1.0},
{0.0, 1.0, 0.5, 1.0},
{0.0, 1.0, 1.0, 1.0},
{0.0, 0.5, 1.0, 1.0},
{0.0, 0.0, 1.0, 1.0},
{0.5, 0.0, 1.0, 1.0},
{1.0, 0.0, 1.0, 1.0},
{1.0, 0.0, 0.5, 1.0}
}
;

static const GLubyte icosahedronFaces[] = {
1, 2, 6,
1, 7, 2,
3, 4, 5,
4, 3, 8,
6, 5, 11,
5, 6, 10,
9, 10, 2,
10, 9, 3,
7, 8, 9,
8, 7, 0,
11, 0, 1,
0, 11, 4,
6, 2, 10,
1, 6, 11,
3, 5, 10,
5, 4, 11,
2, 7, 9,
7, 1, 0,
3, 9, 8,
4, 8, 0,
}
;

static const Vector3D normals[] = {
{0.000000, -0.417775, 0.675974},
{0.675973, 0.000000, 0.417775},
{0.675973, -0.000000, -0.417775},
{-0.675973, 0.000000, -0.417775},
{-0.675973, -0.000000, 0.417775},
{-0.417775, 0.675974, 0.000000},
{0.417775, 0.675973, -0.000000},
{0.417775, -0.675974, 0.000000},
{-0.417775, -0.675974, 0.000000},
{0.000000, -0.417775, -0.675973},
{0.000000, 0.417775, -0.675974},
{0.000000, 0.417775, 0.675973},
}
;

static Matrix3D translateMatrix;
Matrix3DSetTranslation(translateMatrix, 0.0, yPos, -3.0);
static Matrix3D scaleMatrix;
Matrix3DSetScaling(scaleMatrix, scale, scale, scale);
static Matrix3D tempMatrix;
Matrix3DMultiply(translateMatrix, scaleMatrix, tempMatrix);
static Matrix3D rotationMatrix;
Matrix3DSetRotationByDegrees(rotationMatrix, rot, 1.0f, 1.0f, 1.0f);
static Matrix3D finalMatrix;
Matrix3DMultiply(tempMatrix, rotationMatrix, finalMatrix);
glLoadMatrixf(finalMatrix);


glClearColor(0.0, 0.0, 0.05, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnable(GL_COLOR_MATERIAL);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glNormalPointer(GL_FLOAT, 0, normals);
glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_BYTE, icosahedronFaces);

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisable(GL_COLOR_MATERIAL);
static NSTimeInterval lastDrawTime;
if (lastDrawTime)
{
NSTimeInterval timeSinceLastDraw = [NSDate timeIntervalSinceReferenceDate] - lastDrawTime;
rot+=50 * timeSinceLastDraw;

if (scaleIncreasing)
{
scale += timeSinceLastDraw;
yPos += timeSinceLastDraw;
if (scale > 2.0)
scaleIncreasing = NO;
}

else
{
scale -= timeSinceLastDraw;
yPos -= timeSinceLastDraw;
if (scale < 1.0)
scaleIncreasing = YES;

}

}

lastDrawTime = [NSDate timeIntervalSinceReferenceDate];
}


Wacky and Wonderful Custom Matrices


Are you still with me? This has been a doozy of an installment, hasn't it? We can't stop quite yet, though, and here's the reason why: so far I've only shown you how to recreate existing OpenGL ES functionality. Yes, what we've done can give a tiny performance boost thanks to the fact that our matrix multiplication is hardware accelerated, but in 99% of cases that's not enough justification to reinvent the wheel like this.

But there are other benefits to handling the matrix transformations yourself. You can, for example, create transformations that OpenGL ES doesn't have built in, such as a shear transformation. Shearing skews an object, shifting it along one axis in proportion to its position on another. If you applied a shear transform to a tower, you'd get the Leaning Tower of Pisa. Here is what the shear matrix looks like:

shearmatrix.png


Here's what it looks like in code:

static inline void Matrix3DSetShear(Matrix3D matrix, GLfloat xShear, GLfloat yShear)
{
matrix[0] = matrix[5] = matrix[10] = matrix[15] = 1.0;
matrix[1] = matrix[2] = matrix[3] = 0.0;
matrix[6] = matrix[7] = matrix[8] = matrix[9] = 0.0;
matrix[11] = matrix[12] = matrix[13] = matrix[14] = 0.0;
matrix[1] = xShear;
matrix[4] = yShear;
}



And if we add the shear matrix, we get:

iPhone SimulatorScreenSnapz001.jpg


Try doing that with stock calls. You can also combine transformations into a single matrix: we could, for example, write one function that builds a combined translate-and-scale matrix without doing even a single matrix multiplication, as in the sketch below.
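Here's a quick sketch of that idea (this helper is mine, not part of the template). Because translation and scaling touch different elements of the matrix, we can fill in a matrix equivalent to translateMatrix multiplied by scaleMatrix directly:

static inline void Matrix3DSetTranslationAndScaling(Matrix3D matrix, GLfloat xTranslate, GLfloat yTranslate, GLfloat zTranslate, GLfloat xScale, GLfloat yScale, GLfloat zScale)
{
    matrix[1] = matrix[2] = matrix[3] = matrix[4] = 0.0;
    matrix[6] = matrix[7] = matrix[8] = matrix[9] = 0.0;
    matrix[11] = 0.0;
    matrix[0] = xScale;       // scale values on the diagonal
    matrix[5] = yScale;
    matrix[10] = zScale;
    matrix[12] = xTranslate;  // translation values in the last column
    matrix[13] = yTranslate;
    matrix[14] = zTranslate;
    matrix[15] = 1.0;
}

Loading that one matrix with glLoadMatrixf() gives the same result as the translate-times-scale multiplication we did in drawView: above, with one less matrix multiply per frame.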

Exit the Matrix



Matrices are a huge and often misunderstood topic, one that many people (including me) struggle to understand fully. Hopefully this gives you enough of an understanding of what's going on under the hood, and it also gives you a library of matrix-related functions you can use in your own applications. If you want to download the project and try it out yourself, please feel free. I've defined two constants that let you switch between using stock transforms and custom transforms, and turn the shear transformation on and off. You can find them in GLViewController.h:

#define USE_CUSTOM_MATRICES 1
#define USE_SHEAR_TRANSFORM 1


Setting them to 1 turns them on; setting them to 0 turns them off. I've also updated the OpenGL ES Xcode Template with the new matrix functions, including the vectorized matrix multiply. Best of luck with it, and don't worry if you don't fully grok all of this. It's hard stuff, and for 99% of what most people do in OpenGL ES you don't need to fully understand projective space, homogeneous coordinates, or linear transformations; as long as you get the big picture, you should be fine.

With great thanks to Noel Llopis of Snappy Touch for his help and patience. If you haven't checked out his awesome Flower Garden, you really should - it's an absolute treat.


Footnotes
  1. No, that's not an error. Stage right is what the audience would perceive as going to the left. Our icosahedron goes off to the left, so it is existing "stage right".
  2. Actually, not quite. When you call glLoadIdentity(), you're loading the 4x4 identity matrix, that illustration shows the 3x3 identity matrix.
  3. You might want to use a struct instead of a typedef to gain type safety. If you do that, then you'll have to make sure that you specifically pass parameters in by reference because structs are NOT automatically passed by reference, unlike arrays.
  4. Using Shark, the drawView: method went from being .7% of processing time to .1% of processing time. A substantial improvement in the speed of that method, but not in the overall application performance.

 