Computer Animation Festival, Pixar, ILM and James Cameron - Report from SIGGRAPH
Posted by Timothy Chrismer on August 17, 2008
Friday - I got to see a re-showing of the entries for the Computer Animation Festival today. It's great to see such jaw-dropping visuals on a screen as huge as the Nokia Theater's. There were many shorts that I felt were noteworthy for their overall visual style and storytelling. Our Wonderful Nature: The Water Shrew was hilarious to watch, as was Chump and Clump. I definitely agree with the winners of the Student Prize category: the style and visual appeal of 893 was breathtaking. It was hard to watch that short and remember that it was made by students.
The Nokia Theater also hosted three Studio Nights this week. Tuesday, Pixar showed Frédéric Back's The Man Who Planted Trees and closed with a short talk session between Back and John Lasseter. Afterward, Leslie Iwerks's The Pixar Story was screened. It was really exciting to see the history of Pixar and John Lasseter in person on the same night. Wednesday, Sony Pictures Imageworks hosted A Tribute to Stan Winston. Most notably, James Cameron was there to share stories of his collaborations with the iconic effects artist. He ended the night by screening the Blu-ray-enhanced Terminator 2: Judgment Day. Thursday, Lucasfilm hosted the pre-premiere screening of Star Wars: The Clone Wars. Before the screening, we got a little insight into the making of the film and series from John Knoll, a VFX supervisor at ILM, and Dave Filoni, director of the Clone Wars series.
This week, I got to see in person a lot of the people and technology that are the backbone of this industry and make it great. The knowledge and memories gained in these few days will stick with me forever, and I can't wait to repeat it. See you all next year in New Orleans!
It’s better to trust the people you collaborate with - Report from SIGGRAPH
Posted by Timothy Chrismer on August 17, 2008
Thursday - As part of an attendee-rewards program, I was one of five Student Volunteers and attendees chosen to sit down and talk with graphics research pioneer and author Andrew Glassner. It really was an eye-opening experience. After we introduced ourselves and gave a little of our backgrounds, we were fortunate enough to hear his insight into the realm of research.
Oftentimes, we like to think of ourselves as the sole owners of our own ideas. We think that we would be better off keeping our ideas to ourselves, until they're fully-realized, and then unleashing them upon the world to be met with great respect and awe.
"That," says Glassner, "just doesn't happen."
What usually happens, instead, is that the idea festers and we can never get everything quite finished enough to be fully-realized. When that happens, what we really need, despite our huge egos and Type A personalities, is another person in the loop.
Glassner went on to say that we have a natural tendency to fear collaboration. We fear that someone whom we trust will betray us and take our ideas to advertise as their own. "There's nothing wrong with being cautious like that," he noted. "The thing to keep in mind is that even if they do steal your idea, likely they won't be able to take it as far as you could, and you can always come up with something else."
The take-away that he gave at the end of our talk was that no matter what happens, it's better to trust the people you collaborate with until they prove otherwise. Ninety-nine out of one hundred times, they'll be loyal to you and you'll be better off because of it.
It was a huge honor to get to meet Dr. Glassner and it totally made my day.
I could see Modo and XSI being a good pipeline… report from SIGGRAPH
Posted by Pat Howk on August 16, 2008
Thursday - Modo FTW! - So today I went and watched a demo of modo 302 at the Intel booth. I want it! The modeling and unwrapping tools alone would make me switch from Maya to modo for modeling tasks. The animation tools aren't really up to character animation yet, but you can do simple turnarounds or movements. What struck me was just the ease of use. To select a loop of edges, you just double-click an edge. You can click an edge, face, or vert and grow the selection by hitting a key, so it expands to the nearby edges. Another thing is that the modeling tools work like sculpting brushes: you can grab a face and "pull" it out, adjusting it any way you want. From the demos I saw, you can literally create as fast as you can imagine. That's it. I mean, the people doing the demos are pros, but come on! They were making faces and weird creatures in as little as 5 minutes while I asked questions and they explained it to me. The renderer for modo is top notch now, too. I would still keep RenderMan or mental ray around, but the modo renderer is no slouch. I seriously want this piece of software! After seeing it in use, I can't see why I've been using Maya for so long for things other than animation.
Another thing I saw, on the same day, is XSI's ICE (Interactive Creative Environment). This is XSI's node-based programming. It's hard to explain - you have to see it to believe it. So... I found this vid. It's visual programming. Like I said, I can't explain it other than saying it's like the Hypergraph in Maya connecting shaders, except here you're writing scripts. The SIGGRAPH demo showed that someone made Pong and Space Invaders using only ICE. They also showed a crazy rig-and-skin trick where you can drag the last action you did into the ICE viewport and connect other nodes to it, so that action gets reused by the other nodes... Like I said, hard to explain. Watch the vid! I wouldn't mind trying it out.
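If you're wondering what "node-based programming" boils down to, here's a tiny dataflow sketch in Python - my own illustration of the idea, none of these names come from Softimage's actual API:

```python
# Minimal dataflow-graph sketch: each node computes its output from the
# outputs of the nodes wired into its inputs, like ICE's visual graphs.
# (Illustrative only - these names are not Softimage's API.)

class Node:
    def __init__(self, fn, *inputs):
        self.fn = fn          # the operation this node performs
        self.inputs = inputs  # upstream nodes wired into this one

    def evaluate(self):
        # Pull values through the graph: evaluate upstream nodes first.
        return self.fn(*(n.evaluate() for n in self.inputs))

def const(value):
    return Node(lambda: value)

# Wire up a tiny graph: (gravity * mass) scaled by a timestep, plus a
# base velocity - the kind of thing an ICE simulation tree expresses.
gravity  = const(-9.8)
mass     = const(2.0)
force    = Node(lambda g, m: g * m, gravity, mass)
velocity = Node(lambda f, v0: f * 0.1 + v0, force, const(3.0))

print(velocity.evaluate())  # about 1.04
```

The point is that "connecting nodes" and "writing a script" are the same thing here: the wiring *is* the program.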
I could see Modo and XSI being a good pipeline... Maybe someone could write Autodesk's Stereo camera rig in ICE!
And I've got to mention the Star Wars: The Clone Wars movie. Not impressive. The animation was a bit stiff, the faces showed no emotion, and all around it tried too hard to be funny when it just wasn't. That being said, if this movie had stayed on TV as originally planned, it would have been different. As a TV show it's good; I just don't think it should be shown in theaters. It's not good enough for theaters. Other than that, the battle scenes in the movie are really good: the battles were huge and the action was great. I don't really want to say much more about the movie because I don't want to ruin it for those that want to see it.
So that's it. My time at SIGGRAPH 08 is finished. Friday I went to my shift to take down the Slow Art displays, then back to the hotel. Not much to see on the last day. For those that haven't been, this conference is huge and exhausting - almost overwhelming. I'll be glad when I can finally sleep.
New Article: Solids vs. Surface Modeling: What and why you need to know
Posted by Tony DeYoung on August 15, 2008
Colin Finkle, our industrial designer blogger, has written a new article for the site on the differences between surface and solids modeling. Surfaces and solids are the underlying math that defines the geometry of the forms you create. There are three ways to define 3D geometry: solids, surfaces and wireframes. Wireframes don’t play much of a role in CAD; they figure primarily in digital content creation (DCC) and gaming. The easiest way to understand the difference between surface and solids modeling is to think of a water balloon: the water in the balloon would be solids modeling, while the latex skin would be surface modeling. While you don’t necessarily have to understand surfaces vs. solids modeling to create high fidelity renderings, animations or simulations, knowing the limitations and the strengths of both can be very powerful knowledge, and pay big dividends in time and quality. Read article →
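To make the water-balloon analogy concrete, here is a rough sketch (my own illustration, not from Colin's article) of the two representations for a sphere - a solid model can answer "is this point inside?", while a surface model only describes the shell:

```python
import math

# Solid representation: an inside/outside test (the water in the balloon -
# the model knows what counts as interior volume).
def solid_sphere_contains(point, center, radius):
    dx, dy, dz = (p - c for p, c in zip(point, center))
    return dx * dx + dy * dy + dz * dz <= radius * radius

# Surface representation: a parametric shell (the latex skin - it can give
# you points ON the sphere, but has no notion of "inside" by itself).
def surface_sphere_point(center, radius, theta, phi):
    cx, cy, cz = center
    return (cx + radius * math.sin(phi) * math.cos(theta),
            cy + radius * math.sin(phi) * math.sin(theta),
            cz + radius * math.cos(phi))

print(solid_sphere_contains((0.5, 0, 0), (0, 0, 0), 1.0))  # True: inside
print(solid_sphere_contains((2.0, 0, 0), (0, 0, 0), 1.0))  # False: outside
```

This is why solids are natural for CAD operations like booleans and mass properties, while surfaces excel at free-form skins.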
30-bits really IS visually impressive - Report from SIGGRAPH
Posted by Timothy Chrismer on August 15, 2008
Wednesday - In my previous overview of Pat's and my tour of the AMD/ATI booth, I mentioned that the new DreamColor monitor was being specially displayed as compatible with the new FirePro line. After visiting the HP booth and reading an article on it in the August '08 issue of Computer Graphics World (CGW), I wanted to explain a bit more about the DreamColor display.
The DreamColor was created through a collaboration between HP and DreamWorks Animation, after DreamWorks saw the need for an affordable alternative to the expensive displays used in their productions. They were already in a technology partnership with HP, and the effort eventually became the HP DreamColor Technology initiative. The new display, the LP2480zx, adheres to industry standards for color spaces, and "customers can [even] control color nuances such as gamut, gamma, white-point, black levels and luminance."
HP plans to market the LP2480zx worldwide starting at $3,499. In my opinion, that's a great price considering the value and capabilities. I had the chance to see DreamColor in action, and I must say it's visually impressive. I'd definitely be looking into purchasing one if student loans weren't an issue!
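To put "30-bit" in perspective: conventional panels use 8 bits per RGB channel, while a 30-bit display like the DreamColor uses 10. The difference in representable colors is easy to compute:

```python
# Colors representable at a given bit depth per RGB channel.
def total_colors(bits_per_channel):
    levels = 2 ** bits_per_channel   # tonal steps per channel
    return levels ** 3               # combinations across R, G, B

print(total_colors(8))   # 16,777,216 (~16.7 million, 24-bit color)
print(total_colors(10))  # 1,073,741,824 (~1.07 billion, 30-bit color)
```

Four times the steps per channel means far smoother gradients - which is exactly where the banding you see on 24-bit panels comes from.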
On another note, I got the chance to sit in on a session called OpenGL: What's Coming Down the Graphics Pipeline. The class was hosted by Dave Shreiner (ARM), Ed Angel (University of New Mexico ARTS Lab), Bill Licea-Kane (AMD), and Evan Hart (NVIDIA). For the most part, it covered the basics and history of the OpenGL pipeline. Even though I've studied the basics in texts before, I find there's something special to be gained from having it repeated in person.
They started us off with flowcharts and a full overview of the pipeline, covering vertex and fragment shaders and how they fit into the big picture. We then got to hear about the underlying mathematics and theory behind working in OpenGL. Bill Licea-Kane covered specific shader-coding principles, with many examples of functions in present and previous versions of GLSL, reinforced through a few sample shaders. Finally, the session wrapped up with a look ahead at what's coming for OpenGL. On Monday they had announced OpenGL 3.0, and they went on to cite some of its new features, including an sRGB framebuffer mode, API support for texture lookups in OpenGL Shading Language 1.30, conditional rendering, and floating-point color and depth formats for textures and renderbuffers.
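The vertex-then-fragment flow they charted can be mimicked in plain Python - this is a toy sketch of the programmable stages standing in for GLSL, purely to show the data flow, not real OpenGL:

```python
# Toy sketch of the programmable OpenGL pipeline stages: a vertex shader
# transforms each vertex, then a fragment shader colors each fragment.
# (Plain Python standing in for GLSL; all names here are illustrative.)

def vertex_shader(position, scale):
    # Analogous to gl_Position = transform * vertex in GLSL,
    # with a uniform scale playing the role of the transform.
    x, y, z = position
    return (x * scale, y * scale, z * scale)

def fragment_shader(depth):
    # Shade by depth: nearer fragments are brighter (grayscale 0..255).
    brightness = max(0, min(255, int(255 * (1.0 - depth))))
    return (brightness, brightness, brightness)

triangle = [(0.0, 0.5, 0.2), (-0.5, -0.5, 0.2), (0.5, -0.5, 0.2)]

# "Vertex stage": run every vertex through the vertex shader.
transformed = [vertex_shader(v, scale=2.0) for v in triangle]

# "Fragment stage": one fragment per vertex here for brevity; a real
# rasterizer would generate fragments across the triangle's whole area.
colors = [fragment_shader(z) for (_, _, z) in transformed]

print(transformed[0])  # (0.0, 1.0, 0.4)
print(colors[0])
```

The takeaway from the session, mirrored here: the fixed-function parts of the pipeline sit between these two programmable hooks, and GLSL is just the language those hooks are written in.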
All-in-all, this sounds very exciting! I'm very anxious to see how well this runs in conjunction with the FirePro line this fall!
XSI ICE is a standout and as user-friendly as Shake - Report from SIGGRAPH
Posted by Ted Isla on August 15, 2008
Summary - SIGGRAPH has been one of the best experiences of my life. For one week the entire computer graphics industry congregates to share one common interest: the evolution of computer graphics technology. The convention was more than just a tech demo; it was a social gathering of great minds. I believe SIGGRAPH was the second-largest international gathering this past week, after the Olympics.
It seems that the tech lingo of our CG community spreads widely across the globe. I found myself in conversation with some Japanese developers who were researching how to ray trace a kitchen scene with 6 lights and 10 bounces in just a matter of minutes. A fellow student volunteer from New Zealand explained how he coded his own expressions to generate particle effects. At home, I have trouble explaining the work I do to my family at Christmas. It was an enlightening experience to speak with people who could tell me how to get the right results from my HDRI map!
Of the major packages, one that stood out to me was XSI's ICE. Its multi-threaded technology delivers high-end, real-time interactive results. The best feature is its node-based workflow. Coming from an Apple Shake background, I found the system very user-friendly. It shortens production time on a project without requiring intricate lines of code.
The most underrated section of the convention was the New Tech demo located in the South Lobby. An exhibit that stood out was the Copycat Arm system, contributed by Kiyoshi Hoshino, Motomasa Tomida, and Emi Tamaki of the University of Tsukuba. Users could film their arm in front of a high-speed camera while software translated the captured motion data to a robotic arm. In other words, the mechanical arm imitated the user's movements without any pre-calibration.
I’m very fortunate to have participated in this year’s Student Volunteer Program. Not only were we able to network with peers; we were fed every day and got free stuff at the end of the week, donated by our sponsors! And in return for the number of shifts we worked, the Committee organized luncheons and hall lectures throughout the week with industry representatives from Computer Graphics World, DreamWorks, Sony Imageworks, Disney, Howey Digital, Curious Pictures, and ILM. We even received cool hats from Reality Check Studios.
It is difficult to cover all the amazing things that happened these past five days; it's been hard finding adequate words to describe the entire experience. If there is one thing from SIGGRAPH that I genuinely earned, it is the new friendships with my fellow Student Volunteers, some of whom I'll be working with someday.
CUDA desktop rigs and whisper quiet workstations - report from SIGGRAPH
Posted by Pat Howk on August 14, 2008
Wednesday I spent what time I had on the exhibition floor and saw a few things I liked. First I went by the NVIDIA booth. The coolest thing they had was the Quadro Plex 2200 D2 ($10.8k starting price). It's an external system packed with 2-4 Quadro video cards; when it's plugged into your system, the system recognizes it as one card. It also automatically adjusts the resolution and scaling if you power more than one monitor. The model I mentioned is the highest end, with 8GB of memory, 120 GB/s memory bandwidth, 1 DisplayPort, and 4 dual-link DVI outputs. I thought this thing was amazing.
The other cool thing I saw worth mentioning was AMAX. They make high-end workstations and render farms. The workstation I saw had five high-power fans that were whisper quiet! I literally had to put my ear up to the machine to hear it. More on that later tonight when I get more time to post. Attending the ILM talk in the SV booth now.
AMD Booth Tour - real-time lighting, dynamic tessellation, stereo 3D output - report from Siggraph
Posted by Pat Howk on August 13, 2008
Tuesday was the AMD/ATI booth tour! As soon as we got there we met up with Bill Shane, whose official title is "business development executive." Bill took us around to the different displays within the AMD/ATI booth and explained to Tim and me what was going on in each one. The very first thing he showed us was a workstation running one of the top-end FireGL cards, demonstrating a car demo put together by Works Zebra of Tokyo that lets you customize a car any way you want. The interesting thing about this demo was that the software was using the GPU to compute real-time lighting for the car. So no reflection maps or faked lights on the surfaces - there's no need. The FireGL was able to compute the lighting on the fly! Another big thing: they announced the FirePro line of graphics cards today. Bill confirmed that the low-end card would in fact be $99! I couldn't get a price for the midrange card, the FirePro V5700. But I do know that the low-end FirePro V3700 has 256 MB of graphics memory, 2 dual-link-enabled DVI outputs, and a "next generation GPU with 40 unified shader processors." The midrange FirePro V5700 has a next-generation GPU with 320 unified shader processors, 512 MB of memory, 2 DisplayPorts and one dual-link DVI, and HDR rendering with 8-bit, 10-bit, and 16-bit per RGB color component! Those two cards are said to be coming in the fall.
Starting with the FireGL V7600, the cards all have HD component video out, at least 512 MB of memory, and stereoscopic support! On top of that, the top-of-the-line FireGL V8650 comes with 2GB of memory!
Going back to the booth: next, Bill showed us a station where they were demonstrating their GPGPUs, which are GPUs without display outputs. That means you basically get the added bonus of a second graphics card that does nothing but computation. And the last thing I'll talk about here is a demo running on the top-of-the-line consumer Radeon card. It was an AI demo with "Froblins," a cross between a goblin and a frog. The demo showed the little guys mining gold and bringing it back to the center of town; it showed collision detection, so the Froblins won't run into one another; and it showed dynamic tessellation. Yes. The farther you zoomed out from the landscape, the fewer triangles were in the scene, hence less detail - and the further you zoomed in, the more triangles appeared in the scene and on the characters, raising the level of detail the closer you got. That, my friends, was an amazing demonstration. Truthfully, it was a bit overwhelming at the booth. There is so much that AMD/ATI is doing now that it's hard to keep up. To me these were the most interesting demos, and the ones I understood the most. I want to thank Bill Shane and AMD for doing this for us and for being very nice and professional the whole time, even though he knew he was dealing with students and newbie interviewers. It was very informative, and he even made himself available by phone or email for more information if we had more questions.
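That distance-based tessellation boils down to picking a subdivision level from camera distance. Here's a hypothetical version of that policy in Python - the thresholds and triangle counts are made up for illustration, not taken from the Froblins demo:

```python
# Distance-based level of detail: fewer triangles when the camera is far
# away, more when it is close. All numbers here are invented examples.

def tessellation_level(camera_distance, max_level=5):
    # Drop one detail level every time the distance doubles past 10 units.
    level = max_level
    d = camera_distance
    while d > 10.0 and level > 0:
        d /= 2.0
        level -= 1
    return level

def triangle_count(base_triangles, level):
    # Each subdivision level quadruples the triangle count (1 tri -> 4).
    return base_triangles * (4 ** level)

for distance in (5.0, 20.0, 80.0, 320.0):
    lvl = tessellation_level(distance)
    print(distance, lvl, triangle_count(100, lvl))
```

Run on the GPU per-patch, a rule like this is what lets the scene carry millions of triangles up close without drowning in them at a distance.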
Confucius Computer: Transforming the Future through Ancient Philosophy - Report from SIGGRAPH
Posted by Ted Isla on August 13, 2008
Wednesday - Confucius Computer was featured as one of the New Technology demonstrations at SIGGRAPH. The software is an innovative form of media computation that explores Confucian philosophy. It enables the user to learn Confucius's teachings by incorporating his philosophies into casual, everyday activities such as eating and listening to music.
The first station is chat-based. The user can engage in conversation with a virtual representation of Confucius and ask him questions or make statements. In return, he responds with words of encouragement accompanied by relevant vocabulary drawn from his philosophical teachings.
Station two demonstrates an algorithm that filters any piece of music to make it "balanced." The application filters the rhythm and scale of the song and outputs it harmonically in a "positive" Chinese pentatonic style. During the analysis, it also generates a painting corresponding to the cosmological theory of the five elements: metal, wood, water, fire, and earth. The user can then manipulate the elements in the painting to generate a different musical output.
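I don't know the exhibit's actual algorithm, but the core idea of snapping notes to a pentatonic scale is easy to sketch - here's my own illustration using a gong-mode pentatonic scale on C (C, D, E, G, A) over MIDI note numbers:

```python
# Sketch of "pentatonic filtering": snap each MIDI note to the nearest
# pitch of a Chinese pentatonic (gong) scale on C.
# (My own illustration; not the exhibit's published method.)

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees as semitones above the root

def snap_to_pentatonic(midi_note, root=60):  # root 60 = middle C
    offset = (midi_note - root) % 12
    # Pick the scale degree closest to this note's position in the octave,
    # measuring distance around the octave circle.
    nearest = min(PENTATONIC, key=lambda deg: min(abs(offset - deg),
                                                  12 - abs(offset - deg)))
    return midi_note - offset + nearest

melody = [60, 61, 63, 66, 70]          # C, C#, D#, F#, A#
print([snap_to_pentatonic(n) for n in melody])  # [60, 60, 62, 67, 69]
```

Every "dissonant" chromatic note lands on a scale tone, which is roughly what makes the filtered output sound uniformly consonant.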
Station three is about food! Here you can measure the balance of your yin and yang intake with every meal. The user inputs a recipe, and Confucius Computer reports whether each ingredient is hot, cold, or neutral according to traditional Chinese medicine.
Maya 2009 (and all Autodesk products) are adding 3D Stereoscopic tools - report from SIGGRAPH
Posted by Pat Howk on August 13, 2008
Monday, I caught the end of the Autodesk event and got to see a little bit of the new Maya 2009! The first thing I saw was the new particle system. It was really easy to recreate fluid effects, smoke, and explosions - everything in the demo was created without writing expressions. To me this was one of the best parts. They also demonstrated some real-time collision detection. But the best part was the new stereoscopic tools coming to all of Autodesk's products. I'll focus on Maya since I'm a Maya user. The headline feature is that Maya 2009 has a built-in stereo camera rig. One really cool option on this rig is real-time 3D, so you can animate and model and do everything you want to do while wearing your 3D glasses! That way you don't actually have to render your scene just to see if your stereo is working properly. The next cool thing is that the camera can actually project a red plane and a blue plane right onto the screen as a reference for the 3D: everything in front of the red plane is going to look like it's coming out at you, and everything behind the blue plane is going to sit in the background. This will also speed up your workflow by giving you a good idea of what your scene will look like before you render your first frame. As it is now, you have to build your own rig and continually render the scene just to see how far the depth goes and whether you need to tweak more. The whole Autodesk pipeline got a stereo upgrade to make stereoscopy easier to do.
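Those red and blue reference planes mark out the zero-parallax (screen) plane and the comfortable depth range around it. Here's a back-of-envelope sketch of the underlying geometry - my own illustration, not Autodesk's implementation:

```python
# Back-of-envelope stereo parallax for a parallel/converged camera pair:
# objects at the convergence distance have zero parallax (on the screen
# plane); nearer objects pop out, farther ones recede.
# (My own sketch of the geometry, not Autodesk's rig.)

def screen_parallax(interaxial, convergence_dist, object_dist):
    # Negative = in front of the screen plane ("pops out at you"),
    # positive = behind it (recedes into the background).
    return interaxial * (object_dist - convergence_dist) / object_dist

t, c = 6.5, 100.0  # 6.5-unit eye separation, converging 100 units out
print(screen_parallax(t, c, 100.0))  # 0.0 - exactly on the screen plane
print(screen_parallax(t, c, 50.0))   # -6.5 - in front, like the red plane
print(screen_parallax(t, c, 400.0))  # 4.875 - behind, like the blue plane
```

This is exactly the feedback the rig's overlay gives you interactively, instead of making you render test frames to judge the depth.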
The industry right now looks like it's getting behind these 3D/stereo technologies 100%. DreamWorks reps claim they're moving to it and that ALL of their upcoming 3D movies are going to be in stereo. These tools just make that transition a whole lot easier for pros and students alike. If you're a 3D student today, you can't afford not to be working on stereo projects in school. It doesn't matter whether you like it or not: from what I've seen, the industry is moving full steam ahead with stereoscopy.