AMD makes a smart move with MachStudio Pro

Posted by Tony DeYoung on April 27, 2009

Last week I went down to Hollywood to check out my first hands-on demo of MachStudio Pro. I could go on for pages about MachStudio Pro, but to cut to the chase for this post: AMD has made a great find and a smart strategic partnership. The combination of FireGL/FirePro cards and MachStudio Pro is game-changing, and leaps and bounds ahead of anything else I have seen running on desktop hardware. For CG animation, archviz, and industrial design, 3D workflows will change.

Check out the short 4-second video below. It is 99 frames of a 2.2-million-polygon scene, rendered on a FireGL V8650 (2 GB framebuffer) at 1024 × 576. It rendered in a little under 45 seconds, including real-time lighting, shadows, gels (the lighting on her face), and ambient occlusion. Be sure to watch it in HQ mode. To view the scene at the full quality at which it was rendered (i.e. without YouTube compression), download the .mp4 version.
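
For context, that works out to roughly half a second per fully lit, shadowed, occluded frame. A quick back-of-the-envelope check (my arithmetic from the figures above; the per-polygon number is a loose throughput figure, not a benchmark):

```python
# Back-of-the-envelope throughput for the clip above (assuming the
# 45-second figure covers all 99 frames).
frames = 99
total_seconds = 45           # "a little under 45 sec", so treat as an upper bound
polygons = 2_200_000

print(f"~{total_seconds / frames:.2f} s per fully shaded frame")
print(f"~{frames / total_seconds:.1f} frames rendered per second")
print(f"~{polygons * frames / total_seconds / 1e6:.0f}M polygons pushed per second")
```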

Now what’s not really apparent from this video is how it was created - the workflow. Traditional 3D workflows are model and animate, followed by lighting, materials, and fx. Once you reach the lighting stage, you do a setup, then a test render. Then you adjust. Then you do another test render, and repeat until you are out of time or until it is ‘good enough’. The same is true for materials, fx, etc. Each change, no matter how small, requires a setup, a test render, and then a final render.

MachStudio Pro running on a FireGL changes that workflow. The setup and test render are virtually simultaneous, and they happen to the whole scene, not just to a few frames. Moreover, the final render is the exact same thing as the test render (just a larger viewport). This is hard to describe; saying “real-time rendering” is just not meaningful or even accurate. So instead, let me point out some of the things that really struck me during the making of the above video clip, as well as a few other projects (some of which are featured here).
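
To make the difference concrete, here is a toy sketch of the two feedback loops (my own framing as runnable pseudocode; the timing constants are invented for illustration, not measured, and this is not StudioGPU’s architecture):

```python
# A schematic contrast of the two feedback loops, written as a runnable toy.
# The timing constants below are invented purely for illustration.

def traditional_loop(tweaks, test_render_cost=300.0, final_render_cost=3600.0):
    """Every tweak pays a test-render penalty; the final render is a separate pass."""
    return tweaks * test_render_cost + final_render_cost

def interactive_loop(tweaks, frame_cost=0.5):
    """The viewport is the renderer: each tweak re-shades the scene immediately,
    and the 'final render' is just writing out what you already see."""
    return tweaks * frame_cost

for n in (5, 20, 100):
    print(f"{n:>3} tweaks: traditional ~{traditional_loop(n) / 60:.0f} min, "
          f"interactive ~{interactive_loop(n):.1f} s")
```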

It’s like working in a 2D video editor
Scrub the timeline and watch the scene, with fully rendered FX, as it animates. It felt more like compositing in a 2D video editor than working in a 3D package.

Work in scenes, not frames
When your setup, test render, and final render are all the same thing - all simultaneous - you can work in scenes rather than being forced to work only on particular frames, and you don’t have to worry that the FX won’t be replicated in other frames, or that transitions and lighting will mismatch.

No test renders
In Maya or any other app with a renderer, when you apply lighting, you try to base it on your experience with how various settings affect the image. Experimenting with full-quality rendering is simply not practical time-wise - especially across an entire scene. So as a TD you become familiar with basic settings you know tend to work, and you stick with those. You set up the render (e.g. Mental Ray GI) and then you render. You can refine it, but even minor tweaking can be tedious, especially for complex models and many frames. In MSP I could experiment. I could change anything related to lighting or FX and watch it impact the scene immediately. There is no set-up-and-then-render; as I set up, I am rendering. This felt strange indeed - wonderful, but very strange.

Gels
I could apply gels that could focus on and/or follow a specific character or character fragments, and could animate these gels, apply soft shadows, etc.  So for example, I could apply a gel over the face of a character looking out the window.  The gel would simulate moving tree leaves casting shadows on the face in the moonlight. What made this so surreal is not only that I could change the properties of the gel and make decisions on quality simultaneously, but that I could scrub the timeline, and watch how the gel performed as the character animated.
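
A gel here behaves like what real-time engines call a gobo or light “cookie”: a texture projected from the light that tints and masks its output per shaded point, and that you can animate by moving its projection. A minimal numpy sketch of that idea (my illustration, not MSP’s actual implementation):

```python
import numpy as np

def apply_gel(light_color, gel_texture, u, v, t=0.0, scroll=(0.0, 0.0)):
    """Modulate a light's color by a projected gel texture.

    light_color: (3,) RGB intensity of the light
    gel_texture: (H, W, 3) array, e.g. a leaf-shadow pattern
    u, v:        light-space projection coords in [0, 1) for the shaded point
    t, scroll:   animate the gel (e.g. drifting tree leaves) by scrolling UVs
    """
    h, w, _ = gel_texture.shape
    # Animated, wrapped lookup -- nearest-neighbor for brevity.
    x = int(((u + scroll[0] * t) % 1.0) * w)
    y = int(((v + scroll[1] * t) % 1.0) * h)
    return light_color * gel_texture[y, x]

# Toy example: a moonlight-blue light filtered through a random "leaf" pattern.
rng = np.random.default_rng(0)
leaves = rng.random((64, 64, 3)) ** 3          # mostly dark, a few bright gaps
moonlight = np.array([0.6, 0.7, 1.0])
print(apply_gel(moonlight, leaves, u=0.3, v=0.7, t=2.0, scroll=(0.05, 0.0)))
```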

Ambient Occlusion 
I am still having a hard time believing what I saw. I could apply ambient occlusion in real time and adjust it dynamically for different objects. No set-up-and-hit-render. Just adjust and watch the impact - on the scene and animation, not just a frame.
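
I can only speculate about why this is cheap, but one plausible reading: if the occlusion term is already computed per pixel every frame, then the artist-facing “strength” dial is just a multiplier in the shading equation, so dragging it costs almost nothing. A hedged sketch of that idea (hypothetical names, not MSP’s code):

```python
import numpy as np

def shade_with_ao(albedo, occlusion, strength=1.0):
    """Blend an ambient-occlusion term into shading.

    albedo:    (..., 3) surface color
    occlusion: (...) in [0, 1]; 1 = fully open, 0 = fully occluded
    strength:  artist dial -- 0 disables AO, 1 applies it fully
    """
    ao = 1.0 - strength * (1.0 - occlusion)   # lerp(1, occlusion, strength)
    return albedo * ao[..., None]

# Because 'strength' only rescales an already-computed term, dragging the
# slider re-runs this cheap multiply, not the expensive occlusion pass.
albedo = np.ones((2, 2, 3)) * [0.8, 0.6, 0.5]
occ = np.array([[1.0, 0.4], [0.7, 0.1]])
print(shade_with_ao(albedo, occ, strength=0.5))
```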

Depth of field
This caught my attention repeatedly as I watched some of their artists develop projects for clients. I was watching them work on a scene for a Bionicles movie (a scene, not just a frame), and as it animated, I was watching depth of field effects. This is the kind of thing you see in a compositor with a 2D render, rather than in a live, fully interactive 3D environment.
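
Post-process depth of field is typically driven by the standard thin-lens circle-of-confusion formula, which is cheap enough to evaluate per pixel every frame. I don’t know MSP’s internals, but the math looks like this:

```python
def circle_of_confusion(subject_dist, focus_dist, focal_len, f_number):
    """Thin-lens circle-of-confusion diameter (same units as focal_len).

    subject_dist: distance to the point being shaded
    focus_dist:   distance the lens is focused at
    focal_len:    lens focal length (e.g. 0.05 for a 50 mm lens, in meters)
    f_number:     aperture, e.g. 2.8
    """
    aperture = focal_len / f_number                      # entrance pupil diameter
    return (aperture
            * abs(subject_dist - focus_dist) / subject_dist
            * focal_len / (focus_dist - focal_len))

# A 50 mm f/2.8 lens focused at 3 m: blur grows as objects leave the focal plane.
for d in (1.0, 2.0, 3.0, 5.0, 10.0):
    print(f"{d:>5.1f} m -> CoC {circle_of_confusion(d, 3.0, 0.05, 2.8) * 1000:.2f} mm")
```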

Subsurface scattering
The head-turner here was the real-time adjustment. Sure, I can apply Mental Ray shaders in Maya. But the procedure is always adjust, preview, render - not adjust creatively, at the speed of your hands.
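
One common real-time stand-in for full subsurface scattering - and the kind of trick that makes the effect tunable at the speed of your hands - is “wrap” lighting, where diffuse illumination bleeds past the shadow terminator. A sketch (again, my illustration; I don’t know what MSP actually uses):

```python
import numpy as np

def wrap_diffuse(n_dot_l, wrap=0.5):
    """Wrap lighting: a cheap real-time stand-in for subsurface scattering.

    n_dot_l: dot(normal, light_dir), in [-1, 1]
    wrap:    0 = standard Lambert diffuse; larger values let light 'bleed'
             past the shadow terminator, softening skin-like surfaces.
    """
    return np.maximum(0.0, (n_dot_l + wrap) / (1.0 + wrap))

# The terminator (n_dot_l = 0) goes from black to softly lit as wrap rises --
# and since it's one extra add and divide per pixel, the dial is interactive.
for w in (0.0, 0.3, 0.8):
    print(f"wrap={w}: terminator brightness {wrap_diffuse(0.0, w):.2f}")
```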

Blooms and lenses 
Not just blooms on a set of frames, but blooms that could be part of a whole scene - like a real camera lens doing the filming - and all adjustable as you worked, without the adjust-preview-render cycle. In fact, you could control everything about the camera lens (HDR lenses, by the way).
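
The classic camera-style bloom is: isolate the pixels brighter than some threshold, blur them, and add the glow back over the frame. A numpy/scipy sketch with made-up parameters (not MSP’s pipeline):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bloom(hdr_image, threshold=1.0, radius=8.0, intensity=0.6):
    """Classic bright-pass bloom on a linear HDR image of shape (H, W, 3).

    threshold: luminance above which pixels 'glow' (HDR values can exceed 1)
    radius:    blur size in pixels -- how far the glow spreads
    intensity: how strongly the blurred highlights are added back
    """
    luminance = hdr_image.mean(axis=2, keepdims=True)
    bright = np.where(luminance > threshold, hdr_image, 0.0)   # bright pass
    glow = gaussian_filter(bright, sigma=(radius, radius, 0))  # spread it
    return hdr_image + intensity * glow                        # add back

# Toy HDR frame: one hot pixel ("the sun") bleeding into its neighborhood.
frame = np.zeros((32, 32, 3)); frame[16, 16] = 50.0
out = bloom(frame, threshold=1.0, radius=3.0)
print("glow reaches a neighboring pixel:", out[16, 20].round(3))
```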

Artistic lighting freedom
I am used to the concept of ray tracing to create physically accurate lighting. The scene can look great, but from an artistic perspective that can actually be a limitation: sometimes the effects you want are not something that can be duplicated in the real world. With MSP you can “get beyond photon reality”. I could get creative: add a gleam to the eye, move the highlight higher on the hair, close the iris on the lens for the scene but let the face glow, etc. Who ever thought you could actually art direct in 3D!

Obviously I’m touching on only a few of the things I saw or tried, but these were so mind-boggling to me that I thought they were worth the long blog post.

How does this actually work? Honestly, I have little clue. I know it has something to do with GPGPU computing and great use of the FireGL/FirePro hardware. What struck me, though, was that every time I asked about a model’s polygon count, the StudioGPU guys would look at me as if to say “what kind of irrelevant question is that?”. The polygon count was essentially a non-issue; texture size and quantity were the bigger constraint.
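
That rings true when you do the arithmetic on the V8650’s 2 GB framebuffer - uncompressed textures get big fast (my rough numbers; real engines also compress textures, which stretches these budgets considerably):

```python
# How quickly uncompressed textures consume a 2 GB framebuffer
# (rough arithmetic only -- real engines compress and stream).
def texture_mb(width, height, bytes_per_pixel=4, mipmaps=True):
    base = width * height * bytes_per_pixel
    return base * (4 / 3 if mipmaps else 1) / 2**20   # full mip chain adds ~1/3

for size in (1024, 2048, 4096):
    mb = texture_mb(size, size)
    print(f"{size}x{size} RGBA8: ~{mb:.0f} MB -> ~{2048 / mb:.0f} fit in 2 GB")
```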

What I do know is that AMD made a brilliant move here. They have been progressively demonstrating that they have a great product for CAD. Now they are poised to own the DCC market, both by creating optimized drivers for leading DCC apps and by working with StudioGPU to change the very nature of the 3D workflow using the GPU.


Comments

wow... this sounds great. I saw MachStudio at the ATI booth at last year’s Siggraph, but from what you describe, not only must that have been a very early release, it has also grown in leaps and bounds! Some of these features were working (and what I saw was equally as amazing as described here), some are new or maybe I just didn’t catch them, but I can’t wait to see this current version. What also caught me off-guard was the instant feedback on lighting and rendering choices - contrary to traditional software where you preview and then render, MachStudio seems to always be rendering, and when you want to “render” your frames you simply tell it to write out the results as a file on disk. It was impressive.
