Interview with Jon Wadelton, NUKE Product Manager

Q: What unique technologies does NUKE use?

A: NUKE has a powerful multi-channel workflow that allows you to work on multiple image streams at once in the compositing tree. This also extends to working with stereo images: multiple stereo images can flow down one pipe in the compositing tree, which reduces clutter and makes for a cleaner, more intuitive workflow.
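To make the multi-channel idea concrete, here is a minimal sketch using NUKE's Python bindings; the file path and layer names are placeholders, and exact knob behaviour can vary between versions.

```python
# A minimal sketch of the multi-channel workflow: one Read node carries every
# layer of a multi-layer EXR down a single pipe, and downstream nodes pick
# out only the streams they need.
import nuke

# One Read node; the multi-layer EXR (placeholder path) can hold beauty,
# diffuse, depth, mattes and so on, all in a single stream.
read = nuke.nodes.Read(file='/shots/forest/forest_layers.%04d.exr')

# Grade only the 'diffuse' layer; every other layer passes through untouched.
grade = nuke.nodes.Grade(channels='diffuse')
grade.setInput(0, read)

# Further down the same pipe, pull the 'depth' layer into rgba for viewing.
shuffle = nuke.nodes.Shuffle()
shuffle['in'].setValue('depth')
shuffle.setInput(0, grade)

# All layers travelled down one connection; no per-pass Read nodes needed.
print(nuke.layers(read))
```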

NUKE also has a new Deep compositing system, which uses technology developed by Weta Digital on Avatar. Traditionally, when compositing CG renders together, the 3D guys need to render what’s called a holdout in one of the images where the other image will be inserted. As an example, think of a scene in Avatar where Jake’s avatar character is running through the forest. The 3D guys need to render the forest and then insert Jake’s avatar into the forest scene. In order to do this they need to render a ‘hole’ or holdout in the forest where Jake will go. This is all fine if you never need to move Jake, but what if the director needs to put the Jake character in another spot, or use a different take? It means, of course, that you need to render the forest again with the Jake holdout in a different place. On a movie like Avatar this re-render takes hours per frame. The Deep compositing system solves this problem. Instead of rendering the forest with the holdout, the 3D guys render some extra information about the scene called Deep data. This Deep data allows us to composite the Jake character into the forest on the fly in NUKE. This means there are no extra renders and many hours saved.
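As a rough illustration of the idea (a toy, not Weta's or NUKE's actual implementation): a deep pixel is simply a list of samples with depth, colour and alpha, so inserting Jake is just a matter of merging his samples into the forest's samples by depth, with no holdout ever rendered.

```python
# Toy deep-pixel merge: each render stores (depth, colour, alpha) samples per
# pixel, so a separately rendered element can be slotted in at any depth
# after rendering, purely in the composite.
def deep_merge(*sample_lists):
    """Merge deep samples from several renders into one flat pixel value."""
    # Gather every sample from every render, then sort front to back by depth.
    samples = sorted((s for lst in sample_lists for s in lst), key=lambda s: s[0])

    colour, alpha = 0.0, 0.0
    for depth, sample_colour, sample_alpha in samples:
        # Standard front-to-back "over": nearer samples occlude farther ones.
        # (Colour here is unpremultiplied, hence the multiply by alpha.)
        colour += (1.0 - alpha) * sample_colour * sample_alpha
        alpha += (1.0 - alpha) * sample_alpha
    return colour, alpha

# Forest render for one pixel: leaves at depths 2.0 and 5.0.
forest = [(2.0, 0.1, 0.4), (5.0, 0.3, 1.0)]
# Jake rendered separately, landing between the leaves at depth 3.5.
jake = [(3.5, 0.8, 1.0)]

# Moving Jake means changing his sample depth; the forest is never re-rendered.
print(deep_merge(forest, jake))
```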

 

Q: Recently you released KATANA 1.0. When do you plan to release a new version of NUKE? Can you reveal something from the next release, such as new features?

A: Yes, we have a major new release of NUKE coming out in our second quarter next year. A major thing we’re looking at for the next release is harnessing the GPU for image processing. We will have some of our compute-intensive algorithms, such as Denoise, zDefocus, and motion estimation, running on the GPU. We’re also looking at some tight integration with a new timeline application, HIERO.

 

Q: Your partners are movie studios like Warner Bros, The Moving Picture Company and many more. How does this collaboration work?  

A: Yes, we work very closely with movie studios. In fact, most products at The Foundry actually started out their lives ‘in production’ as in-house tools at major effects houses: NUKE came from Digital Domain, KATANA from Sony Pictures Imageworks, and MARI from Weta Digital. Both MARI and KATANA continue to be co-developed, with teams working at The Foundry and at Weta and Sony respectively.

In addition to our close partners, we also work closely with other studios to help solve problems they might have during production. Sometimes it’s as simple as a studio having a problem with a shot and sending us the footage, and The Foundry image processing research team seeing if it can solve that problem. We recently did some work on Tron Legacy for just this sort of issue. We value this sort of interaction very highly, as it gives us real-world problems to solve which we then eventually fold back into our products for future releases.

 

Q: What are your plans for the future?

A: One thing that will be happening for sure is more data sharing between all our applications. For instance, set up a 3D render scene in KATANA and automatically re-create a comp for that scene in NUKE. Or, from NUKE, import the KATANA 3D scene and ‘on demand’ request a new pass or matte for a 3D element. I see this sort of data sharing as a huge time saver, especially in smaller houses.

The major thing we’re working on is future-proofing NUKE for advances in hardware. Our new ‘Blink’ framework, which you’ll see in NUKE next year, enables us to write image processing algorithms that can potentially run on any new hardware that becomes available. For instance, CPUs came out with a technology called ‘SSE’ a few years back which enabled some algorithms to run up to four times faster. Software (including NUKE) has been slow to adopt it, as it involves re-writing the algorithms and keeping a separate path for SSE versus non-SSE. Then you throw a GPU into the mix, which is different again, and now you have three paths you need to maintain. Our ‘Blink’ framework is designed to fix all that: we write the algorithm once and Blink sorts out whether it’s going to CPU, SSE or GPU.
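As a loose analogy for that "write the algorithm once" idea (purely hypothetical, not The Foundry's actual Blink API), the kernel below is written once against an abstract array module, and a small dispatcher decides which backend executes it:

```python
# Hypothetical sketch of single-source kernels with backend dispatch.
import numpy as np

def brightness_kernel(xp, image, gain):
    """The algorithm, written once against whichever array module 'xp' is passed in."""
    return xp.clip(image * gain, 0.0, 1.0)

def run_kernel(kernel, image, *args):
    """Pick a backend at runtime; the kernel itself never changes."""
    try:
        import cupy as cp                      # GPU backend, if CuPy is installed
        return kernel(cp, cp.asarray(image), *args).get()
    except ImportError:
        return kernel(np, image, *args)        # vectorised (SIMD-capable) CPU path

frame = np.random.rand(1080, 1920, 3).astype(np.float32)
print(run_kernel(brightness_kernel, frame, 1.2).shape)
```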

 

 
