Compositor phase one almost completed

[Image: thumbnail of the tile-based compositor]

Phase one of the Compositor redesign project is almost complete. Phase one has made a lot of people smile, even though the compositor is still not using OpenCL. Phase two will bring OpenCL-accelerated compositing to Blender! The biggest challenge at the moment is that funding for phase two is not yet secured.

The old compositor only showed the final image after the whole image had been calculated. A user working in the compositor had to wait until the calculation was finished; only when the final image appeared could you see what effect a change had and react to it. This can take a long time, even when just making tiny adjustments to settings.

Nowadays most PCs have multi-core CPUs, and in the near future quad-core CPUs will be the standard. The old compositor only uses one CPU core to calculate a node. Another node can be calculated on another core, but due to the dependencies between nodes, most of the time only one or two cores are actually used.
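
As a rough illustration (a standalone sketch, not Blender's actual scheduler code), consider node-level scheduling on a typical linear node chain: a node can only run once all of its inputs are finished, so no matter how many cores are available, only one node per pass ever becomes ready.

```cpp
// Minimal sketch of node-level scheduling as described above.
// The node names and graph are made up for illustration.
#include <cstdio>
#include <vector>

struct Node {
    const char *name;
    std::vector<int> inputs;  // indices of nodes this node depends on
    bool done;
};

int main() {
    // A typical linear chain: Image -> Blur -> Color Balance -> Composite.
    std::vector<Node> graph = {
        {"Image", {}, false},
        {"Blur", {0}, false},
        {"Color Balance", {1}, false},
        {"Composite", {2}, false},
    };

    // Each pass gathers every node whose inputs are all finished; only those
    // could run in parallel.
    int pass = 0;
    for (bool progress = true; progress;) {
        progress = false;
        std::vector<int> ready;
        for (int i = 0; i < (int)graph.size(); i++) {
            if (graph[i].done) continue;
            bool inputs_done = true;
            for (int dep : graph[i].inputs)
                if (!graph[dep].done) inputs_done = false;
            if (inputs_done) ready.push_back(i);
        }
        if (!ready.empty()) {
            printf("pass %d: %zu node(s) can run in parallel\n", ++pass, ready.size());
            for (int i : ready) graph[i].done = true;  // "execute" them
            progress = true;
        }
    }
    // On this chain every pass runs exactly one node, so extra cores gain nothing.
    return 0;
}
```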

In the new compositor, user interaction has been reworked. The compositor always displays results to the user as fast as possible: even if only a small area of the image has been calculated, that part becomes visible. The user can then already make a decision and start modifying the node set-up. The user can even point out a location on the image where the compositor should start its calculation.
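
A minimal sketch of how such a priority could work, using made-up names rather than Blender's real API: tiles are sorted by their distance to the point the user picked, so the work queue delivers the most interesting part of the image first.

```cpp
// Sketch: order image tiles so the ones closest to a user-chosen point
// are calculated and shown first. Sizes and names are assumptions.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Tile { int x, y, size; };

static long long dist2(const Tile &t, int px, int py) {
    long long dx = (t.x + t.size / 2) - px;
    long long dy = (t.y + t.size / 2) - py;
    return dx * dx + dy * dy;
}

int main() {
    const int width = 1920, height = 1080, tile_size = 256;
    const int focus_x = 960, focus_y = 540;  // point the user clicked on

    // Split the image into tiles.
    std::vector<Tile> tiles;
    for (int y = 0; y < height; y += tile_size)
        for (int x = 0; x < width; x += tile_size)
            tiles.push_back({x, y, tile_size});

    // Tiles nearest to the focus point go to the front of the work queue,
    // so the area the user cares about becomes visible almost immediately.
    std::sort(tiles.begin(), tiles.end(), [&](const Tile &a, const Tile &b) {
        return dist2(a, focus_x, focus_y) < dist2(b, focus_x, focus_y);
    });

    for (const Tile &t : tiles)
        printf("schedule tile at (%d, %d)\n", t.x, t.y);
    return 0;
}
```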

The second idea is to make better use of the PC hardware. Calculations are scheduled across multi-core CPUs, and in the future heavy calculations will also be scheduled on GPUs.
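
The sketch below shows the general idea under simple assumptions (a plain work queue and one worker thread per CPU core; none of these names come from the real compositor). Tile jobs are pulled from a shared queue by as many workers as the CPU reports, and a GPU device could drain the same queue in a later phase.

```cpp
// Sketch: a shared queue of tile jobs drained by one worker per CPU core.
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

struct TileJob { int index; };

int main() {
    // Fill the queue with tile jobs (the ordering is decided elsewhere).
    std::vector<TileJob> queue;
    for (int i = 0; i < 64; i++) queue.push_back({i});

    std::mutex queue_mutex;
    auto worker = [&](unsigned id) {
        for (;;) {
            TileJob job{};
            {
                std::lock_guard<std::mutex> lock(queue_mutex);
                if (queue.empty()) return;  // nothing left to do
                job = queue.back();
                queue.pop_back();
            }
            // Here the node operations for this tile would run, after which
            // the finished pixels are handed to the viewer right away.
            printf("core %u finished tile %d\n", id, job.index);
        }
    };

    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 2;
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < cores; i++) workers.emplace_back(worker, i);
    for (auto &w : workers) w.join();
    return 0;
}
```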

[Video: demonstration of the new compositor]

The video shows a model by Dolf Veenvliet inside the new compositor, running on a 2.2 GHz dual-core laptop.

The ideas are a success. Every time we show the new compositor we get a very positive response. Even large-resolution composites (8K or 12K) are fun to play with, and the results and the changes in workflow are extraordinary on a system with two quad-core Xeons.

The project is not over. The goal of the next phase is to use GPUs to accelerate the calculations even more, even supporting multiple graphics cards! The power of modern graphics cards is huge: in our tests, an image-based depth of field with extreme settings (f-stop at 0.1) took only 20 seconds to calculate on an HD image.

The funding of this project is still ongoing. We need another $3000 to finish the next phase. If you want to contribute, please press the donate button; you won't regret it.
