Match Moving (Camera Tracking) Tutorial for Blender Using Voodoo

As a quick intro, this is the sort of thing we'll be showing you how to make: a virtual element within real camera footage. We'll show you how to get the virtual element to move so that it matches up with the original camera movement.

Introductory Disclaimer

As with all our tutorials, we do not pretend to know what we are doing. Our tutorials are based on our own experiences and a lot of trial and error. We are not, by any stretch of the imagination, experts. We may get things wrong… often. If you find anything incorrect in this tutorial (or anything else on our website) then please let us know.

Also, this tutorial is for Windows since that was the system we were using when we figured all this stuff out. We will also assume that you have a working Python interpreter installed in your system path (as all Blenderheads should). This is quite an advanced tutorial so you should already be fairly adept with Blender.

Thanks,

John and Scott.

About Match Moving

One of the main reasons that we got into the idea of making a film was the fun stuff that we could do with Blender. Blender is a very capable 3D modelling and animation suite. However, while we really liked the idea of making an animated film and felt that Blender was fully equipped to let us do so, we did not really think we were up to it (due to a lack of talent and experience). We decided instead that it would be easier to include Blender graphics in a live action film. This process gets used a lot these days, and there are three main ways of mixing real and virtual elements in a scene.

The first way (and arguably the simplest) is to use a matte painting. This is when you cover part of the frame with a picture or other video source. Matte paintings are used when the camera is locked in a fixed position. This tends to limit what you can do with a shot, because the level of interaction between the real and virtual elements cannot be too complex.

The second way is to use some sort of 'green screen' technique (sometimes called chroma key). This is when you replace what is behind the real elements of a shot. It involves filming the real elements in front of a screen of a single uniform colour that does not appear in the real elements (high luminance green or blue are popular choices). This colour can then be replaced with whatever background the film makers would like to appear there. The camera shots are still limited by how complicated you want your green screen setup to be. For example, if you want to track all the way around a real element, you must have a full 360 degrees of green screen. Also, if you want the camera to be moving at all, you will need to track its movements so that the background can be matched to them. Usually this involves drawing dots at known distances on the green screen.

The last type may have a name, but since we don't know what it is, we're going to generalise and call it CGI compositing effects. This is when you film something without any tricks and then paste animations on top of the film. It is used frequently in films; a particularly good example is “Who Framed Roger Rabbit?” While the drawn animations did not literally interact with the live action elements, events within the live action shooting can be set up so that animations drawn on top of the film look like they caused those events. For example, a door can be filmed so that it looks like it opened by itself, and a cartoon man can then be drawn in to make it look like he was the one who opened it. As with green screen, if you plan on moving the camera during such a shot, you will need some way to track it.

This last technique is the one we plan to use in our film and is the one we will attempt to demonstrate in this tutorial. We're going to use a fairly simple example to demonstrate the technique as most of the things we are planning do not involve especially complex interactions.

Technique

The largest problem we face is tracking the camera's movements. If the camera is stationary, then it is fairly trivial to put CGI elements into a shot (in terms of technology, at least). While there may be one or two shots in our film that use this technique, most will involve a moving camera. The best results can be achieved by limiting the camera's movements to simple panning; that is, the camera is on a tripod and does not change location, just where it is pointing. If you wish to use a completely stationary camera, you may still find other aspects of this tutorial useful.

We made a simple panning shot of a desk in John's bedroom, leaving a nice space on the desk where we were planning on adding a virtual object. If you want to follow along with the tutorial using this same footage, you can download it from us.

The first thing we did when we got the footage from the camera was to convert it to a series of Targa (.tga) image files (one for each frame).

A common mistake at this stage is not knowing the image size, aspect ratio, or frame rate of the shots that were taken. Make sure that you know these details, as they will be needed later.
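If you're unsure of the frame size, one quick way to check it is with a few lines of Python, assuming you have the Pillow imaging library installed (it isn't otherwise needed for this tutorial, and the filename here is just our own example):

# Quick check of the frame size, assuming the Pillow imaging library
# is installed. Adjust the filename to match your own first frame.
from PIL import Image

im = Image.open("Sequence 01001.tga")
print(im.size)  # (width, height) in pixels, e.g. (720, 576)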

Adobe Premiere Interpret Footage Window

Incidentally, we used Adobe Premiere Pro CS3 to convert our MPEG camera footage to the required series of Targa files. After importing it, we told Premiere to interpret the pixel ratio as PAL Widescreen (by right clicking on the file in the project window and selecting “Interpret Footage…”) because that matched our MPEG file. We then put it into a sequence and changed the movie export settings to Targa before exporting it. We used the default name of “Sequence 01.tga”, which produced a series of files with names like “Sequence 01001.tga” and “Sequence 01002.tga”.

Adobe Premiere Export Movie Window

We need to set up the Targa files so that they will work with Blender later on. This mainly involves making copies of the Targa files and naming them correctly. To deal with that, we wrote a little Python program that copies all the Targa files into two subdirectories and renames them appropriately. The code is shown below and can also be downloaded as a file:

import os
import shutil

# Names of the two directories we will create.
backbufdir = "backbuf"
backgrounddir = "background"

# Collect every Targa file in the current directory, sorted so the
# frames stay in order.
targas = sorted(f for f in os.listdir(".")
                if f.endswith(".tga") and os.path.isfile(f))

# Remove any previous output directories, then recreate them empty.
for d in (backbufdir, backgrounddir):
    if os.path.isdir(d):
        shutil.rmtree(d)
    os.mkdir(d)

# Copy each frame into both directories. The "background" copies keep
# their ".tga" extension (these are for Voodoo and for Blender's
# background image); the "backbuf" copies drop the extension, which is
# what Blender's backbuf field expects.
for i, t in enumerate(targas, start=1):
    name = "x" + str(i).zfill(4)  # x0001, x0002, ...
    shutil.copy(t, os.path.join(backgrounddir, name + ".tga"))
    shutil.copy(t, os.path.join(backbufdir, name))

You will need a working Python interpreter installed on your system (see www.python.org) to use this code. Just save it to a file called something like “TargaProcessor.py”. In Windows, you then just copy the file into the same directory as your original Targa files and double-click it. This could take quite a while. You should now have two subdirectories (called “background” and “backbuf”) filled with files with names like “x0001.tga” and “x0001” respectively. When each directory has the same number of files as the number of original Targa files, you can continue.
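If you'd rather not count the files by hand, a small check like this (our own addition, not part of the script above) prints all three counts when run from the same directory:

# Sanity check: both output directories should contain one file per
# original Targa frame.
import os

n_orig = len([f for f in os.listdir(".") if f.endswith(".tga")])
print(n_orig, len(os.listdir("background")), len(os.listdir("backbuf")))
# All three numbers should be the same before you continue.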

two created directories in Windows Explorer

The next step is to take the Targa files in the “background” subdirectory and put them into a Camera Motion Tracking software package. Although there is a discontinued program called “Icarus” that is somewhat popular in the Blender community, we decided to use one called “Voodoo” because it seems easier to use and is still under development (which means it will continue to improve). Both programs are free to use, but Icarus can only be used for educational purposes. Voodoo can at least be used to make movies you might plan on making money from. Sadly, neither program is open source, but we live in hope.

You can get Voodoo from http://www.digilab.uni-hannover.de/download.html

Load up Voodoo and choose “File → Load … → Sequence …”. Click “Browse” and select the first Targa file from the “background” subdirectory. Our files weren't interlaced and we took the shot from a tripod, so we picked those options.

Voodoo File Selection

It is also prudent to enter some information about the camera you shot the footage with, so choose “File → Load … → Initial Camera …”. All you really need to worry about is changing the “Type:”. You can of course do more than that to improve the results, but this was good enough for us. We selected “PAL 16:9 anamorphic (DV)” as this was the closest match to our video footage.

All you need to do now is click the “Track” button at the bottom right of the Voodoo window. You should see a bunch of little crosses appear on the frame as Voodoo tries to find points in the image to track.

Voodoo Camera Tracker Working

Voodoo works by finding significant points in the image and tracking them from frame to frame. By doing this for multiple points, Voodoo can reverse engineer the shot to figure out what movements the camera made. You might find that Voodoo asks if you would like to refine the results; we suggest that you do. Voodoo may not seem to be doing anything after this, but if you look at its command prompt window you will still see a little activity. This residual error correction can take a very long time, so be prepared to wait a while.

Once it eventually completes (the command prompt window will include the magic word “FinalEstimation”), we can export the information about the camera so that it can be used in Blender.

Final Estimation Voodoo Command Prompt Window

Click “File → Save … → Blender Python Script …” and save the information in a file somewhere (we called ours “cammotion.py”). This will be a Python file that we can use in Blender. Make sure you select “Export all” in the next window. Our own file is available to download.

Export All Points from Voodoo

You should now be able to quit Voodoo tracker and load up Blender.

We're going to assume you know Blender a bit, because this is quite an advanced tutorial anyway. When you load up Blender, make sure you delete everything in the scene by selecting everything (hit the “A” key until everything is highlighted in pink) and then hitting the “X” key.

Blender Select All and Erase

Once the scene is empty, change to Blender's Text Editor window and open the “cammotion.py” file (or whatever you called it). Then run it, and your 3D View should now contain a camera and a bunch of dots. Hit “NUMPAD 0” to see the camera's viewpoint.

Blender Text Editor Running Python Script

Blender Viewing Dots Through the Camera

Press “F10” to get to the Scene panel and change the start frame in the animation tab to 1. Find out how many Targa files your footage produced to get the frame number for the end. In our case, we had 701 Targa files, so our end frame was 701. Change the Buttons Window's current frame to 1.

Blender Animation Tab Settings

You can now see the tracking of the camera in Blender. In the 3D View window, hit “ALT+A” to play the animation from the camera's viewpoint (hit “ESC” to stop it). You should see that the little dots match the green crosses that were seen in the Voodoo tracker, and the virtual camera movement should match the original camera quite well. The dots are not in their correct 3D positions, however; they only look that way when viewed through the camera. If you change to a different view you will see that the dots are probably in a surprising position.
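Incidentally, if you prefer to set the frame range by script, here is a minimal sketch for the Blender 2.4x Python API (the version this tutorial was written against); run it from Blender's Text Editor and substitute your own frame count:

# Minimal sketch for the Blender 2.4x Python API: set the render
# frame range to match the tracked footage.
import Blender

scn = Blender.Scene.GetCurrent()
rend = scn.getRenderingContext()
rend.startFrame(1)    # first frame of the footage
rend.endFrame(701)    # we had 701 Targa files; use your own count
Blender.Redraw()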

To make things easier still, we need to set up Blender a little further. We want the virtual camera to match the dimensions of the original and to show a background image that will help us sync things up. Still in the Scene buttons panel, change the SizeX and SizeY in the Format tab to match the original Targa files. Our Targa files were 720 pixels wide and 576 pixels high. However, the original MPEG video had a 16:9 aspect ratio, and a little bit of maths shows that 720:576 is not a 16:9 ratio:

720 / 16 = 45
576 /  9 = 64

The two results would be the same if the ratios matched. However, we can adjust the aspect ratio values of the Blender camera to correct this so that we end up with a 16:9 output. We simply set AspX and AspY to the two results above, swapped: AspX takes the result of the second equation (64), because the width needs to be stretched by the factor the height implies, and AspY takes the result of the first (45). In effect each pixel is rendered 64:45 wide-to-tall, so the 720 pixel width behaves like 720 × 64 ÷ 45 = 1024 pixels, and 1024:576 is exactly 16:9.
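If you want to convince yourself of the arithmetic, this little check (our own addition) shows that the stretched frame really is 16:9:

# Check that AspX=64, AspY=45 turns a 720x576 frame into 16:9.
width_px, height_px = 720, 576
asp_x, asp_y = 64.0, 45.0

effective_width = width_px * asp_x / asp_y
print(effective_width)              # 1024.0
print(effective_width / height_px)  # 1.777... which is 16/9
print(16.0 / 9.0)                   # 1.777...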

Blender Resolution Settings

While still viewing from the camera, bring up the Background Image window (“View → Background Image…”). Click “Use Background Image” then click “Load”. Find and select the first of the Targa files in the “background” subdirectory (“x0001.tga” in our case). Now click “Sequence” and then “Auto Refresh”. Still in the Background Image window, change “StartFr” to 1 and change “Frames” to the number of Targa files (701 in our case). Again, you can check how it worked by hitting “ALT+A” from within the 3D View window (Hit “ESC” to stop it).

Blender View Background Image Menu Option

Blender Background Image Window

If you find that the dots do not perfectly match the background image, you may be using incorrect values for AspX and AspY. This can sometimes occur if you chose the wrong initial camera type in Voodoo. To correct it, you'll need to fiddle about with these two values until things line up better.

At this point you should be able to add your virtual elements and match them up with the original scene by checking the view through the camera. Remember that the dots in the scene are not in their correct 3D positions, but they will show the rough focal distance of the original camera. So, elements that should be in focus in your shot should probably be placed at about the same distance from the virtual camera as the dots. You may have to play about with the camera a bit to match the lens better; you can do this by selecting the camera and hitting “F9”. It's important to set your objects at a good distance from the camera if you want to get a good perspective on them. This is largely a trial and error process.
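If you find it easier to nudge the lens numerically rather than through the panel, here is a sketch for the 2.4x Python API; note that the camera data-block name “Camera” is our assumption, since Voodoo's export script may name it differently (check in the F9 panel):

# Sketch for the Blender 2.4x API: tweak the focal length of the
# tracked camera. The data-block name "Camera" is an assumption.
import Blender

cam = Blender.Camera.Get("Camera")
print(cam.lens)   # current focal length in millimetres
cam.lens = 35.0   # trial and error until the perspective matches
Blender.Redraw()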

Once you have your scene all set up, you'll need to tell Blender to use the original footage for the background during the render (you may not always want to do this, but it's a nice quick way for this test). Go to the Scene panel in the Buttons window again (“F10”). In the Output tab, change the “backbuf” entry to the first file in the “backbuf” subdirectory (in our case this was “x0001”). To make sure that Blender cycles through all the images, we can use “#” to mean the current frame. In our case, this means replacing “x0001” with “x#”. Then make sure you click the button next to “backbuf” so that a tick appears in it.

Blender Backbuf Output Tab

Select where you want Blender to output the video by changing the “/tmp\” in the field above it to the desired directory. You can now select the output format you want for the video in the Format tab of the Scene panel (“F10”). We chose FFMpeg with an MPEG 2 output at 100 quality. We kept our frame rate at 25 fps to match our original footage. Then just hit the “ANIM” button to produce your render. It will be saved in the directory you specified.

Thoughts

While this tutorial has focussed on match moving alone, there are many other things you can do to improve your images. For example, you might include shadows and lighting that match the real elements of the shot. If there are reflective surfaces in the original shot, you might want your virtual elements to appear in them. The main thing to consider is how the object would affect the real elements if it were also real. We may try to give a tutorial on these issues at a later date. As an example, here is the same shot shown at the start of the tutorial with the addition of shadow and reflection: http://www.youtube.com/v/hKIpjz0n2uw

Problems

The first potential problem we can see with our method for match moving is that the magic symbol “#” which we use for the backbuf always makes a 4 digit number. This means that our whole tutorial is based on the idea that your shot will not be more than 9999 frames. In PAL (25 frames per second), that is about six and a half minutes of footage (9999 ÷ 25 ≈ 400 seconds), which should be plenty for a single shot. In addition, there are other, more advanced ways of getting the background into the final render (such as Blender's very powerful compositor).

The second issue (which you may have realised for yourselves) is that there were no moving elements in our original shot. Match moving largely relies on the idea that most elements in the shot are stationary and that it is mainly the camera that is moving. You can still include moving things in the shot (like actors and such), but you will probably get results that are not quite as good. Also, if you choose not to use a tripod shot and use a free moving camera, you may find the results will be way off. Some of the tests we did with this weren't very good. Our recommendation for good results is to limit the number of active elements in the original shot and try to stick to tripod-based panning.

This tutorial has not covered shadows, reflections, lighting, or what to do when real elements move in front of the virtual elements. We hope to cover all of these in a later tutorial. We hope you've enjoyed this tutorial, and we fully welcome feedback and improvements.
We have no idea how half this stuff works, but this is how we did it. If anyone knows of a better way, please let us know. We love you all, John and Scott.


Discussion

Carlos Carrasco, 2008/06/16:
Quote: (In the Output tab, change the “backbuf” entry to the first file in the “backbuf” subdirectory (in our case this was “x0001”). To make sure that Blender cycles through all the images, we can use “#” to mean the current frame. In our case, this means replacing “x0001” with “x#”.) I do that and I only see a black background; then I erase the “#” so it is “x0001” and it shows only the first frame. Is there another way to put the original animation in the background using Blender? Please let me know. Sorry for my bad English.
rob h, 2008/06/21:
(Replying to the quoted question above.) You can use the compositor and render layers to add the background; if you want shadows and/or reflections, this is the only way to do it. This tutorial is easy enough to follow on Linux too: just substitute FFmpeg (or even Blender's sequence editor) to convert the images. http://wiki.blender.org/index.php/Manual/Compositing
R.J.K., 2008/09/18:
I click on the download link, it goes to a black screen and says no movie. Is there any way I can get the download?
courtney parker, 2009/02/19:
When I animate this, the virtual object is no longer in the shot... but when I'm in Blender it is... any solutions?
adam king, 2009/03/01:
I have the exact same problem as courtney parker: I add a mesh, and nothing seems to appear when I render... I don't know what could possibly be going wrong here. Any ideas, please. Great tutorial as well, by the way.
Peter, 2009/04/29:
Hello, do you guys have a .blend file for the last shown video with shadow and reflections? I have searched nearly the whole web for a clue how to do this, but I haven't managed to find tutorials of this kind. You guys could be the ones to give this knowledge to me. I would be pleased to get this information. Best regards, Peter
John Urquhart Ferguson, 2009/04/29:
A lot of people seem to be asking how we did the second video. We had planned to expand this tutorial to cover this (and still do), but our University schedule has been very demanding. Sadly we have lost the original blend file and I've yet to have time to make another. So, here is a quick summary of how we did it. To create the shadow, make a plane that will match where the table is. Set its material shader to "OnlyShadow." You'll need to add a spotlight into your scene and make sure the cube is casting shadows to see the effect. Obviously, make sure the light matches the original footage. To create the reflection, just mirror the object on the correct axis and save it as a new object mesh. Then move the new object to a different layer. Now add a colour filter to that layer in the node editor so that it matches the glass colour. I'm sure it's a little more complex than that, but hopefully you get the gist of it. I don't recall all the ins and outs. We promise we'll get back to this some day when our lives have calmed down a little.
Peter, 2009/04/29:
Nice, thanks. I thought the reflection was somehow made by actually reflecting something. But OK. Maybe something like OnlyShadow will be available as OnlyReflection in a newer Blender version above 2.48a (hopefully) :D Your way works in this scene, but not in a scene where you need to actually reflect something in a window, so that the reflection angle is 45 degrees or something like that. Thank you anyway. :)
John Urquhart Ferguson, 2009/04/29:
I think you can use the technique from any angle. You just need to make sure the object's origin and axes match the reflection surface. A guy called Rizzla870101 managed to do something like what you describe, but he had to use Yaf(a)Ray: "You simply create a plane and place it under the object you want to reflect. Apply a material to the plane, then enter the YafRay GUI. In the material menu, select the material you assigned to the plane, drag the transparency slider to its max value, and increase the reflection value as well (0.3-0.5). Click on the 'Fresnel' button and increase 'IOR' to '2' or '3'. Now you should have a 'real' reflection." He has a more general tutorial on his blog on using Yaf(a)Ray with Blender too: http://hexagonical.blogspot.com/
Peter, 2009/04/29:
Thank you for the tip. I'm not really into making the reflection manually; I would want an automatic calculation of the reflection. So I'll try YafRay. I will come back when I get some results...
Robert hamilton, 2009/11/18:
Thank you for sharing your excellent tutorial; it's difficult to find much info on how to use Voodoo. I have a serious question about your discussion of the Voodoo license. In your tutorial you state: "Voodoo can at least be used to make movies you might plan on making money from." The problem is that when I boot up V 0.9.5 beta, I see at the bottom of the window the following: "Please first load a sequence and then setup an initial camera - ONLY FOR NON-COMMERCIAL USE." Also, in the 'about' pulldown menu it reads: "This software is developed for research and not for commercial purpose." On the other hand, the readme states: "This non-commercial software tool is developed for research purpose at the Laboratorium für Informationstechnologie, University of Hannover. Permission is granted to any individual or institution to use, copy, and distribute this software, provided that this complete copyright and permission notice is maintained, intact, in all copies and supporting documentation." Regarding filmmakers, this last statement might well be interpreted as meaning "use any way you want." I'm making a micro-budget, indie feature, and can't afford to buy matchmoving software at the moment. So, I was hoping to use Voodoo. But I don't want to invest a bunch of time only to find out later that Voodoo actually isn't legal for use in commercial projects. Actually, I can't find any real Voodoo license -- only the confusing, somewhat ambiguous verbiage I've quoted above. Perhaps they are walking a thin line, inasmuch as they have licensed their code to a commercial product called VooCat. Anything that might clarify this issue would be most welcome!
Robert hamilton, 2009/11/18:
Found the answer myself: According to Kai Cordes, Lead Programmer, " you can use it. . " (for commercial purposes.) http://therian.lfi-main.uni-hannover.de/digilabforum/viewtopic.php?t=676
John Urquhart Ferguson, 2009/11/18:
Thank you for this information, Robert. I believe when we first wrote this tutorial Voodoo was indeed allowed for commercial purposes. But it would seem to have changed. The link that you gave is from around that time and I would suggest it is now out of date. From what I can now gather, Voodoo can no longer be used to make commercial films. I am rather sad to hear this.