 <object width="425" height="350"><param name="movie" value="http://www.youtube.com/v/bpfjyWHczjM"></param><embed src="http://www.youtube.com/v/bpfjyWHczjM" type="application/x-shockwave-flash" width="425" height="350"></embed></object> <object width="425" height="350"><param name="movie" value="http://www.youtube.com/v/bpfjyWHczjM"></param><embed src="http://www.youtube.com/v/bpfjyWHczjM" type="application/x-shockwave-flash" width="425" height="350"></embed></object>
 </html> </html>
 +
 +
 +
 +
  
===== Introductory Disclaimer =====
As with all our tutorials, we do not pretend to know what we are doing. These instructions are based on our own experiences and a lot of trial and error. We are not, by any stretch of the imagination, experts. We may get things wrong... often. If you find anything incorrect in this tutorial (or anything else on our website) then please [[john@fictionality.co.uk|let us know]].
  
Also, this tutorial is for Windows since that was the system we were using when we figured all this stuff out. We will also assume that you have version 2.46 of Blender. You can download it from [[http://download.blender.org/release/Blender2.46|blender.org]]. Thanks to amdbcg and rubicon for their suggestions.
  
Thanks,
  
John and Scott.
  
  
===== Technique =====
The largest problem we face is tracking the camera's movements. If the camera is stationary, then it is fairly trivial to put CGI elements into a shot (in terms of technology, at least). While there may be one or two shots in our film that use this technique, most will involve a moving camera. However, the best shots can be achieved by limiting the camera's movements to simple panning. That is, the camera is on a tripod and does not change location, just where it is pointing. If you wish to use a completely stationary camera, you may still find other aspects of this tutorial useful.
  
We made a simple panning shot of a desk in John's bedroom, leaving a nice space on the desk where we were planning on adding a virtual object. If you want to follow along with the tutorial using this same footage, you can [[http://www.fictionality.co.uk/downloads/CamTrackSource.mpg|download it from us]].
  
The first thing we did when we got the footage from the camera was to convert it to a series of Targa (.tga) image files (one for each frame). This is so that it can be used as background images in Blender and so that it can be processed for the camera motion tracking.
  
A common mistake at this stage is not knowing the image size of the shots that were taken, their aspect ratio, their frame rate, or the focal length of the lens. Make sure you know these details when you create your source material, as they will improve the results.
  
Blender can be used to create the series of Targa files quite easily. Simply open Blender and delete everything in the “3D View” window by selecting everything (hit the “A” key until everything is highlighted in pink) and hitting the “X” key.
  
{{tutorials:blenderdelete.png|Blender Delete Selection}}
  
Once everything is deleted, you will need to add a camera back so that we can render out the Targa files. Just hit the space bar while your mouse is over the 3D View window and select “Add -> Camera.”
  
{{tutorials:blenderaddcamera.png|Blender Add Camera}}
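
If you prefer to script this step, here is a minimal sketch of the same clean-up, assuming Blender 2.46's Python API (run it from the Text Editor with “ALT+P”). It is our own illustration rather than part of the workflow above:

<code python>
import Blender
from Blender import Scene, Object, Camera

scn = Scene.GetCurrent()

# Unlink every object in the scene (the same as "A", "A", "X" in the GUI).
for ob in scn.getChildren():
    scn.unlink(ob)

# Add a fresh perspective camera and make it the scene's active camera.
cam = Camera.New('persp')
ob = Object.New('Camera')
ob.link(cam)
scn.link(ob)
scn.setCurrentCamera(ob)

Blender.Redraw()
</code>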
  
Go into the “Video Sequence Editor” window. Then select “Add -> Movie.”
  
{{tutorials:blenderselectvse.png|Blender Select Video Sequence Editor Window}}
  
{{tutorials:blendervseaddmovie.png|Blender Video Sequence Editor Add movie}}
  
Choose the source video file. Once you click “Select Movie,” you will be in grab mode with the movie sequence. Move it so that it is on track 1 at frame 1. The number of frames in the movie will be the first thing on its label. To make sure we render the whole sequence to Targa files, we must set the animation length to match.
  
{{tutorials:blendervsepositionmovie.png?600|Blender Video Sequence Editor Position Movie Strip}}
  
Press “F10” to get to the scene panel. Make sure that the start frame is set to “1” and that the end frame is set to the number of frames in your movie. In our case, this was 725. Make sure you select the “Do Sequence” option and set the output options as described below.
  
{{tutorials:blenderinitialframesettings.png|Blender Frame Settings}}
  
In our case, we want to set up Blender to match the same dimensions as the original source footage. Still in the Scene Buttons Panel, change the SizeX and SizeY of the Format tab to match the original video file. Our source file was 1440 pixels wide and 1080 pixels high. However, the original MPEG video had a 16:9 display ratio. A little bit of maths shows us that 1440:1080 is not a 16:9 ratio:
  
<code>
1440 / 16 = 90
1080 /  9 = 120
</code>
  
The two results would be the same if the ratios matched. However, we can simply adjust the aspect ratio of the Blender output to correct this so that we end up with a 16:9 output. We just need to set the AspX and AspY values to match the results of the above equations. For AspX, we use the result of the second equation, because the width needs to be compensated by the ratio for the height. Thus, we set AspX to 120. Similarly, we set AspY to the result of the first equation, as the height needs to be compensated by the ratio for the width. Thus, we set AspY to 90. Incidentally, this will not affect the aspect ratio of the image files, but since we plan on outputting a video file later, we may as well set it up now. Also remember to set the output format to Targa and set the quality to 100.
  
{{tutorials:blenderinitialrendersettings.png|Blender Initial Render Settings}}
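
To generalise that arithmetic, here is a quick plain-Python sketch (the function name is ours, for illustration only) that returns the AspX and AspY values for any source size and target display ratio:

<code python>
# Pixel aspect values that make width x height display as ratio_w:ratio_h.
def pixel_aspect(width, height, ratio_w=16, ratio_h=9):
    aspx = height / float(ratio_h)   # 1080 / 9  = 120.0
    aspy = width / float(ratio_w)    # 1440 / 16 = 90.0
    return aspx, aspy

print(pixel_aspect(1440, 1080))   # (120.0, 90.0) -> set AspX=120, AspY=90
</code>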
  
You can select the output for the Targa files by changing the “/tmp\” field in the output tab of the scene panel to the desired directory. Then just hit the “ANIM” button to produce the Targa files. They will be saved in the directory you specified.
  
{{tutorials:blendersetrenderdirectory.png|Blender Set Render Directory}}
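
The same settings can also be applied from a script. This is only a rough sketch, assuming Blender 2.46's Python render API (the output path is a hypothetical example); we set everything through the GUI ourselves:

<code python>
import Blender
from Blender.Scene import Render

scn = Blender.Scene.GetCurrent()
rctx = scn.getRenderingContext()

rctx.startFrame(1)
rctx.endFrame(725)                  # frames in our source movie
rctx.imageSizeX(1440)               # match the source footage
rctx.imageSizeY(1080)
rctx.aspectRatioX(120)              # 1080 / 9
rctx.aspectRatioY(90)               # 1440 / 16
rctx.setImageType(Render.TARGA)
rctx.setRenderPath("C:\\targas\\")  # hypothetical output directory

# Tick "Do Sequence" in the Scene panel (F10) by hand, then
# rctx.renderAnim() does the same as pressing the ANIM button.
</code>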
  
-backbufdir = "backbuf" +We have noticed that Blender will occasionally output blank frames at the end of a clip instead of the ones from the source video. This appears to me due to some sort of memory issue. We've found the best way to deal with it is to change the start frame to the number of the first blank Targa file and then re-render the blank frames. If it continues to output blank frames, try restarting your computer before the re-render. If that fails, you will have to remove the offending frames. The simplest solution to this is to cut off the frames that are causing the problems. To do this, go back into the “Sequence” view and right click on the right arrow of the video strip and drag it left to a point where you expect the video to be fine. Once no black frames show up and you are happy with the video, you can see how many frames the video strip has become (it's shown on the far right on the video strip) and change the length of the output video accordingly. In our case, we reduced our video length to 715 frames.
-backgrounddir = "background"+
  
{{tutorials:blendervsenewframelength.png|Blender Video Sequence Editor New Frame Length on Strip}}
  
Make sure that you save the Blender file somewhere as we will be using it again later.
  
The next step is to take the Targa files and put them into a Camera Motion Tracking software package. Although there is a discontinued program called “Icarus” that is somewhat popular in the Blender community, we decided to use one called “Voodoo” because it seems easier to use. Both programs are free to use, but Icarus can only be used for educational purposes, while Voodoo can be used for all non-commercial purposes. Sadly, neither program is open source, but we live in hope.
  
When we originally wrote this tutorial, Voodoo was available to be used for commercial purposes as well. However, one of our readers has now informed us that this policy has changed. We find this rather sad news as we no longer know of a good, free camera motion tracking package that can be used for commercial purposes. There is always a lot of talk in the community about including Camera Motion Tracking as a feature of Blender, but as far as we know no one has ever attempted this. In any case, we thank Robert Hamilton for informing us of the change in Voodoo's terms and conditions.
  
You can get Voodoo from http://www.digilab.uni-hannover.de/download.html
  
Load up Voodoo and choose “File -> Load... -> Sequence...”. Click “Browse” and select the first Targa file. Our files weren't interlaced and we took the shot from a tripod, so we picked those options. {{ tutorials:voodootargaselect.png?650|Voodoo File Selection}} It is also prudent to enter some information about the camera that you shot the footage with.
  
So, choose “File -> Load... -> Initial Camera...”. All you really need to worry about is changing the “Type:”. You can of course do more than that to improve the results, but this was good enough for us. We selected “HDV 1080i 16:9” as this was the closest match to our video footage. We also knew our focal length, so we entered that as well (in our case it was “4”).
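
As an aside, focal length and field of view are linked through the sensor width, so you can estimate one from the other. This is a standard lens formula; the sensor width below is our guess for a small camcorder chip, not a value from Voodoo:

<code python>
import math

# Horizontal field of view (degrees) for a focal length and sensor width in mm.
def horizontal_fov(focal_mm, sensor_width_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

print(horizontal_fov(4.0, 4.8))   # our 4 mm lens on a ~4.8 mm sensor: ~62 degrees
</code>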
  
All you need to do now is click on the “Track” button on the bottom right of the Voodoo window. You should see a bunch of little crosses appear on the frame as Voodoo tries to find points in the image to track. {{ tutorials:voodootracking.png?400|Voodoo Camera Tracker Working}} The way Voodoo works is to try to find points in the image that are significant and then track them from frame to frame. By doing this for multiple points, Voodoo can reverse engineer the shot to figure out what movements the camera made. You might find that Voodoo asks you if you would like to refine the results. We'd suggest that you do. You might think that Voodoo doesn't seem to be doing anything after this, but if you look at the command prompt part of the Voodoo window you will still see a little activity. It can take a very long time to do this residual error correction, so be prepared to wait for a while.
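
Voodoo's internals aren't public, so the following is only our own illustration of the general idea in plain Python (all names are ours): follow a small patch of pixels from one frame to the next by finding the offset where the images differ least.

<code python>
# Track one patch between two frames (frames are nested lists of grayscale
# values) by minimising the sum of squared differences over a search area.
# Assumes the patch stays away from the frame edges.
def track_patch(prev_frame, next_frame, x, y, size=8, search=5):
    patch = [row[x:x + size] for row in prev_frame[y:y + size]]
    best, best_pos = float("inf"), (x, y)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ssd = sum(
                (patch[j][i] - next_frame[y + dy + j][x + dx + i]) ** 2
                for j in range(size) for i in range(size)
            )
            if ssd < best:
                best, best_pos = ssd, (x + dx, y + dy)
    return best_pos  # where the patch appears to have moved to

# A real tracker does this for hundreds of points on every frame and then
# solves for the camera motion that best explains all of the 2D tracks.
</code>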
  
Once it eventually completes (the command prompt window will include the magic word “FinalEstimation”), we can export the information about the camera so that it can be used in Blender.
{{tutorials:finalestimation.png?600|Final Estimation Voodoo Command Prompt Window}}
  
Click “File -> Save... -> Blender Python Script...” and save the information in a file somewhere (we called ours “cammotion.py”). This will be a Python file that we can use in Blender. Make sure you select “Export all” in the next window. Our own file is [[http://www.fictionality.co.uk/downloads/cammotion.py|available to download]].
  
{{tutorials:exportallpoints.png|Export All Points from Voodoo}}
You should now be able to quit Voodoo tracker and load up Blender.
  
We're going to assume you know Blender a bit because this is quite an advanced tutorial anyway. Load up Blender and open the file we created earlier. Go to the 3D View and make sure you delete everything in the scene by selecting everything (hit the “A” key until everything is highlighted in pink) and hitting the “X” key.
  
Once the scene is empty, change to Blender's Text Editor window and open the “cammotion.py” file (or whatever you called it). Then run it (hit “ALT+P”) and your 3D View should now have a camera and a bunch of dots on it. Hit “NUMPAD 0” to see the camera's viewpoint.
{{tutorials:blendercameraviewpoints.png?600|Blender Viewing Dots Through the Camera}}
  
Press “F10” to get to the scene panel and change the start in the animation tab to 1. Find out how many Targa files your animation produced to get the frame number for the end. In our case, we had 715 Targa files so we had 715 as our end frame. Change the Buttons Window's current frame to 1.
  
You can now see the tracking of the camera in Blender. In the 3D View window, hit “ALT+A” to see the animation from the camera's viewpoint (hit “ESC” to stop it). You should see that the little dots match the green crosses that were seen in the Voodoo tracker. The virtual camera movement should match the original camera quite well. The dots are not in their correct 3D position, however, but will look that way when viewed through the camera. If you change to a different view you will see that the dots are probably in a surprising position. If it bothers you that the camera is at a funny angle, there is an empty in the same position as the camera. Right click near the camera until you see a dot selected instead of the camera. This empty is grouped to everything else created by the Python script, so you can rotate it to suit you. You can also use this empty to line the whole scene up with the original footage, as it is a quick way to move everything at once so that the virtual elements match the shot.
  
To make things easier still, we need to set up Blender a little further. We want to set up the virtual camera to show a background image that will help us sync things up. The dimensions of the virtual camera should still be set up correctly from before.
  
While viewing from the camera, bring up the Background Image window (“View -> Background Image...”). Click “Use Background Image” then click “Load”. Find and select the first of the Targa files (“0001.tga” in our case). Now click “Sequence” and then “Auto Refresh”. Still in the Background Image window, change “StartFr” to 1 and change “Frames” to the number of Targa files (715 in our case). Again, you can check how it worked by hitting “ALT+A” from within the 3D View window (hit “ESC” to stop it).
  
{{tutorials:viewbackgroundimage.png|Blender View Background Image Menu Option}}
  
{{tutorials:blenderbackgroundimagesettings.png|Blender Background Image Settings}}
  
If you find that the dots do not perfectly match the background image, you may be using incorrect values for the AspX and AspY. This can sometimes occur if you chose the wrong initial camera type in Voodoo. To correct this, you'll need to fiddle about with these two values until things line up better.
  
At this point you should be able to add your virtual elements and match them up with the original scene by checking the view in the camera. Remember that the dots in the scene are not in their correct 3D position, but will show the rough focal distance of the original camera. So, elements that should be in focus in your shot should probably be placed at the same distance from the virtual camera as the dots. You may have to play about with the camera a bit to match the lens better. You can do this by selecting the camera and hitting “F9”. It's important that you set your objects at a good distance from the camera if you want to get a good perspective on them. This is largely a trial and error process.
  
Once you have your scene all set up, you'll need to tell Blender to use the original footage for the background during the render (you may not always want to do this, but it is a nice quick way for this test). If you go back to the Video Sequence Editor you should see that the original video is still there from earlier on. Select the video and delete it. We're going to use the Targa files we created instead, as this will ensure a perfect frame rate match with the Blender scene. Add the Targa files by selecting “Add -> Images,” then navigate to the directory with your Targa files and hit “CTRL+A” to select them all. You can right-click to deselect any files that are not part of the image sequence. Move the strip to track 1 at frame 1. You can then add in the scene by selecting “Add -> Scene -> Name of Scene.” In our case the “Name of Scene” was “scene.” Add the scene strip on track 2 at frame 1. First select the scene strip and then hold shift and select the original film strip as well. Now select “Add -> Effect -> Alpha Under.” A new strip will be created. Place it on track 3 at frame 1.
  
{{tutorials:blenderaddalphaunder.png|Blender Video Sequence Editor Add Alpha Under}}
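
For the curious, “Alpha Under” is ordinary alpha compositing: wherever the rendered scene is transparent, the original footage shows through. Here is a minimal sketch of the per-pixel maths (our own illustration, not Blender's actual code):

<code python>
# Composite a rendered scene pixel (rgb plus alpha) over a footage pixel.
def alpha_under(scene_rgb, scene_a, footage_rgb):
    return tuple(s * scene_a + f * (1.0 - scene_a)
                 for s, f in zip(scene_rgb, footage_rgb))

# Half-transparent white over black footage comes out mid grey.
print(alpha_under((1.0, 1.0, 1.0), 0.5, (0.0, 0.0, 0.0)))
</code>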
  
You can now select the output format you want to use for the video in the Format tab of the Scene Panel (“F10”). We chose FFMpeg with an MPEG 2 output at 100 quality. We kept our frame rate at 25 fps to match our original footage. Then just hit the “ANIM” button to produce your render (“Do Sequence” should still be selected from earlier, as should the resolution and aspect ratio). It will be saved in the directory you specified earlier.
  
{{tutorials:blendervsefinalrendersettings.png?600|Blender Final Render Settings}}
  
  
  
===== Problems =====
The first issue with our shot (which you may have realised for yourselves) is that there were no moving elements. Match moving largely relies on the idea that there are a lot of elements in the shot that are stationary and that it is mainly the camera that is moving. You can still include moving things in the shot (like actors and such), but you will probably get results that are not quite as good. Also, if you choose not to use a tripod shot and use a free-moving camera, you may find the results will be way off. Some of the tests we did with this weren't very good. Our recommendation for good results is to limit the number of active elements in the original shot and try to stick to tripod-based panning.
  
You may also find that the perspective in Blender does not line up with the original shot. This usually stems from keeping the object at an incorrect distance from the camera. Generally, moving the object to different distances from the camera and resizing it to compensate can solve this problem. However, you might find that adjusting the focal length settings in Voodoo can also improve these perspective issues. The ideal situation is for the original camera, the Voodoo settings and the Blender virtual camera to all have exactly the same settings. If you can achieve this then you will get the best results.
  
This tutorial has not covered shadows, reflections, lighting, or what to do when real elements move in front of the virtual elements. We hope to cover all of these in a later tutorial.
  
We hope you've enjoyed this tutorial and we fully welcome feedback and improvements. We have no idea how half this stuff works, but this is how we did it. If anyone knows of a better way, please let us know.
  
We love you all,
  
John and Scott.
  
  
</php>
  
~~DISCUSSION~~