Converting Varicam 48fps to 24p

Warning: technical video post-production post ahead.

A few weeks back we needed to shoot some greenscreen for a show being delivered at 23.98fps (aka “24p”). In the past I’d had problems pulling a key at that slow framerate because of motion blur (I prefer 30 for TV work), so I suggested we increase the shutter speed in the camera. The DP was more comfortable adjusting the framerate, so he suggested we shoot at 48 and only use every other frame of the footage. I figured I could write a script to do the conversion later.

We shot the footage, and the next week I sat down to write a Python program to convert the material to 24fps. This could probably have been done with Final Cut or something, but I don’t know that program, so I did it the easy way: play around with the frames themselves using an image sequence (i.e., a big folder of numbered .tif files, each of which represents one frame of footage).

Normally this would be easy: just take every odd-numbered image. But in this case, although we shot at 48fps, the footage is at 60fps. Why? The Varicam works like this: the front of the camera (the shutter and CCD) shoots whatever framerate you choose, but the tape recorder in the camera always records 60fps. Even if you shoot 6fps, the Varicam writes 60 frames to the tape: 10 copies of frame 1, 10 copies of frame 2, and so on. So when we shoot 48fps, there are 60 frames per second on the tape, 48 of which are unique and 12 of which are duplicates.
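The padding arithmetic for the simple case — a shoot rate that divides 60 evenly — can be sketched like this. (`tape_frames` is a hypothetical helper for illustration, not part of my script.)

```python
def tape_frames(shoot_fps, seconds=1):
    """Return the frame numbers the Varicam writes to tape for one
    second of footage: unique frames are numbered, and duplicates
    simply repeat the number of the frame they copy.

    Only handles shoot rates that divide 60 evenly (6, 12, 30, ...).
    """
    copies = 60 // shoot_fps   # e.g. 6fps -> 10 copies of each frame
    frames = []
    for frame in range(shoot_fps * seconds):
        frames.extend([frame] * copies)
    return frames

print(tape_frames(6))  # 60 entries: ten 0s, ten 1s, ... ten 5s
```

48 doesn’t divide 60 evenly (60/48 = 1.25), which is why the camera can’t just repeat every frame the same number of times and instead has to scatter the 12 duplicates through the 48 unique frames, producing the irregular pattern described below.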

If I am going to convert the footage to 24p, I need to first remove the duplicate frames (60 -> 48fps), then remove every other frame (48 -> 24). By analyzing the footage frame by frame, I determined that when the Varicam shoots 48fps, it uses the following pattern:

0000100001000010001

Where 0 represents a unique frame, and 1 represents a duplicated frame (to fill out to 60fps). This pattern is repeated over and over again throughout the footage. (I only just noticed that the pattern is 19 frames long, not 20 like I’d expect, but looking at the footage that’s what it is.)
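The frame-by-frame analysis can be automated. A rough sketch (hypothetical, not my actual script): treat two consecutive frames with exactly the same file size as a duplicate pair, and print the resulting 0/1 string to see the pattern.

```python
import os

def duplicate_pattern(folder):
    """Return a 0/1 string for a .tif image sequence: '1' where a
    frame's file size exactly matches the previous frame's (i.e. a
    suspected Varicam duplicate), '0' for a unique frame."""
    files = sorted(f for f in os.listdir(folder) if f.endswith('.tif'))
    pattern = []
    prev_size = None
    for name in files:
        size = os.path.getsize(os.path.join(folder, name))
        pattern.append('1' if size == prev_size else '0')
        prev_size = size
    return ''.join(pattern)
```

Running something like this over a few seconds of footage is how you’d spot the repeating 19-frame cycle without stepping through frames by eye.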

My Python program goes through an image sequence, finds the pattern of duplicate frames, and then copies every other non-duplicate file to a new folder as a new image sequence. It makes the following assumptions: the files are .tif files, and “duplicate frame” means “exactly the same file size” (not a bad assumption with digital media and tifs). It’s a little hacky, but looking at the resulting 24fps image sequences I don’t see any stutter or dropped frames.

There are some basic checks in the code so it hopefully won’t overwrite precious data, but I make no guarantees.
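The linked script is the real thing; a minimal sketch of the same approach (filenames and folder layout are my assumptions here, not the script’s) might look like:

```python
import os
import shutil

def convert_48_to_24(src, dst):
    """Copy a 60fps-on-tape .tif sequence from src to dst at 24fps:
    drop exact-size duplicates (60 -> 48), then keep every other
    remaining frame (48 -> 24)."""
    os.makedirs(dst)  # raises if dst exists, so we never overwrite
    files = sorted(f for f in os.listdir(src) if f.endswith('.tif'))
    uniques = []
    prev_size = None
    for name in files:
        size = os.path.getsize(os.path.join(src, name))
        if size != prev_size:      # skip Varicam duplicate frames
            uniques.append(name)
        prev_size = size
    # Keep every other unique frame and renumber the output sequence.
    for out_num, name in enumerate(uniques[::2]):
        shutil.copy(os.path.join(src, name),
                    os.path.join(dst, 'frame%06d.tif' % out_num))
```

Failing when the destination folder already exists is the kind of basic don’t-overwrite check mentioned above; anything fancier (checksums instead of file sizes, say) would be overkill for this footage.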

Code: 48-to-24.py

Grading a short film


I had the good fortune of grading (color-correcting) a graduate student’s thesis project last weekend. It’s called Mel’s Hole, directed by Kenji Miwa. It was my first narrative project, and my first using Apple’s Color. I usually do documentary work, where the highest priority is to make the footage look “good” and consistent. Also, I’m used to the Avid color corrector, which is not very good at matted secondary color corrections (“brighten his face here”), so it would have been hard to do the sort of aggressive grading the director wanted.

He’s given me permission to show some before-and-after shots from the film, showing off some of the more fun corrections I got to do. Mouse over the images to see the uncorrected versions. (Shot on a Panasonic HVX200 with a lens adapter for low depth-of-field.)

(Note: these images sometimes appear much too bright on a Mac. Set your monitor’s gamma to the PC/video standard (2.2) to see the night-time shots correctly.)



The above shot represents the basic look for the film, which is a desaturated “bleach bypass” look. It’s high contrast, with substantial crushing of the blacks and whites. In this shot, we had to knock down the colors of the blanket, which was still too saturated even after we applied the look.



In this scene, the character walks into the woods, which were supposed to be dark and foreboding. By really crushing the blacks we were able to make the woods look deeper and more mysterious. This darkening caused the character to be somewhat lost in the busy-ness of the image, so I put a small tracked oval (the shot pans up) over the character to draw attention to him.



This scene takes place in the middle of the night, and I was instructed to make it very, very dark, with a silvery-blue cast. Although the left-hand venetian blind did not have any light behind it, I was able to put one in, which serves to illuminate the character’s face (even though a real light back there would just silhouette him). There are still some bright highlights visible in the blinds, but I wasn’t able to get rid of them.



This shot was actually a last-minute idea. It is paired with another night-time shot, so we decided this one should also take place at night. I was able to do a good day-for-night conversion, including drawing in the spill of the light at the bottom of the stairs.

I had a lot of fun doing this grade, and I really liked Apple Color — which makes sense, I doubt they would have bought a company that made a bad color correction program. I do have to say that the keyframing in that program absolutely blows, and the tracker isn’t great either. It also crashed immediately after finishing a render once. But on the whole, it was good at disappearing and letting me work.

The director and DP were great too. We hadn’t really done a lot with aggressive grading before, but once they saw what was possible they were able to direct me better and make requests that were creative but also doable.

World premiere is on May 2nd. The details are on Facebook.

(ps, just shoot me an email if you want me to grade your film — the first job is free!)