After playing around in Flash, GIMP and Blender for a month or so, I believe I’ve finally worked out all the settings I need to make a quality version of my video for all sorts of formats. So here is what I’ve done. Please do note the date I made this post, since I expect every single thing about my experience will change pretty much tomorrow. But for now, I hope this at least helps some people get closer to finishing their own first movie.

To summarize, my work flow went something like this:

01. Script animation
02. Record voices
03. Edit together voice track
04. Animate storyboard and time to sound collage (I used Flash)
05. Make full animatic in same program
06. Develop specific timing from start to finish
07. Write and create soundtrack
08-09. Work furiously on animating to the soundtrack; animate and record more sound as motions & micro-story develop

Much later … and we come to what this post is all about: how to get your finished animation out of the animation program and into presentable broadcast quality

10. Render full animation as individual 4K resolution frames into individual folders for each source file, out of Flash (Old versions as far back as CS3 still good!)
11. Individually hand-paint (digitally) for effects and touch-up with GIMP (Free!)
12. Arrange frames in sequence (one image sequence per folder) in Blender (Free!)
13. Render entirety as very low quality “timing” animation (320×240 for example)
14. Engineer final stereo (or 5.1 surround sound) mix based on final timing of all visuals
15. Place new mix in Blender with full sequence & export test high quality version
16. Make final visual and audio tweaks

My initial goal was to export to a media format I could simply integrate into the site, but as a kind of cool extra experience, I also kind of unintentionally learned how to make the animation for big official animation festival-type events. And now, perhaps, you can too!

However, you should know before starting your animation project that there are aaaaaall these slightly silly standards you might be expected to follow. A big one, for example, is that you must plan to animate at 24 frames per second.

Planning the Video

First of all, let us begin with the planning. You really must have a decent, and perhaps slightly flexible, plan for the time when you actually finish your project. I think something that helped me out was that the video was designed in math rather than pixels. That is to say, in vector (using Macromedia/Adobe Flash, though these days you could use Animate to achieve the scalable vector images) in a 2K Full Scope workspace. Why this resolution?

Because I wanted the video to translate to the printed page and appear as a comic.

I had no idea that 2K Full Scope (or roughly 1.9:1 aspect ratio, at exactly 2048 wide x 1080 tall), also known as 2K “Full”, would still be a controversial aspect ratio in 2017. I figured that, among aspect ratios roughly making a 2-to-1 ratio for comfortable horizontal viewing with our two eyeballs, most people would want wide, and 1.9:1 (Full) is closer to that than the very common 1.85:1 (Flat) format, which is just a little less wide (1998 wide x 1080 tall), or the several HD-type formats that are closer to 1.77:1. And the beautifully wide “Scope” format seemed wrong for translating to the comic book tier I was looking to marry the video format to. (That is, three 1.9:1 ratio videos stacked roughly make a standard comic page, and that works best for my design.) … I was wrong! For some reason, 2K/4K Full is less popular than HDTV, Scope or Flat. I ended up dealing with that in my own way, as you may read below … moving on …
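As a quick sanity check, those ratios fall straight out of the pixel dimensions (the 858-pixel height here is the standard 2K Scope container, an assumption not spelled out above):

```python
# Common delivery ratios computed from their 2K pixel dimensions.
ratios = {
    "HDTV (1.78:1)": 1920 / 1080,
    "Flat (1.85:1)": 1998 / 1080,
    "Full (1.90:1)": 2048 / 1080,
    "Scope (2.39:1)": 2048 / 858,
}
for name, ratio in ratios.items():
    print(f"{name}: {ratio:.3f}")
# Note 2048 / 1080 = 1.896..., which is why "Full" is quoted as roughly 1.9:1.
```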

But a resolution is not just an aspect ratio. You have to choose some form of real pixel information since we don’t yet have vector video engines drawing raw math in real time (oh wait, it was this super innovative program called Flash that might have been the start of an entire new way of thinking about cinema, animation, interactivity and … why doesn’t that exist anymore? Oh well! And, don’t video games actually do something like this? Yes, they do!) — nevertheless, screens! We are looking at screens and screens use pixels.

So as I rambled earlier, 2K seemed rather futuristic in 2012 so I went with it.

The video was also to be a cinematic 24 frames per second. That is what I was told would look alright in animation classes, and I happen to agree. Objects can move at any rate between 12 and 24 frames per second depending on their importance or what is being shown, and the eye can still pick up individual drawings that don’t get lost in the motion, while retaining the gesture of movement. Yes, 24 frames per second seemed pretty ideal. It is also — incidentally — easier to animate than something like 30 or 60 frames per second, as you can guess. So there you have it. This was 2012, just five years ago, and my plan was to do many things, but one of them was to slightly “future proof” the video by making it a 2K video. Wowie, wowie! What a big format, I thought.

So my files started with
DIMENSIONS: 2048 x 1080
FPS: 24 frames per second
LENGTH: ? minutes ? seconds

A note about the length. Like a lot of seemingly arbitrary “standards,” I really don’t think this should be important. And it probably isn’t. The Internet is changing artist exposure dramatically. But just in case you end up submitting to some festivals for whatever reason: it seems I disqualified myself from a couple of festivals by having the length of my movie cross the 15-minute mark. Some allow up to 40 minutes, and many expect maybe 20 or 30. So that is just a weird and regrettable note about realistic time constraints that I don’t even want to mention in case it makes you change what you are doing. There are also ways for that to cross over into some “long form” versions of animation festivals, I am sure.

You should really just make the length (and format, and pretty much everything, really) that suits your project and not let some obsolete and arbitrary festival decisions dictate anything about what fits your art. I am sure I will love what festivals have done for artists when I hear about those wonderful things having happened to those blessed folks, but I just wanted to mention that as artists we should create without fear of restrictions. Thanks for letting me drool that out a bit. Okay, wiping up and let’s move on.

Planning the Sound

This aspect took a great deal of time, and was initially one of the things I was entirely unsure about. I knew that I wanted to have the timing of the entire script figured out, so that we could write interludes, ditties and musical cues to the events of the movie and so I did end up timing out an almost frame-by-frame musical “script” on graph paper, to which the musical artists and I danced and played.

By the time all the soundtrack and vocals were balanced and mixed together (using an old copy of WaveLab 6 with VST plugins — though I understand Audacity can be just as powerful given your time and patience with it) in a 17-track mixdown, the animation was just getting its “final touches”. Those took an additional two years of refinement and patiently waiting for the RAM to recover from numerous crashes and freezes of Flash, or GIMP, or Photoshop, or other programs that I was surely taxing beyond their design.

In any case, I’ve learned that — while standards are constantly fluctuating — you could be alright using these standards for your raw audio:

44,100 Hz (44.1 kHz) sample rate (horizontal wave sections)
bit depth of 24-bit PCM sound (vertical wave sections)

Don’t worry if this sounds confusing. It’s basically a recording and storage standard of some kind for raw audio, which we’ll get more into later.
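To make the sample-rate and bit-depth numbers concrete, the raw data rate of uncompressed PCM is simply their product with the channel count. A small Python sketch:

```python
# Raw PCM data rate = sample rate (samples/sec) x bytes per sample x channels.
def pcm_bytes_per_sec(sample_rate_hz, bit_depth, channels):
    return sample_rate_hz * (bit_depth // 8) * channels

print(pcm_bytes_per_sec(44100, 24, 2))  # 264600 bytes/sec (44.1 kHz, 24-bit, stereo)
print(pcm_bytes_per_sec(48000, 32, 2))  # 384000 bytes/sec (48 kHz, 32-bit float, stereo)
```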

To be extra “proper” I sampled (recorded) or resampled (a wave re-render action you can do in Audacity) everything to 48,000 Hz (48 kHz), which for some reason is the expected standard for “professional” video.

So my final recommended audio format would be:

48,000 Hz (48 kHz) sample rate
32-bit floating point bit depth

… saved in uncompressed 32-bit float PCM .WAV files

Probably something else would be fine so why WAV? Why not WAV? Well, you can read more about formats pretty much all over the wise areas of the technogeek-net. But I’m just telling you what generally works and is expected of us by those people in the freaky “media world”.

There is an even more insane standard of 96,000 Hz (96 kHz) but I dare you to hear the difference. Dare you!!! (Well, okay, actually you can almost kind of sorta “sense” the difference, like with a kind of latent mutant power we may all have. For some reason I’ve just decided this post is a “quick and dirty” version of the whole process of getting from computer to cinema. Take that how you will.)

(You read me. Cinema!)

Rendering the Animation as an Image Sequence

Moving the files from vector (happy math) to raster (specific pixels) seemed daunting because I was quite concerned I would make a wrong choice. But by the time I had finished the animation (over 5 years later!), not only had 2K gained supremacy for many broadcast channels and filmmaker preferences, but 4K and 8K (faint!) were now on the ever-sharper, progressively scanned horizon. So I went ahead and rendered the files, prepped for 2K, at 4K resolution.

What does this mean? Effectively, because I made the animation in vector while technology raced ahead, I quadrupled my potential image size. My debut project could be in 4K! But what 4K? For every 2K aspect ratio we’ve barely become familiar with (Flat, Scope, HDTV, Full or others) there would essentially be a “double sized” version already coming. And would my animation planned at 2K even benefit from 300% more pixels? After all, some raster imagery is only slightly higher resolution than 2K and although it looks blurry …

Shrug. 4K! Let’s do it. Even making a 4K version and then shrinking it to 2K could tighten up some images that lost a bit in compression. That is, rendering at slightly lossy 4K and then reducing to 2K is a sort of comic book habit I picked up that nicely tightens up artwork. (Comic artists are often familiar with the notion of drawing at very large scale, then printing at 60% of the original size or even smaller, to increase work flow on the simply massive amount of time that comics take. Animators, I feel your pain. But really comic artists deserve your respect too. Let’s get along, people.)

HDTV starts with a 4K dimension of 3840 pixels wide — rather conservative. And although this doesn’t mess with the Full Scope ratio (since the ratio could remain the same by making the whole picture 2025 pixels tall), it does fall short of the slightly more render intensive Full 4096 x 2160. And to be quite honest, scaling up 10% or 15% from 3840 x 2025 to 4096 x 2160 would just be to fit into guidelines. I rather liked my awkward, unique and semi-flat resolution that also just happens to be a high enough resolution to print at a generous size. (Imagine a comic book at 10″ wide instead of 6″ and you have an idea).
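For anyone double-checking that math, the ratio really is preserved and the jump is modest. A quick check in Python:

```python
# Scaling 3840x2025 up to 4096x2160 keeps the "Full" ratio intact.
src_w, src_h = 3840, 2025
dst_w, dst_h = 4096, 2160
print(round(src_w / src_h, 3), round(dst_w / dst_h, 3))   # same ratio before and after
linear = dst_w / src_w                                    # per-axis scale factor
area = (dst_w * dst_h) / (src_w * src_h)                  # pixel-count increase
print(f"{(linear - 1) * 100:.1f}% wider, {(area - 1) * 100:.1f}% more pixels")
```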

So the format I exported from Flash ended up being:

135 dpi (instead of screen resolution, which is 72 dpi), and I chose image sequences based on the needs of the scene:

.PNG with alpha channel for images that need a “window” to other images
.PNG with no alpha channel for sharper detail
.JPG 89% – 100% quality for sequences that don’t need much visual clarity

Arguably everything could use more visual clarity. Why would I ever just make 89% quality JPGs? Well, look, sometimes speed/efficiency of computing power is nice. JPGs just compress and edit so much faster on my system, especially when I am otherwise facing 4K resolution PNGs. It just gets absurdly time intensive to make a single change to multiple frames. And the “sacrifice” in quality was negligible. Even on the 4K monitors I tested the images on, I simply could not pick up on the lossy compression. It moves too fast.

When I examined the 4K images on an individual basis, I also noticed that some sharpness was undesirable and the digital blurring was a kind of nice “natural” effect (like a paper texture on physical art). In addition, saving some space on my little system started to look very attractive, and as a result I ended up using 90% to 100% quality JPGs instead of PNGs in most of the movie. I don’t think most people will be able to tell. And I was increasingly happy with this “compromise” with digital space-time.

Assembling the Sequence in Blender

One can render a sequence of images as a video in many programs, from bloated overpriced monstrosities like Adobe products to efficient little scripts like ffmpeg, but Blender has such a handy, fast and simple visual editor I actually immensely enjoyed the process of adding the image sequences to prep for a final export or two (or ten).

But wait! What about sound?

Well, luckily we timed every single bleep, burp and bump in our sound file before assembling the sequence, didn’t we? So when we added all those extra little incidental sounds and rendered a test version of the animation (unless you are one of those people with a computer that can render 4K video in Blender on the fly, just by scanning the scrubber, you filthy rich person, you!) we could see that there was not a single frame out of place. Right?

But what if, like me, some prankster faeries decided you would animate an entire sequence at 23 frames per second instead of 24 and not notice until you are trying to figure out what went wrong? Well, in that case, the entire sequence is completely out of whack, and you are thankful that you have aligned all the other sequences with “snippets” of properly timed audio built into the Flash files. (The animation was chopped into roughly 1000-frame sequences or even shorter, because after that, Flash would run out of memory to hold more in its poor little early 2000s head while rendering 4K.)

And hopefully, you can isolate the problem and re-open the sequence and change the file to 24 fps, then add a frame every second throughout the sequence to make up for the frames you didn’t draw. And you might just have to draw a few missing frames here or there, right? Great. Glad we got that out of the way.
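The fix above is easy to sketch. Assuming frames are numbered from zero, duplicating the last frame of each 23-frame group stretches a 23 fps sequence back to 24 fps timing:

```python
# A 23 fps sequence played back at 24 fps runs about 4% fast; repeating one
# frame per second of footage restores the intended timing.
def frames_to_duplicate(total_frames):
    """Indices (0-based) of the 23 fps frames to repeat, one per full second."""
    return [sec * 23 + 22 for sec in range(total_frames // 23)]

print(frames_to_duplicate(115))  # [22, 45, 68, 91, 114] for 5 seconds of footage
```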

There aren’t any visual formatting tips I have for Blender, except that if you can help it, try to keep all your sequences in convenient folders so that in case you have to move computers or edit a sequence and have Blender re-point to the files, you can simply open the sequence folder and hit “A” to select all the frames and it will only select those in the sequence instead of every single frame of the animation. For animations like Lor’Avvu Prelude that are about 1000 seconds, that could end up being a really overly long sequence of 24,000 frames for example. So yeah, nice and tidy is helpful. Especially for people like me, who notice self-sabotage incidents now and then.
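Keeping one sequence per folder also makes it trivial to audit frame counts with a small script. This sketch (the folder layout is a hypothetical example) counts PNG/JPG frames per sequence folder, so a mistimed or incomplete sequence stands out early:

```python
from pathlib import Path

def count_frames(sequence_root):
    """Map each sequence folder name to its frame count (.png and .jpg files)."""
    return {
        folder.name: sum(1 for f in folder.iterdir()
                         if f.suffix.lower() in (".png", ".jpg"))
        for folder in sorted(Path(sequence_root).iterdir())
        if folder.is_dir()
    }
```

At 24 fps, any folder whose count doesn’t match what the timing sheet predicts is worth a second look.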

Exporting from Blender

Wait, this is a huge topic. What, exactly, are we exporting here? Who wants to see your animation? Why won’t they stop knocking on your door and slamming their palms on your window? Let’s take this one byte at a time, and back up to the expected formats of various people.

Try exporting your full animation sequence once, at 10% of its final size, just so you can see the animation in its real-time video glory and understand the timing. This helped me immensely with making final audio tweaks, and a subtitles track.

Animation Festival Standards

So you’ve finished your animation and, by seemingly miraculous coincidence, it seems that a dozen or more animation festivals are asking for their last submissions the very week you are planning to render the animation. It seems there are a lot of things to prepare. They want posters, trailers, promotional material and all sorts of fun things that you’d prefer not to be an afterthought. (And they probably shouldn’t be!)

But maybe you can’t help it because you’ve been so busy on the animation you hadn’t even really given a thought to silly hype things. Well, in that case, here are a few of the extras you could easily move all the way down to the “planning” stage (way back when you were choosing a size and frame rate to begin the animation work.)

In applying for 17 animation festivals in something like 100 frenzied hours, here are the things that many (if not all) festivals would really appreciate you having ready for them in advance:

Subtitles

With time stamps. Yes, this sounds like a lot of tedious work, but it’s actually pretty interesting and easy if you set aside the time to do it. And while I am exaggerating a bit here, because most festivals only want the next item I mention, doing this work for any film with more than zero words of dialogue in it will help you with localization. And isn’t that a nice thought?

For starters, I would simply make an .SRT file, which is just a text file formatted in a particular way, so that every line is accounted for in the order that it appears. I think you could probably do this job for any animated short in a single day of work, if not a few hours.

Wait, but you don’t have time stamps yet because you haven’t made a render of your video yet. Fair enough. (Actually that makes perfect sense, sorry I even mentioned this, but just bear it in mind.)
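For reference, the SRT layout itself is trivial: an index, a timestamp range, the line of text, and a blank line between cues. A hedged sketch that generates one from (start, end, text) tuples:

```python
# Minimal .SRT generator. Each cue is: an index, "HH:MM:SS,mmm --> HH:MM:SS,mmm"
# (note the comma before milliseconds), then the subtitle text, then a blank line.
def srt_timestamp(seconds):
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(cues):
    """cues: list of (start_seconds, end_seconds, text) tuples, in order."""
    return "\n".join(
        f"{i}\n{srt_timestamp(a)} --> {srt_timestamp(b)}\n{text}\n"
        for i, (a, b, text) in enumerate(cues, 1)
    )

print(to_srt([(1.0, 3.5, "Hello."), (4.0, 6.25, "Goodbye.")]))
```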

A Dialogue List

This one seems to be much more critical. About 65% of the festivals (and something like 99% of the festivals in countries where English is not a first language) requested a list of dialogue. I made a printed version with images of characters, though I think a simple dialogue list in a basic text format is all they want.

Still Frames

Simply take some of your favorite exported frames and put them in a folder. Many of the festivals request at least three and some want more. You could have a lot of fun picking ten pictures for each festival from your collection of thousands, but I’d say if you have a folder of them at the ready you will be happy about it.

Credits

I am proud to have maintained credits in the plan for Lor’Avvu from the very beginning. It was especially gratifying to boast about the incredible talent that has helped on it so far, and I went so far as to include (small) pictures of everyone who contributed. Because the video I submitted is perhaps a little odd in that it has no hard-coded credits at all, for the purposes of storytelling, I was happy that many festivals asked for this. Many seemed especially eager for information about original music design.

A Photo of You

This one was definitely not something I expected. Many festivals even have specific file requirements (resolution and measurement) for what is presumably some embarrassing mug shot that will appear on a list of creators. So I guess you should get a photo of yourself? I think you could also have fun with this and still be respectful of the requirement. (I wonder if the rather normal-ish picture I submitted will get me kicked out of every festival, but since I’ve never done this before it’s probably just a weird insecurity.)

Aha, so now we get to the rendering part, because after you’ve prepared for months or years and done everything else, you will need …

Your Screener Copy

This is a kind of test version of your movie that is very high quality but which can be viewed on “pre-screener” portals like IMDb or FilmFreeway. Why do you need this?

Why indeed?

So let’s start with something else and get back to that after …

Your Sound in 5.1 Surround Sound

Some festivals prefer surround sound to plain stereo. Why? Aren’t they a bit sick of the trite swelling bass and redundant, expected sound of sort of not really being “inside” action but a kind of fakey very loud version of it? Apparently not! Apparently this is all quite a convincing illusion!

It works like this:


track 1 : left speaker
track 2 : right speaker
track 3 : center
track 4 : kind of sub-woofer but also additional center sort of weirdness
track 5 : left rear speaker
track 6 : right rear speaker

So, if you want a quick mean surround mix as sometimes standardized, you just need two files and you can convert your stereo audio. You want your vocals (dialogue/narration) on one track and everything else (I will just call that “music” now though this could also include effects and cues and so on) on another track. Take these in Audacity and …

Duplicate stereo “music” audio twice so you have six channels of that.

1. left speaker music
2. right speaker music

3. left speaker music
4. right speaker music

5. left speaker music
6. right speaker music

Split all the stereo into six mono tracks. (No track should be panned left or right but all of them center)

Place your vocals in the third position and get rid of the third music track so you now have 6 tracks like this (with very rough, very amateurish volume suggestions) :

1. left speaker music (mono) 75% volume
2. right speaker music (mono) 75% volume
3. center vocals (mono) 100% volume
4. bass music (mono) 25% volume (maybe with increased bass effect)
5. left rear speaker music (mono) 25% volume
6. right rear speaker music (mono) 25% volume

Tweak various effects for various “positions”, for example specific “background effects” you can place in the rear speakers. But always save one simple version of your whole mix of many tracks to this six-track arrangement for easy export to the six surround speakers.

In Audacity’s preferences, go to the Import/Export settings and choose the custom mix option (rather than always down-mixing to stereo). When you export, set the number of channels to 6 and map the tracks in that order, 1 to 1. Save it as 24-bit PCM WAV (it might be under “Other uncompressed files”). Make sure all your audio is 48,000 Hz (more on that later).
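Under the hood, that export just interleaves the six mono tracks sample by sample. A minimal sketch with Python’s standard wave module (16-bit samples here for simplicity; the mix described above is 24-bit):

```python
import struct
import wave

def write_5_1_wav(path, tracks, sample_rate=48000):
    """tracks: six equal-length lists of 16-bit samples in L, R, C, LFE, Ls, Rs order."""
    assert len(tracks) == 6, "5.1 surround needs exactly six channels"
    with wave.open(path, "wb") as w:
        w.setnchannels(6)
        w.setsampwidth(2)  # 2 bytes per sample = 16-bit PCM
        w.setframerate(sample_rate)
        # One frame = one sample from each channel, interleaved in channel order.
        frames = b"".join(struct.pack("<6h", *frame) for frame in zip(*tracks))
        w.writeframes(frames)

# e.g. a tenth of a second of silence on all six channels:
write_5_1_wav("mix_5_1.wav", [[0] * 4800 for _ in range(6)])
```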

This is not an adequate explanation of actual mixing, and there are fine YouTube videos on how to mix expertly, but this gives you a rough idea of how you can turn stereo into surround, and it’s not as complex as you might have feared. This post won’t be about sound editing though, so please do take this as an invitation to explore it deeper. Audio can be a very rewarding part of the animation experience. (Actually it is probably the best part of mine, thanks to the talented folks involved!)

Your Basic Archival Copy

Yes, this is your movie. Your movie. The whole finished thing. This is a version that you will keep and review and maybe want to change but you can’t, because every baby has to walk on its own some day, even though you could really just keep it hidden forever and — No! — No, now it’s … it’s out there in the big world. To be judged. And looked at differently. And all that. Hopefully in a good way.

What is archival quality? What is the “top” version of an animation that you could render every other derivative version from? Well after reading many sites and opinions on the matter, I can honestly tell you that — Ugh, I have no clue. Everyone prefers something different.

But basically you have these things called compression codecs, right? And you can use them or not. And each can be assigned a bit rate that preserves the raw images better or worser. What could be worser? Well, consider your scene fading to a bad color or blurring right in the complex action that you painstakingly drew by hand for months. That kind of worser.

The best, by far, if you have the space, is to save it completely uncompressed. Probably tens of gigabytes at the very least, if it’s at all like my image sequence, and far more for truly raw frames.

But this is just for me; when I looked at the uncompressed settings and tested them against bits of compressed video, I was impressed with the image quality of compressed video to the point that I literally did not see a point to saving an uncompressed version of the video. That uncompressed version would be so high quality it would basically be an unedited string of my raw images (those JPGs and PNGs I exported directly from the vector program, then edited by hand in GIMP/Photoshop — GIMP preferred when available) with the soundtrack slapped on it — essentially, what I already have saved as my Blender sequence.

Therefore, to me, the Blender sequence is the raw video. And so my “archival” version became something that I wanted to have as a simple QuickTime or MPEG-4 container with a very high bit rate.

My pseudo-“archival” (or “master”) copy of the movie could therefore become (bearing in mind this is a 4K video) :

Source video: 3840 x 2025 (Full Scope aspect ratio of 1.9:1)
Up-scaled to: 3996 x 2108 (Full Scope ratio but at “Flat” width of 3996 to save space and make it fit better into “Flat” ratios for projector screens that cannot show Full Scope 4K yet)
Up-scaled to: 4096 x 2160 (A true Full Scope resolution accepted by DCP — more on that later)

Either way, the H.264 codec (lossy but fast and easy and common)
(I would have chosen the superior Apple ProRes HQ but Blender does not export to that, and I would have chosen DNxHD but wasn’t sure how many people would be able to just plop my video with that codec into their player and have it run, well, simply.)

Video Bitrate: Hmm.


Yes, this could be anything from 1000 Mbps (megabits per second) for a crisp 4K of unarguably unaltered quality all the way down to 10 Mbps for some abysmal compression but basically the appearance of streaming.

How do I know this? I don’t!

But after reading and researching quite a lot about it, including my own test renders, I am confident that 250 Mbps (or roughly, stupidly “translated” to 31.25 megabytes per second) is quite a lot, because that translates to about 1.3 MB per frame (of this 24 fps clip!) and my raw frames are at best 4 megabytes each.

When you are streaming something like 18 compressed frames between key frames (the GOP setting in Blender), you simply do not need the raw amount of data, and something like an average of 250 Mbps would be quite high.

If you set the maximum bitrate to twice that — 500 Mbps — for spikes in necessary quality that are automatically detected by the compression software, then you have something just absurdly high quality.

And this is all rather immense for my series of images, roughly 90% of which are under one megabyte each. Now, this is all very amateurish, terrible advice, because super lossy JPEG compression doesn’t translate at all to MPEG or H.264 compression in terms of what gets lost, but this was all to sort of wrap my head around the concept that “you get what you store” in terms of a ratio between data size and quality.
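The back-of-the-envelope math above is easy to reproduce. This converts a target bitrate into megabytes per frame for comparison against the raw source images:

```python
# bitrate (megabits/sec) -> megabytes per frame at a given frame rate
def mb_per_frame(bitrate_mbps, fps=24):
    return bitrate_mbps / 8 / fps  # 8 bits per byte, spread over fps frames

print(round(mb_per_frame(250), 2))  # 1.3  MB/frame at 250 Mbps and 24 fps
print(round(mb_per_frame(100), 2))  # 0.52 MB/frame at the final 100 Mbps choice
```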

Ultimately, I decided that a basic — not perfect but not too lossy — quality for my first 4K render would be 100 Mbps in H.264, with a MOV or MP4 container.

Add to this the stereo audio that I spent time upgrading to 5.1 surround audio (simulating 6 speakers with Audacity), and I have the option of storing the audio at various qualities as well. I decided to go ahead and store it as PCM, which doesn’t compress at all but which would (I assumed) preserve my 5.1 mix intact. This is also somewhat acceptable for some screening copies, though it seems most people only demand AAC or AC3 audio, and we’ll get to those later.
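Since PCM is uncompressed, its size is pure arithmetic. For a 16-minute-24-second film, a 24-bit 48 kHz 5.1 mix lands right around the roughly 800 megabytes of audio quoted further down:

```python
# Uncompressed PCM size = sample rate x bytes/sample x channels x duration.
def pcm_size_mb(sample_rate_hz, bit_depth, channels, seconds):
    return sample_rate_hz * (bit_depth // 8) * channels * seconds / 1_000_000

print(round(pcm_size_mb(48000, 24, 6, 16 * 60 + 24)))  # 850 MB for the 5.1 mix
print(round(pcm_size_mb(48000, 24, 2, 16 * 60 + 24)))  # 283 MB for plain stereo
```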

So final compression specs for my “Super Flexi-4K” file whose goal is to be :

1. Upscaleable to 4096 x 2160 (perfect Full Scope 4K)
2. Black-bar-able (add widescreen-style black bars to the top and bottom of the frame at another point or in another program) to fit into 3996 x 2160 (perfect Flat 4K)
3. Maintain highest quality audio (that can be downscaled to AAC if needed)
4. Relatively smallish in final size (not raw image data)

… became:

3996 x 2108 (a non-standard resolution)
H.264 codec
QuickTime (.MOV) container
GOP 18 (a little better than one keyframe every second)
Bitrate: 107,017 kbps (roughly 107 Mbps)
Maximum bitrate: 200,000 kbps (200 Mbps)
Audio: PCM
Audio bitrate: 320 kbps
(a common standard bitrate for compressed audio; uncompressed PCM largely ignores this setting)

Final length: 23,617 frames (16 minutes and 24 seconds)
Render time on a contemporary i5-level computer normally used for heavy 3D work in Blender: about 5 hours
Final size: about 6.6 gigabytes

Pretty good file size and time efficiency for compressing about 20-30 gigs of raw imagery (and another 800 megs of audio!)

Your Screener Copy, For Reals though

Now let’s get back to the question of why I made all these choices. The truth is I have limited resources and as a result I don’t spend a lot of time rendering in a single computer over and over. I want something “portable” I can “fit” to various festival requirements as needed. And as a result, the data-thing that you finally have your video or film in when it’s being uploaded or sent to people over the interwebs for screening can be changed to fit the need from this one video file rather than from Blender.

Of course, the ideal is that I can simply render what I need from Blender as each festival requires! Yes! I know! (pant pant pant) But I was concerned that I could not.

Now, the bottom line is, if you can get your screener copy as close to final (as in finally showing all your cringy mistakes on a twenty foot cinema screen) quality as possible, here is what I found the festivals want to see (and yes, it really is all these weird things, I am not kidding that some people apparently prefer shitty formats to perfect digital copies).

  • A deliverable DVD with the film on it (very few festivals now)
  • A deliverable Blu-ray disc (BD) with the film on it (very few festivals now, though I guess in 2019 they will want Red Ray instead of a simple USB stick?)

Many festivals:

  • MOV container file with H.264 compression and 24-bit 48 kHz AAC stereo (or better) audio at 2K or HD quality (or better, but maybe not too much better)
  • MP4 container file with H.264 compression and 24-bit 48 kHz AAC stereo (or better) audio at 2K or HD quality (or better, but maybe not too much better)
  • DCP format (Woah! Too many festivals! This is a doozy and I will get into this shortly.)

Yes, that’s right. Many different festivals want exactly the same thing except the container file. MOV or MP4. But the weird thing is that some will apparently only accept one and not the other.

And do they translate to one another? Technically, yes: the same H.264 stream can usually be remuxed between MOV and MP4 containers without recompression (tools like ffmpeg can do this with a stream copy). But if you’d rather not juggle another tool, this is one more reason to make a semi-flexible “master” copy of your movie that you can draw either container from. Or, probably more better-er, actually, just do what I was avoiding and render specifics from the raw data per each festival’s requirements. Or make a super (uncompressed) version of your video to make derivatives from. Yes, exactly what I just explained how to avoid. There you go. Sorry about that. It’s really up to you.

There are also some awkward notes from festivals that want some particular thing they worked out for themselves as special, for whatever unfathomable but probably deeply, experientially wise reason: such as specifically a MOV file in ProRes 422 HQ compression, but an MP4 in whatever the hell you paste together. Presumably because, in my imagination, of that one guy who tried to submit a circular MOV file in hand-coded GIF compression on a linoleum-wood hybrid, or something. (And hey, I want to meet that guy, actually!)

Anyway, so, for both cases (yes, both, we’re not going to talk about that DCP … thing … just yet) you’ll want to export the MP4 and MOV at a nice crisp quality.

Although some want close to “uncompressed” for the final copy, many accept (read: expect/demand) the following specs for the general submissions stage:

HD, UHD, 2K or 4K resolution
H.264 codec
AAC 24-bit 48 kHz audio
(with 320 kbps bit rate)

If you can make a MOV version of this and an MP4 version of this, I would do both. I would make both at medium-high bitrate versions (like 150 Mbps for 4K or 50 Mbps for 2K, though I know that contradicts the earlier part where I settled on 100 Mbps for pretty much everything; I’m just saying, if you were all super buffed out in computer specs and could just render like a boss, that’s what I would do).

And for the screener copies that you just upload to a site or get to them through Dropbox, Google, WeSendIt or whatever they prefer (and they do have preferences they will mention), I would make slightly more compact versions. It seems to me 50 Mbps for 4K and 20 Mbps for 2K is adequate for lighter “preview” copies. Some even just want a private Vimeo link. Note: private.

Did I mention you should not have uploaded your movie to any kind of public link yet? Did you figure that a lot of festivals want your movie to be exclusive to festivals for at least 4 to 5 months before the festivals are all over and then you can put the movie online? Cool. That’s why mine isn’t on this site yet, by the way.

When they see and hear your hard work and you actually get accepted to a festival (or not, and you’re free to share it with the entire world instead of a kind of weirdly privileged festival audience — no offense to the whole institution, which I think is actually a cool and weird culture that is worth preserving and respecting, I am sure, especially for reasons of complicated cross-cultural collaborations), who knows what kind of 8-track thing they will require you to deliver it on? But there is a chance it will be nothing more than this.

There is also a chance it will be the one and only dreaded …

DCP

Ahhh! What is this? Why do I have to submit this? This is like a professional sort of “cinema” version of your movie, and it is a rather involved process. In fact, it is so involved, I am not even done doing it, and I need to get back to y’all about what it entails. So, if you read this, then, well, that’s pretty fun and interesting, and I hope you also read my comic, though.

But then, I should say, “Stay tuned!” for part 2 of this thing, which will be all about the dreaded DCP. For now, I recommend just doing a search on DCP (and not only reading the Wikipedia article that comes up but finding some video engineer sites) and also this:

(It is a pretty great paper about the amazing and problematic H.264 codec.)

Cwlgi E geva for now.