Thomas Doukinitsas
REFRAMING THE INTERFACE: Running out of time with Through the Medium...

So... what happened with Through the Medium, and why was it incomplete for the hand-in date...?

The main problems were:

  • Time scheduling and anticipation
  • Rendering
  • External Circumstances (Qualia's animation)

The original schedule had me working only on Through the Medium and D-Eb31e, with the plan being to complete all of the films by Friday in order to have a few days to gather and generate production documents. However, since the animation for Qualia took Shahid more time than expected, the entire animation process was also handed to me, Lauren and Laura. This delayed everything else significantly: D-Eb31e was exported on Monday, and Through the Medium was exported in its current form.

The main lesson to take from this is to definitely plan for the absolute worst, because if something can go wrong, IT WILL.

The goal for the next few days is to finish Through the Medium and bring it to its final stage, ready for the graduation show.



The visual effects for D-Eb31e posed a number of different challenges, such as recreating a virtual set and creating some complicated multi-layered effects.

To automate the distortion effects, I researched Python scripting and developed a custom plug-in for Nuke that creates the desired distortion effect.

A series of plug-ins created for this year's projects

The NoiseWarp interface (the custom plug-in used for the distortions)

Most of the shots were straightforward thanks to the plug-ins, but there were some tricky ones that required a combination of multiple layers and techniques. Here is a progression of one of the trickiest shots:

Original Greenscreen Footage
Background Plate
CGI Virtual Set Extension
Hand drawn animated roto matte
Combined Background with Virtual Set
Distortions (via cr_NoiseWarp custom plugin)
Color Corrections and Defocus
Final Shot

REFRAMING THE INTERFACE: Rendering with Arnold... and its Limitations

Rendering with Arnold on the university machines posed some problems, as Jonathan and I were both trying to bridge the gap between Cinema 4D and the renderer itself.

After researching, I came up with a plan that used Arnold's native scene format (an .ass file for each frame) and some software called Kick Ass GUI that would automatically take these .ass files and render them, bypassing the need for a 3D package and talking straight to the renderer.
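The batch idea can be sketched in Python: gather the exported .ass frames in order and call Arnold's `kick` command-line renderer on each one. This is only an illustration of what the Kick Ass GUI automated for us, not that tool's actual code, and the flags are the ones I understand to suit batch use:

```python
import subprocess
from pathlib import Path

def build_kick_commands(ass_dir, kick="kick"):
    """Build one kick command per .ass frame file, sorted by frame order.

    The flags are illustrative: -dw should suppress the render display
    window and -nostdin should stop kick waiting on keyboard input,
    both of which matter when rendering unattended.
    """
    frames = sorted(Path(ass_dir).glob("*.ass"))
    return [[kick, str(f), "-dw", "-nostdin"] for f in frames]

def render_all(ass_dir):
    # Render every frame in sequence; a farm tool would parallelize this.
    for cmd in build_kick_commands(ass_dir):
        subprocess.run(cmd, check=True)
```

Sorting the file list is what keeps the frames rendering in shot order, since each frame is its own standalone scene file.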

The problem, however, was that both our machines and the university machines had trouble with licensing. At the university the licensing server is just another PC in a room, so if someone turns it off, everyone's renders come out with a watermark and sometimes fail.



Animating the CGI was a very interesting and fun process, during which I started to see the other half of the film coming to life. Working closely with Lauren over many Skype sessions (as this was over Easter), we refined the actions to the voices, starting with the facial animation as this was the most important, then focusing on the body, the camera and any secondary animation (sometimes featuring secondary characters and extras).

One thing I noticed that doubled the amount of time needed to animate the scenes was the cloth simulations and scenes with multiple objects, something I will take into account next time...



Our stop motion shoot took four weeks for about 16-17 shots in total. We used the film cutting rooms, a Canon DSLR and Dragonframe, software supported by the university. We also used an Arri Fresnel kit, tripods and, for some days, a motorized slider...

Getting everything shot was tough, as the room was small, especially for two people working roughly 9-10 hours a day. Thankfully the heat from the lights wasn't a big problem for us: because we had done our risk assessment, we brought in a fan to ventilate the hot air out of the room. Poor Clay, however, occasionally had his "meltdowns"...

We planned our dope sheets using previsualization in the form of CGI versions of the shots. These previs shots helped us figure out the timings and gave us a very detailed guide for creating our shots:

As we were filming, we had the audio clip and notes written on the dope sheet, based on the previs above. This let us shoot at an incredibly quick pace, getting about 300 frames done every day.

This is a typical shot we would get (this actually took up an entire day):

As this was shot using stills, we ended up with a lot of data at a resolution of roughly 8K, which would be plenty for our HD delivery.

Although the main set appeared throughout most of the shots, we also built a second set for one of them. As this set was put together quickly and at no cost, the result was a bit of movement during the shot. Since we had the slider and it could repeat the same movement, we decided to shoot a second plate to restore the background.

We also had two days in the bigger studio to shoot some live action elements that would intercut with our two characters. However, we did not waste the time, as we were simultaneously shooting one of the shots.

This shot was one of the trickiest, as it required a non-linear camera move. Although the slider was motorized, there was no way of connecting it to Dragonframe's interface, so a move could not be programmed automatically.

However, we still had the previsualizations we had made for the dope sheets, and as these contained the camera information for each frame, I was able to write down a speed value. Then, when it came time to shoot, we just input that value on the slider's controller. This enabled us to get a smooth dolly-out motion to reveal that Clay doesn't have his arm.
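The calculation behind that speed value can be sketched like this, assuming the previs gives a camera position along the slider for every frame (the function name, units and frame rate here are illustrative, not taken from the actual previs files). A one-speed controller can only do a linear move, so the overall distance over the total duration is what gets dialled in:

```python
def slider_speed(positions_mm, fps=25.0):
    """Reduce per-frame previs camera positions to one slider speed (mm/s).

    positions_mm: the camera's position along the slider for each previs
    frame. The controller takes a single constant speed, so we use total
    travel distance divided by total duration.
    """
    if len(positions_mm) < 2:
        return 0.0
    distance = abs(positions_mm[-1] - positions_mm[0])
    duration = (len(positions_mm) - 1) / fps  # seconds covered by the move
    return distance / duration
```

For a genuinely non-linear move this flattens the curve into an average, which is the compromise a constant-speed slider forces anyway.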

Overall, shooting the stop motion was a tiring yet very educational and fun experience. Pre-planning everything was definitely the reason we managed to pull it off in such a short amount of time, and I believe it was worth it, as the results speak for themselves.

BEYOND THE BOUNDARIES: Getting a room for Stop Motion

Although our original plan was to negotiate with the Animation course to use the stop motion room, we quickly started looking at alternatives due to the requirements of our project.

Luckily, Rosie suggested that we could use one of the film cutting rooms, as there are many of them and they may not be in use.

Therefore I went to the film production office and spoke with Nioski Deville, who generously gave us access for four weeks. This also allowed us to store all of our equipment in the room and lock it up for use the next day.

The room did require moving some boxes and a Steenbeck machine into the next room, but this gave us enough space, as the room was small.



To figure out some of the techniques behind the effects for "D-Eb31e", Shahid and I shot a test using the GH4 and the motorized slider.

The idea is that by repeating the same camera move but shooting each element separately on a green screen, I can isolate and edit every element individually, avoiding more time-consuming techniques such as rotoscoping and paint work.

The first step is to synchronize all of the shots, and then key and merge the individual passes to reconstruct the shot as it would look if it had been shot for real.

To distort the wall, a "displacement map" is needed: basically a black and white image where the black areas remain unaffected and the white areas are displaced from their original position. If the texture is animated, this can produce a really nice effect.
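A minimal sketch of how such a map drives pixels, assuming a greyscale map in [0, 1] and horizontal displacement only (Nuke's distortion nodes work per-channel and in both axes, so this is just the core idea, not their implementation):

```python
import numpy as np

def displace(image, disp_map, max_shift=10):
    """Shift each pixel horizontally by an amount driven by a B/W map.

    disp_map is greyscale in [0, 1]: black (0) leaves a pixel untouched,
    white (1) pulls it max_shift pixels away from its original position.
    Animating disp_map over time animates the distortion.
    """
    h, w = image.shape[:2]
    ys, xs = np.indices((h, w))
    # For each output pixel, look up the source pixel it should come from.
    src_x = np.clip(xs - (disp_map * max_shift).astype(int), 0, w - 1)
    return image[ys, src_x]
```

Reading *from* an offset source position (rather than writing *to* one) is what avoids holes in the result, which is also why compositing packages express distortion this way.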

To create this map I'm using Nuke's built-in Noise gizmo, as well as some others downloaded from

To properly attach the effect to the moving wall, the background layer was tracked and its position was applied to the displacement map. I also added a color grade gizmo, using the displacement map as a mask, in order to create an interesting glow near the areas where the distortion was most noticeable.

Although it started to look cool, to add a bit more warp and distortion I added a spline warp gizmo to the layer, before the displacement map. A spline warp allows you to draw outlines around the image and then warp those outlines.

I also added the spline warp to the background elements in order to make them bend as well.

As final touches, I added some color effects, camera shake and a color correction layer.

This is also an earlier test, using Primatte instead of Keylight and other methods to warp the wall and objects:

Personally, I prefer the first version's wall warping, but the keying process from the second version. The render times for both of these are considerably high, so I will need to rethink some elements to bring them down.

Lastly, through using Nuke Studio I discovered that it can automate the export and import process for the visual effects files. This could save the 1-2 hours otherwise spent exporting the images and setting up the projects.

REFRAMING THE INTERFACE: Illustrating the Indiegogo trailer

To better portray our style and ambitions for the project, Lauren suggested that we create a trailer/pitch video, but instead of just filming ourselves, we would create it as a 2D animation.

To illustrate the cartoon characters and props, I used Adobe Illustrator.

Initially I created a rough single-layer sketch to get the drawing right, and then traced over it, separating each layer ready for After Effects.



To promote D-Eb31e on Indiegogo, we decided to shoot a test trailer and a segment explaining how the costs will be spread across the production.

This trailer shoot also gave us a good opportunity to test out our kit and the workflow we had planned.

Overall it was a good shoot and this is what we created:

BEYOND THE BOUNDARIES: Production folder and Presentation

This is our finished 74-page production pack about "Through the Medium":

And this is the presentation used in our pitch:



One very important thing I noticed in cartoons and computer-animated films is the extreme expressions, which, even though they are physically impossible, add to the comedy and exaggerate the movements.

I managed to find this video on fxguide that shows how extreme these poses can be, and how the laws of physics have to be bent for other elements (such as cloth) to work.

I also found this interesting video on character design for cartoon characters from an upcoming film.

BEYOND THE BOUNDARIES: Starting to get the group folder ready

After looking at examples during our group tutorial with Kathleen, we started preparing our group production folder.

The first thing we looked at was creating a logo for our film, so that we could also use it across the printed production folder.

After going through lots of tests and ideas, and with guidance from Lauren, we designed this final logo, which we believe incorporates both the characters and the mediums while remaining clean and simple:

With the logo done, we've started laying out the production folder using the logo and its colours. We have come up with a rough table of contents and are gathering all the information needed to complete it.

BEYOND THE BOUNDARIES: Indiegogo shoot for D-Eb31e

To better promote the Indiegogo page, we decided to shoot a trailer and a promotional video of Lucy explaining the concept of the film and why we need the budget.

The first attempt at Lucy's talk was shot in the photography room with a DSLR; however, due to some complications with the audio, we decided to reshoot it on the same day we would be shooting the trailer, using the URSA to better show off our production values.

The URSA was a pleasure to work with; however, the battery only lasts about 1.5 hours, and the card can record 25 minutes and takes about 40 to unload. It was good that we figured this out on the test day, as I believe Laura has now added the rental of a CFast card to the budget to make us more efficient.

This was also an opportunity to test the actual workflow that we will use on the final film. The footage was offloaded onto a hard drive using ShotPutPro, a file transfer application aimed at digital imaging technicians, which also verifies that the copy was successful and error-free by comparing both sets of files.
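The verification idea can be sketched in Python: hash the source clip, copy it, hash the destination, and fail loudly on a mismatch. This is a simplified stand-in for what ShotPutPro does, not its actual implementation, and the file names are made up:

```python
import hashlib
import shutil
from pathlib import Path

def checksum(path, algo="md5", chunk=1 << 20):
    """Hash a file in chunks so large camera files don't fill memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verified_copy(src, dst_dir):
    """Copy a clip to the backup drive and confirm the copy bit-for-bit.

    Hash before and after the copy; a mismatch means the offload is not
    safe to trust, so we raise instead of continuing silently.
    """
    src = Path(src)
    dst = Path(dst_dir) / src.name
    before = checksum(src)
    shutil.copy2(src, dst)  # copy2 also preserves timestamps
    after = checksum(dst)
    if before != after:
        raise IOError(f"Verification failed for {src.name}")
    return dst, after
```

The point of hashing both sides, rather than just checking file sizes, is that it catches corruption introduced by the drive or cable, which is exactly the failure a set offload needs to guard against.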

The trailer shoot itself was a really nice opportunity to test out the high speed feature of the URSA, filming up to 120fps at full HD. We managed to get some really nice results, and hopefully the trailer will be compelling to watch and instructive for us as a crew.

Here are the dailies:



During our second camera workshop we had a more in-depth look at the C300 and the URSA, including their respective lens kits, matte boxes and follow focus units, alongside the Ronford heavy duty tripod and the TVLogic monitor.

We learned about proper practices in setting up these professional cameras, but also had the opportunity to test them out and compare them.

Here are some of the comparisons between both cameras:

Based on these tests, I am leaning towards the URSA, due to it being slightly sharper and having better color management, since we are using DaVinci to grade. There are limitations such as card a



Finishing the base model:
Continuing from the last blog post, I used extrusion modeling to create the rest of the body. This is the result:

After that I added another skin material to the rest of the body to match the face material.

I then continued extruding to create the hands and fingers:


To start off the rigging process, I used Cinema 4D's Character Object, a system that allows a generic rig to be modified to fit the model; this works by adding key points to the model:

Facial Expressions with Morph Targets:
Since we wanted the character's facial expressions to be more expressive and stylized, beyond the boundaries of a conventional rig, I decided to use morph targets. Copies of the main model are made and modified with the different expressions, and those expressions are then morphed between one another. I also used the "doodle tool" to draw over the viewport as a guide for modeling the expressions. This also made the expressions feel more cartoony.
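The morphing itself can be sketched as a weighted blend of vertex deltas, assuming every target mesh shares the base mesh's topology (the names and array shapes below are illustrative, not Cinema 4D's API):

```python
import numpy as np

def apply_morphs(base, targets, weights):
    """Blend morph targets as weighted offsets from the base mesh.

    base: (n, 3) vertex positions of the neutral face.
    targets: dict of name -> (n, 3) expression meshes (same topology).
    weights: dict of name -> slider value, typically in [0, 1].
    Each target contributes its delta from the base, so several
    expressions can be mixed at once.
    """
    result = base.astype(float).copy()
    for name, w in weights.items():
        result += w * (targets[name] - base)
    return result
```

Storing deltas rather than absolute positions is what lets, say, a half-strength smile combine with raised eyebrows without the targets fighting each other.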

Fixing the weights / internal muscle system:
The automatic binding process had some unexpected results when connecting the bones to the appropriate parts of the mesh, so manual intervention using weight painting was needed to correct this.
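The underlying bookkeeping can be sketched numerically: after painting, each vertex's joint influences should sum to 1 again, which is what renormalization restores. This is a toy version of what the 3D package handles internally, with illustrative array shapes:

```python
import numpy as np

def normalize_weights(weights, eps=1e-8):
    """Renormalize per-vertex skin weights so each row sums to 1.

    weights: (n_vertices, n_joints) array of influence weights. After
    manual weight painting, rows rarely sum to exactly 1, which makes
    the mesh drift or collapse as joints move; dividing each row by its
    total restores a proper convex blend of joint transforms.
    """
    totals = weights.sum(axis=1, keepdims=True)
    totals = np.where(totals < eps, 1.0, totals)  # leave unweighted rows alone
    return weights / totals
```

Keeping the weights normalized is also why painting more influence for one bone effectively takes influence away from the others.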

Also, to help with the look of the muscles, as they are a main characteristic of our character, I added an internal muscle structure for the arms. However, there are still a few problems with it.

One of the things Lauren and I thought of was that, if there are any problems with the rig and there is not enough time to fix them, we could just write the problem into the script and turn it into a gag, since we are creating a reflexive piece.

Rig tests:
The following tests were made to try out the current version of the rig and spot any problems that needed fixing: