Tuesday, 8 November 2011

Camera Projection

Yesterday was supposed to be the day this shot came together, a shining moment, and then the track didn't work and blew my whole operation. I ended up doing a blog entry about reflections and shadows and some other hot air to try and cover up the fact that my track didn't work.

Well, today, even though I'm ashamed of it, I'm going to post the track that looks like arse, but I'm also going to post a little bit about camera projection, which in this case has been a bit of a get-out-of-jail-free card.

As I said in my last post, there were a number of benefits to recreating the physical environment within Maya, accurate lighting and shadows to name a couple, but the ability to generate a believable camera projection in place of texturing every surface is another.

This image, which I'm reusing because I'm too lazy to render another shot out, shows the camera projection at work. In my last post I spoke about how camera projection works to generate reflections on the ball; this time, however, I am using the projection to actually texture the geometry.

This Maya view window animation gives a graphic representation of the geometry, the renderable camera and the projection camera.

This is my track that unfortunately didn't work out, which is always a shame. In this particular render I hid the virtual geometry and just rendered out the ball with its reflections (last blog entry), then added the actual footage back in at the comp stage. This helped cut down render times, although as you can see, because the ball's movement didn't match the footage it looks horrible, and there is next to nothing that can be done to fix it apart from retracking the footage.

So, what I decided to do was re-render the scene, only this time I would render only the virtual environment, with a single frame of the footage projected onto it. This image to be precise:

As the original footage now appears nowhere in this sequence, there can be no conflict between track and footage. The following shot is the virtual geometry with that one single frame being projected onto it by a locked-off projection camera, while the match-moved camera moves through the scene.

As you can see there are a few anomalies towards the end as the renderable camera moves out of the range of the projected frame, although if I had the inclination to fix it, this could be rectified with multiple projection cameras throughout the scene, stitching together the whole environment from 4 or 5 frames. Also notice that the ball now sits perfectly in the sequence with no slippage at all, because there is no original footage: it is 100% virtual.

All in all I'm pretty impressed with the way it came out and it just goes to show that even when things fuck up badly, there's still a positive to come out of it!

Saturday, 5 November 2011

Shadows and Environmental Reflections

So, after creating a virtual environment from a live action plate I thought I'd take a look at what it was possible to achieve within this environment.
Unfortunately for this exercise when I rendered the scene out it soon became obvious that the 3D track was not up to par and as such the geometry I placed into the scene slipped and jumped about the place, thus making the whole shot look like pure arse.
This was a bit of a shame because all the work up to then looked promising. It is, however, another good lesson and just reinforces the practice of getting things right at the input, or initial, stage of production. That age-old adage, "you can't polish a dog turd", still holds true today and is something I will be especially mindful of when it comes to producing the money shots.

Just because the sequence looked rubbish doesn't mean the still frames did, so here's a little look at what I did.

Back at the old location again, I decided in true 3D CG style to put a sphere right in the middle of the scene. Why, I hear you ask? Because I can. But this isn't just any sphere, this is a sphere with a true-to-life shadow and environmental reflections!

If you saw my previous blog entry on 3D environments, you'll notice that from this match-moved footage I recreated the environment within Maya. As this environment very closely matched the actual physical space, and the camera move was (roughly) taken from the physical camera, it allowed me to use a technique known as camera projection to map the reflections onto the sphere. Here's how it works:

This mash of a rendering above is the geometry of the match-modelled space. However, instead of texturing this space with UV coordinates, I duplicated the 3D match-moved camera, movement, translation and all, and used this second identical camera as a sort of film projector, if you will. This projection camera projected the original footage that I filmed back onto the blank 3D geometry (which you can see above), thus creating perfectly matched reflections for the sphere to pick up. Once this was achieved it was just a matter of rendering all the elements out separately for comp.
This is done via Maya's render layers, where it is possible at render time to turn off the primary visibility of geometry but still have it picked up in reflections.
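For anyone curious about the nuts and bolts, the core of a projection camera is just pinhole-camera maths. Here's a rough Python sketch of the idea (my own simplification with made-up names, not Maya's actual implementation), assuming a camera that looks straight down its local -Z axis:

```python
def project_uv(point, cam_pos, focal, filmback_w, filmback_h):
    """Project a world-space point through a simple pinhole camera
    sitting at cam_pos, returning the normalised UV coordinate at
    which to sample the projected footage."""
    # Position of the point relative to the camera (camera looks down -Z)
    px = point[0] - cam_pos[0]
    py = point[1] - cam_pos[1]
    pz = point[2] - cam_pos[2]
    # Perspective divide onto the film plane at the focal distance
    x = focal * px / -pz
    y = focal * py / -pz
    # Map film-back (aperture) coordinates into 0..1 texture space
    return x / filmback_w + 0.5, y / filmback_h + 0.5

# A point dead ahead of the lens lands in the centre of the frame
print(project_uv((0.0, 0.0, -10.0), (0.0, 0.0, 0.0), 35.0, 36.0, 24.0))  # -> (0.5, 0.5)
```

Every point on the geometry gets a UV from the projector this way, so the footage lands back on the surfaces exactly as the lens originally saw them.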

This image is an example of that: all the other geometry has had its primary visibility disabled but is still casting reflections.
The final element in this shot was the shadow, which again was rendered in the same way. Using the 3D geo to help with the shadow, I rendered out a simple shadow pass on the alpha channel for use in comp.

So there you have it. All the elements of placing 3D geo into a match-moved scene. There were a few final touches, such as a colour and grade correction and a defocus on the ball to fit it into the scene, but I'll cover them properly down the line. And maybe next time I can get the track to work so there's some proper footage to look at!

Friday, 4 November 2011

3D Environments and plate stabilization.

Having looked at pretty basic match moving in previous posts, I thought I'd take it a step further today and try to match move some footage and then recreate the environment within Maya to match the camera.

After much trial and error with footage, also outlined in previous posts, I managed to find a piece that was appropriate for this exercise. The main issue I kept coming up against was the jerkiness of the handheld footage; however, on this take I managed to feed the footage into Nuke first, track it and smooth it out using Nuke's de-jitter transform option. I might also add that the de-jitter option in Nuke is something that AE lacks and is one of the most useful tools out there for film makers.

Above footage before the De-Jitter tracker was added

Footage after De-Jitter

Once the footage was stabilized I exported it into Autodesk Matchmover once again and tracked it, creating a bunch of 3D track points. With the footage stabilized and less movement in the X and Y axes than previous shots, the solve worked out very well, although there were a few focal aberrations due to the fact it was my girlfriend's handheld stills camera on auto focus...

Once the shot had been solved I imported it into Maya and got on with digitally recreating the environment. As I didn't have my tape measure handy when I shot this footage, a lot of the geometry placement came down to guesswork and trial and error. This was a good lesson for next time, however: it is vitally important to take measurements of your environment!

The above frame is a snapshot of the very basic scene recreation. There is no detail as such and it is all basic geometry, but with the match-moved camera and the now-virtual environment, this was the result of the render.

As you can see, the movement exactly matches that of the physical camera and the environment more or less lines up with its physical counterpart (apart from a few slips here and there!)
The real power of this technique is not in recreating an already-existing physical space, but as a great frame of reference for adding 3D elements to the original footage. With the ground plane and walls added to the Maya scene, it ultimately helps to achieve realistic shadows, lighting and reflections for anything else I may want to add. Although this one is very rough, it is very promising for things to come!

Wednesday, 2 November 2011

Motion Graphics with After Effects

Yesterday I had a brief look at motion graphics and basic compositing inside of Nuke and how all these elements of a mock advert fit together. Today I'm going to be looking at something similar within After Effects.

As a general rule of thumb, After Effects is sneered at within the compositing community at large as something of a kiddies' toy, a kind of "my first compositing" package, and to be honest that's not without some foundation. However, this doesn't mean After Effects doesn't have its place within the creative industry, because as a motion graphics application AE is second to none.

The tools provided with a program like Nuke are perfectly suited to the high-end compositing arena, more specifically working as a finishing package for assets created elsewhere (although with the addition of elements like its 3D particle system, that is beginning to change). After Effects, on the other hand, is much more suited to creating those assets and compositing them in an in-the-box style of workflow.

The first clip that I made for this exercise was a pretty straight piece of motion graphics serving as another mock advert for my impending street scene. The matte of this particular shot was outlined in my Green Screen and Chroma Key blog entry a few days ago when looking at the technique of hard and soft matting.

This video differs from the Nuke motion graphics advert in the sense that all of the assets contained in it were created within After Effects itself, without having to rely on Maya for anything. This is where the power of After Effects lies. *I should point out that the After Effects workflow is seriously compromised by its instability, something I have had a lot of issues with. This is an area where Nuke wins hands down, being one of the most stable applications I've used.

Anyway, after attaining a good key using Keylight (a cross-application keyer that works the same in AE as in Nuke), I then set out to create the sequence's look.

As the content of this particular shot oozed cheese, I wanted to create a style of advert that matched Adam's dynamic performance. When I think about cheesy adverts, I generally can't go past Japanese advertising. I love it. My perception of these adverts is that they are short and to the point, with lots of bright colours.

As a point of reference I was influenced by a recent film clip by Mark Ronson. This clip is retro to the max and it is this old school retro visual style that I tried to emulate.

With this in mind, I first set out to create some very simple animating vector graphics. I wanted them to be bright, colourful and above all simple, sticking with the Japanese-inspired theme. This I did with 3 simple checkerboard layers stretched out to create bars and animated over one another. The fonts were then created from a free download of Chinese font packs, again animated over simple, bright vector graphics.

This was all relatively easy and it looked pretty good once it was done. After the animating part was finished I turned my attention to the grade, as this was what would sell the shot. Ungraded, this shot looked terrible. The lighting was one of the hardest things to contend with, as it is so stark, and the lack of any definition made it hard to get a professional look.

The first thing I did was create an adjustment layer that would affect all elements of the shot equally to give it some coherency. With these adjustment layers I reduced a lot of the red in his face and bumped up the contrast to give it a filmic look. On this layer I also added a glow to make the whites pop; this also served as a light-wrap-style effect to tie the FG and BG elements together.

Thus creating the final product that is fit for Japanese television.

Tuesday, 1 November 2011

Motion Graphics with Nuke

Last time around I was looking at different techniques of Chroma keying, and all for very good reason, because today, I'm going to be showing you what I did with those lovely keyed sequences of my highly talented crack(ed) acting squad.

First up we'll look at Nesbeth. As you may recall when we last left him he was digging the smooth sounds in his new headphones and trying to nail a key on reflective objects within the scene. As I mentioned before, this didn't turn out to be a problem, because for this particular ad I used the key to artistic effect.

As you can see, the final look of Matt in this ad is simply the alpha channel inverted. This is a very simple technique to achieve in Nuke. When you use Keylight to key a green screen, this is what you get as an alpha channel. It is then simply a matter of using the Shuffle node, a handy little node that allows you to "shuffle" the colour channels of any given image around. In this case the alpha was inverted and then shuffled into the R, G & B channels respectively, giving this as the final result.
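If you like to think in per-pixel terms, the whole shuffle trick boils down to something like this little Python sketch (my own illustration of the maths, not actual Nuke code):

```python
def invert_alpha_to_rgb(pixel):
    """Invert the keyer's alpha and shuffle it into the R, G and B
    channels, giving the black-and-white inverted-alpha look."""
    r, g, b, a = pixel
    inv = 1.0 - a
    return (inv, inv, inv, 1.0)

# A solid foreground pixel (alpha 1) comes out black,
# a fully keyed background pixel (alpha 0) comes out white
print(invert_alpha_to_rgb((0.5, 0.2, 0.1, 1.0)))  # -> (0.0, 0.0, 0.0, 1.0)
```

The original colour channels are thrown away entirely; only the matte survives, which is exactly why the look is so clean.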

Once the overall visual style was accomplished (it is based on those Apple iPod ads), it was just a matter of finessing the final product. Again in true Apple style, I added a particle system which I created with Trapcode Particular and controlled via a few simple expressions written into the particle emitter's transform attributes (where it moves in 3D space), telling it to move around in a random fashion (random(x/2)/5 if you really want to know).
Particle system on its own screened over the existing keyed footage.
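The behaviour of an expression like random(x/2)/5 can be sketched in Python like so (a loose approximation of the idea, not AE's actual expression engine, and emitter_offset is a name of my own invention):

```python
import random

def emitter_offset(frame):
    """Repeatable pseudo-random offset per frame: seeding from
    frame // 2 means the value only changes every other frame,
    and dividing by 5 keeps the wander small."""
    rng = random.Random(frame // 2)
    return rng.random() / 5.0

# The same frame always gives the same offset, so re-renders are stable
offsets = [emitter_offset(f) for f in range(4)]
```

The key property is repeatability: because the randomness is seeded from the time, every render of the same frame produces the same motion.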

The next element to add was the headphones. This was pretty simple, as I just made a loose roto shape and essentially cut them out to separate them from the rest of the footage. By doing this I was able to give the particles and the headphones the same colour, and again control their changing colour with an expression written into a HueShift node. This time the expression was t*1, or time*1. This makes the hue of the affected element equal the time in the timeline, so it changes as the time does.
Rotoscoped headphones enabled separation from the rest of the scene.
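The t*1 hue-cycling idea is easy to sketch with Python's standard colorsys module (my own toy version of the effect, not what the HueShift node literally computes):

```python
import colorsys

def hue_over_time(rgb, t):
    """Shift a colour's hue by the current time t: as the timeline
    plays, the hue cycles through the spectrum while lightness and
    saturation stay put."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb((h + t) % 1.0, l, s)

# At t = 0 the colour is untouched; by t = 0.5 red has swung to cyan
print(hue_over_time((1.0, 0.0, 0.0), 0.0))  # -> (1.0, 0.0, 0.0)
```

Driving both the particles and the roto'd headphones through the same function is what keeps their colours locked together as they change.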

That was pretty much it for visual elements. This style of advert is very clean and would have easily become over-cluttered, so I thought keeping it simple was the best plan of action. The only other element added was the text. I wanted to keep this in line with the particle theme, and I was also looking for an excuse to have a play with some fluid simulations within Maya. I soon figured out that it was actually pretty easy to take some text with an alpha channel and plug it into the fluid's density, making the text the starting point of the fluid.

Once this was rendered from Maya, it was inverted in Nuke to fit the rest of the scene and a Min blend node was used to remove the white and keep the black (opposite to a normal alpha channel).
The final script in Nuke looked a little like this...Exactly like this actually.

This image represents the node-based system that Nuke uses as its main interface. At the top of the tree lies the main green screen footage, with the 2 nodes below it being the keyer and the Shuffle node that create the inverted alpha look (B&W). To the right of that is the headphone roto branch, with the HueShift node (shifting colour) and 2 glows to light them up; these 2 paths were then merged together.

This is the basis of how the Nuke interface works and once you wrap your head around it, it is a lot more streamlined and refined than working in layers.

Anyway, that's enough of that for the moment. Next time I'll be having a look at working with motion graphics in After Effects.

Monday, 31 October 2011

Green Screen and Chroma Keying

For my last little exercise I decided that, in order for the aforementioned shot to take place, I was going to need to create some adverts to place into the scene. As this is the future, and all adverts are projections or on some large screen beaming their inane messages onto unsuspecting citizens below, I decided to do a little green screen work.

With the kind help of my multi-talented pals Matt Nesbeth and Adam Lyduch, we spent a brief half hour in front of a green screen recording some uber cheese.

Green screen work in a project such as this, and especially for the adverts, is the process of separating the foreground and background elements by what is known as chroma keying. Chroma keying essentially takes a sample of a single unnatural colour, usually green or blue, and replaces that colour with transparency, leaving a cut-out of your designated object, also known as a matte.
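At its simplest, the idea can be sketched in a few lines of Python (a naive distance-to-key-colour matte of my own devising, nowhere near as sophisticated as a real keyer like Keylight):

```python
def chroma_key_alpha(pixel, key=(0.0, 1.0, 0.0), tolerance=0.6):
    """Naive chroma key: alpha comes from the pixel's colour distance
    to the key colour, so anything close to pure green goes transparent."""
    dist = sum((p - k) ** 2 for p, k in zip(pixel, key)) ** 0.5
    return min(dist / tolerance, 1.0)  # 0 = keyed out, 1 = fully opaque

# Pure green screen keys to 0, a red jacket stays solid
print(chroma_key_alpha((0.0, 1.0, 0.0)))  # -> 0.0
print(chroma_key_alpha((0.9, 0.1, 0.1)))  # -> 1.0
```

Pixels part-way between (green spill on skin, motion-blurred edges) land between 0 and 1, which is exactly where all the pain of keying lives.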

The first thing to take into account when doing chroma key work is light. It's really important to have good light on the green screen, and just as important to have good light on your subject. I noticed when doing this exercise that shadows are no friend of the keyer. This means trying to keep your subjects as far away from the screen as possible, and also making sure there are no kinks in the screen that could cast shadows.

As this was a completely impromptu improv advertising session we essentially had to try and sell whatever was on our persons. This made for an interesting shoot.

This first video is of a hugely passionate and expressive Nesbeth. What a great job he's done here. I really feel his ecstasy and love for the music he's listening to.
This key was pretty straight forward except for what could have been some potentially problematic reflections on the headphones themselves. Reflective surfaces can be a real bitch as they pick up the spill from the green screen and punch holes in your matte.

This image of the alpha channel shows the holes left in the matte by reflections and spill from the green screen. Luckily this was not a problem with this particular shot, as you shall see in subsequent posts.

The second shot I worked on from this shoot was of Adam Lyduch selling a phone. This was also an exercise in cheesiness.

Here there is a similar potential issue with a reflective, almost mirror-like object: an iPhone. Thankfully the phone was never at the right angle to pick up the green screen, so we kind of dodged a bullet there.
For this second key I employed a process known as Hard and Soft keying. This is a method taught to me by my man on Frankenweenie, Mr. Jon Brady. Jon is a veritable wealth of knowledge on all things Nuke and what he doesn't know he will find out.
Hard and soft matting works by combining a number of different keys to help achieve a natural-looking key without losing too much edge detail.

As this first image of the alpha channel demonstrates, the edges of the key, when compared to the original footage, are very harsh, with very little detail around the hair. This is the hard matte and is intentionally keyed this way. The hard matte is a very basic matte: the green is keyed, in this case using Keylight within Nuke, and the edges are shrunk, or dilated to the negative. The alpha channel from this hard matte is then piped into a second instance of Keylight, where it is added to the inside matte, meaning the opaque, or white, part of this image is added to the second keyer.

This second keyer, with the opaque hard matte piped into it, is then used to bring back the edge detail, creating a more believable key. This can be seen in the hair and stubble areas, where the softness and detail are clearly visible.
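One way to picture what the combine is doing per pixel is something like this (my own simplification of the hard/soft result, not Nuke's actual maths):

```python
def hard_soft_matte(hard, soft):
    """Combine a choked, hole-free hard matte with a detailed soft
    matte: the hard matte guarantees a solid core, while the soft
    matte brings back the fine edge and hair detail."""
    return [max(h, s) for h, s in zip(hard, soft)]

# Core pixels stay solid, edge pixels keep their soft partial alpha
print(hard_soft_matte([1.0, 1.0, 0.0, 0.0], [0.9, 0.4, 0.3, 0.0]))  # -> [1.0, 1.0, 0.3, 0.0]
```

Each matte covers for the other's weakness: the soft key's holes are filled by the hard core, and the hard key's chewed-off edges are restored by the soft key.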

Tuesday, 25 October 2011

Location Concepts

This weekend I went out and had a look around Bristol for some locations that I thought would suit the ideas I have for my first VFX undertaking.

The initial idea I had was one of very contemporary sci-fi cinema, along the lines and concepts of the films Minority Report and I, Robot. One aspect of modern culture that I see as pervasive, and a trend that is likely to become more and more ubiquitous in coming years, is advertising. Capitalism, but more specifically consumerism, seems to be the driving force taking over modern society, and it is something I can only see accelerating into the future.

This clip from the Steven Spielberg film Minority Report is an influence on what I see for this first shot.

This is what forms the basis of my first shot. Originally my thoughts were to keep the footage as raw as possible, working on the idea of a slice of real life as opposed to a carefully scripted and rehearsed, filmic camera move. I felt this would give the shot more credibility. An example of this would be the way a lot of the VFX was done in the film Cloverfield, which contained a lot of handheld footage and, combined with modern VFX techniques, gave a very unique and realistic look.

Cloverfield trailer; handheld VFX are very convincing.

My thoughts on how to achieve this kind of look had up until now been based on the motion tracking technique, whereby the footage is tracked and elements inserted. However, upon further inspection of these kinds of shots I realised that a lot of the action had been shot on green screen and composited after the fact.

In early tests with motion tracking I found it increasingly difficult to get accurate camera solves, or accurate tracks, in low-light situations and, more recently, with jerky footage. This is now making me reassess how I will complete my shots.

The following sequence of shots comes from my own investigation into locations here in Bristol that would best suit the mood of this shot. Bristol has some great locations, and the first place I checked out was Nelson Street. Nelson Street, now known for its graffiti, is a great location due to its ungodly, Orwellian, Brutalist-type architecture. This fits in perfectly with the grittiness I was trying to accomplish for a sort of dystopian view of the future.

These 4 sequences are all from the same street. Looking at this footage I am pretty much sold on this being my final location for this shot, due to its unique topology. However, these location tests have proved very difficult to track accurately. I am beginning to see a lot of limitations in what Autodesk's Matchmover 2012 can track accurately. For now I am moving into NukeX to try and make more accurate tracks with Nuke's in-built camera tracker. Should this fail I will have to reassess the type of shot and movement of the camera, and perhaps consider a track and dolly, or maybe some kind of steadicam, to help with the solve.

Wednesday, 19 October 2011

Multi-Channel Compositing

Here I've got a bunch of passes from my robot for comp. Compositing is the process of taking all the elements that are going into a shot or sequence and essentially tying them together into one coherent image. This process quite often entails colour matching, focus matching and lighting adjustment. Green screen, or chroma keying, is also part of the compositing process, as the need to "pull mattes" is central to a compositor's role.

For this latest exercise I wanted to do some touch-ups of my robo-man to a level of accuracy that can't easily be achieved in a 3D package. As the compositing process is generally, but not exclusively, done in a 2D environment, it is a lot faster and more economical to do these touch-ups there as opposed to in the 3D world. This is, however, a trend that is being challenged by the ever more complex 3D systems being built into compositing software.

After my render was completed in Maya I exported my robot via the .exr file type. .exr is a very, very handy file format. Developed by Industrial Light and Magic, it enables a 3D artist to embed as many passes into one file as necessary. Having this level of control makes it much easier for the compositor to tweak any aspect of a particular sequence.

Below is the process of my quick comp job on my robot, outlining the amount of data that can be packed into an .exr file.

This image is my straight rendered .exr file, brought directly from Maya into Nuke. The image is displaying all layers concurrently and looks blown out due to incorrect layer blend modes. However, once the .exr is unpacked and manipulated, this ceases to be a problem.
Most image formats that people are familiar with, such as JPG and GIF files, generally carry 3 or 4 channels: RGB, plus an A, or alpha, channel for transparency information. As you will see with the next series of pictures, one .exr file can carry far more than this.

When rendering out multiple passes in Maya, it is generally done on a lighting basis. That is, all the separate aspects of the lighting state are broken down into their constituent parts to allow greater control over the final output.

This first cell is the diffuse pass. This is a lighting pass that will only render light that is reflected from a surface in a diffuse, or matte, way.

The next pass I rendered out is the specular pass. This deals with the highlights of the objects' materials and generally appears on shiny surfaces. This was screened over the top of the diffuse pass, removing the blacks and leaving just the highlights.

The next pass is the reflection pass. This, as the name suggests, has simply picked out all the reflections from shiny objects. This is very handy, as the overall look of the material can be altered by adjusting this pass.

This next pass is the shadow pass. The blend mode for this works on a subtract basis; that is, all the pixels in this pass are used to subtract from, or add darkness to, the composite image.
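Both blend modes mentioned so far are simple per-pixel formulas. Here's a hedged Python sketch of how the passes might recombine (function and pass names are my own, and a real comp works on whole images, not single values):

```python
def screen(a, b):
    """Screen blend: brightens without clipping, so the blacks in the
    spec and reflection passes drop away and only the highlights add in."""
    return 1.0 - (1.0 - a) * (1.0 - b)

def comp_pixel(diffuse, spec, refl, shadow):
    """Rebuild one pixel from the separate passes: screen the specular
    and reflection passes over the diffuse, then subtract the shadow."""
    lit = screen(screen(diffuse, spec), refl)
    return max(lit - shadow, 0.0)

# Black in a screened pass leaves the base untouched
print(screen(0.5, 0.0))  # -> 0.5
```

This is why the flat all-layers view above looks blown out: summing every pass with the wrong blend modes piles all the values on top of each other.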

This final pass is what's known as a depth map. It is used as a matte to control a z-depth blur. When attaching a depth map to a z-depth blur, the darker areas are unaffected by the blur, while the white areas are completely blurred. This is used to give the render depth of field, hence the name z-depth blur.
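As a toy illustration, here's a 1-D version of a depth-matte-driven blur in Python (entirely my own sketch; a real z-depth blur works in 2D with proper filter kernels):

```python
def zdepth_blur(row, depth_row, max_radius=2):
    """Blur a 1-D row of pixels with a box filter whose radius is
    driven by the depth matte: 0 (black) stays sharp, 1 (white)
    gets the full blur radius."""
    out = []
    for i, d in enumerate(depth_row):
        r = int(round(d * max_radius))
        lo, hi = max(0, i - r), min(len(row), i + r + 1)
        window = row[lo:hi]
        out.append(sum(window) / len(window))
    return out

# With an all-black depth matte nothing is blurred
print(zdepth_blur([1.0, 0.0, 0.0], [0.0, 0.0, 0.0]))  # -> [1.0, 0.0, 0.0]
```

Because the blur amount varies per pixel with the matte, near objects stay crisp while the background melts away, which reads as depth of field.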

After tweaking the final comp came out like this.

As you can see when compared to the top image, the straight Maya render, multi-pass compositing adds the final sheen to a CG render. The process is extremely fast and is a great way to get results that are rarely seen with a 3D package alone.

Mental Ray Rendering

Today I was having a quick look at some of the render options within Mental Ray in Maya, and I have to say, it shits all over the Maya Software renderer. As part of the course we were asked to design a character in Maya that would eventually get rigged and animated, I assume using Maya's IK and bone animation tools... or whatever, I haven't really looked too much into animating yet, that's still to come.

Anyway, as I'm looking at placing CG into real world environments, I thought I'd create a robot... which seems to be becoming a recurring theme on this blog... But this robot wasn't a large robot that turns into a car and destroys everything within close proximity; I was thinking a little more realistically about this one. I've always been a bit of a sci-fi fan, but believable sci-fi. So I was looking more at a design along the lines of the Isaac Asimov/Will Smith type genre. I was also influenced by the visual style of the Chris Cunningham video for Björk's All Is Full of Love. These to me seem to represent a more realistic picture of what robots will one day be. This film clip and the Asimov-style narrative also raise a lot of philosophical and sociological questions, but I'm not getting into that right now. Needless to say, it's a bad-arse clip with some equally bad-arse CG and model making.

Chris Cunningham and Björk's All Is Full of Love.
Sonny the NS-5, from I, Robot.

But anyway, I've digressed into talking about robots again... Damn! So, as I was saying, and as the name of this post suggests, I was looking at Maya's Mental Ray renderer, specifically Global Illumination and Final Gather. Up until now, whenever I modelled, textured and rendered an object, I had strictly used Maya's ray trace shadows and reflections and been done with it. Although high-quality ray trace shadows can look very effective, there is still something intrinsically CG about them. This is where Global Illumination and Final Gather come in.

When rendering with just ray trace shadows, the geometry receives light and casts shadows as it would in the real world, but it doesn't reflect light onto other objects or contribute any ambient lighting. This ambient lighting is such an important part of rendering and can take a good render and make it look a hell of a lot more realistic.

The first rendering of my robot here was done using just ray trace shadows. Although the shadows are mathematically correct, they don't look realistic.

The second render here is without ray trace shadows, but instead uses Global Illumination and Final Gather to calculate the lighting. The Global Illumination function causes the light source to emit virtual photons into the scene and then maps them as they decay and interact with the geometry. I initially had a few issues getting this to work, but the final result gives a much more true-to-life representation of the objects.

This light bouncing and ambient lighting effect can be better demonstrated by getting up close to the geometry to see its effects. Again, the first rendering is with ray trace shadows turned on, without Global Illumination or Final Gather, and the second is the opposite.

Note the darkness of the shadows in the top render as opposed to the ambient light in the shadows in the bottom render. The effect is quite subtle but I think makes a big difference.

So there you have it, a brief look at the finer points of Mental Ray, and of course more robots. I'll be looking into how to actually animate this guy in the coming weeks and eventually try to drop him into some match-moved footage. Perhaps Creature Comforts style, talking about what it was like being raised in Bristol...

Sunday, 16 October 2011

3D Tracking

In an earlier post I took a quick look at the process of matchmoving; however, in retrospect that probably wasn't the best example of the matchmoving or 3D tracking process, as the movement was only in 2 dimensions. When talking about matchmoving there is a very important difference between 2D and 3D tracking, namely that 3D tracking is much more annoying. 2D tracking is something that can be performed in programs like After Effects and simply relies on the tracker following certain points around the frame. 3D tracking, on the other hand, is a somewhat more complex process. 3D track points are collected via manual or automatic tracking and used to calculate and rebuild a virtual 3D environment within a 3D package such as Maya.

As I have found throughout this latest exercise, it is vitally important to have as much information about your physical camera as possible. The focal length (the length of the lens) and the film back (camera aperture or camera sensor size) help turn a nightmare track into something a bit more manageable. That, and I might also add that Autodesk Matchmover 2011 is a serious pain in the arse. I downloaded free student licences of Maya 2012 and Matchmover 2012, kindly provided by Autodesk and valid for 36 months. For anyone interested, the address is http://students.autodesk.com/?nd=download_center. Make use of being a student and get as much free shit as possible.

But I digress. After downloading and installing Matchmover 2012, the automatic tracking was a breeze. You'll need to break your footage down into an image sequence; I used a Targa sequence, which decompiled pretty quickly and at good quality with Adobe's Media Encoder. Then it's simply a process of upping the quality of your auto track and the length that tracks will stick to their points. Once the track was finished, I solved the camera, created 3D points within the scene and added a coordinate system to tell the 3D package which way is up. This is a very important step and helped to line everything up within the Maya axes.

3D track points within Matchmover after automatic tracking and camera solve. Note the inverted axis with the Y pointing downwards (a mistake within MM, but rectified in Maya).
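For the curious, fixing a Y-down solve is less scary than it sounds: a 180-degree rotation about the X axis turns Y-down into Maya's Y-up convention. That rotation negates both Y and Z, which preserves handedness (simply negating Y on its own would mirror the scene). A minimal sketch:

```python
def y_down_to_y_up(point):
    """Rotate a point 180 degrees about X: (x, y, z) -> (x, -y, -z)."""
    x, y, z = point
    return (x, -y, -z)

# Applying it to a solved point cloud flips every point into Y-up space:
cloud = [(1.0, -2.0, 3.0), (0.5, -0.5, -1.0)]
fixed = [y_down_to_y_up(p) for p in cloud]
```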

Once my camera was created in Maya it was simply a matter of recreating the lighting of my environment. I used Maya's IBL (image based lighting) within Mental Ray to create my environment from a full panoramic photo of my living room. This also created reflections within my geometry to help sell the cup. The shadows and reflections on the table were created with a Use Background shader, which allows a surface to display only the shadows and reflections it picks up from the environment. After a bit of tweaking I rendered it out at pretty low res and did some colour grading in AE to help tie it all together.
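What that shadow-and-reflection pass amounts to at the comp stage can be sketched on single-channel float pixels: the plate is darkened wherever the CG casts shadow, then the CG element goes over the top using its alpha. This is just the idea in miniature, not what Mental Ray literally computes.

```python
def comp_pixel(plate, cg, cg_alpha, shadow):
    """plate/cg are 0..1 pixel values; cg_alpha and shadow are 0..1 mattes
    rendered out of the 3D scene."""
    shadowed_plate = plate * (1.0 - shadow)
    return cg * cg_alpha + shadowed_plate * (1.0 - cg_alpha)

# Half-strength shadow on a 0.8 plate with no CG coverage darkens it to 0.4:
darkened = comp_pixel(0.8, 0.0, 0.0, 0.5)
```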

Left middle - perspective view of cup and background shader geometry. Right middle - IBL environment and camera tracks and image plane.

All in all I'm really pleased with the way it came out. Even though it is a very simple shot and piece of geometry, the actual track and coordinate system proved they can work perfectly and open the door to some much more complex shots to come. Watch this space!

3D Track with geo and grade
Raw Footage.

Tuesday, 11 October 2011

3D Modelling

So, the last week or so I've spent locked away in a small cupboard learning the finer points of Maya, thanks to the online Escape Studios Maya Fundamentals course. This alone has made coming back to uni for my final year worthwhile, as these courses are around £4k on their own. So kudos to UWE for getting on board with the industry-leading software trainers.

Anyway, as I've just finished the extensive modelling and texturing sections of the Introducing Maya module (a small part of the entire course), I thought I'd do some very basic modelling to put what I've learnt into practice.
This exercise works on a few different levels. As my ultimate goal is to create live action VFX, learning to model objects to then place into match-moved scenes in Maya is good practice for projects further down the road when I look into full 3D matchmoving.

What I've discovered thus far is that modelling makes up only one aspect of the whole process of creating photorealistic sequences. Texturing, lighting and animation are as important as the models themselves. A film that stuck with me as having amazing modelling, texturing and realistic lighting was the latest Transformers film, Dark of the Moon.

Optimus Prime from Transformers: Dark of the Moon.

The level of detail in the textures and in the models themselves was something to behold. I don't care that the story was non-existent; the film had large robots, fast cars, explosions and hot chicks. What more do you want?

So, here comes the anticlimax when I show you what I've been working on... I decided to model a simple car to begin with, with some simple metallic textures. The idea is that it's something I can later build upon and drop into some live action tests.

My lame ass blob car

At the moment it's pretty low detail and needs a lot more of the subtleties added, as well as some more complex textures, but for the moment it will suffice and is a good start for further explorations when I get onto doing proper shots.

Thursday, 6 October 2011

Matte Painting

So, as a precursor to getting into the animated/live action VFX, I thought I'd do a few pre-viz matte paintings to get the feel of the old faithful, Photoshop, again. Matte painting goes back a loooong way in the VFX/SFX industry. I'm sure you've all seen those amazing paintings of mountains or galaxies in the background of some Hollywood lot. Something a little like this no doubt:
The classic Wizard of Oz matte paintings. Follow the yellow brick road. 

Needless to say, with the advent of programs like Photoshop, matte painting has moved on considerably since then. The art of matte painting now involves as many aspects as the compositing process, using CG, real-life photo textures and good old-fashioned painting techniques. Here's a good example of what is now possible through the magic of computers.
Coruscant from the new Star Wars films. Bad ass shiz!

The technique for something like this would be a mixture of CG and painting: basic geometry is modelled and then passed to the matte painter to weave some magic and add the detail and subtleties by hand.
Matte painting isn't just for static (and sometimes dynamic) backdrops, either. With the development of 3D packages such as Maya and Max, a matte painting of touched-up geometry can be imported back into the 3D package and projected onto the original geometry as a way of adding high-quality texture and a level of precision that isn't possible through 3D texturing alone.
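The guts of that projection step are just pinhole camera maths: a locked-off projection camera assigns each vertex a texture coordinate by projecting it back onto the painting's image plane. A minimal sketch, with illustrative focal length and film back values (not taken from any real setup):

```python
def project_to_uv(point, focal=35.0, film_back=36.0):
    """Project a camera-space point (camera at origin, looking down -Z)
    onto a square image plane, returning UVs with (0.5, 0.5) at centre."""
    x, y, z = point              # z is negative in front of the camera
    u = 0.5 + (focal * x / -z) / film_back
    v = 0.5 + (focal * y / -z) / film_back
    return (u, v)

# A point on the optical axis lands in the centre of the painting:
centre = project_to_uv((0.0, 0.0, -10.0))   # -> (0.5, 0.5)
```

Because every point along a ray from the camera gets the same UV, the painting appears undistorted from the projection camera's position, and only breaks down as the render camera moves away from it.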

Anyway, here are a few pre-viz matte paintings I've done on a quick snap I took of the Avon Gorge here in Bristol. They're just matte tests to get a bit more practice in Photoshop, but the process of making these shots has given me a glimpse of the possibilities of matte painting.

These 3 shots are all the same photo; I was having a little play with the history of the Avon Gorge, visualising Yesterday, Today and Tomorrow.
Still very rough but a worthwhile exercise!

Tuesday, 4 October 2011

Match Moving

First port of call on this magical journey: Match Moving.
So, what the fuck is Match Moving, I hear you ask? Good question. Match moving is the process of 3D camera tracking: finding fixed points in a live action plate (piece of filmed footage) that can be tracked in 3D space in order to create a virtual camera in your production application. Sounds complicated? Not really.
It breaks down like this: in every piece of footage there is parallax, that is, objects closer to the camera move faster and objects further away move slower. You feed a piece of footage into your match moving program, in this case Autodesk Match Mover. The program analyses the footage and finds points to stick to; these are called tracking points. The program will try its hardest to stick to these points, be it a lamp post or tree, something that is static in your footage, although if the footage is moving about violently enough this will need to be rectified by hand.
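Parallax fits in one line of maths: for an ideal pinhole camera a point at depth Z projects to x_screen = f * X / Z, so when the camera translates sideways by t, the point's image shifts by f * t / Z. Nearby points (small Z) shift further than distant ones, which is exactly the depth cue a 3D solver exploits. A tiny sketch with made-up numbers:

```python
def screen_shift(focal, camera_move, depth):
    """Image-space displacement of a point when the camera translates
    sideways, under the pinhole model: shift = f * t / Z."""
    return focal * camera_move / depth

near = screen_shift(35.0, 0.1, 2.0)    # close object: big shift
far = screen_shift(35.0, 0.1, 50.0)   # distant object: small shift
# near > far, so comparing tracked shifts recovers relative depth
```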

In this example, you can see the tracking points from the 3D track of this street attaching themselves to fixed points (some..!) to generate a virtual camera for our 3D package.

Once the program has analysed the footage, it is down to you to add a co-ordinate system, that is, to tell the program on which axis these points lie and give a rough estimate of their distance. The program will then "solve" the shot and create a virtual camera based on the tracking points and the footage fed into it.

Once the camera has been solved you should end up with something looking a little like this: a virtual camera, exactly copying the movement of its real-world counterpart. This will allow you to add geometry to your scene and, with some colour correction and a few other tricks, get it looking pretty good.
That's pretty much the nuts and bolts of match moving. I know all the tech geeks out there will rightly say that you could quite happily do the same shot with a 2D tracker, but that would defeat the point of looking into matchmoving. That and I couldn't find a 3D piece of footage...

So, in conclusion, here's my animation test into matchmoving. This one is pretty simple, but it's a good place to start. I will be looking into much more complex solves in future, containing green screen tracking markers and full 3D solves.

P.S. here's something that has utilised the same technique, that and it's got a break dancing robot which rocks!

Exploring VFX

Hi there,
my name's Tav Flett and over the coming few months I will be exploring different techniques involved in live action VFX. This exploration forms part of my 3rd year self-directed study for the undergraduate course Media Practice with Animation at the University of the West of England (UWE), Bristol.

My intention is not only to look into the technical side of VFX production, but also to lead a critique of this rapidly growing industry in the UK and its current situation within higher education.

During the past 6 months I was lucky enough to work in the industry on Tim Burton's upcoming stop frame animation Frankenweenie. The film is a remake of his 1984 short of the same name, and the feature currently in production is along the same lines as Corpse Bride and The Nightmare Before Christmas.

On Frankenweenie I was predominantly using the compositing program Nuke, and it is this program, along with Maya and Matchmover, that I will be exploring this semester. I chose these programs specifically as they are currently the industry-standard applications in feature film for 3D work, compositing and match moving.

So that said, let the journey begin. Hopefully I can learn a few things before the year is out and if you're reading this, then hopefully you can too!