While working on some larger scenes with Terrain objects and Clouds I made a few observations and conclusions. On a wooded hilltop the individual trees can still be made out from at least 4 miles away, particularly along the ridges. When making a scene in Carrara we can use the Surface Replicator to easily fill a Terrain object with plants, rocks, etc. One limitation is that the replicator only allows a maximum of 100,000 objects. With real world scaled objects that means a 2 square mile Terrain object is the most we can fill with a dense forest of mature plants, and probably only 1 square mile or less for really dense growth and a full canopy. Carrara can handle a large number of Terrain objects filled with replicated trees and render them in good time with a low quality GI Sky Light.
Then I saw there were going to be problems when using the replicator in some more complex scenes. Where a road crosses the terrain, where there is a clearing or a town, or where we want to put in a lake or a river and its banks, we might be able to use a Terrain Layer or paint onto a texture map to solve these distribution issues, but there are real problems when Terrain objects overlap. I always prefer to give each landscape feature its own Terrain object, use a Zero Edge Filter and have it emerge from the Infinite Plane or a larger base, especially when building a mountain range. If Surface Replicators were used on those overlapping Terrains full of trees then tree tops would poke up out of the ground, intersect each other and make a mess. I couldn’t think of any better solution for that, and for the other issues above, than a simple new plugin.
The Terrain Intersection Shader can be used in the Surface Replicator’s Shader Distribution to return black or white by itself, or to Multiply against a more complex distribution Shader. Give each Terrain object a list of the other overlapping terrains and the plugin will check which is higher and give that the white value. It can also be used to save time where there are other objects like a tower or a town, or to make a clearing, without needing to paint a texture map for the terrain – especially if you change your mind and want to move things around.
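The height comparison at the heart of this can be sketched quickly outside Carrara. This is a minimal illustration, assuming each terrain has been reduced to a height(x, z) lookup in world space; the names here are hypothetical, not the Carrara SDK API.

```python
def intersection_value(this_height, other_heights, x, z):
    """Return 1.0 (white) when this terrain is the highest surface at (x, z),
    otherwise 0.0 (black), so replicated trees only land on the top terrain."""
    h = this_height(x, z)
    for other in other_heights:
        if other(x, z) > h:
            return 0.0  # another terrain rises above this one here
    return 1.0


# Illustrative terrains: a hill emerging from a flat base.
hill = lambda x, z: 5.0 - (x * x + z * z) ** 0.5
base = lambda x, z: 0.0
```

Multiplying this 0/1 value against any other distribution shader then clears the replicated objects wherever another terrain is higher.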
I’ve had trouble in the past with Terrain objects and the Carrara SDK. Whichever way I try, the Rendering Quality Mesh can’t be used; only the Preview Quality Mesh is returned. The Surface Replicator is able to use it so there must be a way, but I’ve never got an answer. The Terrain Primitive is not included in the SDK. However, since this new plugin is not optimized that isn’t a big issue. The Preview Quality can be turned up a bit to get a better result, and the Preview Quality mesh gives a quicker distribution without me needing to write the extra code.
You can find this plugin in my Laboratory. Mac version for Carrara 8 will be available soon or on request.
Carrara’s own Volumetric Clouds render quickly and look great but they can only produce results for a limited range of realistic clouds. Undeveloped flat based Cumulus and broken up Fractus clouds can be made with the Cumulus 1 and Cumulus 2 shapes.
The Big Cumulus shape is a bit more lumpy. None of the existing cloud shapes can be used to make a really puffy Cumulus cloud with distinct rounded outlines on top. Another limitation of the existing cloud types is that only the noise animates.
Using extreme values the secret of the cloud shape is revealed.
It turns out that new cloud shape plugins can be made with the Carrara SDK. So I set out to test and create my own new cloud shapes. The idea was to make something that worked much like the existing shapes but to give them a bit more ‘purpose’ when they animate. In real clouds the puffs appear to rise, grow, tumble and churn. I would look for a way to get something in between a simulation and what looks real to find that purpose. One of the challenges was to formulate cloud shapes so that they did not need to run a simulation and my solution was to follow random but cyclic paths and to evolve the cloud over time. I favoured this approach rather than using a fountain or a particle effect.
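The ‘random but cyclic paths’ idea can be sketched as each puff centre tracing a closed loop derived from its own seed, so the motion repeats every period with no simulation state carried between frames. The loop shape and parameters below are illustrative assumptions, not the plugin’s actual formulation.

```python
import math
import random

def puff_centre(seed, t, period=10.0, radius=1.0):
    """Position of one puff at time t: a random but closed loop, so the
    value at t and t + period is identical and nothing needs simulating."""
    rng = random.Random(seed)                        # per-puff randomness
    cx, cy, cz = (rng.uniform(-1.0, 1.0) for _ in range(3))
    phase = rng.uniform(0.0, 2.0 * math.pi)
    a = 2.0 * math.pi * t / period + phase
    return (cx + radius * math.cos(a),
            cy + 0.3 * radius * math.sin(2.0 * a),   # gentle rise and churn
            cz + radius * math.sin(a))
```

Because the path is a pure function of seed and time, a frame can be evaluated in isolation at render time, which is what lets the cloud evolve without running a simulation first.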
With the added complexity my Cloud shape is much slower to render, especially when the first ray hit occurs and Carrara builds its lighting cache. The amount of detail to generate smaller and smaller puffs can be controlled for a faster render, along with the normal volumetric cloud accuracy settings. My Cloud has many extra parameters and I have placed up to 9 clouds in the one volume arranged into banks.
To test the smooth transition from above to below the surface of the murky water this animation lowers the camera to sit on the water level and then lets the waves wash over it. Once below the water the visibility has been increased to 30 ft to be able to see more. In reality nothing would be seen more than a few feet in front of the camera at the surface and the light would rapidly be absorbed at any depth.
Not much effort went into modelling the carp or the animation. To improve this it also needs some small scraps of debris and other small particles floating in the water. These should be given no gravity but will require some physics and collisions to suggest a current and, if possible, the movement of the fish.
When you look into a body of dirty water, objects fade into the murky depth. As objects get deeper and deeper down they appear to become darker and take on the colour of the murk. In this photograph the water is very murky from the clay, reducing visibility to less than a foot. The water, fallen branches, sticks and fish take on the dull colour of the clay.
To create this effect inside Carrara we have Absorption and in-scattering in the Transparency shader channel. Within the set Attenuation distance the transparency is gradually ‘turned off’ to return only a black colour from that depth – that is when the Absorption channel is a Value set to 100%. Anything deeper in the water than the Attenuation distance returns black – thus adding nothing to the final shading result. I’ve used a shader for the water following the old rule that color, transparency and reflection should have their total value add up to 100%. The Fresnel effect will switch over between using more reflection at the shallow angle and more transparency when looking directly down.
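The attenuation behaviour described above amounts to a depth-based blend. A minimal sketch of the idea (not the shader’s actual code), assuming the Absorption value picks the colour the water fades to, black at 100%:

```python
def attenuated_colour(seen_rgb, murk_rgb, depth, attenuation_distance):
    """Fade the colour seen through the water toward the murk colour with
    depth; at or beyond the Attenuation distance only the murk remains."""
    t = min(depth / attenuation_distance, 1.0)  # 0 at surface, 1 at full depth
    return tuple(s * (1.0 - t) + m * t for s, m in zip(seen_rgb, murk_rgb))
```

With murk_rgb = (0, 0, 0) this reproduces the 100% Absorption case: anything deeper than the Attenuation distance returns black and adds nothing to the final shading.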
This close up shows the same scene with the Absorption turned ON in the LEFT side and turned OFF on the RIGHT side. Without the absorption the objects in the murk become clearly visible – including the shadows on the bottom of the lake.
So next, what if we want to put the camera under the water? We still get the ray-traced colour effect on the objects seen through the surface above the water but without using Distance Fog or changing the whole Scene Atmosphere it doesn’t look like we are under the murky water at all.
If we change the Scene Atmosphere from the Realistic Sky to use Distance Fog then we’ll get something like the desired effect, but in doing so lose the view of the sky and anything out of the water. The effect of the underwater murk needs to stop at the surface of the water and interact with it for a better result. Here is the same scene using Distance Fog but with a radius of 10ft rather than 1ft so we can still see what is going on.
Problems will arise if we want to use the same scene in a smooth animation. We can key frame switch some of the effects but we can’t switch between Distance Fog and the Realistic Sky dynamically. Layering effects or multi-layer rendering would offer some solutions with existing technology.
To achieve this and other effects I’ve been making the Murky Volume plugin. This is a render of the same scene underwater using the plugin. The effect is similar to the Distance Fog but it interacts with the surface. Again the visibility has been extended to 10ft in order to see something. Where the view is looking directly up through the water we can now see the sky and foliage clearly.
An old trick was to take a polygon hair prop with a higher transparency/alpha, make a duplicate, maybe scale it up a fraction, offset it a bit, and then you could get a thicker final mesh to render. This gave me the idea to make a simple new plugin to Thicken out the results of the simulations. It might also give a more realistic render and rely less upon painting the layers of hair into the texture map.
For the Thicken beta version 0.1 I’ve allowed for a single additional layer with a surface offset, xyz displacement and a scaling factor. I’ll probably be adding a second layer and second column of settings. I don’t think I’ll add multiple layers because that will make for a very tricky UI. I’ve also considered giving the layers their own shading domain but I’m not sure if that is safe yet – Carrara can cope with it when you add and remove them in the vertex modeller so it should be possible.
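The beta 0.1 layer settings amount to a per-vertex transform. A rough sketch under my own assumptions (offset along the vertex normal first, then scaling about the origin), not the plugin’s internals:

```python
def thicken_layer(verts, normals, offset=0.01, displace=(0.0, 0.0, 0.0), scale=1.0):
    """Build one extra layer: push each vertex out along its normal, then
    apply a uniform scale (about the origin here) and an xyz displacement."""
    dx, dy, dz = displace
    return [((x + nx * offset) * scale + dx,
             (y + ny * offset) * scale + dy,
             (z + nz * offset) * scale + dz)
            for (x, y, z), (nx, ny, nz) in zip(verts, normals)]
```

A second layer would simply be another call with its own column of settings, which is why multiple layers quickly turn into a tricky UI.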
The Thicken plugin can of course be used to conveniently add a surface offset or indent a mesh by telling it to hide the original mesh.
I used it in the render of this cloth hair simulation. There are only a few artefacts where the layers of hair went through each other, before the thickening was applied – you can see these on the left shoulder. This next attempt at dynamic cloth hair used twice as many layers as the last one, with 5 down the side of the head and two more at the back. The self collisions performed nearly perfectly but it took more than 12 hours to run when draping over the figure in a 5 second animation sequence. I also improved the alpha map technique by using 2 pixel thick strokes to paint the tips of the hair.
After a few drafts I’m getting encouraging results with dynamic hair. This symmetrical hair was modelled lock by lock, each one layered over the other. The roots of the hair fully conform. From the side of the head down till below the ears is the falloff area then right down to the tips the hair is all dynamic. It’s important to get the initial style right where the hair does conform because it will try to return to that shape during the simulation.
Improvements are needed in the number of layers of hair. It needs another layer at the side of the head and one or more lower at the back of the head below the ears. There should also be more variety in the locks – I duplicated and moved the same ones at the side of the head for each layer to save time. The texture and transparency maps need more effort and care to paint with higher contrast in the strands and cleaner detail for the alpha on the tips. I also made a scalp prop to fit underneath based on a copy of the figure’s head geometry.
I have shown animations before with dynamic cloth hair but these did not have self collisions or enough length to drape over the costume/figure. This example used a large sphere on an angle to cover the whole head and the ears, and a capsule for the neck. The collisions over the costume required the modified code which can record and store simulations at a higher fps.
While working on this I found that the conforming falloff feature of my plugin wasn’t working quite how I wanted or expected. It’s supposed to fall off gradually from fully conforming to fully dynamic with a painted map or zone or both – but there was not much ‘falloff’ apparent. I made a slight change to the simulation code to improve this. My idea is that if the falloff for a vertex is 50% then it will be pulled back half way from where the physics take it to the conforming position. The conform rate value is now used as a speed limit instead. The conforming force is still very strong but the falloff now produces a smoother transition. Collisions overrule this so I will need to experiment a bit more. I want this falloff because in cloth simulators that only have an on/off per-vertex constraint selection, the result is really obvious hard bends along that dynamic edge.
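The 50% pull-back with the conform rate acting as a speed limit can be sketched like this. It is a simplified illustration (clamping each axis separately for brevity, where real code would more likely clamp the length of the step vector):

```python
def conform_step(physics_pos, conform_pos, falloff, max_step):
    """Pull a vertex part-way back from where the physics put it toward its
    conforming position: falloff = 0.5 means half way back, and no axis may
    move more than max_step (the conform rate used as a speed limit)."""
    result = []
    for p, c in zip(physics_pos, conform_pos):
        step = (c - p) * falloff
        step = max(-max_step, min(max_step, step))  # speed limit per axis
        result.append(p + step)
    return tuple(result)
```

At falloff = 0 the vertex is fully dynamic, at 1.0 it snaps toward the conforming position as fast as the speed limit allows, and in between the transition is smooth rather than a hard on/off edge.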
I made another quite simple but important change to the plugin code and that was to add in a self-collision margin. In my simulator each vertex of the cloth mesh is treated as a particle with a thickness. Unless the user overrides that thickness, all of the vertex particles are thick enough that they just touch each other. That is one reason why a regular mesh is very important. The new margin value allows for thinner vertices and now the cloth folds can get much closer. The hair’s need for self collisions became obvious because without this new change the dynamic locks would pass through and intersect with each other. This has a big impact on the design and modelling of the style because the hair must start the simulation without being tangled.
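The margin is easiest to see in the test between one pair of vertex particles. A minimal sketch of the idea, not the simulator’s code:

```python
import math

def resolve_pair(p, q, spacing, margin=1.0):
    """Push two vertex particles apart when closer than their combined
    thickness. margin = 1.0 means particles just touch at the mesh spacing;
    a smaller margin thins them so the cloth folds can pack much closer."""
    min_dist = spacing * margin
    d = [b - a for a, b in zip(p, q)]
    dist = math.sqrt(sum(c * c for c in d))
    if dist == 0.0 or dist >= min_dist:
        return p, q                       # coincident, or not colliding
    push = (min_dist - dist) / (2.0 * dist)
    return (tuple(a - c * push for a, c in zip(p, d)),
            tuple(b + c * push for b, c in zip(q, d)))
```

With the full thickness two particles half a spacing apart get pushed out to the full spacing; with a margin of 0.4 the same pair is already legal and is left alone, which is exactly what lets folds sit closer together.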
Another problem with my simulator has become apparent when trying to run a layered simulation where one cloth item drapes over another.
My plugin is a deformer and not a physics solver. The accurate simulations are run by getting Carrara to slowly advance the time sequencer. The current tests I’ve been doing included attempts to combine dynamic hair with a dynamic costume – where the hair would drape as a second layer over the costume. The simulation is normally saved at the scene’s frame rate, so when the higher fps was used on the hair the costume would not move in small steps but jump to the next set of stored data only at a time multiple of the scene frame rate.
A number of separate cloth mesh objects can go into the one simulation but hair and different layers and different types of cloth will need different properties.
To get this to work better I set both the dress and the long hair into record mode and then ran the simulation from one of them to get the high frame rate. This caused an immediate crash and an impossible to find bug. I had to give up trying to find how and where it was happening because no single specific part of the code was causing it. I believe that there is no easy way to fix it and the code is not ‘thread safe’. Therefore only one simulation should be set to record and run at any time.
The solution for the layered cloth is simple enough. A new setting and change to the plugin is required so that the costume simulation can be run first and all of the frames are then saved. If the simulation is set to run at 150fps with a scene rate of 25fps then all 150 snap-shots of the cloth will be saved per second for use in the next simulation that layers over it. This does make for a much larger file size so some kind of management of the stored data to discard the extra frames could also be added.
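As a sketch: the first simulation stores every substep snapshot keyed by its time, and the layered simulation samples the nearest stored time. Keeping everything in a dictionary here is just for illustration; the plugin would store the frames with the scene, hence the file size concern.

```python
def store_snapshots(states, sim_fps=150):
    """Keep every simulation snapshot keyed by its substep time, not just
    those landing on scene frames (e.g. 150 per second at 150fps)."""
    return {round(i / sim_fps, 6): s for i, s in enumerate(states)}

def sample_snapshot(snapshots, t, sim_fps=150):
    """Fetch the stored snapshot nearest to time t for the layered pass."""
    key = round(round(t * sim_fps) / sim_fps, 6)
    return snapshots[key]
```

Discarding the extra frames afterwards would then just mean dropping every key that is not a multiple of the scene frame interval.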
The next release of the cloth plugin will have this high definition recording and playback feature. I will also need to add a similar feature to the Jiggle plugin for it to work consistently with my cloth. It might also help when combining cloth with the Carrara physics solver or strand based hair. I have not tested that yet.
While working on the next video tutorial I appear to have found a bug. The drop sheets over primitives and figures are clearly okay, but when I re-created a conforming costume this caused the plugin to crash unexpectedly.
I was able to narrow the problem down to the costume mesh. The mesh must be triangulated before converting it into a conforming costume. The previous example with the Long Green Dress was triangulated but the new test costume was not.
The 0.0498 release version of the plugin removes the quads / triangle mode but this problem is more likely related to how the facet mesh moves through the modifier stack. The real cause is as yet unclear. The fix is to make sure to manually triangulate the mesh.
After the public release of the latest code for the Cloth Deformer I wanted to revise the results I posted last year and re-create them with some video tutorials.
A drop sheet is always the most basic test to run – if that fails to work then the simulator is worthless. So for the second tutorial I wanted to drop a bed sheet over a moving figure and then I decided to try and push the simulator and keep the figure moving under the sheet for at least 10 seconds.
It took several draft simulations to get it right. There were a few problems when the hands moved into the sheet and then proceeded to push through it – so I had to adjust the animation to avoid this problem and keep the hands out of the way. Since the simulation result is not known until it has run, getting a hand to interact with the cloth part of the way through is just too complex.
Something to keep in mind with cloth simulation is that in reality you can be constrained by clothing and cloth. Tight non-stretch pants won’t allow you to run easily and when you’ve been double wrapped up all cosy in a bed sheet and quilt it is hard to suddenly spring out of bed without kicking and fighting your way out.
I also used different settings from the tutorial video. I needed to set much higher friction and damping values – otherwise the sheet slid off the figure and halfway off the bed by the end of the simulation to leave her exposed. Also in the tutorial video some kooky key frames had crept in and turned her elbows into bent straws. I didn’t notice that until it was too late and the video was uploaded, so hopefully the focus is all on the cloth.
Each simulation took at least 4 hours to run. I spent at least 4 hours on key framing the animation (which isn’t that great) and the rendering took almost 4 hours as well – so running the simulation (while I slept) wasn’t really that big an issue in the production time.
The twitching in the animation was much lower than I expected but I’m still more personally interested in the potential of the still frame results with my plugin.
What really worked surprisingly well was when she turns over to one side and also how the legs moved under the sheet.
A final note was that I might have made the animation physics even more interesting by using the Jiggle Deformer, but alas it is not currently compatible with the Cloth Deformer. The trouble is that Jiggle will not produce the same results when it is run at 150 fps as when it is run at 25 fps. I will have to add a new recording mode into Jiggle to use it with Cloth. This would need to allow the results to be saved and played back without recalculation.
One of the features of the new version of the Cloth Deformer is Grab Zones. Interaction with hands and other moving objects that need to grab the cloth can be faked by using a spherical zone(s) at the start of the simulation to grab any number of vertices of the cloth. When the grab object is moved the held vertices move with it to maintain the same local position in the zone, so they rotate as well as translate.
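The held-vertex behaviour can be sketched in two steps: record each grabbed vertex’s local offset at the start, then re-apply the zone’s current transform each frame. A simplified illustration (rotation reduced to yaw about the Y axis), not the plugin’s implementation:

```python
import math

def grab(zone_centre, radius, verts):
    """At simulation start: record every vertex inside the spherical zone
    as an offset from the zone centre (its local position in the zone)."""
    grabbed = {}
    for i, v in enumerate(verts):
        off = tuple(a - c for a, c in zip(v, zone_centre))
        if math.sqrt(sum(c * c for c in off)) <= radius:
            grabbed[i] = off
    return grabbed

def follow(grabbed, zone_centre, yaw):
    """Each frame: rotate the stored local offsets by the zone's current
    rotation and add its new centre, so held vertices rotate as well as
    translate with the grab object."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    moved = {}
    for i, (x, y, z) in grabbed.items():
        rx, rz = x * cy + z * sy, -x * sy + z * cy
        moved[i] = (zone_centre[0] + rx, zone_centre[1] + y, zone_centre[2] + rz)
    return moved
```

Releasing the held vertices would then just mean removing them from the grabbed set part way through, which is the feature mentioned below as still to come.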
This animation shows a Michael 5 figure for Genesis trying to get attention by waving a slightly ripped t-shirt. The cloth was given 4 seconds of animation to drape before starting to wave it about. Some proxy objects were used on his arms and hand to prevent any possible collisions with him and get a very fast simulation result. The shirts self-collisions all work well – but moving at that speed it’s hard to see any poke-thrus anyway.
In future I’d like to be able to have the grab zones turn off somehow to release their held vertices. Also to be able to grab vertices during the simulation would be important. That will be easy to code but a special ‘zone’ primitive is something I’ll need first so that it can have the right on/off switches in the properties interface.