July 3, 2011

Powerful Plug-ins Introduction :: SOuP Development :: SOuP Examples


 The upresFluid node resolves a few major issues in Maya's fluids workflow by separating dynamics and resolution from the "look", and by implementing the wavelet turbulence algorithm (based on Theodore Kim and Nils Thuerey's open source library) to add extra detail as a post process.
Fluid dynamic simulation strictly depends on the container resolution. If we change the resolution, the fluid behaves differently. This is not good in production: as we design a fluid we want to use low resolutions for fast turnarounds while working out the dynamics, but later, once the dynamics are figured out and we start tweaking the look of the fluid, we often want to increase the resolution to get more detail (or the opposite) - and then we have to start over, tweaking the simulation to accommodate the new resolution.
The upresFluid node effectively eliminates the dependency between resolution+simulation and shading+look. For this purpose we use two containers. In the first one we focus on the fluid motion. There we can keep changing the fluid resolution in order to achieve the best motion, without worrying about the shading part. Once we are happy with that, we focus on the second container, used as a static display driver for shading and rendering purposes.
The best part of all this is that we can now change the resolution of the display driver container on the fly - at any point. It will inherit the basic data from the source container and interpolate it to fit its own resolution, set by the upresFluid node.
Finally we can turn on the wavelet turbulence feature to add plenty of additional detail that is impossible, or at least extremely hard, to achieve otherwise.
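As a rough illustration of that interpolation step (just a sketch of the idea, not the SOuP implementation - the function name and the use of SciPy are assumptions), a coarse density grid can be resampled onto a finer one with trilinear interpolation; the wavelet turbulence pass would then layer band-limited noise on top of this result.

import numpy as np
from scipy import ndimage

def upres_density(coarse, factor):
    # trilinear interpolation (order=1) of the coarse density field onto a grid
    # `factor` times finer - analogous to the display container inheriting and
    # interpolating the source container's data to fit its own resolution
    return ndimage.zoom(coarse, factor, order=1)

coarse = np.random.rand(20, 30, 20)   # low-res simulation container
fine = upres_density(coarse, 3)       # 60 x 90 x 60 display container
print(coarse.shape, fine.shape)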

* The current version of the upresFluid node does not fully support auto-resizing of fluids. Technically it works but sometimes the wavelet turbulence pattern gets offset.
Fire and explosion examples by David Schoneveld
You can find more information (video tutorials and other examples) on his Vimeo channel:
http://vimeo.com/17032492
http://vimeo.com/17033053




basic smooth algorithm:


smooth + boundary preservation


 David Corral showed me an effective way to smooth geometry using a Laplacian algorithm. I optimized a few things and added boundary preservation plus two methods for volume preservation (fast and accurate).
The fast method is useful for objects with "simple" topology - notice how in the third video some verts in the eye corners and ears start to misbehave. The accurate method takes care of that, but at the price of additional calculations.
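For reference, here is a minimal Python/NumPy sketch of the basic idea - plain Laplacian smoothing with locked boundary vertices (the volume preservation methods are only hinted at in the comments). This is an illustration of the algorithm, not the SOuP node's code.

import numpy as np

def laplacian_smooth(points, neighbors, iterations=10, lam=0.5, boundary=()):
    # points    : (N, 3) array of vertex positions
    # neighbors : neighbors[i] is the list of vertex ids adjacent to vertex i
    # boundary  : ids of vertices to leave untouched (open edges, hard features)
    pts = np.asarray(points, dtype=float).copy()
    locked = np.zeros(len(pts), dtype=bool)
    locked[list(boundary)] = True
    for _ in range(iterations):
        avg = np.array([pts[nbrs].mean(axis=0) if len(nbrs) else pts[i]
                        for i, nbrs in enumerate(neighbors)])
        # move every free vertex part of the way toward its neighbour average
        pts[~locked] += lam * (avg[~locked] - pts[~locked])
        # a "fast" volume fix would rescale the result about its centroid here;
        # an "accurate" one would compare actual mesh volumes every iteration
    return pts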


Maya PaintFX is a very powerful L-system but lacks the ability to instance custom geometry to its elements. To fill the gap we can use the pfxToArray node to extract the paintFX data for further modification and custom usage.
In the first video we selectively read subsets of points.
♣ Here we pipe the extracted data into a geometry instancer using arrayToDynamicArray nodes.


The rayProject node projects point clouds (meshes, curves, surfaces and particles) onto mesh objects. There are multiple options for precise control over what gets projected, where and how. A subset of the effects that can be produced with this node is also known as shrink-wrapping.

In this example the paintFX tree gets projected onto a mesh sphere.
Game-engine-like shadows - a set of polygons projected onto the ground surface that resembles the shape of a moving character.
An interesting twist is the part of the animation where the shadow polygons morph into the character and then go back into shadow mode.
Two additional (simpler) examples.


 The peak deformer can do miracles if you need to make a blobby-looking nParticles mesh more liquid-like.
On the left side we have a standard nParticles mesh; on the right side is the same geometry with a peak deformer applied to it.
SPH sim by Ivan Turgeon - PFVE.
In case liquid sims are not your "forte" and you are still not clear on the role of the peak node in the example above, here is another (simpler) one for you where the shrinking effect is exaggerated. Again - standard nParticles mesh on the left and peak+smooth on the right.


♣ The scatter node can generate point clouds on the surface of mesh geometry or inside its volume. Here a scatter node creates points inside the volume of a deforming mesh. This point cloud can be used in conjunction with a pointCloudFluidEmitter to emit fluid from the entire volume of a given geometry, not just from its surface.
The same goes for pointCloudField - we can affect the dynamic properties of objects using the entire volume of an object, not just its surface points. Don't forget that we can transfer point attributes from the mesh surface to the point cloud using the attributeTransfer node - things like point colors, point velocities, etc.
♣ We can block out areas by painting weight maps (vertex colors) on the source mesh geometry; this way we can control where the scattered points go. In this example the scatter node creates points on the surface of a deforming object. Notice how the point cloud forms only around the white areas.
 Here we have a mesh cube with two faces deleted. The scatter node still figures out what the volume of the object is like and does the right thing. The scatter node also has a feature that allows us to generate points only within a specified range from the geometry surface.
♣ In general, geometry is never prepared for fluid emission. Modelers model things based primarily on rigging, animation and lookdev needs. So we end up with too many, too few, or irregularly placed points.
In this example we have a box with 8 points only. If we decide to use the standard Maya fluid emitter, we have two options:
- emit from these 8 points - pretty useless
- emit from the entire surface
Here we can use fractal or bitmap textures to control the emission process, but they do not allow for localized control and do not react to other events in the scene. The solution is simple - we can use the scatter node to resample the geometry. The result is regularly placed points on the surface of the object, inside its volume, or both.

Then the point cloud can be piped directly into a pointCloudFluidEmitter node, or first go through an attributeTransfer node that can assign additional bits of data such as emission rate, density, fuel, temperature, color, etc., for more precise control over the emission process. Notice how the cube gets filled with points and the emission happens from the entire volume, not just the poly faces or vertices. Also, there is a local override of the color emission in the right corner.
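As a tiny illustration of what "resampling the geometry" means (a generic sketch, not the scatter node's actual algorithm - the function and variable names are made up), points can be distributed uniformly over a triangle mesh by area-weighted triangle picking plus random barycentric coordinates; positions like these (plus any per-point density, color, etc.) are what the emitter ultimately consumes:

import numpy as np

def scatter_on_surface(verts, tris, count, seed=0):
    # verts: (V, 3) vertex positions, tris: (T, 3) vertex indices per triangle
    rng = np.random.default_rng(seed)
    v, t = np.asarray(verts, float), np.asarray(tris, int)
    a, b, c = v[t[:, 0]], v[t[:, 1]], v[t[:, 2]]
    area = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    # pick triangles proportionally to their area, then sample uniform barycentric coords
    idx = rng.choice(len(t), size=count, p=area / area.sum())
    r1, r2 = np.sqrt(rng.random(count)), rng.random(count)
    w = np.stack([1.0 - r1, r1 * (1.0 - r2), r1 * r2], axis=1)[:, :, None]
    return (w * np.stack([a[idx], b[idx], c[idx]], axis=1)).sum(axis=1)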
projections

boolean projections
boolean projections + textures
(checker + grid in this case) 

source geometry uv based
♣ Using texture-based distribution we can precisely shape the scattered points in many different ways, including "boolean" operations from multiple projection planes, textures and UV sets.
As you may already know, the scatter node can be used to directly drive particles, geometry instancers and procedural shatter nodes.
With texture-based distribution we gain complete control over the scattered points, and in this way over the systems mentioned above.
 Scatter nodes can be used to drive particles in a procedural manner. This way you don't have to rely on dynamic simulation if you want to stick particles to geometry, for example. You can freely scrub the timeline back and forth and things will just work.
Here a baked point cloud drives meshed nParticles to create the effect of mud sticking to the character. A peak deformer is used to offset the points of the generated mesh along their averaged normals to make it look more like liquid.

A computeVelocity node calculates the velocities of the baked point cloud, then an attributeTransfer node passes them to the mud geometry. If you render with motion blur turned on, you will see that even though the mud geometry is changing all the time, the motion vectors stay consistent.
♣ Basic example showing instancing of "sprites" to a point cloud scattered on the surface (left) and inside an object (right).
An attributeTransfer node is used to properly orient the instances along the normals of the box vertices.


♣ Using the shatter node we can shatter mesh geometry, be it static or deforming.
It relies on an input point cloud generated by a scatter node, particles or a NURBS curve. Voronoi cells are then calculated and the geometry is cut along their boundaries.
In the example scenes (as in the videos shown) the shatter nodes are in "auto evaluate" mode, but generally you will be using the "bake result" button located inside the shatter node's AE. This way we get the shattered geometry only when needed. The shatter node can generate solid or surface shards.
 Here I first animate the number of points in the point cloud - the output reacts accordingly on the fly. Then I animate the distance between the different shards. Finally I increase the resolution of the sphere.

Additional nodes can be used to further refine the shape and distribution of the scattered point cloud. This way we can precisely place or remove points.
In this example attributeTransfer and bounding object nodes influence the positions of the scattered points. The red points are the original point cloud, the blue ones are the modified positions. Notice how the shards react to that - the closer the points, the finer the shards. In this example I used only one bounding object, but you can use more if needed.
♣ Shattering of deforming geometry. Notice how the shards stick to their relative positions. The trick here is to pre-cache the input point cloud coming from the scatter node. Inside the scatter node's AE there is a button that allows you to bake the point cloud to a NURBS curve. Then you can deform that curve with the geometry and feed it into the shatter node.
The nMaxCutPP attribute drastically improves performance by limiting the lookups needed to create a new shard to the closest n points. The denser the input pointCloud, the bigger the performance improvement.
Lowering the value of this attribute too much may lead to artifacts such as overlapping shards. In this particular example, setting nMaxCutPP to 30 resulted in a 2.5x shorter time for shattering the entire geometry.
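The core idea behind the Voronoi cells is simple; here is a minimal NumPy sketch (an illustration only - the shatter node actually cuts the geometry along the cell boundaries, and nMaxCutPP restricts each cell to cuts against its n closest seeds) that assigns every sample point, e.g. a face centroid, to its nearest seed:

import numpy as np

def voronoi_cell_ids(points, seeds):
    # each point ends up in the cell of its nearest seed; denser seeds -> finer shards
    p = np.asarray(points, float)[:, None, :]   # (P, 1, 3)
    s = np.asarray(seeds, float)[None, :, :]    # (1, S, 3)
    return ((p - s) ** 2).sum(axis=-1).argmin(axis=1)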
 Shattered geometry in action.
♣ The scatter node has an inPositionPP attribute that can be used to supply a custom point cloud to it, bypassing the internal generation of points. Many interesting effects can be achieved by supplying vertices, particles, voxel or pfx data to the scatter node for post-processing - for example, uniform filling of objects (as shown here).
The scatter node also outputs distance-to-surface data for each point - notice how in the two provided examples the voxel colors turn yellow deep inside the object and dark close to the surface.

Data flow:
fluidAttributeToArray extracts voxel positions from the fluid container and passes them to the scatter node. The scatter node strips all points outside the mesh object. The remaining point positions and distance-to-surface data get passed to a pointCloudFluidEmitter node that uses them to emit fluid properties into the container.
In the provided examples the pointCloudFluidEmitter is in attribute transfer mode, which forces the container to resemble the shape of the input geometry.

With this technique we can easily achieve the best case scenario for fluid emission - always in the center of the voxels.
 This is a more involved example - here we have local shattering of geometry that grows over time. We split the data flow into two separate streams and combine them at the end.
The first stream is used to remove all ground faces that do not interact with the dancing character.
The second stream is used to generate a scatter mask (point colors) so we get points only where the character touches the ground. Notice that here we use the original ground geo, not the one from the first data stream where we remove faces at each evaluation step. This way we ensure static shards. Then we plug the scattered point cloud and the remaining faces into a shatter node to get the desired result.


♣ ComputeVelocity calculates the point velocities of the dancing character and stores them in an array. ArrayToPointColor converts this array to point colors. AttributeTransfer transfers colors from the character to the ground plane (hidden here) based on proximity between their points. PointCloudFluidEmitter emits fluid properties only from the area where the character contacts the ground, and the fluid is colored accordingly.
 How to render geometry with changing point count with proper motion blur? Easy.

In this example we have particles falling onto a moving teapot. BoundingObject passes the particle positions and radii to a group node. The group node collects the face ids around the contact points where particles collide with the teapot surface.
This componentsList gets passed to polySmoothMesh and deleteComponent nodes. The polySmoothMesh subdivides the faces to get more resolution, so when the deleteComponent node does its thing we get round holes.

ComputeVelocity calculates the velocity vectors of each point of the geometry before it gets modified (the original moving teapot). An attributeTransfer node transfers these values to the final geometry, so even though the point count changes over time we still get consistent motion vectors. Using the remapArray node we post-modify the velocity data. The velocity vector array gets converted to a set of pointColors by the arrayToPointColors node - in this particular case the color set is named "velocity". Finally, the modified teapot mesh's "motionVectorColorSet" attribute points to that "velocity" colorSet and passes it directly to the renderer.
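The heart of the trick is the proximity-based velocity transfer from the stable-topology mesh to the modified one. A minimal sketch of that step (just an illustration of the idea - the function name and the use of SciPy are assumptions; in the actual setup the attributeTransfer node does this and the result ends up in the "velocity" color set):

import numpy as np
from scipy.spatial import cKDTree

def transfer_velocities(src_points, src_velocities, dst_points):
    # for each point of the modified mesh, copy the velocity of the closest
    # point on the original, topology-stable mesh; writing these values into a
    # "velocity" color set lets the renderer pick them up as motion vectors
    _, idx = cKDTree(np.asarray(src_points, float)).query(np.asarray(dst_points, float))
    return np.asarray(src_velocities, float)[idx]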
♣ How to render geometry with changing point count with proper motion blur? Part 2.
This is a more involved version of the example above.
In addition to everything from the teapot setup, here we have group nodes that collect the boundary faces of the tearing surfaces. They pass the inverted componentsLists to deleteComponent nodes that are plugged into separate meshShapes - so we always get the boundary faces no matter what is happening to the upstream geometry. Then we emit particles from these faces. This way we get blood particles only where and when the geometry gets torn. Using this simple approach we can eliminate a lot of the tedious manual work needed to ensure proper particle emission from the right place at the right time.
Notice in the rendered video how, even though the point count and order change, we still get everything properly motion blurred. I used only one collision sample here, that's why some pieces get stuck inside the knives, and the blood could look a lot better. Good enough for a fast'n'dirty example.


 BoundingObject reads the particles' positionPP, rgbPP and radiusPP attributes and feeds them to group and attributeTransfer nodes.
The group node has an option to store componentsList and objectGroup data for the previous+current states (by default it considers only the current state). This data gets passed to a deleteComponent node that deletes faces from the leaves geometry.
The attributeTransfer node slightly attracts the leaves around each particle and recolors them (in red - all particles in this example are red). As a result we get an "acid rain" effect.

Mind, there is no transparency hack or anything like that. It is all procedural geometry manipulation.
♣ Procedurally delete geometry. The group node collects the face ids inside the bounding object and passes them to a deleteComponents node.
♣ BoundingObject in pointCloud mode reads the particle positionPP and rgbPP attributes. An attributeTransfer node transfers them to the ground surface. The alpha channel is modulated by the "alpha" ramp attribute located on the boundingObject node - that's how we get multiple circles around each particle.
Transferring the point positions produces the "swimming" effect - each particle attracts the ground points around itself.


 A point node randomizes grid points in the XZ plane and assigns random colors to them. An attributeTransfer node transfers the colors to another plane. The result is Voronoi noise.
Here we "project" it onto a flat plane, but it can be used for things like fracturing objects with complex topology.
♣ You can achieve the same result by simply spraying particles around.


 The bound node creates a sparse voxel grid around static or deforming geometry with consistent or changing point count and order - a walking character in this case.
The blue wireframe is actually a mesh shape with "display shading" turned off.
video 1
video 2
Bound nodes can be used for effortless "down-resing" of complex objects for simulation purposes.
The first video shows an out-of-the-box simulation of the proxy geometry (1300 points) generated from the original tree (31000 points). The second video shows a simulation of the original geometry.
Notice the frame rates.


♣ A polyCylinder is deformed by wave deformers and its position is animated. A pointAttributeToArray node passes the point positions and tangents to a pointCloudField node. The tangent vectors are interpreted as velocities and applied to the particles. A second pointCloudField node attracts the particles around each mesh vertex so they do not escape when pushed by the first pointCloudField.
Using pointCloudFields we can use any geometry or custom arrays to control dynamic objects in ways that are hard to achieve otherwise.
 Often it is much easier to animate particles to mimic the desired motion and emit fluids from them than to try to achieve that with fluid sims alone. Fluids are just too prickly sometimes. The "flamethrower" effect is one of these cases.
Particles move through a fluidContainer; pointCloudFluidEmitter reads their positions and emits fluid properties into the voxel grid. In this example I use only the particle positions, but in addition you can feed the pointCloudFluidEmitter with per-point radius, density, heat, fuel and color (optionally from a specified colorSet). The node can use a pointCloud (arrays), swept geometry or a regular mesh, surface, curve or particles as input. As mentioned - in this example we keep things simple - just positions.

If you play the first video you will notice that something is amiss. The fluid tries to do its own thing instead of following the particles - not looking very flamethrowerish.

To make things better we slap on a pointCloudField node that uses the particle positions, radii and velocities to push the fluid in the desired direction. As a result the second video looks a lot more like a flamethrower. On a similar note - the pointCloudField can also use pointClouds (arrays), swept geometry, meshes, surfaces, curves or particles as input.


 This node measures how much the geometry stretches or contracts. There are multiple color coding methods. In this case red is compression, green is neutral, blue is stretching. You can use these color maps to control wrinkle, muscle, vein and whatever other maps you may need for your characters or other things.
There are two modes - distance based (shown here) and in-between angle based. The first method measures distances between points (edge lengths), the second one measures angles between edges - useful when we have deformations without stretching/contraction, for example the bending of a skinny elbow, where points get closer but their edges keep the same length.
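A minimal sketch of the distance-based flavour (an illustration of the idea, not the node's implementation - the names and the exact color mapping are made up): compare current edge lengths against rest lengths, average the ratios onto the vertices, and map compression/stretch to red/blue:

import numpy as np

def tension_colors(rest_points, current_points, edges):
    rest, cur = np.asarray(rest_points, float), np.asarray(current_points, float)
    e = np.asarray(edges, int)                      # (E, 2) vertex index pairs
    rest_len = np.linalg.norm(rest[e[:, 0]] - rest[e[:, 1]], axis=1)
    cur_len = np.linalg.norm(cur[e[:, 0]] - cur[e[:, 1]], axis=1)
    ratio = cur_len / np.maximum(rest_len, 1e-9)    # <1 compressed, >1 stretched
    # average the per-edge ratios onto the vertices
    vtx_sum, vtx_cnt = np.zeros(len(rest)), np.zeros(len(rest))
    np.add.at(vtx_sum, e.ravel(), np.repeat(ratio, 2))
    np.add.at(vtx_cnt, e.ravel(), 1.0)
    t = np.clip((vtx_sum / np.maximum(vtx_cnt, 1.0) - 1.0) * 5.0, -1.0, 1.0)
    # compression -> red, neutral -> green, stretch -> blue
    return np.stack([np.clip(-t, 0, 1), 1.0 - np.abs(t), np.clip(t, 0, 1)], axis=1)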


♣ Texture-based fluid emission is good, but often we need more precise control over what we emit and where. In this example a textureToArray node converts an animated ramp texture to point colors. An attributeTransfer node uses a boundingObject to override the texture colors in a specific area of the surface. In this case we do it for colors, but it can be anything else - density, fuel, etc.
PointCloudFluidEmitter picks the final colors and emits them into the voxel grid.
Using similar techniques we can build very precise and flexible fluid emission systems. For example, we can emit fluids based on the tensionMap values from the example above, or if you look at the example below, we can procedurally apply multiple textures based on proximity to (complex) geometry or pointCloud and then emit fluids based on that. Throw some extra boundingObjects in the mix to override/block/edit things and you get some pretty interesting stuff going on.
On a similar note:
take a look at the fluidAttributeToArray example - there one fluidContainer is used to emit properties into another fluidContainer.
♣ A textureToArray node converts an animated ramp texture to point attributes (in this case - per-point weight). This weight data is passed to a peak deformer. The peak deformer offsets points along their averaged normal.
This effect can be used for many things - static or animated wrinkles, liquid-looking deformations, bulging flesh, etc.
In this example we have a local override of the weight map calculated by the textureToArray node, so we don't get bulging for the points that are inside the boundingObject.
TextureToArray feeds a peak deformer with pixel values from a procedural noise texture. A second peak deformer makes the blobby "mushroom" effect. AttributeTransfer adds point colors.


 Using the trajectory system you can non-destructively manipulate animation paths directly in the viewport. By "non-destructive" I mean that you can work simultaneously in the graph editor and with the trajectory's manipulators in the viewport, and the animation curves will always stay intact. If you change something in the graph editor, the trajectory display automatically updates in the viewport, and vice versa - if you edit the path in the viewport, the animation curves in the graph editor update accordingly.
The displayAttributes node allows you to display attribute values in the viewport. This is very handy when you want to debug things during playback, or simply want to show values in the scene.


♣ We can cook complex objects (meshes, curves, surfaces, etc.) at different times - effectively offsetting them in time.
In this example we have a running character. I inserted a timeOffset node between the skinCluster and the visible geometry and animated its offset value.


 The fire component of the simulation exists only in the small fluidContainer. A fluidAttributeToArray node extracts the voxel properties from there (in this case position + density only) and passes them to a pointCloudFluidEmitter emitting smoke into the big fluidContainer.
Using this technique we can split the main elements of the fluid simulation (in this case - fire and smoke) between different containers for more precise and independent control over simulation and shading.


♣ I always wanted to be able to "voxelize" geometry and render it that way. ComputeVelocity calculates the point velocities of the dancing character and passes them to a pointCloudFluidEmitter node in attributeTransfer mode. At each step the pointCloudFluidEmitter empties the fluidContainer before emitting fluid properties, effectively transferring attributes from the input pointCloud or geometry to the fluid.
 Basic stuff. I painted some point colors that get emitted into the fluidContainer by the object.


♣ MultiAttributeTransfer feeds a cluster deformer with point weights based on proximity between the character and the ground geo.
The closer they are - the stronger the weight is.
Point radius, falloff ramps and other attributes can be controlled globally for the entire set of points or through weight maps for localized control.
The cluster handle is translated along -Y - that's how we get the ground deformations. You can use the peak node to offset points in the same manner for complex geometry (it will do it based on point normals instead of globally for the entire object like the cluster).
♣ Similar to the example above, but in this case we "remember" the contact areas between character and ground plane.
Mind, this is not the regular soft body trick - it is all procedural - no dynamic simulation involved.


 Component texture attachments in Maya are based on object groups. Using the SOuP group node we can interactively control that otherwise implicit system.
In this example we have a ramp texture with a cranked-up noise attribute assigned to a couple of objects. There is a character walking around them that has a different texture assigned. Based on proximity we "transfer" the ramp texture from the different objects to the character geometry.
♣ Basic "Summer and Autumn leaves" example where attributeTransfer node transfer point colors from boundingObjects to leaves geometry.
♣ As you may know, it is very difficult to query scene data from within particle expressions - basically nobody does this because the performance hit is huge. There are no out-of-the-box tools that bridge particles with the rest of the scene other than colliders and force fields. Using SOuP nodes you can easily do any of that.

In this example particle positions and velocities get altered by the point normals of another object in the scene. As a result the instanced geometry orients along the vertex normals. Using this approach we can create interesting effects by making particles play nice with the objects surrounding them.


 MultiAttributeTransfer allows for localized control over deformer weight maps. In this particular case we have 4 blendShape targets applied to a head geometry. Each boundingObject is connected to a multiAttributeTransfer node that controls the point weights of one of the four targets. The same result can be achieved by using attributeTransfer and arrayToMulti nodes - that's why there are two example scenes supplied.
Notice that blendShape weightMap attributes (much like the skinCluster's) do not react to "dirty" flags. That's why there is a point node at the very end of the chain with 4 getAttr lines in its pre-loop section to force-refresh the blendShapes.
I wonder if the developers will ever fix this problem to allow for procedural control without having to "hack" things all the time.
Credit goes to the guys at Cinemotion for providing the head geometry and blendShape targets for this example.
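For reference, the force-refresh trick mentioned above boils down to pulling the relevant plugs once per evaluation. A rough Python stand-in (the node name and the exact plugs to pull are hypothetical - the example scene does this with getAttr lines inside the point node's pre-loop section):

import maya.cmds as cmds

# "blendShape1" and the plugs pulled here are hypothetical - the point is simply
# that reading the attributes forces Maya to evaluate them even though the
# weight map attributes do not propagate dirty flags by themselves
for i in range(4):
    cmds.getAttr('blendShape1.weight[%d]' % i)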


♣ Using arrayDataContainer nodes we can create interesting effects like wetmaps or accumulated damage. The generated data can be used to drive blendShapes (example above) and texture maps.
An attributeTransfer node transfers per-point weights from the fighter geo to the static guy. This data gets passed to an arrayDataContainer node and then to an arrayToPointColor node. A mental ray vertexColors texture pipes it into a shading network where it is used for blending between two textures.
 The arrayDataContainer node has an attribute called "sink". At each evaluation step it drains a little bit of the data stored in the node, creating a "wetmap" effect.
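A tiny sketch of that accumulate-and-sink behaviour (one plausible reading of it, with made-up names - not the node's actual code): the stored per-point values persist across frames, new contacts re-wet the surface, and a small amount drains away every step:

import numpy as np

def update_wetmap(stored, incoming, sink=0.05):
    stored = np.maximum(np.asarray(stored, float) - sink, 0.0)  # drain a bit each step
    return np.maximum(stored, np.asarray(incoming, float))      # new contacts re-wet the surface

wet = np.zeros(4)
for contact in ([0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0.5, 0]):    # per-frame contact weights
    wet = update_wetmap(wet, contact)
print(wet)  # values fade slowly instead of vanishing the moment contact ends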
♣ PositionPP, radiusPP, rgbPP and weightPP get transferred from the particles to the water surface. As a result, ripples form around every particle that hits the water. A peak deformer displaces the ripple points along Y. Also, a pointCloudFluidEmitter emits fluid properties from the white areas of the ripples.
♣ Here particles transfer weight over to the nCloth meshes via the arrayDataContainer, which maintains the values over time, allowing fluid emission. The pointCloudFluidEmitter gets its positionPP from a pointAttributeToArray and the inDensityPP comes from the arrayDataContainer.
 Very similar to the method above, except the reverse is happening here with the weight transference. The emitting mesh already has a weight of one, but as particles land on its surface the contact points turn black, which prevents fluid emission - hence we can "put out the fire", so to speak.
♣ With the ability to invert the weight transference, we can now pipe a boundingObject's weight value through the nComponent node of a dynamic constraint and use it to control a weld's per-vertex weight attribute. So we could zip and unzip things, or cause breakages in constraints using particles, for example.


 Maya provides a simple way to "emit" geometry using particles, but there is a nasty cycling that happens to the geometry when the particles start dying. Also, there is no way to propagate per-particle attributes to the geometry points. Here is how we create this effect the right way:
pointAttributeToArray nodes extract particle positions and map them to the idIndex arrays. PointCloudToCurve nodes take this data and create nurbsCurves. A loft node creates a polygonal surface, and attributeTransfer maps the particle colors to the polySurface (optionally we can also transfer velocity for proper motion blur). A ramp controls the opacity of the surface along its length. Render :)
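As a static stand-in for that network (vanilla Maya commands only, with hard-coded trail positions - the real setup stays procedural thanks to the SOuP pointCloudToCurve nodes), the curve+loft part looks roughly like this:

import maya.cmds as cmds

# hypothetical trail data - in the real setup these come from the particle positions
trails = [
    [(0, 0, 0), (1, 0.5, 0), (2, 1.2, 0), (3, 1.4, 0)],
    [(0, 0, 1), (1, 0.6, 1), (2, 1.3, 1), (3, 1.6, 1)],
]
curves = [cmds.curve(point=pts, degree=3) for pts in trails]
# loft the per-trail curves into one polygonal ribbon (polygon=1 requests poly output)
ribbon = cmds.loft(curves, polygon=1, constructionHistory=True)[0]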


♣ Using arrayToDynArrays nodes we can build kDynArrayAttrsData structures to control geometry instancer nodes in a procedural manner, without the need to go through particles and expressions.
In this example a fluidToArray node extracts the fluid properties and passes them to a few arrayToDynArrays nodes that feed an instancer node. As a result we instance geometry to the fluid voxels. We can map voxel properties to instances in many different ways. In this example I tried to keep things simple:
voxel density - instance scale
voxel velocity - instance aimDirection
If a voxel is empty (density = 0) the related instance gets hidden for better performance.
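A minimal sketch of that mapping (plain Python with a made-up function name; the standard instancer per-point array names position/scale/aimDirection/visibility are used here) - density drives scale, velocity drives aim, and empty voxels are hidden:

import numpy as np

def voxels_to_instances(positions, densities, velocities):
    d = np.asarray(densities, float)
    return {
        'position': np.asarray(positions, float),
        'scale': np.repeat(d[:, None], 3, axis=1),      # uniform scale from density
        'aimDirection': np.asarray(velocities, float),  # orient along voxel velocity
        'visibility': (d > 0.0).astype(float),          # hide instances in empty voxels
    }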
 This time using a 3D fluidContainer and instancing of multiple objects. If you check the example scene, pay attention to how the multiple instances get randomized using a fractal texture and a textureToArray node.
♣ This is a more complex example showing procedurally instanced feathers. A scatter node generates randomly placed points on the character's geometry. AttributeTransfer nodes properly adjust their normals, which are later used to orient the instanced feathers. The data gets collected and passed to the instancer node by arrayToDynArrays nodes. As a result we get a guy fully covered with feathers.
There are actually two of these systems in the scene - one for the body and another for the scalp (the big feathers).
Notice how, unlike instancing to particles, you can scrub the timeline back and forth and things just work.

Using a similar approach we can easily create things like objects built from Lego bricks. Finally, don't forget that the instancer node has built-in LOD, where we can display the full-res geo, bounding boxes only, or nothing. Very useful when things start getting heavy.


Not sure how to name this effect, but for now it goes by the name of sparse convex wrap.


The rayProject node can be very useful for creating permanent collision deformations. In addition, we use a point node to apply vertex colors based on the amount of deformation.


♣ There is a simple way to turn any particle shape into a point cloud container reacting to input events.
Here one particle shape influences the size of another one.

By Sergey Tsyptsyn
 Transfer point colors from geometry to particles.
Remember how hard it was to make particles react to surface properties of the geometry surrounding them? Well, not anymore.
By Sergey Tsyptsyn
♣ AttributeTransfer node influences the radius of particles passing through boundingObject.
By Sergey Tsyptsyn
 PfxToon color transfer to particles.
By Sergey Tsyptsyn
♣ Nucleus lacks one very useful feature we enjoyed in the old rigid body solver - collision detection. SOuP brings it back online.
By Sergey Tsyptsyn
♣ Another example of procedural control over particles from external events - notice how the particle colors always match the animated texture of the surface underneath.
By Sergey Tsyptsyn
