Sunday, October 7, 2012

Particles aren't Particularly Pretty

This week, I explored the Particles tab of the Geometry context. I had very high expectations about particles; I know that particle effects can be very impressive, and I know that Houdini animators are in high demand precisely because of the software's great support for particles.

This short list of nodes has great potential.
Or does it?

Well, I was quite disappointed.


The main problem with particles, in my opinion, is the tradeoff between control and randomness. When creating particles, the default is to create particles at each point in the geometry, in the order of the point numbers. This causes the particles to appear in rank and file, which doesn't look right. By randomizing the point numbers (using the Sort node), the particles look much better, but we lose a lot of control. The particles appear at different positions and move in different directions, which leads to some regions ending up cluttered, and others ending up empty. Not having any control over which regions end up where is really annoying, because that has a big effect on the visual organization of the scene.

Left: Particles created in an orderly fashion are predictable, but look boring.
Right: Particles created randomly look much better, but are less predictable.
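To make the tradeoff concrete, here is the difference in miniature (a toy C# sketch of particle birth order, not actual Houdini code): particles are born in point-number order, so everything hinges on how the points are numbered.

using System;
using System.Collections.Generic;

class EmissionOrder
{
    static void Main()
    {
        // Points numbered 0..99 on a 10x10 grid: particles born in
        // point-number order sweep across the grid in rank and file.
        var points = new List<(int x, int y)>();
        for (int y = 0; y < 10; y++)
            for (int x = 0; x < 10; x++)
                points.Add((x, y));

        // What sorting the points randomly accomplishes: same points,
        // but births are scattered all over the grid instead of sweeping.
        var rng = new Random();
        for (int i = points.Count - 1; i > 0; i--)
        {
            int j = rng.Next(i + 1);
            (points[i], points[j]) = (points[j], points[i]);
        }

        // Either way, nothing here lets us choose which regions end up
        // cluttered and which end up empty.
        foreach (var p in points)
            Console.WriteLine(p);
    }
}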


I hope that those problems are partly due to the fact that I am using particle operators in a Geometry context instead of a Particle context. The 6 geometry nodes I have explored today are a bit limited, but there are still 67 more particle-related nodes to explore in the Particle context! Allow me to explain.

Houdini has a few different kinds of networks, which are used for different purposes. The one I most want to learn is the Geometry context, which is used to manipulate the shape of the objects in the scene. I still have 189 nodes to explore there. Other contexts define which materials the objects are painted with, how they move, and how they interact with each other. One of those, the Particle context, is entirely dedicated to particles. I expect that learning the 67 nodes from that context would allow me to create better-looking results than the mediocre trinkets I managed to create this week.

6 down, 189 to go!
I'll do better next time.

67 nodes may not sound like much compared with the number of Geometry nodes I still have to learn, but for once, I'll try to finish what I have started!

Sunday, September 30, 2012

Tools for curved surfaces

I have a love-hate relationship with NURBS. I love them for the same reason I like vector art: they scale to arbitrarily-high resolutions. I hate them because the NURBS tools often produce ugly artifacts, and it's not obvious how to fix them.


In two dimensions, curves are approximated using a large number of square pixels: zoom in too much, and you'll see the pixels. Vector-based images don't have this problem because they store the curves themselves, not the pixels.

Above: pixels.


In three dimensions, curved surfaces are approximated using a large number of flat faces: zoom in too much, and you'll see the triangles. NURBS-based geometry doesn't suffer from this problem because it stores the curves themselves, not the triangles.

Above: polygon edges.
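To illustrate what "storing the curves themselves" buys us, here is a toy C# sketch (a quadratic Bézier rather than a full NURBS surface, but the principle is the same): because the control points define the curve exactly, we can sample it as densely as the current zoom level requires.

using System;

class CurveSampling
{
    // A quadratic Bézier stored as its three control points. Since we keep
    // the curve itself, no pixels or triangles are baked in.
    static (double x, double y) Bezier((double x, double y) p0,
                                       (double x, double y) p1,
                                       (double x, double y) p2,
                                       double t)
    {
        double u = 1 - t;
        return (u * u * p0.x + 2 * u * t * p1.x + t * t * p2.x,
                u * u * p0.y + 2 * u * t * p1.y + t * t * p2.y);
    }

    static void Main()
    {
        // "Zooming in" is just sampling more densely.
        for (int i = 0; i <= 100; i++)
            Console.WriteLine(Bezier((0, 0), (1, 2), (2, 0), i / 100.0));
    }
}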


This all sounds great in theory, but when I experimented with the Houdini tools in the NURBS category, I was repeatedly disappointed by unexpected artifacts in the generated curves. Sometimes the artifacts were tiny, but visible — and sometimes the generated curves were completely wrong. In both cases, there was no obvious way to fix the problem.

Left: subtle artifact.
Right: not so subtle.


As you can see below, eventually I did manage to produce some good-looking shapes. There was no magic artifact-fixing step; I sat down and learned how each tool worked, and how it calculated its results. This allowed me to understand why I saw the artifacts I saw, and guided me towards ways of using the tools which were less likely to generate artifacts. For instance, it's better to stick to profile curves generated through the Carve tool.

17 down, 195 to go!
Yes, I know there are 20 icons,
but some of those have appeared before.

Oh, and one of those tools doesn't exist,
it's a digital asset I created myself.
Can you spot which one? 

I learned a lot during this series, but I still have many more tools to go through. Houdini has a lot of stuff!

Saturday, September 8, 2012

The tools from which everything else is built

In my quest to learn all of Houdini's 252 Geometry tools, I have explored the 32 node types listed in the Manipulate submenu.

Above: the Manipulate submenu.

I thought these would just be steps 9 through 40 of a long and boring quest of knowledge, but no! I discovered some very important tools in there, and I might never have discovered them if I had not tried to learn every single one of them.

The best example of a well-hidden gem is the SSS Scatter node. Neither its icon, nor its name, nor even its documentation gives any indication of what it does.

Poor SSS Scatter probably doesn't get much use.
Its icon hasn't been updated to the new style...
and its documentation is pink, for some reason.

If I wasn't trying to go through the entire list of tools, I would never have tried the SSS Scatter node. When I want to implement a particular effect, I look through the list of names, trying to guess which one could help me; but "SSS Scatter" never sounds like what I am looking for, regardless of what I am looking for. So, what does the darn node do, anyway?

It gives chickenpox.

It creates points all over the object's surface! Very useful-looking. I hope that, among the remaining 212 nodes, there is a version which creates points all over an object's volume, too.


The other reason some useful nodes would have gone undiscovered is that so many of the nodes are redundant. There is a Twist node, for example, which does exactly what it says on the tin. Many 3D packages have a Twist modifier because, in those packages, that's the only way to twist objects; in Houdini, however, twisting an object doesn't need to be a primitive operation.

Twisting is a simple mathematical transformation, whereby points are rotated around an axis by an angle which varies along the length of the axis. I could implement that using the Point node, which applies a mathematical formula to each point of an object. Of course, writing a mathematical formula is much harder than using a pre-existing Twist node; the point is that being able to write any mathematical formula is much more powerful than having to pick among a list of pre-existing nodes. It's the powerful nodes in which I am interested, but they are buried under so many redundant nodes like Twist.
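Concretely, the twist math on its own looks like this (a C# sketch of my own, not a Houdini formula):

using System;

class Twist
{
    // Rotate each point around the Y axis by an angle proportional to its
    // height: the entire Twist node, in one formula.
    static (double x, double y, double z) TwistPoint(double x, double y, double z,
                                                     double degreesPerUnit)
    {
        double a = y * degreesPerUnit * Math.PI / 180.0;
        return (x * Math.Cos(a) - z * Math.Sin(a),
                y,
                x * Math.Sin(a) + z * Math.Cos(a));
    }
}

Feed that formula to the Point node and the dedicated Twist node becomes unnecessary; that is the kind of power I am after.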


In the Manipulate category, I found three powerful nodes of that kind: Point, Primitive, and VOP SOP. The Point node, as I have already described, allows mathematical transformations to be evaluated on all points. The Primitive node is similar, except it works on faces instead of points. And, last but not least, the VOP SOP. Again, neither the name nor the icon is very descriptive, but what raw power!

Like the Point node, the VOP SOP also applies a transformation to all points. But instead of using a hard-to-read and hard-to-write mathematical formula, you can double-click on the VOP SOP node to dive inside and manipulate a sub-network. There is a whole new world with 229 extra node types in there!

32 down, 212 to go.
Plus 229 inside the VOP SOP sub-network.
Plus 74 at the scene level, 98 at the channel level,
138 at the compositing level, 41 at the render level,
67 at the particle level, 56 at the texture level,
264 at the dynamic level...



At this rate, I don't think I will ever master all of Houdini's nodes.

Monday, September 3, 2012

The first eight tools

One trinket for each tool in the Edge category.

After I discovered that development time is linked to familiarity, not complexity, I began a quest: I want to become familiar with all of Houdini's 252 geometry nodes, one at a time. Here are the first eight.

The goal is not to visually represent the effect of each node; Houdini already has node icons for that purpose. Instead, the goal is for me to play with each tool, getting a feeling for what it does, and combining it with other nodes in order to get a simple but nice-looking result. If you want to follow along, I recommend that you try to reproduce my trinkets, with no other indication than the picture of the final result and the name of the main tool I wanted to illustrate. Trying to get to a specific result will be harder than just trying to get something nice, but the extra challenge will be worth it: learning a piece of software as a game, through a series of challenges, is much more stimulating and rewarding than aimless experimentation. It's true! I'd go back in time and challenge my two-hours-younger self with the above image, if I could.

For the rest of this post, I would like to reveal the steps necessary to create the parts of this image which are not part of the game. This part, I am sending to my future self, in case I forget the intricacies.


First, the lighting. There is no light. When there is no light, Houdini adds a default light behind the camera, and that's what you see here. Except for this:

Artificial shadows.

By default, Houdini doesn't render any shadow. This causes the inside of the sphere to have the same color as its outside, which in turn makes the 3D shape hard to see. I'm sure there must be a way to somehow turn on shadows, but for now, I used a trick: I painted the inside of the sphere in a darker shade than the outside of the sphere.

If this was a real physical object, that would be it: the two sides of the sphere would be distinct, and I could paint each side a different color. Virtual spheres, however, store their color in their vertices, and that color is used by both sides.


Implementing a two-sided material was harder than I thought, but in the end it taught me a lot, and I am glad that Houdini does things this way. In any other 3D application, I would be looking through the material options for a "backface color" field or something like that. If I succeeded in finding the button, the problem would be solved, and I would know how to make two-sided surfaces. If an extensive search revealed that there was no such button, then it would mean that two-sided surfaces are not supported by the application, and I would have to live without them, end of story.

In Houdini, however, giving up is never the solution. I wish more applications were like that.

After I failed to find the button, I double-clicked the default Clay material to see how it was implemented. I played with the wires, unplugging and re-plugging them to see what they were doing, and that allowed me to figure out which one was responsible for the color. I tried to force a green color, to see if I knew what I was doing, but the entire surface became flat — clearly, that was the wrong place to interfere. I continued to investigate, and found another plug which yielded a better result. Aha! That's where I should plug my two-sided apparatus.

Playing with wires.

I then added a few nodes to check whether the surface normal is pointing towards the camera, and if not, to darken the color by an adjustable amount. At each step of the way, I could debug my network by plugging an intermediate result into the color output and observing the rendered result. On that subject, an important remark: fancy materials like this one are only visible in the Render view, and not in the Scene view.
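For my future self, the check boils down to this (a C# sketch of the logic, not the actual VOP wiring; backfaceDarkening stands in for my adjustable amount):

class TwoSided
{
    // If the normal points away from the camera, we are looking at the
    // back face: darken the color by an adjustable amount.
    static double[] Shade(double[] color, double[] normal, double[] toCamera,
                          double backfaceDarkening)
    {
        double facing = normal[0] * toCamera[0]
                      + normal[1] * toCamera[1]
                      + normal[2] * toCamera[2];
        if (facing >= 0)
            return color;  // front face: leave the color untouched
        return new[] { color[0] * backfaceDarkening,
                       color[1] * backfaceDarkening,
                       color[2] * backfaceDarkening };
    }
}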


After the shader, I moved to the Compositing network in order to join all the inputs into a 2D table. In the past, I had exported all the individual images and laid them out in Gimp, adding some text as needed. This strategy has the disadvantage of leaving the procedural realm: once I render a 3D model and process the resulting image inside Gimp, it is too late to go back to the 3D model and make adjustments. Very often, I only notice erroneous details once the image has been posted on this blog; at that point, I either adjust the model, re-render, and re-process it inside Gimp, or, more often, I give up and hope nobody notices.

But why use Gimp? Houdini has procedural 2D tools, and the only reason I wasn't using them was because I wasn't familiar with them. I decided to fix that! It was again harder than I thought, but it was worth it, because I did change my models often afterwards.

The first step was to render the different models to different files. The easiest way to do that is to render a different model at every frame, and to render the animation to a sequence of files. I did this using a Switch node, switching to a different input at every frame.

Yup, that's a lot of inputs.
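The trick in miniature, as a C# sketch (the model and file names are made up; in Houdini it is just a frame-driven parameter on the Switch node, not code):

using System;

class FrameSwitch
{
    // Frame N shows input N, so rendering frames 1..count as an animation
    // writes one image per model.
    static void Main()
    {
        string[] models = { "sphere trinket", "box trinket", "torus trinket" };
        for (int frame = 1; frame <= models.Length; frame++)
            Console.WriteLine("frame " + frame + ": render '" + models[frame - 1]
                              + "' to trinket." + frame + ".png");
    }
}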

Then, we need to get this sequence into the Compositing network. I first tried a Render node, but it was always complaining about a failed render for some reason. I had to render to a sequence of files, then load them into my Compositing network using a File node. Even that was hard, because the renderer was quitting early, without writing the files. When that happens, the easy solution is to save and reopen the project (no need to quit Houdini).

Once I got the sequence of images inside the Compositing network, it was easy to arrange them into a table using the Mosaic node, and it was moderately easy to reorganize the images by reordering the inputs of the Switch node. The hard part was to add the text: there is a Font node, but it took me a long while to figure out that the reason I couldn't see the text was that it was rendering the font to a separate image plane! I can see how that could be useful, but I don't think it's a sane default.

Above: the reason why you can't see the output of your Font COP.


Finally, I added a few Subdivision and Facet nodes here and there to make the models look nicer, but alas! Once I turned off the Preview setting in the render pane to admire my high-quality renders, I saw a few inexplicable lines between faces which were supposed to be sharp.

Subtle, but annoying.

I still don't understand what is causing this, but the solution was to turn off Ray Variance Antialiasing, from the Mantra node's Sampling properties.


Hope this helps, future self!

Sunday, May 13, 2012

Why coding things takes so long

Work distribution has been... peculiar, this iteration. Observe.

  • Day 1: Completed all the two-box models I had scheduled for the iteration. Yay!
  • Nothing happens for a while.
  • Day 10: I've got so much time on my hands. Maybe I should do more work?
    Yes! I shall complete all the two-box models for the entire condo! Piece of cake!
    (writes down the task in the TODO list, then goes to sleep)
  • Nothing happens for a while.
  • Last day: Frantically creates two-box models for one extra room.
    Two more rooms left. Time's up; game over!

Well, now we know where most of the time went during this project.


For the next iteration, we intend to waste time in a totally different way. Because this time, it's a programming project! When programming, I think we tend to spend a disproportionate amount of time learning new libraries. Case in point: we want to parse OBJ files.

OBJ is a very simple file format for 3D meshes. Nadya asked me to look up how the OBJ format represents points on disk, and to write some pseudo-code for her. Well, I'm afraid I will again be done with my part of the work on the first day, because the format is very, very simple:

# this is a comment

# List of Vertices, with (x,y,z[,w]) coordinates, w is optional and defaults to 1.0.
v 0.123 0.234 0.345 1.0
v ...

The format is so simple that I don't really need to write down pseudo-code, but I will do so anyway, because I have a point to make.

m := create 3DS Max mesh object
for each line in the OBJ file:
  if the first word is 'v':
    split the line into words.
    x := second word
    y := third word
    z := fourth word
    add the point (x, y, z) to m's point list.

And that's it! That's the entire pseudo-code, because for this iteration, we're only interested in importing the points, not the faces.
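And since the real implementation will be written in C# anyway, here is the same algorithm as compilable code, with a stand-in Mesh class where the unfamiliar 3DS Max API would go:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

class ObjImport
{
    // Stand-in for the 3DS Max mesh object; talking to the real API is
    // precisely the part we are not familiar with.
    class Mesh
    {
        public List<(double x, double y, double z)> Points =
            new List<(double, double, double)>();
    }

    static Mesh ReadPoints(string path)
    {
        var m = new Mesh();
        foreach (string line in File.ReadLines(path))
        {
            string[] words = line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);
            // Match the whole keyword: "vt" and "vn" lines also start with 'v'.
            if (words.Length >= 4 && words[0] == "v")
                m.Points.Add((double.Parse(words[1], CultureInfo.InvariantCulture),
                              double.Parse(words[2], CultureInfo.InvariantCulture),
                              double.Parse(words[3], CultureInfo.InvariantCulture)));
        }
        return m;
    }

    static void Main(string[] args)
    {
        Console.WriteLine(ReadPoints(args[0]).Points.Count + " points read");
    }
}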


Now, the algorithm is trivial, but we are still scheduling three full weeks to actually implement it. Why? Because we are not familiar with 3DS Max's API. We already know that the documentation contains errors. There might be all kinds of catches we haven't thought of and which we'll need to debug through. Really, the algorithm is trivial, but this has no bearing on the amount of time it will take to implement it! Familiarity with the necessary API is the dominant factor.

I feel like this is very representative of the work I do every day as a programmer. The reason coding things takes so long is not the complexity of the algorithms (they are often trivial), it's not that we work with low-level languages which force us to deal with unnecessary details (although we often do), it's mainly that we need to learn how to use new libraries all the time.

Well! I'll have a lot of time to ponder that problem during the next three weeks. Because I'm responsible for the pseudo-code, and that part is done!


Oh, and I should also finish those two-box models. There are what, two rooms left? I think I'll wait until two days before the deadline...

Sunday, April 29, 2012

C# action item plugin for 3ds Max


It has been a while since I last made good progress on my 3D project. My guess is that I lacked motivation and clear planning. But recently, Sam had a brilliant idea which should resolve most of my issues: we are doing Agile now!

Having a list of goals for a relatively short period is good motivation for me: I work very well with visible, reasonable deadlines. And dealing with clear, shorter tasks helps me move forward in "baby steps". This is important given that I work on this project in my spare time.

So, here is my plan for this 3-week iteration! It has only one story: creating a plugin that imports OBJ files into my scene.

You may want to ask: why do I need to create a whole plugin for a task that can be performed by simply clicking the Import menu? Well, due to our work process, I need a more sophisticated import procedure. I want to update the existing objects rather than delete and re-create them. Sam does the modeling and I do the texturing, and it is important to bring in his changes without deleting any materials that are already in the scene.

So, my plugin will work in the following way. It will look for the existing objects that are about to be updated and rename them if necessary. Then it will proceed with the import. Once the import is done, it will grab all the materials from the old objects and apply them to the new ones. After that, the old objects can be deleted.

This is the long-term task. In this iteration, I aim for a simpler goal: a simple, straightforward import of OBJ files. OBJ is a standard 3D object file format which we use to exchange files between our applications. I just want to familiarize myself with writing action item plugins and with the file import SDK.
And it is so super-easy to write these plugins with the new .NET SDK! I will show you how :)

First of all, I downloaded Microsoft Visual C# 2010 Express from here. It is a free tool from Microsoft for creating C# applications.

Then I went to the online SDK help to read how to create .NET plugins.

According to the help, I only needed to create an assembly project with a class that implements the ICuiActionCommand interface. Once the dll is compiled, it should be placed in the bin\assemblies folder. The next time Max is launched, it will dynamically load the assembly and register an instance of the class as an action in its CUI interface. Very simple!

In reality it was, too. Except that back then, when I started working on this, the online help had a bug. It said: "This is done by creating a public class that implements the interface ICuiActionCommand (found in MaxCustomControls.dll)". But it was not true! MaxCustomControls.dll did not have it! I searched the help from beginning to end, and it said the same thing everywhere. It was pretty frustrating! Then I decided to text-search the interface name in the Max .NET assemblies, since they all contain a textual description of their types. And I found it! It was hiding in UiViewModels.dll, in the UiViewModels.Actions namespace. UiViewModels was not documented at all; fortunately, this bug is fixed now.

I quickly implemented the action item interface, created a simple "Hello, World!" WPF form... And here is my action in 3ds Max!


I put a button with my action on the toolbar and launched the plugin:



Here is the code behind it (I omitted the window code, since it is trivial):
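The gist, rewritten from memory rather than pasted verbatim (I used the CuiActionCommandAdapter convenience base class, which implements ICuiActionCommand for you; the exact member names may differ between SDK versions, and HelloWorldWindow stands for my trivial WPF window):

using UiViewModels.Actions;  // where ICuiActionCommand actually lives!

public class HelloWorldCommand : CuiActionCommandAdapter
{
    // The text shown in the Customize UI dialog, and its category.
    public override string ActionText { get { return "Hello, World!"; } }
    public override string Category { get { return "My .NET Plugins"; } }

    // Non-localized versions of the same.
    public override string InternalActionText { get { return ActionText; } }
    public override string InternalCategory { get { return Category; } }

    // Called when the action fires: launch the (trivial) WPF window.
    public override void Execute(object parameter)
    {
        new HelloWorldWindow().Show();
    }
}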


As you see, the code that launches the window is located in the Execute() function.
That's it! I am done with the action item plugin!

Now I need to write a simple import procedure, and this will be the subject of my next post.
Stay tuned!




Sunday, April 22, 2012

Plans for a cube-filled room

We're back! And this time, we've got a plan.

Given how much my other project has progressed since I began planning it using Agile methods, I would like to apply the same ideas to our 3D project. Iterations! Stories! Minimum viable product! Yes!

The first thing to decide is how long our iterations are going to be. This will give us limits for the scope of our first iteration's deliverable, and guide us in scheduling an appropriate number of stories. We chose 3 weeks. If the evolution of my other project is any indication, this length is bound to change after an iteration or two anyway, so the exact length is not very important.


Next, our Minimum Viable Product (MVP). The long-term plan is to model our entire condo in 3D, but that's far too large a scope, especially given that Nadya, being a photography enthusiast, is aiming for photorealism. But with a deadline in three weeks, something has got to give! Should we favor quality or quantity?

That's a false dilemma. We aren't experienced enough to expect photorealism in any amount of time, and a 4½ condo contains a surprisingly large number of objects. So we shall cut both quality and quantity!

Did I say "cut"? We shall maim quality, and shred it to pieces. Observe:


A maimed room. Next to the sea of white. Low poly-count, affordable rent.


The objects are actually much more recognizable than I expected. What you see above is extreme minimalism: two boxes per object, no more, no less. Nadya will place my minimalist models at their proper location in her model of the condo, and I will refine them later. That's the reason why a single box is not allowed: that would make it hard for her to orient the models properly, and we might only notice the mistake once the models get refined.

For some models, the restriction was both challenging and insightful. The fridge, for example: its most distinguishing feature is the line which separates the freezer from the fridge proper. But how do I recreate this line with only two boxes? Well, I could make the freezer hover over the rest of the fridge, or I could make the top door slightly thicker or thinner than the bottom door, but neither approach gives satisfying results. Instead, I found another feature to highlight: the space between the door and the floor! I had never noticed that space before, but if it wasn't there, the fridge just wouldn't look right.


No, we don't keep our fridge in the bedroom.
It's for size comparison! Or something.


For the first iteration, we restrict our attention to a fraction of the condo, but there are still dozens of objects to model. In order to create those two-box models as efficiently as possible, I created a simple tool to align two boxes on top of each other.


Surprisingly, filling all of this takes less time than a
 round-trip to the bedroom to re-take a measurement.


Even though there are just two boxes, there are many ways in which the boxes could be placed relative to each other, and even more ways to take the wrong measurements on the real-world object. The numbers above, for example, describe the chair. The first box is the base of the chair, and the second box is its back. To measure the height of this second part, I could have measured from the top of the chair to the floor, or from the top of the chair to its seat. To measure the depth of this second box, I could have measured the back's thickness, or the distance between the seat's front and the back. All of these combinations of measurements are easy to express using my tool, by using negative numbers and the "reinterpret data" checkboxes.

Once the boxes have the proper size, I use the "face" menu and the "fit" checkboxes to position the second box relative to the first. The back of the chair, for example, has the same width as the base, and is aligned at the back. Done! Next object. To minimize the round trips even further, I bring my wireless keyboard with me to the bedroom and I blindly type notes remotely into an invisible text document. Spooky action at a distance!
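For future reference, the placement math amounts to something like this (a hypothetical C# sketch with a made-up Box type; the real tool is just parameters and checkboxes, not code):

// A made-up Box: the position of one corner, plus a size along each axis.
struct Box { public double X, Y, Z, Width, Depth, Height; }

class TwoBoxHelper
{
    // Place the second box (the chair's back) on top of the first (the
    // base): same width, flush with the back edge, resting on the top face.
    static Box OnTopAlignedBack(Box baseBox, double depth, double height)
    {
        return new Box
        {
            X = baseBox.X, Width = baseBox.Width,   // "fit": same width, same left edge
            Z = baseBox.Z, Depth = depth,           // flush at the back
            Y = baseBox.Y + baseBox.Height,         // resting on top of the base
            Height = height
        };
    }
}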


Using this technique, I can create two-box placeholders for a large number of objects in a small amount of time. I still can't keep up with Nadya, though, who says that placing all of those models at their appropriate locations is too easy. For this reason, she also scheduled a story for writing a plugin to import data from Houdini to 3DS Max! I can't wait to see how this pans out. More details in her next post...

Thursday, February 2, 2012

How to blend shapes along an axis. Also, a chair!

Today, I modelled a chair!

Nadya just loves this chair. After spotting it in a second-hand store,
she worried that if we did not hurry, somebody else might buy it first.

Fortunately, she is the only one who thinks this chair is awesome.


Ok, so it's only a chair, but just because you see chairs every day doesn't mean it's easy to model one. The most challenging part, for me, was the seat. The front of the seat curves downwards in order to wrap the legs, while the remainder of the seat curves inwards in order to wrap Nadya's ass. Important stuff! But easier said than done.

Thanks to the SoftTransform operator, adding a single curve to a planar surface is straightforward. Adding the second curve, however, is harder because the curves can interact with each other. To bypass the problem, I decided to model two single-curve seats, assuming that I could somehow blend one variant into the other.

Well, turns out blending is complicated enough that I can write an entire blog post on the subject. Enjoy.


Which seat would you rather sit on?

Answer: Sit on your own damn seat. This one is for Nadya's ass only!

I love Houdini, but I was a bit disappointed to discover that its BlendShape operator isn't flexible enough to perform the blending I needed. I will explain why in a moment, but before I get to that, let me explain what it is that I needed.

Blending, you see, involves interpolating the position of each point so that it lies between the position it would have in the first variant, and the position it would have in the second variant. This interpolation depends on a parameter, the blending parameter, which can vary between zero and one. Animating the parameter from zero to one gives the impression that the first shape is morphing into the second shape, and Houdini's BlendShape operator clearly has this use case in mind.

Above: Sphericubes!


Being able to create intermediate shapes is nice, but that's not what I had in mind. An intermediate between my two curved seats would be a seat which curves both downwards and inwards, simultaneously. That's not what I want! The seat I want to model has a downwards curve at the front, and an inwards curve at the back.

To use a sphericube-style example, I don't want a cube with bulging faces, I want the left side of a cube, the right side of a sphere, and something intermediate in the middle, whatever that may be.

Above: A cubisphere, whose "something intermediate in the
middle" turned out to look a lot less awkward than I expected.

Sadly, the BlendShape operator can create sphericubes, but not cubispheres.

It's a shame, really.


Well, punchline! I found a way to create them anyway. Yes! The screenshots are real!


Houdini has a very flexible operator called, simply, "Point". It loops over all the points of your shape, manipulating them according to a formula you write. Or rather, three formulas, one per axis.

Suppose you wanted to translate your shape to the right by one unit. The formula relating the original point positions to the translated positions is as follows.

(x, y, z) ↦ (x + 1, y, z)

Using the Point operator, you would write this instead.

$TX + 1
$TY
$TZ

And your points would be translated. Of course, you wouldn't use the Point operator for a simple translation; you would use a Transform operator, because its interface is simpler and its implementation is probably faster. You would only use the Point operator when, say, the existing operators let you down. I'm looking at you, BlendShape.

Above: Math. Also, a blending operator which blends along the Z axis!
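Translated out of expression syntax, those formulas compute something like this (a C# sketch; zMin and zMax are my stand-ins for the parameters that say where the blend starts and ends):

using System;

class BlendAlongAxis
{
    // Blend between two variants of the same shape, with the blend factor
    // varying along Z instead of being one global number.
    static (double x, double y, double z) Blend((double x, double y, double z) a,
                                                (double x, double y, double z) b,
                                                double zMin, double zMax)
    {
        // 0 at zMin (pure first variant), 1 at zMax (pure second variant).
        double t = Math.Max(0.0, Math.Min(1.0, (a.z - zMin) / (zMax - zMin)));
        return (a.x + t * (b.x - a.x),
                a.y + t * (b.y - a.y),
                a.z + t * (b.z - a.z));
    }
}

With one such formula per axis, the left of the cubisphere can be all cube and the right all sphere: exactly what BlendShape cannot ship.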


Maybe I am being too harsh with BlendShape. Its limitation is not surprising. In fact, most operators, including the Transform operator, share this limitation: parameter values cannot vary from point to point. For example, when I translate a shape, I cannot tell Transform that the top of the shape should move to the right, while the bottom of the shape should move to the left. Outrage! Cripplingly limiting! Annoying, perhaps, but entirely unsurprising if your favorite 3D software is something more conventional, like 3DS Max.

As a Houdini user, however, I am rightfully surprised!


You see, the parameters in Houdini are not mere numbers. They are formulas, computing the right number from attributes of other nodes. It's very easy, for example, to scale a hat so that its new width coincides with the width of a head you have already created. The idea is that if you change your mind about the width of the head, you won't have to also adjust the hat.

At Houdini's, we sell hats which fit heads of all sizes.

Parameter formulas are also convenient for creating animations, by writing equations involving the current frame number. The point is that in Houdini, varying parameters according to stuff is common and expected. One subtlety to keep in mind, however, is that the node can't actually see the formula; instead, the formula is used to compute a number, and the node can only see that number.


It's a bit as if you were ordering Christmas lights online, and you had to pick a color. You can (and must!) consult your girlfriend, the weather, compare with your other decorations... You can vary your answer according to any number of factors, but your answer still can't vary along the length of the light rope. You must distill all the factors into a single color, say, red, and select "red" on the online form.

It might be possible, in theory, to alternate red and green lights along the rope, but the light rope manufacturer won't ship you one, because all it knows about your order is that you clicked on "red". Similarly, it might be possible to use a 0.1 blend factor on the left of the cubisphere and 0.9 on the right, but the BlendShape operator won't ship you one, because all it knows about your order is that you clicked on "0.5".

The above is purely theoretical.


If you liked my Christmas lights example better than my sphericube example, be sure to check Nadya's post. She is texturing a light rope!

She worked on that labor of love right here, on the properly-blended curves of this very chair.