Advanced 3D Kit

I love my paper route. It gives me 2 hours to do nothing but think. Like this one time last year, I came upon the idea of a plugin-based portable music player, which, although ingenious, nobody has ever made. I think if somebody were to make it, it would kill among the geek crowd, because they could finally play superior formats such as OGG.

But today, I was walking along, thinking about how I could make clothing-type material and how it would move around and such, and I hit upon the idea that Haiku could enormously expand its 3D Kit and provide a plugin-based architecture for 3D rendering. Like, there would be a series of plugins that manage the shading for each polygon, a few add-ons that handle the actual rendering process, a separate system for particle generation, an environment system that includes wind and water effects, a system to ease the use of skeletons, and a physics engine that provides gravity and other useful reactions. Using plugins would let people pop in a different effect, and the rendering program could use that new plugin to create a new effect in the 3D world.

I confess, I know hardly anything about the 3D rendering pipeline, so this whole idea of mine might be completely worthless. But I think it would be really neat, and help out a lot. Especially since 3D is becoming a very popular media-creation tool, and since Haiku/BeOS is the Media OS, having an easy system for 3D creation would provide another killer app for Haiku.

For example, we could set up a simple scene with a candle as a basic cylinder. The program being used to create the scene could instantiate a Renderer object, a Shader object, and a basic Lighting object, pass the Shader, the Lighting, and the scene geometry to the Renderer, and the Renderer would perform its magic and return a picture with the scene fully rendered, lit, and shaded.
And then there could be a geometry creator that creates a candle flame automatically at the point the program specifies, and the Flame object would create its own Lighting object to light the candlestick from above. The Flame object would take notice of what the Environment object says the current windiness is, and move itself and its Light around accordingly. If the program specified that the Flame is underwater, the increased density of the Environment would make the Flame move around more slowly. The Environment's density could also be used with a modifier to make an object float, depending on its Environment.
Another geometry creator could create smoke particles that would especially take note of the Environment to determine where they will go. Perhaps the Flame object could create a subenvironment above itself that would add some density to the surrounding Environment and make smoke particles, with a lower density, float up faster in that area.
Each geometry modifier or creator would directly modify the scene, or at least return a copy of the scene with the modifications, so that the Renderer and Lighting and Shading and related systems can concentrate on only rendering the geometry that they receive, and not worry about what tweaks they would have to do with the geometry they’ve been given.
And then somebody could create some special material plugins that would allow people to create reflective candle holders. Each reflection plugin would have to manage its own Rendering system to create the reflection, if I’m understanding computer-generated reflections correctly.
A skeleton system would probably be very useful as well, especially with the advanced character animations that are being created now. People could use a standard bones system and attach individual polygons to each bone, or they could use a complicated system that could modify the mesh to simulate stretching skin more accurately.
A physics system would take the Environment's gravity and make objects, when instructed to, accelerate downward at 9.8 m/s^2 or any other gravity value. Objects, or perhaps polygons or other smaller subdivisions of objects, could have elasticity attributes that specify how much they should deform when coming into contact with another object, with the objects' speed taken into account. An object could be instructed to bounce, and how high to bounce, or perhaps what direction to bounce in. The possibilities are limitless, of course. I would think this system would work best as a geometry modifier, because it would take geometry and the associated physics attributes and move or modify the geometry according to the rules of physics.

If this system worked, then creating custom programs to test out some ideas in 3D would be simple. I've had the dream of making a program that would assist in making 3D people dance, and the thing that's stopping me is that not only would I have to create the programming necessary to let users create dance steps easily and in time to music, but I would also have to recreate the entire rendering pipeline.
If I wanted to test out a new clothing animation technique, with this system I could simply create a geometry-modifying plugin, and the rest of the rendering pipeline could use this system instantly.

Of course, as I said previously, I have very limited experience with 3D animation or rendering. I worked through some DirectX tutorials in Visual Basic, and I played around in 3D Studio Max 7 a bit. That, the fact that I'm absolutely terrible at planning out programs, and the fact that I don't know anything about using C++, are preventing me from doing anything towards making this useful idea a reality.

In regards to using this in gaming, I don't know how feasible that would be. As I understand it, most geometry in games is stored on the video card. Would it be possible to have the geometry stored on the video card, yet still be accessible to the entirety of this kit? I know even less about how 3D acceleration works. How much acceleration could be used in this process? If it can be used, wonderful! Haiku could become the next best thing in 3D gaming, with operating-system support for sophisticated graphics effects. I had originally envisioned this process being used mainly for content creation, but it could be used in gaming as well. What do the accelerated-driver writers have to say?

What are the community's thoughts? Could the leading 3D experts point out flaws in my reasoning? Is this a good idea, or a lot of work for dubious benefit? How difficult would it be to make a system like this, yet still have it be extensible enough for any new ideas and effects and modifications that could be thought of?

–Walter Huf–

It’s called “Maya”. :wink:

I hate to be the killjoy here, but I think this type of functionality belongs in an application, not the OS. Plus, adding this type of specialized functionality would bloat Haiku faster than a scared blowfish. Anything that appeals to such a small slice of users and requires that much work should be left out of the OS, IMHO. Lean and mean is why BeOS rocked. :smiley:

Perhaps not integrating it right into the operating system, but as an add-on, similar to the way the IM-Kit is being developed right now. The IM-Kit is a series of applications that work together to add new functionality to BeOS. Perhaps a Render_Kit could provide a similar extension of functionality for the subset of people who are interested in 3D rendering.
Sure, you could just download Blender, but I do not know how easy it is to add separate frontends to it.
One such frontend I envision would be a program that takes a MIDI recording of an instrument and animates a model of that instrument, and perhaps a person playing it. This program could use the rendering tools available in this kit to load up the models, add bones and kinematics, animate those bones according to instructions given from elsewhere in the program, and eventually render the resulting movie.
How easy would it be to create a frontend or alternate input plugin for Blender that would take advantage of advances in human input, such as virtual-reality gloves? Using a plugin-based system for everything in the render pipeline would seem, at least to me, to make it really easy to develop a new GUI that takes advantage of new technologies and uses the global rendering system.
Of course, it might just be easier to create a frontend system for Blender, to allow other applications to use Blender’s rendering tools and plugins.

gtada wrote:
I hate to be the killjoy here, but I think this type of functionality belongs in an application, not the OS.

I thought it belongs in a programming language as user extension. :stuck_out_tongue:

I wasn’t thinking so much of 3D packages (like Maya) while reading your text (although I admit, I didn’t read and understand everything you wrote), instead what I think you’re describing is simply a design for a graphics/game engine.

Maybe you could have a look at one of the free graphics engines like Irrlicht or Ogre.

Ooh, I hadn’t even thought of using game engines for making movies. I might have to look into that. Thanks!