COLLADA momentum

Open letter to everyone involved with managing/using COLLADA as proposed open 3d exchange format:

This is a follow-up to the discussion I had with Khronos at Siggraph, especially on getting COLLADA better supported. In some ways things are improving steadily on the Blender side. But I also get disappointed reactions from professional users who expect things to work much better than we can currently provide.

From the Blender side I think we’re quite well equipped and supported now. There’s a small but active team of 4 people working on it.

In order to keep momentum I have two suggestions for Khronos or the Collada team:

1) A call among Khronos members to support the OpenCollada(*) project in one way or another. It would be highly advisable for companies to take ‘ownership’ of this project too, getting officially involved. The OpenCollada website is currently anonymous, with a donation button but no visible projects or people running it. People on our team who submit bug reports to OpenCollada also wait far too long for responses (over 6 months).

2) The character animation import/export pipeline is still the most difficult and most wanted target. I would bring the stakeholders together to define a ‘Character’ badge, based on the minimal & feasible specs for that. On one side that means the people who work on tools like Maya/Max/XSI/Houdini/C4D/Blender etc., and on the other side ‘users’ such as Unity3D.

I know the Khronos Group itself won’t allocate budgets to software development. Maybe the above two suggestions would fit into actions or organizational support that Khronos members can provide somehow?

My own 2 cents on COLLADA: the advanced flexibility of the spec is both its strength and its weakness. In practice it means you can align the definitions in a file quite nicely with how your own architecture works, forcing other programs to do conversion magic to their own internal architecture when reading that COLLADA file.
Also, the way the conformance tests currently work is very elaborate and complex (no companies or end-user tools have even been listed as conformant yet!).

I realize that narrowing down the COLLADA spec is not going to happen easily, so I would suggest reorganizing (or refocusing) the conformance testing system so that both users and developers get predictable results more quickly. That could be achieved by defining a subset for the Baseline badge which only validates:

– Baseline Model Badge: tri/quad models with UV coordinates, texture links, and basic material definitions or references. The result should be efficient but minimal 3D model exchange.
– Baseline Character Badge: model as above, plus deform groups and basic per-group and baked (per-frame) motion matrices. The result should be efficient, minimal 3D character export to game engines, and for animation film the motion of control rigs only (assuming the character rigs themselves remain native).

For these badges I would not look at general API conformance, but simply bring together actual COLLADA users to provide the smallest & simplest use cases – using minimal & clear specs – and only validate this subset to confirm the badge.
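
To give an idea of how small such a check could be, here is a rough sketch (hypothetical, not an official conformance tool) of a Baseline Model pass in Python with the standard ElementTree parser. The element names follow the COLLADA 1.4 schema, but the subset rules are only one possible reading of the badge described above:

    # Hypothetical Baseline Model badge check - a sketch only, not a Khronos
    # conformance tool. Element names follow the COLLADA 1.4 schema.
    import sys
    import xml.etree.ElementTree as ET

    NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

    def check_baseline_model(path):
        problems = []
        root = ET.parse(path).getroot()
        geometries = root.findall(".//c:library_geometries/c:geometry", NS)
        if not geometries:
            problems.append("no <geometry> elements found")
        for geom in geometries:
            gid = geom.get("id", "?")
            # Baseline subset: only triangle/polygon primitives, with UVs and a material.
            prims = geom.findall(".//c:triangles", NS) + geom.findall(".//c:polylist", NS)
            if not prims:
                problems.append("%s: no <triangles> or <polylist> primitives" % gid)
            for prim in prims:
                semantics = [i.get("semantic") for i in prim.findall("c:input", NS)]
                if "TEXCOORD" not in semantics:
                    problems.append("%s: primitive has no TEXCOORD input (UVs)" % gid)
                if prim.get("material") is None:
                    problems.append("%s: primitive has no material reference" % gid)
        return problems

    if __name__ == "__main__":
        issues = check_baseline_model(sys.argv[1])
        print("OK" if not issues else "\n".join(issues))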

This level is also what we currently support for FBX, which is successfully used for the Blender-Unity pipeline. It’s quite remarkable that we can get this working well for a closed format, yet still struggle to achieve anything useful at that level with COLLADA.

I have only been indirectly involved with our Blender-COLLADA efforts, but I do know a whole lot of development time has already been invested in them over the past 4 years. The fact that things still only work in a very basic way is quite disturbing. It really shouldn’t be that difficult. If something as basic as Model/Character I/O can’t be achieved among the main 3D tools in the short term, we might better conclude that COLLADA is never going to work for this…

Regards,

-Ton-

(* There’s also PyCollada, http://pycollada.github.com/, and efforts going on in IRC #blendercollada for a smaller and better maintained version of OpenCollada.)

29 comments
  1. I’m an ignorant fool who has stumbled into this blog, and I don’t see why Collada is sooooo important. What about FBX? It doesn’t work. Object Import still does not bring the .mtl into the file. What’s with that?

  2. After looking at the schema for COLLADA, I would say that it will probably (hopefully) fail. It is needlessly massive and will require hundreds of thousands of lines of code to parse effectively – this is easy to estimate, as there are hundreds or thousands of types in the schema, each of which will require custom code to parse.

    It makes some huge mistakes in its schema, primarily by creating new types for every individual value that might be needed.

    Experience has shown me that the best way to describe complex data is with a relatively loose container/value/descriptor type of schema. In this way, the schema only defines a container as holding values, and values have descriptors which describe what they are.

    This type of schema would have many descriptors but only a handful of types, as opposed to the current schema which has a million types and millions of constraints on how those types can be used.

    Essentially, it should be constructed so that the parsing code is the same whether reading an array of vertices or boning parameters – i.e. an array of vertices becomes a container of values each having a vertex descriptor, and the boning parameters are a container of values having boning descriptors. All of this data would be in a container having a model descriptor.

    The key is that the parser has one set of code that reads all containers, one set of code that reads all values – the descriptors provide all the information about what the value is and how to treat it. And all values and containers could have multiple descriptors. So instead of having a separate defined type for every permutation of a float array, you could have a 2 dimensional float array by nesting 2 containers with array descriptors containing values with float descriptors inside a container with an array descriptor.

    The descriptors provide the information required about how to process the particular node. It is more like scripting the target – i.e. the parser constructs an object corresponding to the container, then constructs objects for each of its children according to their descriptors, and adds them to the containing object.

    In this way, everything is broken down into containers and values only. Two complex types instead of thousands.

    Anyways, that’s my 5 cents worth. Bottom line is I would never want to have to write a parser for the current spec. It would be a horrifying experience that at best would produce a most brittle set of code.

    Use attributes to describe a node, not distinct types. In this way the spec can easily support things it wasn’t even designed to support, simply by adding a few descriptors.

    Think of it this way – instead of FloatType, IntType, StringType, you only have Value; and value has an attribute named name and an attribute named type. A VectorX of type float is coded the same way as a VectorX type int.
    All types for all uses are then parsed by a single set of code, and this single simple change immediately shrinks the schema by at least a third. It also lets the parser code become data-driven.

    COLLADA and other massively complex specifications are what is produced when you have a group of people creating a specification who do not also have to code against the specification. When you don’t know all the potential needs for a specification, it pays to define a simple, generic method of passing just about anything, as opposed to trying to define every single item down to its most minute components. The best data structures are those that can be traversed generically.
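
    A rough sketch of this container/value/descriptor idea in Python – the element names and the sample data are invented purely for illustration, not taken from COLLADA or any existing schema:

      # Sketch of a generic container/value/descriptor parser. One code path
      # reads every container, one reads every value; descriptors say how to
      # interpret each value. The sample XML is invented for illustration.
      import xml.etree.ElementTree as ET

      SAMPLE = """
      <container descriptor="model">
        <container descriptor="array vertices">
          <value descriptor="float">0.0</value>
          <value descriptor="float">1.0</value>
          <value descriptor="float">0.0</value>
        </container>
        <container descriptor="array bone-weights">
          <value descriptor="float">0.75</value>
          <value descriptor="float">0.25</value>
        </container>
      </container>
      """

      CONVERTERS = {"float": float, "int": int, "string": str}

      def parse(node):
          # Only two structural cases exist: containers and values.
          if node.tag == "container":
              return {"descriptors": node.get("descriptor", "").split(),
                      "children": [parse(child) for child in node]}
          if node.tag == "value":
              kind = node.get("descriptor", "string").split()[0]
              return CONVERTERS.get(kind, str)(node.text.strip())
          raise ValueError("unknown node type: %s" % node.tag)

      print(parse(ET.fromstring(SAMPLE)))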

  3. Maybe this is a little off topic, but XML sucks and this has an impact on collada. JSON is way easier! Dump it altogether. FBX is becoming more popular by the day.
    My 2 cents.

  4. COLLADA (and accordingly OpenCOLLADA) is much needed because it’s the only decent alternative (open and available for both free and commercial use) to STEP – a quite popular standard for CAD in the engineering industry. With 1.5, kinematics and dynamics were introduced, and hopefully we will see more of that in future versions (for example being able to include inverse-kinematics-relevant information in the official schema, or integration of Denavit-Hartenberg parameters for describing the kinematics of a body or a chain of bodies). So, YES, COLLADA is needed. It would be great if you are able to work with a format that includes such valuable information. There is a project, derived from Blender (http://wiki.blender.org/index.php/Community:Science/Robotics), that works in the area of robotics. Such applications of the COLLADA format make it awesome. After all, not all of us create games, and frankly STEP (and IGES for that matter) is getting older and older with each and every day. Considering the evolution in this field, formats older than 20-30 years will soon (if not already) start slowing progress down. I work heavily with OpenRAVE (Open Robotics Automation Virtual Environment), which simulates collisions, inverse kinematics, path planning etc. It uses an extended COLLADA 1.5 format for its models. And as COLLADA continues to evolve, more and more features change from extensions to standard features.

    The problem is that some people are too constipated in their way of thinking and want to stick to things that are worn out from long usage. If everyone were like this, we would still be using MS Paint for everything. LOL Flexibility is the thing that is needed. You can stick to the standard specification, but you can also do unlimited expansion of its features. Imagine if Blender were developed in such a way, not allowing any additional features to be included via Python? Out through the window and right into the waste container.

    Last but not least, I would like to say that COLLADA is a popular enough format. As such, it is a good idea for it to be supported by Blender. If you support some format unknown to anybody (except its developers), no matter how good it is, it will not be used and therefore will be completely underrated and abandoned. The way the open source world works is that it fights against closed source stuff, because closed source stuff makes a big p@@p on evolution. I do agree that 3DS etc. are great formats. But they automatically bind you to a really expensive product, because let’s face it – a proprietary format is best supported by the people who own it. In many cases the introduction of new features in such formats is bound to the will of the owner. If you stick ONLY to those formats, you are limiting yourself and your clients.

    As for Khronos being dead – totally not true. As of the middle of last year the CTS (COLLADA Conformance Test Suite) is

  5. pycollada is the way to go. Trying to do this in C++ was a big waste of 4 years’ time.

  6. hi guys,
    i need the codes/commands in Blender, can you show me the codes/commands?

  7. Now I wonder if things shouldn’t just switch to the Alembic system instead? Collada seems to be very painful, and the end result isn’t likely to get better any time soon.

  8. Hello, just a quick test with COLLADA.
    The parented object is not exported well into COLLADA.

    blend example here:
    http://www.mediafire.com/?iqvala9rn526oq0

    exported collada here:
    http://www.mediafire.com/?8cuwt86aqu75u9u

    Maybe the rotation matrix is wrong; without this I can’t export animations, so this way COLLADA is not usable for me :(

  9. I’m in the web business, and COLLADA is quite important for getting characters and animations exported from Blender to Flash, for example – I really hope Khronos reads this letter and helps out. I must say I was impressed with .FBX and Unity: fast, easy, and it just worked.

    An Alembic I/O in C++ is currently being developed for Blender.

    http://ramencomp.blogspot.com/2011/10/alembic-exporter-for-blender-work-in.html

  10. I did a little research on Alembic and it looks like it will be supported by major software (Autodesk Maya, Pixar’s RenderMan, Houdini, etc.), and its results are very impressive; check it out:
    http://www.youtube.com/watch?v=I__MeR8jsFk&feature=youtu.be

  11. Thanks for writing this up.

    I’m working with GarageGames, and COLLADA is our only interchange format in Torque 3D – a choice made because it was becoming too difficult to create and maintain exporters for every different 3D creation suite with limited development time. Some of our programmers have tried to support OpenCollada but had a similar experience where the response is slow or non-existent… and for a long time no new binaries were being officially made for newer versions of Max/Maya, forcing users to use Autodesk’s shoddy implementation. Something definitely needs to change to keep the format alive… So far the common solution seems to be to just jump ship to FBX and ignore the need for an open source interchange format.

  12. Thank you Ton!

    This is a very well written letter speaking to the pain points of the actual format. I’ve seen extensive attempts to integrate COLLADA in Blender over the past years, and it is clearly not from lack of enthusiasm and attention. I heartily agree: OpenCollada needs attention and support from all the official COLLADA organizations, including contributions from closed source companies such as Autodesk.

  13. Hear! Hear! One would think the Khronos Group had learned something from the OpenGL disaster, practically begging DirectX to take over the world… But it seems the only thing they learned was how to repeat their mistakes in minute detail.

  14. As a ten-year professional, I can say you are wasting your time if you try to export armatures and deformations. A vertex point cache is the way to go. It’s the only reliable interchange format. PC2 is a bit complicated for the moment: you have to use an outdated script, and you have to do it object by object. But I’m really waiting for Alembic to integrate Blender into other pipelines.

  15. I’m the author of 3dviewr and dae4j (a Java lib for COLLADA import). I’ve been trying to help out a bit with the recent Blender COLLADA work, but I must admit so far OpenCollada has not really been a pleasant experience. I was also extremely shocked when I noticed the size of the binary libs for OpenCollada (~92MB for release on win64). For my own importer I just used a standard SAX parser, so I’m a bit puzzled about what takes up so much space; luckily it doesn’t impact the final Blender executable that much.

    Another issue is validation. In theory it makes sense to use a validating parser, but in practice many programs’ COLLADA support is far from perfect, so you end up not being able to load stuff from other software :(

    As for the spec itself, it is in many ways quite complex, making it difficult for devs to get things right.

    Anyway just my 2 cents :)

  16. The part in the document starting with “My own 2 cents on COLLADA” is in my opinion a huge issue. COLLADA can be extended by all software products. These extensions are based on the internally used technologies and architecture of the software. The technology and architecture are not governed, and therefore additional support is needed in order to really exchange data across software.

    In some areas the format itself supports different architectures, and every application needs to implement conversions from a different (sometimes hard to realize) architecture to its own internal architecture. For this too, a separate component can be very useful.

    A logical proposition is to create a separate component disconnected from end-user software or to add these requirements to the Collada specification.

  17. Thank you, Ton, for talking to Khronos!!

    Thank you, gurus, for “badgering”!

  18. Err, there should be some angle-bracketed “extra” words in there – looks like this blog software doesn’t convert HTML symbols :-)

  19. I want to use COLLADA as the intermediate format in my game engine, but Blender’s support is rather spotty. I’ve been badgering people about it for a while now, though as you said progress has been slow (but I don’t know what goes on behind the scenes, so I can’t complain too much).

    We have recently gained the import/export of all Blender-specific light properties to an <extra> node, which is great. But I still need to see materials handled the same way, and custom properties exported to <extra> nodes too.

    Still, again as you said, Blender’s not the only DCC tool with poor Collada support – even 3ds Max didn’t export valid Collada documents when I tried it this time last year!

  20. LOGAN: for your use case it’s quite simple to make a small script that triangulates things first before exporting. I understand you would like to see this, but it’s in the regular maintenance & todo category.

  21. @logan as a workaround you could put a Decimate modifier on your mesh and set it to 100%. This will have the side effect of procedurally triangulating your mesh before it is exported (it won’t affect editing); see the sketch below.
    What you are saying is probably still a good idea, but this will make your life easier in the meantime.
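
    A rough bpy sketch of that workaround (untested; assumes the 2.5x/2.6x Python API, and the modifier name is just a label):

      # Add a Decimate modifier at 100% to each selected mesh, so the exported
      # geometry comes out triangulated without touching the editable mesh data.
      # The export step itself is omitted here.
      import bpy

      for obj in bpy.context.selected_objects:
          if obj.type != 'MESH':
              continue
          mod = obj.modifiers.new(name="TriangulateForExport", type='DECIMATE')
          mod.ratio = 1.0  # 100%: no reduction, triangulation only as a side effect

      # Then export via File > Export > COLLADA as usual (with modifiers applied).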

    For me the big drawcard of COLLADA is the ability to embed custom metadata, but given what I will be doing over the next few months, exporting characters to realtime rigs is probably the most important thing I am going to need to be able to do with OpenCollada.

  22. The only time I use COLLADA is when I want to export a model to Google Earth. At the moment I still have to rely on Blender 2.49 and the Python plugin for export, as Google Earth needs the model data to be triangulated. It is my understanding that OpenCollada does support this feature, but Blender has not implemented a triangulate checkbox.

    It’s a bit strange that there seem to be problems with OpenCollada in Blender while Blender doesn’t even support all the available options that OpenCollada offers. Who knows, maybe some of the problems originate from this.

    Google stated it will not change the COLLADA import on their side. A special Google Earth format export is not needed for Blender, just the triangulate option in my case. For now I am forced to use Blender 2.49 with the python exporter.

  23. Thank you for taking the time to write about this. I’ve posted a link to this on the jMonkeyEngine forum, where questions do occasionally pop up about both our decision to drop official COLLADA support in v3 and why we currently support one format officially (OGRE).

    I’m a COLLADA user, and make heavy use of some of its features that are seemingly missing from other formats (but who knows, because the documentation for others either isn’t available or isn’t as complete). I’m in 100% agreement with you here about the flexibility being a double-edged sword. It’s a shame that the proprietary formats are able to steal the spotlight so often, but unfortunately it’s the old question of finding the tipping point for companies to support open formats more readily.

  24. I think it would be more useful to focus on the open source Alembic point cache instead.

    Having Alembic support would allow Blender to seamlessly interoperate in pipeline environments, moving entire scenes between packages (including baked simulations).

    While it is true that you cannot move rigs through Alembic, and this may be a problem for real-time assets, rigs tend to be complicated and package-specific anyway. So usually, even if you move rigged items with COLLADA, all you are getting is basic bones and skinning, not the entire rig.

    I’ve seen very few truly pain-free examples of COLLADA interop working between packages. Rigs end up needing to be recreated, and there are often issues with quad models getting triangulated, etc…

    Alembic removes all this because all that is moved is the animated geometry cache. Also it is much more efficient in file size compared to other formats.

    Mike

    • I’m with Mike here. Alembic would move Blender further toward integration with production pipelines (Maya, Houdini and Nuke, to name a few, support Alembic already).
      But COLLADA does have its place and applications, so I applaud the effort to bring attention to it; hopefully something will come out of this.

  25. I think one of the most useful things they could do is make the conformance suite cross-platform. I’m the maintainer of pycollada and would like to try running it, but it’s Windows only. I’m very rarely in Windows.

    Agreed that OpenCollada really needs work. It’s one of the main reasons I abandoned using it for my own work.

    I also think something really useful would be a validator that actually validates what’s contained in the file. You can generate schema-valid files that are not actually valid according to the spec, which leads to a lot of problems for interoperability.
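
    As a rough illustration of that gap (hypothetical code; the file names are placeholders), schema validation with lxml can pass while internal “#id” references still point nowhere:

      # Schema-valid is not the same as spec-valid: a document can pass XSD
      # validation while its "#id" references resolve to nothing. The file
      # names below are placeholders.
      from lxml import etree

      def validate(dae_path, xsd_path):
          doc = etree.parse(dae_path)
          root = doc.getroot()

          # Step 1: schema validation (where most validators stop).
          schema = etree.XMLSchema(etree.parse(xsd_path))
          schema_ok = schema.validate(doc)

          # Step 2: a semantic check the schema cannot express - every local
          # fragment reference ("#some-id") must resolve to an element id.
          ids = {el.get("id") for el in root.iter() if el.get("id")}
          dangling = []
          for el in root.iter():
              for attr in ("source", "url", "target"):
                  ref = el.get(attr)
                  if ref and ref.startswith("#") and ref[1:] not in ids:
                      dangling.append((el.tag, attr, ref))
          return schema_ok, dangling

      ok, dangling = validate("model.dae", "collada_schema_1_4.xsd")
      print("schema valid:", ok)
      print("dangling references:", dangling)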
