After working with Blender for a few years now, I have finally written down some of the issues we have stumbled upon at Hypnosis. I actually heard a few of these problems mentioned in the Blender Conference 2014 videos, which suggests this is a reasonably objective opinion, and good feedback should always be useful to developers. Also, if you’re an artist, maybe this will save you time by letting you know which problems to expect and avoid. So, without further ado:
- Fluids in Blender cannot be emitted along face normals; emission is controlled by parameters set in the fluid panel, which are basically static. Collisions are not smooth either, so the simulation is not really usable with transparent/glass objects: the surface is choppy/pixelated, and raising the resolution does not help much. A better and faster way to make fluid is to use fluid particles; they are easy to work and experiment with, you see the result quickly, and so on, but there is no mesher available, so these particle fluids are basically useless unless you are making some abstract motion-graphics video where it’s fine to render the particles as spheres.
- Hair collision with meshes, and overall stability. As far as I can see, this is being addressed and there are good developments in the Gooseberry project, but it’s not ready yet, so it stays on the list.
- Projection mapping requires a very high polygon count, and that just does not make sense. If you want to project an image onto a simple form, you must subdivide it many times to get a good result, and sometimes even then there is a slight offset from the original projection, which is really not usable. Subdividing makes the computer lag and makes later adjustments harder, even if a subdivision modifier is used. Projection should work perfectly on a plane of just 4 vertices, at any angle.
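The likely cause, as far as I can tell (this is my own reading, not something confirmed by the developers), is that the projected UVs are computed per vertex and then interpolated linearly across each face, which only matches a true perspective projection when the faces are small. A minimal numeric sketch with made-up geometry, nothing to do with Blender’s actual code:

```python
# Illustrative only: why low-poly projection mapping drifts.
# UVs are projected per vertex, then interpolated linearly across the
# face -- which diverges from true perspective projection on big faces.

def project_uv(point):
    """Perspective projection from the origin along -Z: u = x / -z."""
    x, y, z = point
    return (x / -z, y / -z)

# Two corners of a single face, tilted away from the projector.
a = (-1.0, 0.0, -1.0)
b = (1.0, 0.0, -3.0)
mid = tuple((pa + pb) / 2 for pa, pb in zip(a, b))  # (0.0, 0.0, -2.0)

true_u = project_uv(mid)[0]                           # correct projection
interp_u = (project_uv(a)[0] + project_uv(b)[0]) / 2  # what interpolation gives
offset = abs(true_u - interp_u)                       # the visible drift
```

This is why subdividing reduces the offset: smaller faces keep the linear interpolation closer to the true projection, but it never fully disappears.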
- Using a mesh as a force field for particles or rigid bodies does not work as it should: objects keep moving and never settle, which is really weird. I tried to make an effect where small rigid bodies are pulled together to form a bigger shape, like many berries forming the number “5”, but it just does not work as expected. I am really sad about this, because when Bullet was integrated, this was the first thing I thought I would finally be able to do. Cinema 4D has very good examples of this effect.
- This has been discussed many times already, but we think object parameters should still be available after the object is deselected. Maybe it’s acceptable not to have them for a cube or sphere (although it would be nice), but they are really needed for add-on generated objects like the Sapling tree add-on. Or other add-ons: pipes, windows, bolts, you name it.
- The stamp brush in texture paint mode takes its sample only from the direction you are viewing the mesh, so if you set a sample, rotate the camera, and try to stamp, it projects that texture in perspective, essentially copying the screen rather than the texture.
This is really annoying and makes it harder for an artist to get exactly the result he wants. If there is a way to do this differently, please correct me, but neither I nor my colleagues could find one.
- Cloth simulation pinned vertices can be edited with a vertex weight modifier, but that does not work properly and most of the time produces unexpected results (for example, when you want a cloth to hold on to something and then release it into the wind). Tearing cloth is also not possible, so we have faked it manually more times than I can count. Of course, that would be a new feature to add.
- A smaller thing: the settings for rendering with CPU or GPU could be saved as a preset per device; otherwise, when you change the device, you have to change everything again, like tile size, sample count, and so on.
- Using render nodes should disable the parameter settings and output in the render panel. I have never understood why you have to set these things in two separate places. It would be much clearer if enabling render nodes meant all output came ONLY from the nodes and not from the settings panel. There could be a sign in the output panel – “Render nodes enabled” – with a button to open them.
- The output settings could include which camera to render, with a “+” to add more cameras with individual frame settings; otherwise, if you need the same animation rendered from different directions, you cannot make a batch render list. You must render one camera, then switch to another and render that, then the next, and so on. This can be done by making linked copies of the scene, but that is counter-intuitive, and all the render settings must then be set twice and checked again and again whenever something changes, which leads to mistakes.
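To sketch what I mean (this is illustrative Python only, not Blender’s API; every name here is made up), the feature would essentially be a render queue where each camera carries its own frame range, expanded into one flat batch:

```python
# Hypothetical sketch of the requested feature: one batch render list
# covering several cameras, each with its own frame range.

def expand_queue(jobs):
    """Expand (camera, start, end) entries into per-frame render tasks."""
    tasks = []
    for camera, start, end in jobs:
        for frame in range(start, end + 1):  # frame ranges are inclusive
            tasks.append((camera, frame))
    return tasks

queue = [
    ("CamFront", 1, 3),   # frames 1-3 from the front camera
    ("CamSide", 10, 11),  # frames 10-11 from the side camera
]
tasks = expand_queue(queue)  # one list, renderable in a single batch
```

One queue like this would replace the current dance of rendering, switching cameras by hand, and rendering again.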
- The viewport could be improved a lot. For Cycles there are really strange viewing modes: textured mode does not show all the textures as it should, and material mode more or less does, but then it does not show any lighting.
In Blender Internal there are also too many parameters, scattered across different places; for example, GLSL is in the “N” panel, although it could perhaps be a mode of its own. It is also really annoying that generated textures are not visible in any viewing mode at all. Of course, Cycles real-time preview rendering has made this a smaller problem, but I still think generated textures should be visualized just like image textures. I suppose the viewport project will address these issues.
- Setting different materials for an object on different render layers would be awesome, although I am not sure how this could be implemented in the current render system. This, too, can be done by copying the scene, a real copy this time, because linked objects cannot have different materials. But again, if something changes in the project, the copied scene must be redone.
- A shadow catcher shader in Cycles is a “must have”.
- Generated noise and cloud textures should also have an “evolution” parameter, not only a seed. I know it is possible to map the texture to an empty and animate the empty’s position, but that is not the same effect as “evolution” would be.
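To show the difference (a minimal pure-Python sketch, nothing to do with Blender’s actual texture code): “evolution” means sliding smoothly through an extra dimension of the same noise field, so consecutive frames stay correlated, while changing the seed regenerates the whole pattern at once:

```python
import math
import random

def value_noise(x, t, seed=0):
    """Smoothly interpolated value noise over an (x, time) lattice."""
    def lattice(ix, it):
        # deterministic pseudo-random value per lattice point and seed
        return random.Random(seed * 1_000_003 + ix * 1009 + it).random()
    ix, it = math.floor(x), math.floor(t)
    fx, ft = x - ix, t - it
    # smoothstep weights for the interpolation
    sx, st = fx * fx * (3 - 2 * fx), ft * ft * (3 - 2 * ft)
    u0 = lattice(ix, it) + sx * (lattice(ix + 1, it) - lattice(ix, it))
    u1 = lattice(ix, it + 1) + sx * (lattice(ix + 1, it + 1) - lattice(ix, it + 1))
    return u0 + st * (u1 - u0)

# "Evolution": a small step along t barely changes the sampled value...
a = value_noise(2.5, 0.50)
b = value_noise(2.5, 0.51)
# ...while a new seed gives an unrelated value at the very same point.
c = value_noise(2.5, 0.50, seed=1)
```

Animating an empty only pans across the same fixed pattern, which reads as the texture drifting; evolving through an extra dimension makes the pattern itself change in place, which is the effect we actually want.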
- Particles, when set to render as objects, should be able to collide using Bullet physics.
- Freestyle render lines are good for static images, but for animations they are still too unstable: they flicker and jump from one place to another when the angle changes. I almost lost a client because of this; only after a lot of work and time was I able to eliminate those errors in After Effects and by hand-drawing.
There are probably other things, but these are the major ones that have made our work a nightmare many times. Of course, there are good things in Blender too, and I still enjoy it.
Please comment if you have any questions or suggestions.