there's a new game in kdereview right now called kdiamond, written by Stefan Majewsky. it's a fun little game that is much like the classic bejeweled. with any luck it'll move from kdereview into kdegames for 4.1.
while playing it, i noticed that sometimes the animations weren't overly fluid. this was something that others had noticed as well. i did a quick valgrind and saw that the actual drawing and game mechanics code wasn't taking any significant time, but that a lot of time was being spent driving the event loop. without even looking at the code i could guess at the problem: each diamond on the canvas was animating itself. indeed, that's what was going on.
i wrote an email to Stefan suggesting a fix (which he implemented very quickly; sweet! =) and Zack suggested that i should write a blog entry about this general issue since it may be of use to others as well. this is that blog entry. there's nothing new or even overly interesting to graphics developers, but for the rest of us ... it might be helpful.
so... why is it so nasty to have individual items animate themselves? the obvious approach to animate a bunch of items is to give each item a timer (e.g. QTimeLine) and move the animation along in time to the progress of the timeline. in the case of moving an item, this is a simple matter of taking the start and end points, the current progress of the timeline (e.g. from 0.0->1.0), figuring out how far along the item should be on a given path between the two points based on the progress value and setting the location. the math is trivial and everything gets nicely encapsulated in the animated item's class.
here's the problem, though: let's say that the item is being animated at a nice smooth 25 frames per second. that gives us a 40ms delay between frames. (that's an oversimplification: the delay will vary depending on actual system activity and other processing in the app itself, but let's go with this generalization for now.)
if we have two items animated then we have that same 40ms window, but now it is divided into two pieces averaging 20ms in length. (the real intervals may be 10ms and 30ms, or 6ms and 34ms, or whatever; again, this will vary from interval to interval ... but the average in the oversimplified scenario is useful information here.)
obviously, as we add more and more items, if the distribution is somewhat random (and in practice it will be) then our time slice between animations gets smaller and smaller and as we approach 40 items we end up with an animation happening every millisecond (again, going with the oversimplified generalizations). over 40 items and obviously we're into the sub-millisecond range. the problem, however, is not in animating the movement of those 40 items (that's probably very fast) ... it's the "in between" part that causes us grief.
with every item having its own timer, each animation step of each of those items implies going back to the event loop, checking the next timer and, if it's scheduled to trigger, emitting its signal (or put another way, calling the connected methods) and then returning to the event loop. suddenly there's a lot more time variance and the animations will start to appear sluggish and completely uncoordinated as the individual frame timings drift about. but that's not the worst of it.
what's really bad is that while in the event loop other things will happen. really expensive things, like repaints. a canvas such as QGraphicsView will eventually decide to update its contents on one of those trips to the event loop and trigger a bunch of repaints. if this happens when N out of M total animations have stepped through their frames, then not only will you get a paint with some of the animations in step and some not, but a (relatively) huge delay will also be introduced as all the data structure traversal, math and then resulting painting necessary to update the canvas happens. while fast in the general case, this can end up being detrimental to the fluidity of the animations if triggered too often and without coordination with the animations.
besides canvas paints, user interaction events and other input data processing will end up getting in between the individual animations. it just all gets very messy and animation frame latency starts to suck.
(firing all those timers randomly without aligning them is also rather bad for power consumption as it wakes up the cpu more often, but for most apps doing animations that's often not really a priority issue.)
the solution, thankfully, is really quite simple: share an animation tick. this is nothing new, really, and has been done in graphics programming since the days of yore, when the grass was green and amigas were still impressing people with that stick figure guy juggling three ray traced balls. ;) but it's still something that i noticed gets missed, especially as more people are adding them to their apps, often for the first time.
how it works is really simple: most animations are simply updating an internal state and then using that state to affect some sort of visual change, such as the start/end point interpolation of the above movement example. so you start a single timer that triggers a step forward in the animation ("the next frame" or "a tick") and on each of those ticks every active animation is iterated over and their state is incremented (whatever that means for the given animation).
in this way the event loop is exited and entered only once for all the animations. upon re-entering the event loop the scene or canvas is free to update itself with all the animations having been updated and ready to go in a coordinated fashion. so for each 40ms slice there is one animation tick, the time required to update the animations elapses (that should be very fast, even for a good number of items) and then the repaints needed can happen.
the end result should be a lot smoother and just "feel" better since things will be moving together at the same semi-random interval rather than each at their own semi-random interval.