Slideshow

Organization of the Module

Module slideshow provides the engine for the Impress slideshow mode. The module was written from scratch for OO.o 2.0 to replace the outdated engine formerly hardwired into module sd. It provides a single shared library that implements the com.sun.star.presentation.SlideShow UNO service.
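
The service is obtained through the usual UNO service-manager mechanism. The following is a minimal sketch of how a client such as sd might create the engine; the actual code in sd differs in detail, and error handling is omitted here.

  // Minimal sketch: obtaining the slideshow engine via the UNO service manager.
  // Assumes the OO.o SDK headers; not the literal code used in sd.
  #include <com/sun/star/lang/XMultiServiceFactory.hpp>
  #include <com/sun/star/presentation/XSlideShow.hpp>
  #include <com/sun/star/uno/Reference.hxx>
  #include <rtl/ustring.hxx>

  using namespace ::com::sun::star;

  uno::Reference< presentation::XSlideShow > createSlideShow(
      const uno::Reference< lang::XMultiServiceFactory >& xFactory )
  {
      // The shared library built by this module registers its implementation
      // under the com.sun.star.presentation.SlideShow service name.
      return uno::Reference< presentation::XSlideShow >(
          xFactory->createInstance(
              ::rtl::OUString::createFromAscii(
                  "com.sun.star.presentation.SlideShow" ) ),
          uno::UNO_QUERY );
  }

The caller (sd) then drives the returned object, for example by calling its update() method once per frame, as described in the Architecture section below.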

  • slideshow/
    • inc/
      • only needed for pch stuff in this module. Slideshow does not export any header files
    • prj/
      • usual build system boilerplate
    • qa/
      • debug/
        • debugging and logfile analysis tools
      • tools/
        • other helpful tools (test document generation scripts)
    • source/
      • api/
        • com/
          • the private UNO API header files, as used by slideshow and sd
        • shared_ptr debugging hooks, only used when compiled with 'debug=t'
      • engine/
        • activities/
          • SMIL animation activities, as described here
        • animationnodes/
          • SMIL animation tree nodes, as described here
        • shapes/
          • implementations for the various shapes on an Impress page, plus import functionality
        • slide/
          • managing functionality for whole Impress slides (z order, global animation control and event processing)
        • common slideshow functionality, like API bindings, global control, and event handling
      • inc/
    • test/
      • complete test stub, to run a slideshow standalone (quite barebones, though)
    • util/
      • usual build system boilerplate
    • manifest.txt
      • coding and design manifest for the slideshow module. Changes should adhere to it, or modify the manifest and all occurrences in the module, such that consistency is maintained.

Architecture

This section is a start at describing the architecture of the slide show. It is far from complete and may contain errors.

Queues

The controller functionality is divided into many small parts, often in the form of an action (called Event) whose execution is triggered by events like the end of an animation, a mouse click, or a timer event.

Many of the actions are stored in one of three queues.

EventQueue
The EventQueue is used for two purposes:
  • Scheduling of actions that are executed at a specified time, which typically lies in the future. The start of animations (activities) is one example.
  • Asynchronous execution of actions from within a fixed thread and environment. These actions are typically scheduled to be executed as soon as possible (zero delay).
The processing of all due actions (those whose trigger times lie at or before the time at which processing is started) is started by SlideShowImpl::update(), which is typically triggered from the sd project.
Actions (remember that they are called Event in the source code) that are inserted into the queue as a result of processing another action are not necessarily executed in the same run: zero-delay actions are specified relative to the time of their creation, but the event queue processing uses the start of the processing as reference time (the two have different ideas of "now"). For actions whose trigger time lies in the future this is fine. For actions with zero delay, however, it introduces a latency of roughly 20 ms to 50 ms and a screen repaint. The delay may or may not be noticeable to the user; the screen repaint typically is.
Another sort of problem is introduced by the attempt to have some actions executed in a specific order (advancing to the next effect should take place after aborting and skipping the current effect). For this there exists a second queue inside EventQueue. All actions in this second queue are inserted into the regular queue on each call to EventQueue::process().
ActivitiesQueue
Set of the currently active activities (parts of effects). On every call to update, all currently active activities are executed; finished activities are removed from the set afterward. Activities are inserted into this set by actions in the EventQueue (a sketch of this interplay follows the queue descriptions below).


UserEventQueue
A collection of event handlers. Each handler maintains a (sorted) set of actions that are triggered on certain events. Some events, like a mouse double click, are triggered by the user (hence, probably, the name UserEventQueue); others, like the end of an animation, are not. The execution of the handlers' actions is triggered by the EventMultiplexer.


All three queues could have better names: the EventQueue does not store events, the UserEventQueue is not one but many queues, and the ActivitiesQueue is not a queue at all.
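
To make the interplay of the EventQueue and the ActivitiesQueue more concrete, here is a small, self-contained sketch. The names ScheduledAction, Activity, and Queues are invented for illustration; they are not the actual slideshow classes, and the real implementation differs in many details.

  // Illustrative sketch only, not the real slideshow API.
  #include <algorithm>
  #include <chrono>
  #include <functional>
  #include <memory>
  #include <vector>

  using Clock = std::chrono::steady_clock;

  struct ScheduledAction                 // an "Event" in the slideshow sources
  {
      Clock::time_point triggerTime;     // when the action becomes due
      std::function<void()> action;
  };

  struct Activity                        // one running part of an effect
  {
      virtual ~Activity() = default;
      virtual bool perform() = 0;        // advance one frame; false when finished
  };

  struct Queues
  {
      std::vector<ScheduledAction>           events;      // ~ EventQueue
      std::vector<std::shared_ptr<Activity>> activities;  // ~ ActivitiesQueue

      void schedule(std::function<void()> action, Clock::duration delay)
      {
          events.push_back({ Clock::now() + delay, std::move(action) });
      }

      // Called once per rendered frame, e.g. from SlideShowImpl::update().
      void update()
      {
          const Clock::time_point now = Clock::now();    // reference "now"

          // 1. Fire every action that is due.  An action scheduled with zero
          //    delay *during* this loop compares against its own creation
          //    time, which lies after 'now', so it is only picked up by the
          //    next update() call - the latency described above.
          auto firstDue = std::partition(events.begin(), events.end(),
              [now](const ScheduledAction& e) { return e.triggerTime > now; });
          std::vector<ScheduledAction> due(firstDue, events.end());
          events.erase(firstDue, events.end());
          for (ScheduledAction& e : due)
              e.action();              // may call schedule() or add activities

          // 2. Let every active activity render its next step; keep only the
          //    ones that are not yet finished.
          std::vector<std::shared_ptr<Activity>> stillActive;
          for (auto& a : activities)
              if (a->perform())
                  stillActive.push_back(a);
          activities.swap(stillActive);
      }
  };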


Event Multiplexer

The EventMultiplexer class is a container of sets of handlers of various kinds of events. Some (most? all?) of the handlers are provided by the UserEventQueue. Each handler is basically a set (or queue? a queue defines an order on its actions, and actions are removed after activation; a set is unordered, but actions are only removed when explicitly required) of actions. When a handler is triggered, its actions are executed.
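
As an illustration of the handler pattern just described, here is a small sketch; the class and method names (Multiplexer, addHandler, notify) are invented for this example and do not match the real EventMultiplexer API.

  // Invented sketch of a handler container: one list of actions per event kind.
  #include <functional>
  #include <map>
  #include <string>
  #include <vector>

  class Multiplexer
  {
  public:
      using Action = std::function<void()>;

      // e.g. addHandler("doubleClick", ...) or addHandler("animationEnd", ...)
      void addHandler(const std::string& eventKind, Action action)
      {
          m_handlers[eventKind].push_back(std::move(action));
      }

      // Called when the corresponding event occurs (user input, animation end, ...).
      void notify(const std::string& eventKind)
      {
          for (const Action& action : m_handlers[eventKind])
              action();
          // Whether actions are removed after firing is exactly the set-vs-queue
          // question raised above; this sketch keeps them registered.
      }

  private:
      std::map<std::string, std::vector<Action>> m_handlers;
  };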


Future improvements

Please note that some of the deficits described below are already being worked on or may already have been resolved.

Problems of the event-driven approach
The implementation of the slide show follows an event-driven approach. The controller functionality is split into small parts with respect to both space (many small actions, i.e. Event objects) and time (cascading creation and asynchronous execution of the actions). This makes it hard to understand and hard to debug what is going on and what is going wrong.
Extension or modification of the controller code is also made more complicated. Pre- and post-conditions are obscured. Dependencies are hard to find and hard to modify.
Example: when an animation is aborted, the next effect is automatically triggered. This is fine when the effect abortion is triggered by the user pressing a key or clicking the mouse. When the effect is programmatically aborted, for example in preparation for rewinding it, showing the next effect may be unwanted. Changing the code so that these two cases can be handled properly is no trivial task: aborting the effect, triggering the next effect, and showing the next effect are all executed asynchronously (where special care is taken to ensure that the three actions are executed in the right order). So the decision of whether to trigger the next effect cannot be made when the action for aborting the current animation is triggered; it has to be made earlier, when the action is created.
Solution: consolidate the many actions into one central controller. However, this would require a complete restructuring of the architecture and is thus not likely to happen.
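
The following self-contained sketch illustrates the problem with the example above; all names (makeAbortAction, abortCurrentEffect, showNextEffect) are invented for illustration. Because the action runs asynchronously, the decision whether to advance has to be captured when the action is built, not when it runs.

  // Invented sketch: the "advance to next effect" decision is baked into the
  // action at creation time.
  #include <functional>
  #include <iostream>
  #include <queue>

  using Action = std::function<void()>;

  // Stand-ins for the real slideshow operations.
  void abortCurrentEffect() { std::cout << "abort current effect\n"; }
  void showNextEffect()     { std::cout << "show next effect\n"; }

  Action makeAbortAction(std::queue<Action>& eventQueue, bool bAdvanceToNext)
  {
      // The decision is captured *here*; when the action is later executed
      // asynchronously it can no longer tell a user-triggered skip apart from
      // a programmatic rewind.
      return [&eventQueue, bAdvanceToNext]()
      {
          abortCurrentEffect();
          if (bAdvanceToNext)
              eventQueue.push([] { showNextEffect(); });  // runs in a later pass
      };
  }

  int main()
  {
      std::queue<Action> eventQueue;
      eventQueue.push(makeAbortAction(eventQueue, /*bAdvanceToNext=*/true));
      while (!eventQueue.empty())          // crude stand-in for update() passes
      {
          Action a = std::move(eventQueue.front());
          eventQueue.pop();
          a();
      }
  }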


Cascading asynchronous events leads to latency at start of activities
The asynchronous execution of actions introduces unnecessary delays: the execution of one action may lead to the creation and scheduling of another. While in many cases the new action could be executed right after the already scheduled actions, in reality it is executed only when the next animation frame is prepared, some 20 ms and one screen update later.
Possible solution: differentiating between timer-triggered actions and actions that have no delay may improve this situation.
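
One way such a differentiation could look is sketched below; ImmediateQueue and its methods are invented names, not part of the current code.

  // Invented sketch: zero-delay actions get their own queue that is drained
  // completely before timer-based events and rendering, so an action posted
  // "as soon as possible" does not wait for the next frame.
  #include <deque>
  #include <functional>

  struct ImmediateQueue
  {
      std::deque<std::function<void()>> actions;   // zero-delay actions only

      void post(std::function<void()> a) { actions.push_back(std::move(a)); }

      // Called once per update(), before the timed EventQueue is processed.
      void drain()
      {
          while (!actions.empty())                 // actions may post new ones
          {
              auto a = std::move(actions.front());
              actions.pop_front();
              a();                                 // executed in the same frame
          }
      }
  };
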
Cascading asynchronous events leads to latency at end of activities
The use of activities in their current form leads to latency and requires too many screen updates. Activities are put into the ActivitiesQueue by events in the EventQueue, and these can have the same latency as every other event. Also, the activation process can involve a cascade of more than one event, which are not necessarily executed in the same frame. When an activity reaches its final animation step it ends: it is removed from the ActivitiesQueue and the associated shape (group) is marked as not being animated anymore. As a result the activity itself does not paint its final animation step; instead the shape is painted in its new state directly to the canvas, not via the sprite that is used while the shape is being animated. This switch in how the shape is painted requires two different processing steps of the activity and two separate repaints of the canvas and its display on the screen. In many cases one or the other repaint/update is optimized away. But for some animations, like per-character animation of text, this almost doubles the computing time per frame.
Possible solution for this and the previous problem (latency at start and end of an activity): treat an activity more like a time-line. Put it into the ActivitiesQueue before it becomes active and remove it after it becomes inactive, and simply let the queue ignore inactive activities. In experiments it was already sufficient to leave an activity in the ActivitiesQueue one frame longer to remove one canvas update without changing the visual appearance. In this last frame the activity was able to paint its final animation step, and the transition from sprite to regular painting of the animated shape could be done without any visual indication (well, text in the DirectX canvas changes from non-anti-aliased to anti-aliased).
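
A sketch of this time-line idea, with invented names (TimelineActivity, TimelineQueue), might look like this:

  // Invented sketch: an activity stays in the queue for its whole lifetime;
  // the queue skips it while it is inactive and removes it only after it has
  // painted its final animation step.
  #include <algorithm>
  #include <memory>
  #include <vector>

  struct TimelineActivity
  {
      double start = 0.0;                     // activation time, in seconds
      double end   = 0.0;                     // deactivation time
      virtual ~TimelineActivity() = default;
      virtual void paintStep(double t) = 0;   // paint the state for time t
  };

  struct TimelineQueue
  {
      std::vector<std::shared_ptr<TimelineActivity>> activities;

      void process(double now)
      {
          for (auto& a : activities)
          {
              if (now < a->start)
                  continue;                   // not yet active: ignore
              // Clamp to 'end' so the final step is painted in the same frame
              // in which the sprite is torn down.
              a->paintStep(std::min(now, a->end));
          }
          // Remove activities only after they had a chance to paint their
          // final step.
          activities.erase(
              std::remove_if(activities.begin(), activities.end(),
                  [now](const std::shared_ptr<TimelineActivity>& a)
                  { return now > a->end; }),
              activities.end());
      }
  };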


Time between frames is too long and varies too much.
The time between the display of two frames varies considerably, and even on fast machines the number of frames per second cannot exceed 15. That is because timing is split between the slideshow module and the calling sd module. The slideshow, after having rendered one frame, calculates the time until the next frame should be displayed. The sd module then waits this long before it tells the slideshow to render the next frame. There are several problems with this approach (a sketch of a possible improvement follows the list):
  • The time calculated by the slideshow is the time between the display of two frames. Preparing the next frame can take a large part of this time or can even take longer. Sd, however, idly waits the full time before it gives the slideshow the opportunity to render the next frame.
  • Sd uses a method to wait that has a large error margin. This typically leads to longer waits than requested.
  • Sd enforces a rather low maximum number of frames per second (20).
  • Slideshow introduces latency that can lead to triggering events one or more frames after they are actually due.
  • There are two canvas updates for every frame. For animations with many short animation parts this can almost double the time it takes to render a frame.
  • Slideshow enforces the display of 10 animation steps for every single animation part, even if the duration of that animation part is shorter than the time between two frames. All other animation parts that run in parallel are slowed down accordingly.
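
The first point could be addressed by measuring the rendering time and subtracting it from the wait, roughly as in the following sketch (invented names, heavily simplified):

  // Invented sketch: do not idly wait the full inter-frame delay on top of the
  // time already spent rendering the frame.
  #include <chrono>
  #include <thread>

  using Clock = std::chrono::steady_clock;

  // Stand-in for rendering one frame; returns the desired time until the next
  // frame, similar to what the slideshow reports back to sd.
  static std::chrono::milliseconds renderFrame()
  {
      std::this_thread::sleep_for(std::chrono::milliseconds(15));  // fake work
      return std::chrono::milliseconds(50);                        // ~20 fps
  }

  int main()
  {
      for (int frame = 0; frame < 10; ++frame)
      {
          const Clock::time_point frameStart = Clock::now();
          const std::chrono::milliseconds wantedDelta = renderFrame();
          const auto spent = std::chrono::duration_cast<std::chrono::milliseconds>(
              Clock::now() - frameStart);
          if (wantedDelta > spent)
              std::this_thread::sleep_for(wantedDelta - spent);
          // else: rendering already used up the whole frame budget.
      }
  }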