Evolving 3D desktop effects in Plasma

The latest Plasma release dropped a few desktop effects: the cube family, CoverSwitch and FlipSwitch. All of those effects were written back in 2008, in the early days of KDE 4.x and the early days of desktop effects in KWin. I implemented these effects myself, and when Vlad asked about removing them I saw the need and supported the step for technical reasons. With this blog post I want to share a bit of why the removal was necessary and why it means the effects can come back in better shape than ever before.

State of CPU in 2008

To really understand this we need to travel back to 2008 and the years before, when desktop effects were introduced. It helps to see how the hardware architecture changed and how that influenced design decisions in the effects API which are problematic today. First of all, CPUs: the Intel Core 2 Duo architecture launched in 2006 as the brand new thing with multiple (namely two) cores, slowly replacing the NetBurst architecture which had dominated desktop computing at the beginning of that decade.

For developers these multi-core systems were also a new thing, which meant lots of software written before was single threaded. KWin in particular was single threaded at that time, as it depended on libraries which were not really thread safe, such as Xlib and back then even OpenGL. Even years later, when Qt 5 introduced threaded rendering, it was disabled on many Mesa drivers due to thread safety issues. Nobody would have expected any benefit from a threaded compositing approach back in 2007 given the state of hardware and the available libraries. Threading libraries such as QtConcurrent or ThreadWeaver were of course already available, but they were of no real use in KWin. This means the API written back then did not support ideas like doing the rendering on a second thread, or rendering each screen in its own thread.

State of OpenGL in 2008

KWin’s compositing pipeline was written for OpenGL 1.4, which was the only version supported by the Mesa stack. KWin supported shaders written in the OpenGL Shading Language which replaced parts of the fixed functionality pipeline of OpenGL 1.4 and emulated it, but this only worked on the proprietary NVIDIA driver. OpenGL 3 was released in 2008, but it took years until it became available on the Linux desktop (according to Wikipedia in 2012).

Thus the OpenGL compositing scene and the desktop effect system were designed for the fixed functionality pipeline and only with that in mind. KWin later gained support for the programmable pipeline, which is nowadays the only supported way, but the old design is still in place and problematic.

State of multi screen support

Multiple screens were not that common back in 2008 and there were several competing technology approaches. There was the old “one X screen per screen” approach with a different DISPLAY variable for every screen, there was Xinerama with the NVIDIA implementation called TwinView, and there was the “new kid on the block”, XRandR. Using multiple screens still required hacking the xorg.conf file and hoping it worked, especially if you had an NVIDIA card, which one needed for good desktop effect performance.

(Image from https://xkcd.com/963/)

From an X11 perspective there was (and even today there is) no such thing as multi screen. For the compositor everything is one screen and we have to present all screens at the same time. So much for variable refresh rates (AMD FreeSync was introduced in 2015), the buffer age extension (implemented in KWin in 2013) and so on. From a rendering point of view there was not much difference between rendering one screen or multiple screens. All we had in KWin was an integer variable telling us the number of known screens and their geometries.

State of Alt+Tab

Alt+Tab, which CoverSwitch and FlipSwitch provide, was still a hard coded implementation in QWidget. With the effect system an API was added to suppress the QWidget and use a desktop effect as a replacement. This allowed an effect (BoxSwitch) which showed thumbnails for the windows. Overall this was rather a break with the other parts of the effects API, as effects are mostly used to influence the position and drawing of windows. With the Alt+Tab API an internal part of KWin was exposed and suppressed. It was its own API inside the API.

CoverSwitch and FlipSwitch took this a step further by introducing 3D elements into the so far 2D world of desktop effects and by completely intercepting the rendering. Most effects do not change the order in which windows are rendered, but for these effects it was important to render the windows exactly in the order Alt+Tab wants them. So the effects intercept the normal render calls and render the windows in a different order. To make this worse, CoverSwitch included reflections, which meant the windows needed to be rendered twice, and the effects had to combine windows from multiple screens. The cube effect family did even more horrible things.

How Alt+Tab evolved

The biggest change to Alt+Tab happened thanks to QtQuick. While it had already been reworked before and was no longer QWidget based, QtQuick allowed us to easily define new layouts. The internal Alt+Tab API was one of the first parts of KWin to be exposed to QtQuick. Furthermore we introduced an interesting concept to make it possible to render window thumbnails in the QtQuick scene: the compositor was told to draw a thumbnail at a specific region of the QtQuick Alt+Tab window. As this was perfectly synced we finally had an easy to use API to build Alt+Tab layouts with all the fancy things only desktop effects could give us, although we could not put other elements on top of the thumbnails (e.g. a close button) or transform them from within QtQuick. This was the end of the already mentioned BoxSwitch effect and of the Alt+Tab mode in PresentWindows. Thus the only remaining effects for Alt+Tab were FlipSwitch and CoverSwitch.

With Qt 5 there was hope to improve this further. QtQuick now also used OpenGL for rendering, which meant that in theory it would be possible to make our window textures available to QtQuick. Thanks to the recent work on the compositing engine this is now possible; it gives even more flexibility for rendering Alt+Tab and makes it possible to implement FlipSwitch and CoverSwitch with QtQuick. This is really awesome, as it means we have much better tools at hand to implement such fancy effects: we don't have to develop our own toolkit or implement our own transition handling. Instead we can use all the great things QtQuick offers, like PathView. The C++ desktop effect implementation of CoverSwitch was about 1000 lines of code, while the new emerging QtQuick based implementation is only about 200 lines.

How the effects evolved

The KWin effect system went through a transition of its own. We noticed that most effects are actually animations and added a dedicated implementation for that. This implementation is exposed to JavaScript, and most of those effects are nowadays written in JavaScript. The remaining C++ effects are often the odd ones which do too much and use the wrong toolkit for the job; as I said years ago, PresentWindows and DesktopGrid need to be rewritten in QtQuick. The effect system should be for animating, not for developing a user interface, and with QtQuick we have a far better toolkit for the latter.
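
To give an idea of what such an animation-only effect looks like, here is a minimal sketch using the JavaScript effects API (the same API shown in the Fade example further down). The exact names used here – the windowMinimized signal and the Effect.Scale attribute – are my reading of the current API and may differ in detail:

// Hypothetical sketch: shrink and fade a window when it gets minimized.
// The inverse animation on unminimize is left out to keep the sketch short.
var duration = 250;
effects.windowMinimized.connect(function (w) {
    effect.animate(w, Effect.Scale, duration, 0.1);
    effect.animate(w, Effect.Opacity, duration, 0.0);
});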

Overall, huge effects such as the cube family, CoverSwitch and FlipSwitch are standing in the way of evolving the effect system, while at the same time we now have better tools to implement and maintain them thanks to QtQuick.

Animating auto-hiding panels

With Plasma 5 a change regarding auto-hiding panels was introduced: the complete interaction was moved from Plasma to KWin. This was an idea we had had in mind for a long time. The main goal is to have only one process reserving the interaction with the screen edges (which is needed to show the panel when hidden) and to prevent conflicts there. It also means there is only one place providing the hint that the panel exists.

On the implementation level this uses an X11 property based protocol between KWin and Plasma to indicate when the panel should be hidden. KWin then handles the interaction to hide the panel and to show it again on screen edge activation.

Unfortunately, from a visual perspective this created a regression. In Plasma 4 the auto-hiding panel was animated with our Sliding Popups effect, but in Plasma 5 this no longer works. The reason is that our effect system could only animate when a window gets mapped or unmapped. With the new protocol the window is not unmapped when hidden and not mapped when shown, so our effects are not able to react to it. Technically our Client object is kept instead of being released on unmap.

Thanks to Wayland this will work again starting with Plasma 5.8. For Wayland windows KWin keeps the object around when a window gets unmapped and reuses the same object when it gets shown again; KWin keeps more state for Wayland windows than for X11 windows. But this also meant that our animations were not working for Wayland windows. Last week I addressed this and extended the internal effects API to also support hiding and showing of Wayland windows. As the effects API does not differentiate between Wayland and X11 windows, this change also enables the auto-hiding panel animation. All that was needed was emitting two signals at the right place.
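
To illustrate the idea, here is a hypothetical sketch of an effect reacting to those notifications; the signal names windowShown and windowHidden are my assumption of how the extended API is exposed to effect scripts:

var duration = 250;
// Fade the panel in when KWin shows the still-mapped window again ...
effects.windowShown.connect(function (w) {
    effect.animate(w, Effect.Opacity, duration, 1.0, 0.0);
});
// ... and fade it out when the window is hidden without being unmapped.
effects.windowHidden.connect(function (w) {
    effect.animate(w, Effect.Opacity, duration, 0.0, 1.0);
});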

Here Wayland shows another strength: not only does it help to bring features to X11, it also allows us to automate the testing. With a pure X11 system we would have had a hard time properly auto-testing this, but for Wayland we have a nice isolated integration test framework which allowed us to add a test case for auto-hiding panels.

Desktop Effects Control Module in KWin5

One of the new features in KWin 5 is a completely rewritten configuration module for our Desktop Effects. In KWin 4 our module was based on KPluginSelector, which is a great widget for a small list of plugins, but it was never a really good solution for the needs of KWin.

We also noticed that a QWidget based user interface is not flexible enough for what we would like to provide (e.g. preview videos). So when QtQuick came around we made first experiments with reimplementing the selector in QtQuick, but lacking what today is QtQuick Controls it never left the prototype stage. It did encourage us to dedicate a GSoC project to redesigning the control module from scratch, and Antonis did a great job there laying the foundation for what we now have in the upcoming alpha release.

The most noticeable change is that the new control module focuses just on the desktop effects. What we learned from our users is that they are only interested in configuring the effects, and that the other options exposed in that control module carried the risk of users changing something and breaking their system. Thus we decided to give users what they need and move all other options into another control module.

In order to let users focus on the effects we also cleaned up the list: all effects which are not supported by the currently used compositing backend are hidden by default (e.g. OpenGL effects when using XRender). All internal or helper effects are hidden by default as well. These are effects which replace functionality from KWin core or provide interaction with other elements of the desktop shell. Normally there is no reason for users to change them, except if they want to break their system. That is of course a valid use case, so there is a configuration button to adjust the filtering of the list and also show those effects.

Last but not least, our effects were extended with information on whether they are mutually exclusive with other effects. For example, one would only want to activate either the minimize or the magic lamp effect; both at the same time result in broken animations. For effects in a mutually exclusive group the UI uses radio buttons and ensures that only one of the effects can be activated. That is the change I am most happy about.

Check out the video to see the new configuration module in action and also see some of the new features I haven't talked about. Please don't tell me in the comments about padding issues and rendering problems. We can see those, too, and are quite aware of them. If you want to help iron out issues with Oxygen and QtQuick Controls, have a look at our wiki page.

KWin Effects written in JavaScript

Today I cannot make as nice an announcement as Aaron did yesterday, but I can at least try to announce something I personally consider awesome.

This weekend I tried to make it possible to write KWin effects in JavaScript. After about two hours of work the effect loader was adjusted to load a Qt script instead of the library written in C++. This is quite an important step for the future of effects in KWin. It finally makes it possible to share effects via Get Hot New Stuff, so that our users can download new effects directly from the control module.

For packaging effects we use the well established Plasma Package structure, so that our script developers only need to know this one common way. The API itself will share as much as possible with the KWin scripting API – of course with adjustments for effects. For animations the API is based on the AnimationEffect introduced in 4.8.
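
To give an impression, a scripted effect package would look roughly like this; the package name is made up and the exact layout may differ in detail:

kwin4_effect_fadedemo/            (package name, made up for this example)
    metadata.desktop              (name, description and plugin metadata)
    contents/
        code/
            main.js               (the effect script itself)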

From a performance point of view using JavaScript does not change anything. Our effect system has two phases: one to react to changes in the window manager (e.g. a window got activated) and one to perform the rendering. The scripting API only interacts with the window manager, so all the rendering is still good old C++ code – a similar approach to QML.

Now I guess you want to know what you can do with it. So here, as an example, is a Fade-like effect written in JavaScript (for comparison: the C++ version is > 200 SLOC):

var duration = 250; // animation duration in milliseconds
// Fade windows in when they appear on screen ...
effects.windowAdded.connect(function(w) {
    effect.animate(w, Effect.Opacity, duration, 1.0);
});
// ... and fade them out again when they get closed.
effects.windowClosed.connect(function(w) {
    effect.animate(w, Effect.Opacity, duration, 0.0, 1.0);
});

For us KWin core developers the scripted effects will be an important step as well. For quite some time we have been unhappy with the fact that there are too many effects, which become difficult to maintain – especially if we have to adjust the API. With effects written in JavaScript this becomes much simpler: as we do not have to keep the ABI stable (API compatibility is enough), we can move effects written in JavaScript out of the source tree and make them available for download.

At the moment the JavaScript API is just at the beginning, but I expect it to evolve over the course of the current release cycle. For me the scripts are rather important as they also provide an easy way to make device specific adjustments.
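
As a hypothetical example of such an adjustment, the hard coded duration in the Fade example above could be replaced by a value that respects the effect configuration and the global animation speed setting; readConfig() and animationTime() are my assumptions about the helpers available to effect scripts:

// Duration from the effect configuration, scaled by the global animation speed.
var duration = animationTime(effect.readConfig("Duration", 250));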

As I wrote, the scripts currently do not operate during rendering, which is why we do not have bindings for WebGL – that would not make any sense at the moment. Nevertheless we might allow uploading custom shaders, but I will not pursue such a task in the 4.9 cycle.