To EGLStream or not

The announcement of KDE Neon dev/unstable switching to Wayland by default raised quite a few worried comments, as NVIDIA’s proprietary driver is not supported. One thing should be clear: we won’t break any setups. We will make sure that X11 is selected by default if the given hardware setup does not support Wayland. Nevertheless I think that the number of questions shows that I should discuss this in more detail.

NVIDIA does support Wayland – kind of. The solution they came up with is not compatible with any existing Wayland compositor and requires patches to make it work. For the reference implementation Weston there are patches provided by NVIDIA, but those have not been integrated yet. For KWin such patches do not exist and we have no plans to develop such an adaptation as long as the patches are not merged into Weston. Even if there were patches, we would not merge them as long as they are not merged into Weston.

The solution NVIDIA came up with requires different code paths. This is unfortunate as it would require driver-specific adjustments and driver-specific code paths. This is bad for everybody involved: for us developers, for the driver developers and, most importantly, for our users. It means that we developers have to spend time implementing and maintaining a solution for one driver – time which could be spent on fixing bugs instead. We could make such an effort for one driver, but once every driver requires adjustments it becomes unmanageable.

But even adjustments for one driver are problematic. The latest NVIDIA driver caused a regression in KWin. On Quadro hardware (other hardware seems not to be affected) our shader self test fails, which results in compositing being disabled. If one removes the shader self test everything works fine, though. I assume that there is a bug in KWin’s rendering of the self test which is triggered only by this driver. But as I don’t have such hardware I cannot verify it. Yes, I did pass multiple patches for investigating and trying to fix it to a colleague with such hardware. No, please don’t donate hardware to me.

In the end, after spending more than half a day on it, we had to go for the worst option, which is to add a driver- and hardware-specific check to disable the self test, and ship it with the 5.7.5 release. Such checks are super problematic for code maintainability. We are hiding a bug and we cannot investigate it. We are now stuck with an implementation where we will never be able to say “we can remove that again”. Driver-specific workarounds tend to stick around. E.g. we have such a check:

// Broken on Intel chips with Mesa 9.1 - BUG 313613
if (gl->driver() == Driver_Intel && gl->mesaVersion() >= kVersionNumber(9, 1) && gl->mesaVersion() < kVersionNumber(9, 2))
    return;

It's nowadays absolutely pointless to have such code around, as nobody uses such a Mesa version anymore. But the code is still there, adds complexity and carries a maintenance cost. This is why driver-specific implementations are bad and nothing we want in our code base.

People asked to be pragmatic, because NVIDIA is so important. I am absolutely pragmatic here: we don't have the resources to develop and maintain an NVIDIA specific implementation on Wayland.

Also some people complained that this is unfair because we do have an implementation for (proprietary) Android drivers. I need to point out that this does not compare at all.

First of all our Android implementation is not specific for a proprietary driver. It is written for the open source hwcomposer interface exposed through libhybris. All of that is open source. The fact that the actual driver might be proprietary is nothing we like, but also not relevant for our implementation.

In addition the implementation is encapsulated in a platform plugin and significantly reduced in functionality (only one screen, no cursor, etc.). This is something we would not be able to do for NVIDIA (you would want multi-screen, right?).

For NVIDIA we would have to add a deviation into the DRM platform plugin to create the OpenGL context in a different way. This is something our architecture does not support and was not created for. The general idea is that if creating the GBM based context fails, KWin will terminate. Adding support there for a different way to get an OpenGL context up and running would result in lots of added complexity in a very important code path. We have to ensure that KWin terminates if OpenGL fails. At the same time we have to make sure that llvmpipe is not picked if NVIDIA hardware is used. This would be a horrible mess to maintain - especially if developers are not able to test this without huge effort.

From what I understand from the patch set it would also require significant changes to how a frame is presented on an output and thereby make our lower-level code more complex. This code is currently able to serve both our OpenGL and our QPainter based compositors, but wouldn’t allow us to support NVIDIA’s implementation. Adding changes there would hinder future development of the platform plugin. This is an important area we are working on, and KWin 5.8 contains a new additional implementation making use of atomic mode setting. We want atomic mode setting used everywhere in the stack to have every frame perfect. NVIDIA’s implementation would make that difficult.

EGLStreams bring another disadvantage, as the code to bind a buffer (what a window renders to) to a texture (what the compositor needs for rendering) would need changes. Binding the buffer is currently performed by KWin core and is not part of the plugin infrastructure. Given that, new additional code would be needed there as well. We don’t need that for any other platform we currently support. E.g. for hwcomposer on Android, libhybris takes care of allowing us to use EGL the same way as on any other platform. I absolutely do not understand why changes would be needed there. Existing code shows that it can be done differently. And here we see again why I think the situation with EGLStream does not compare at all to supporting hwcomposer.

Overall we are not thrilled by the prospect of two competing implementations. We do hope that at XDC the discussions will have a positive end and that there will be only one implementation. I don’t care which one, and I don’t care whether one is better than the other. What I care about is requiring only one code path, the possibility to test with free drivers (Mesa) and support for atomic mode setting. Ideally I would also prefer not to have to adjust existing code.

A circle closes…

At Desktop Summit 2011 in Berlin I did my first presentation on Wayland and presented the idea of Wayland to the KDE community and explained to the KDE community how we are going to port to Wayland. This year at QtCon in Berlin I was finally able to tell the KDE community that the port is finished and that our code is ready for testing.

In 2011 I used a half-hour slot to mostly present the differences between X11 and Wayland and why we want Wayland. In addition I presented some of the expected porting steps and what we would have in the end. This year I only used a 10 min lightning talk slot to give the community an update on the work done over the last year.

(Watch video on youtube, my talk starts at 15:04)

Of course the work on Wayland is not yet finished and Wayland is not yet fully ready for use. There are missing features and there must be bugs (new code base, etc.). But we are in a state to start the public beta.

What is interesting is comparing the slides from 2011 to what we have achieved. The plan presented there was to introduce “Window Manager Backends” in KWin. We wanted to identify windowing-system-independent areas, make our two most important classes Toplevel and Workspace X11-free and add a window manager abstraction. During the port this wasn’t really an aim; nevertheless we got there. We do have a window manager abstraction which would allow adding support for further windowing systems. Toplevel is (at runtime) X-free. Workspace, though, is not yet X-free, but that has moved onto my todo list.

Also, back in 2011 we thought that this might be interesting for other platforms, naming Android, WebOS and Microsoft Windows as examples. Android we kind of achieved by having support for Android’s hwcomposer and being able to run Wayland on top of an Android stack. Support for Android’s surfaceflinger is something we do not aim for. The example of WebOS doesn’t really fit any more as WebOS uses Wayland nowadays. And Windows is only theoretically possible (though with the new Linux support it would be interesting to try to get KWin running on it).

KWin nowadays has a platform abstraction and multiple platform plugins. This allows us to start a Wayland compositor on various software stacks. Currently we support:

  • DRM
  • fbdev
  • hwcomposer (through libhybris)
  • Wayland (nested)
  • X11 (nested)
  • virtual

Adding support for a new platform is quite straightforward and doesn’t need a lot of code. The main tasks of a platform are to create the OpenGL context for the compositor and to present each frame on the platform-specific output. All platforms together are less than 10000 lines of code (cloc) and a single platform is around 400-3000 lines of code.

In order to add support for a new windowing system more work would be needed. It is very difficult to estimate how much code would be needed as it all depends on how well the concept can be mapped to Wayland. Ideally adding support for a new windowing system would be done by creating an external application which maps the windowing system to Wayland. Just like XWayland maps X11 to Wayland. But as we can see with XWayland this might not be enough. KWin also needs to be an X11 window manager to fully support X11 applications. Given that it really depends on the windowing system how much work is needed.

One could also add a new windowing system the same way as we added support for Wayland. This would require implementing our AbstractClient to have a representation for a managed window of the windowing system and adding support for creating a texture from the window content. In addition various places in KWin would need to be adjusted to also consider these windows. Not a trivial task, and going through a mapping to Wayland is always the better solution. But still it’s possible, and this makes KWin future-proof for possible other windowing systems. In general KWin doesn’t care any more about the windowing system of a window. We can have X11 windows on Wayland and Wayland windows on X11 (only in an experimental branch, not yet merged).

This brings me back to my presentation from 2011. Back then we expected to have three phases of development. The first phase was adding Wayland support to the existing X11 base. That is what we experimented with back then and, as I just wrote, still experiment with. As it turned out, that was not the proper approach for development.

As a second phase we expected to remove X and have a Wayland-only system. At the moment we still require XWayland to start KWin/Wayland. During development it turned out that this phase is not really needed. It was easier to move the existing X11 code to interact through XWayland – we could keep the X code and move faster.

wayland-architecture-rootless-x.png

The third and final phase was about adding back XWayland support, so that KWin can support both X11 and Wayland windows. That’s the phase we developed directly. It is kind of interesting that we went straight to the final step although we thought we would need easier intermediate steps.

KDE Neon dev/unstable switching to Wayland by default

During this year’s Akademy we had a few discussions about Wayland, and the Plasma and Neon team decided to switch Neon developer unstable edition to Wayland by default soonish.

There are still a few things in the stack which need to be shaken out – we need a newer Xwayland in Neon, we want to wait for Plasma 5.8 to be released, we need to get the latest QtWayland 5.7 build, etc. etc.

This is really exciting. It’s probably the biggest step towards Wayland by default the KDE community has ever taken. I hope that other continuous delivery systems will follow so that we can get many enthusiastic users to try Wayland.

Panels on shared screen edges

Plasma 5.8 will bring an improvement fixing a bug reported more than a decade ago. Back then Plasma did not even exist; the bug was reported against an early KDE 3 version. The addressed problem is the handling of panels in multi-screen setups.

The problem: if one has multiple screens and puts a panel between two screens – on the shared edge – the panel does not get a “strut” set and thus windows maximize below it:

[------------][------------]
|            ||P           |
|      1     ||P     2     |
|            ||P           |
[------------][------------]

In this illustrated setup the panel is “P” and windows on screen 2 ignore the panel. What might be surprising here is that this was not just a bug, but deliberate behavior. There is code making sure that the panel on the shared edge gets ignored. Now, one doesn’t write code to explicitly break useful features; obviously there was a good reason for it.

And to understand that we must look at how panels and their struts work. First let’s look at Wayland. Wayland doesn’t have a concept of panels or struts by default. KWin provides the PlasmaSurface interface which Plasma can use to give a window the role “Panel” and to describe how the panel is used: whether it’s always on top, or whether windows can cover it or go below it. KWin can use that to decide whether the panel should have a strut or not. Thus on Wayland KWin has been able to support the setup shown above ever since it gained support for panels.

On X11, though, we have the NETWM spec which describes how to set a partial strut:

The purpose of struts is to reserve space at the borders of the desktop. This is very useful for a docking area, a taskbar or a panel, for instance. The Window Manager should take this reserved area into account when constraining window positions – maximized windows, for example, should not cover that area.

The start and end values associated with each strut allow areas to be reserved which do not span the entire width or height of the screen. Struts MUST be specified in root window coordinates, that is, they are not relative to the edges of any view port or Xinerama monitor.

Now here we already see the problem: it’s not multi-screen aware. The strut is specified in root window coordinates. So in our case above we would need to set a strut for the left edge which spans the complete height. So far so good. But the width of the strut must be specified in root window coordinates, which includes the complete screen 1. If Plasma set this, we would have a problem.

In Plasma 5.7 KWin’s strut handling code got slightly reworked to perform sanity checks on the struts and to ignore struts affecting other screens. Basically KWin deviates from the spec and in multi-screen setups only allows struts which make sense.

Now at least KWin could handle this situation properly, but Plasma still had the check to not set a strut on shared edges. For Plasma 5.8 we changed the condition: if the window manager is KWin, we allow such struts. For any other window manager we still go with the previous solution, since we cannot just set a strut which would, in the worst case, exclude a complete screen. As that’s how the spec is written, we have to assume the window manager is standard compliant. For KWin we know that it is no longer standard compliant and supports such struts, so Plasma can make use of that.

This change hopefully improves the multi-screen experience for our Plasma users who use KWin as a window manager.

Creating a Photo-Box with the help of KWin

For a family celebration I wanted to create a “Photo-Box” or “Selfie-Box”: a place where the guests can trigger a photo of themselves without having to use tools like a selfie-stick.

The requirements for the setup were:

  • Trigger should be remote controlled
  • The remote control should not be visible or at max hardly visible
  • The guests should see themselves before taking the photo
  • All already taken photos should be presented in a slide show to the guests

The camera in question supported some but not all of the requirements. Especially the last two were tricky. While it supported showing a slide show of all taken photos, the slide show ended as soon as a new photo was taken. But the camera also has a USB connector, so the whole power of a computer could be brought in.

A short investigation showed that gphoto2 could be the solution. It allows one to completely remote control the camera and download photos. With that all requirements can be fulfilled. But by using a computer a new requirement got added: the screen should be locked.

This requirement created a challenge. As the maintainer of Plasma’s lock screen infrastructure I know what it is good at, and that is blocking input and preventing other applications from being visible. Thus we cannot just use e.g. Digikam with gphoto2 integration to take the photo – the lock screen would prevent the Digikam window from being visible. Also there is no way to have a slide show in the lock screen.

Which means all requirements must be fulfilled through the lock screen infrastructure. As a result I spent some time on adding wallpaper plugin support to the lock screen. This allowed reusing Plasma’s wallpaper plugins and thus also the slide show plugin. One problem solved, and all Plasma users can benefit from it.

But how to trigger gphoto2? The idea I came up with is using KWin/Wayland. KWin has full control over the input stack (even before the lock screen intercepts) and also knows which input devices it is interacting with. As a remote control I decided to use a Logitech Presenter and accept any clicked button on that device as the trigger. The code looks like the following:

class PhotoBoxFilter : public InputEventFilter {
public:
    PhotoBoxFilter()
        : InputEventFilter()
        , m_tetheredProcess(new Process)
    {
        m_tetheredProcess->setProcessChannelMode(QProcess::ForwardedErrorChannel);
        m_tetheredProcess->setWorkingDirectory(QStringLiteral("/path/to/storage"));
        m_tetheredProcess->setArguments(QStringList{
            QStringLiteral("--set-config"),
            QStringLiteral("output=TFT"),
            QStringLiteral("--capture-image-and-download"),
            QStringLiteral("--filename"),
            QStringLiteral("%Y%m%d-%H%M%S.jpg"),
            QStringLiteral("-I"),
            QStringLiteral("-1"),
            QStringLiteral("--reset-interval")
        });
        m_tetheredProcess->setProgram(QStringLiteral("gphoto2"));
    }
    virtual ~PhotoBoxFilter() {
        m_tetheredProcess->terminate();
        m_tetheredProcess->waitForFinished();
    }
    bool keyEvent(QKeyEvent *event) override {
        if (!waylandServer()->isScreenLocked()) {
            return false;
        }
        auto key = static_cast<KeyEvent*>(event);
        if (key->device() && key->device()->vendor() == 1133u && key->device()->product() == 50453u) {
            if (event->type() == QEvent::KeyRelease) {
                if (m_tetheredProcess->state() != QProcess::Running) {
                    m_tetheredProcess->start();
                    m_tetheredProcess->waitForStarted(5000);
                } else {
                    ::kill(m_tetheredProcess->pid(), SIGUSR1);
                }
            }
            return true;
        }
        return false;
    }

private:
    QScopedPointer<Process> m_tetheredProcess;
};

And in addition the method InputRedirection::setupInputFilters needed an adjustment to install this new InputFilter just before installing the LockScreenFilter.

photobox

The final setup:

  • Camera on tripod
  • Connected to an external screen showing the live capture
  • Connected to a notebook through USB
  • Notebook connected to an external TV
  • Notebook locked and lock screen configured to show slide show of photos
  • Logitech Presenter used as remote control

The last detail which needed adjustments was on the lock screen theme. The text input is destroying the experience of the slide show. Thus a small hack to the QML code was needed to hide it and reveal again after pointer motion.

What I want to show with this blog post is one of the advantages of open source software: you can adjust the software to your needs and turn it into something completely different.

Modifier only shortcuts available in Plasma 5.8

Today I’m happy to announce that for the first time we were able to backport a new feature from our Wayland offerings to X11: modifier only shortcut support. This is one of the most often requested features in Plasma and I’m very happy that we finally have an implementation for it.

Modifier-only shortcuts mean that an action is triggered if one presses a modifier key without any other key. By default the Meta (also known as Super or Windows) key triggers the main application launcher of your Plasma session. But the implementation is way more flexible and allows using any modifier (Ctrl, Alt or Shift) as a trigger for an action (currently configuration is only possible through manual modification of kwinrc).
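For illustration, a kwinrc fragment along these lines configures the default Meta binding; the group name and the service,path,interface,method format are from my recollection of the feature and may differ in your version, so treat this as a sketch rather than authoritative documentation:

```ini
[ModifierOnlyShortcuts]
Meta=org.kde.plasmashell,/PlasmaShell,org.kde.PlasmaShell,activateLauncherMenu
```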

The feature was initially implemented for Wayland inside KWin and has been available for a year already. The tricky part is to recognize when only a modifier is pressed. On Wayland KWin gets all key events and passes them through xkbcommon. Thus KWin knows when a key is pressed and when only a modifier is pressed, which made it possible to implement modifier-only shortcuts.

On X11 our global shortcut system does not have such deep knowledge about all key states. It only registers the shortcuts one has configured and in general it triggers on the press event; for modifier-only shortcuts we need to trigger on the release event. As our X11 shortcut system relies on key grabbing we couldn’t use it for modifier-only support: the chances of breaking applications are too high.

The implementation we have now on X11 reuses the infrastructure setup for Wayland. It uses XInput 2 to listen to all raw key events (which are also delivered if an application grabs the device) and sends those events through our xkbcommon infrastructure just like on Wayland to recognize the pressed keys.

In the past my main concern was that we would mis-detect the state and trigger the shortcut when it is not wanted, for the wrong key, etc. With the new implementation I still have these concerns, although I’m very confident that it works correctly. Nevertheless I want to ask you to give this feature a try and report if you notice any problems. I’m especially interested in setups which change the Meta modifier position through xmodmap and in layout changes. If anything is not working as expected please tell us so that we can have an awesome modifier-only experience in Plasma 5.8.

In case you use KSuperKey I recommend to disable that. Otherwise it might happen that shortcuts are triggered twice. At that point I want to thank KSuperKey for having filled this gap in our default feature set and providing a solution for our users who wanted to use this feature.

OpenGL changes in KWin compositing

In Plasma 5.8 we will see a few changes in the OpenGL compositor which are worth a blog post.

Debug information

The new KWin debug console gained a new tab for OpenGL information. This shows information comparable to what we know from glxinfo/es2_info. It shows the information gathered from the OpenGL library (version, renderer, etc.) and all the available extensions. This is intended to be useful when for whatever reason glxinfo is not working and one needs to know exactly which extensions are available.

OpenGL information in Debug Console

As you can see in the screenshot: KWin also learned to add decorations around “internal” windows on Wayland. Which means the debug console can now be freely resized and moved around.

Support for llvmpipe

With Plasma 5.8 we finally allow OpenGL compositing through the llvmpipe driver. So far KWin fell back to XRender-based compositing when the OpenGL driver used software emulation. We thought that with XRender-based compositing one gets better results than with software emulation. In this release we re-evaluated the situation and came to the conclusion that it doesn’t make sense to continue blocking the llvmpipe driver. Due to QtQuick KWin will render parts of its user interface through llvmpipe anyway, and Plasma will render completely through llvmpipe. Thus most of the system is going through that driver. If the system is usable enough for Plasma, it will also be sufficient for KWin’s compositing.

Related to that, many of the reasons not to use llvmpipe by default are going away. For example we didn’t want virtual machines to render through llvmpipe, but today KVM can do accelerated OpenGL through the virgl driver. Similarly, embedded systems also provide working drivers nowadays – think of the Raspberry Pi or ODROID – they all can do OpenGL and we don’t run the risk of them falling back to llvmpipe, which would result in a very bad experience.

Last but not least there is the question whether XRender or QPainter compositing is a better solution than llvmpipe. And there I’m not so sure anymore. With XRender it’s possible that the xorg modesetting driver is used in combination with glamor. In that case it would also use llvmpipe. So nothing is gained by forcing XRender.

But still it’s possible that llvmpipe compositing will result in high CPU usage if KWin has to use it. The solution to this problem is to deactivate what we know to be expensive – all the effects which are not available on XRender because they are too expensive. We don’t want the blur effect on software emulation, and neither wobbly windows. So we are now working on disabling these effects when running on software emulation. For this we introduced a way to disable all animations in KWin, and a lot of effects are already adjusted. This might also be a handy new feature for users who don’t want any animations at all, but still want the compositor enabled.

Last but not least using llvmpipe also exposes performance problems which we normally don’t see with a fast GPU. And as a result we were already able to improve various parts of our rendering stack which will benefit all users – independently of whether llvmpipe is used or not.

Removal of GLX/EGL selection

A change which also got backported to a 5.7 bugfix release is the removal of the selection of GLX or EGL in the compositor settings. Unfortunately EGL is still not a good enough option on X11. Way too often we saw bug reports about broken rendering which were caused by using EGL. We decided that it doesn’t make sense to expose a config option which in most cases will result in the user’s system being broken. So on X11 the default stays GLX, while on Wayland the only option is EGL. Of course on X11 the option can still be set manually by editing the config file or by specifying the environment variable to pick EGL or OpenGL ES 2.

Removal of unredirect fullscreen windows

Unredirection of fullscreen windows has long been a kind of unloved stepchild in KWin’s compositing infrastructure. It’s a feature not loved by the developers and not properly integrated, but one that has to be supported. For those not knowing the feature: it excludes an area from compositing and lets the fullscreen window be rendered the normal X11 way (unredirected). The idea is that you get slightly better performance if you bypass the compositor.

The functionality was never fully integrated into the compositor. It was way too easy to break out of the condition (e.g. a tooltip), but at the same time effects which should break it had no way to do so (e.g. Present Windows should either not activate or end it). The weirdest oddity of the feature is that we had to hard-disable it for all Intel drivers due to crashes. We don’t know whether this is still the case, but after having had such a bad experience with it in the past, we decided to never turn it on again. Which means it’s a feature not even supported on all devices.

We developers did not spend much time on the feature as we think it doesn’t make much sense: KWin has a better infrastructure in place, blocking compositing. Applications are allowed to specify that compositing should be blocked. This results in KWin shutting down the compositor and freeing all resources related to it (e.g. destroying the OpenGL context), so all power goes to the running game. As the compositor is shut down, you don’t have weird interactions like tooltips jumping out or effects not working properly.

There is a standardized way for applications to request this and we see that many games and applications (e.g. Kodi) make use of it. This is the preferred way in our opinion. Given that this mode is fully supported, we decided to remove unredirect fullscreen windows from KWin’s compositor. This streamlines our implementation and gives us one feature to concentrate on and make sure that it works exactly as our users need it. On Wayland the architecture looks different: there is no such thing like unredirect fullscreen, but we can ideally just pass the buffer to the DRM device. The idea is that we do the best optimized way whenever possible.

Support for render devices

Our virtual rendering platform in KWin/Wayland gained a new feature. Like the normal DRM device it now uses udev to find all possible devices and will default to a render device (e.g. /dev/dri/renderD128) if present. If not present it will look for a card device created through the vgem driver. This is a change intended mostly for the use on build.kde.org so that we can properly test OpenGL compositing, but is also a handy feature for running KWin in the cloud on systems with “real” GPUs. If no GPU is present one can easily make KWin pick the virtual device.

Support for restarting the compositor on Wayland

Initially KWin did not support restarting the compositor or even switching the backend on Wayland. But of course we need to support that as well. The OpenGL driver might do a graphics reset after which our infrastructure wants to reinit the complete setup. Thus KWin needed to learn how to restart the OpenGL compositor on Wayland, which will be fully supported in Plasma 5.8.

Why does kwin_wayland not start?

From time to time I get contacted because kwin_wayland or startplasmacompositor doesn’t work. With this blog post I want to show some of the most common problems and how to diagnose correctly what’s going wrong.

First test nested setup

If you want to try Wayland please always first try the nested setup. This is less complex and if things go wrong easier to diagnose than a maybe frozen tty. So start your normal X session and run a nested KWin:
export $(dbus-launch)
kwin_wayland --xwayland

This should create a black window. If it works you can send windows there, e.g.:
kwrite --platform wayland

When things go wrong

Xwayland missing

KWin terminates and you get the error message:

FATAL ERROR: failed to start Xwayland

This means you don’t have Xwayland installed. It’s a runtime dependency of KWin. Please get in contact with your distribution, they need to fix the packaging 😉

Application doesn’t start

KWin starts fine but the application doesn’t show because of an error message like:

This application failed to start because it could not find or load the Qt platform plugin “wayland” in “”.

Available platform plugins are: wayland-org.kde.kwin.qpa, eglfs, linuxfb, minimal, minimalegl, offscreen, xcb.

Reinstalling the application may fix this problem.

This means QtWayland is not installed. Please install it and try again.

Platform plugin missing

KWin terminates and you get the error message:

FATAL ERROR: could not find a backend

This means that the platform/backend plugin is not installed. KWin supports multiple platforms and distributions split them into multiple packages. On Debian-based systems, for example, you need the package kwin-wayland-backend-x11 for the nested X11 setup and kwin-wayland-backend-drm for the “real thing”.
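On a Debian-based system a quick way to see which backend packages are installed is dpkg; the package names follow the scheme mentioned above but may differ per distribution:

```shell
# List installed KWin Wayland backend packages (Debian/Ubuntu).
# Package names vary between distributions; these follow the
# naming scheme used in the text above.
dpkg -l | grep kwin-wayland-backend
```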

XDG_RUNTIME_DIR not set

KWin terminates and you get the error message:

FATAL ERROR: could not create Wayland server

This means that KWin failed to create the Wayland server socket. The most common reason for this is that your environment does not contain the XDG_RUNTIME_DIR environment variable. This should be created and set by your login system. Please get in contact with your distribution. The debug output should also say something about XDG_RUNTIME_DIR.
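To verify this is the problem, check the variable before starting KWin. For a quick test one can create a private runtime directory by hand – normally the login system does this, and the path below is only an example:

```shell
# Check whether the variable is set at all:
echo "XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}"

# For a quick test only -- normally your login system creates this.
# The path is an example; the directory must be owned by you and
# have mode 0700:
export XDG_RUNTIME_DIR="/tmp/runtime-$(id -u)"
mkdir -p "$XDG_RUNTIME_DIR"
chmod 0700 "$XDG_RUNTIME_DIR"
```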

Platform fails to load

KWin terminates and you get the error message:

FATAL ERROR: could not instantiate a backend

You hit the jackpot. Something somewhere went horribly wrong. Please activate all KWin related debug categories, run it again and report a bug against KWin – ideally for the platform you used. The debug output hopefully contains more information on why it failed. If not we need to try to look into your specific setup.
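KWin’s debug output is controlled through Qt’s logging categories. A sketch for turning on everything KWin related before the run – the kwin_* pattern is an assumption, adjust it to the category names your KWin version actually uses:

```shell
# Enable all KWin related logging categories (pattern assumed):
export QT_LOGGING_RULES="kwin_*.debug=true"
# Run again and keep the output for the bug report:
kwin_wayland --xwayland 2>&1 | tee kwin.log
```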

Trying on the tty

KWin works fine in the nested setup: awesome. In most cases this means that KWin will also work on the DRM device. Before trying, make sure that you are not using the NVIDIA proprietary driver; if you are: sorry, that’s not yet supported. If you are on Mesa drivers everything should be fine.

Log in to a tty and set things up similarly to the nested setup – I recommend the exit-with-session command line option to have a nicely defined way to exit again:

export $(dbus-launch)
export QT_QPA_PLATFORM=wayland
kwin_wayland --xwayland --exit-with-session=kwrite

Ideally this should turn your screen black, put the mouse cursor into the center of the screen and after a short time show kwrite. You should be able to interact with it and when closing kwrite KWin should terminate. If that all works you are ready to run startplasmacompositor. It will work.

But what if it doesn’t? This is the tricky situation. The most common problem I have heard of is that KWin freezes at this point and gdb shows KWin sitting in the main event loop. This most likely means that KWin tries to interact with the logind DBus API, but your system does not provide this DBus interface. Please get in contact with your distribution on how to get the logind DBus interface installed (this requires neither systemd nor logind). I would like to handle this better, but I don’t have a system without logind to test on. Patches welcome.
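To check whether the logind DBus interface is available on your system at all, you can query the system bus. Either qdbus or busctl will do – which of the two tools is installed depends on your distribution:

```shell
# List the org.freedesktop.login1 service if a logind
# implementation (systemd-logind, elogind, ...) is running:
qdbus --system org.freedesktop.login1 2>/dev/null \
    || busctl --no-pager status org.freedesktop.login1 2>/dev/null \
    || echo "logind DBus interface not available"
```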

Running KWin on weird systems

Yes, one can go crazy and try running KWin on devices like the Nexus 5, in virtual machines or on NVIDIA based systems. Here the experience differs and I myself don’t know exactly what is supported on which hardware and in which combination of settings. Best get in contact with us to check what works, and if you are interested: please help in adding support for it.

Animating auto-hiding panels

With Plasma 5 a change regarding auto-hiding panels was introduced: the complete interaction was moved from Plasma to KWin. This is an idea we had in mind for a long time. The main point is to have only one process reserving the interaction with the screen edges (which is needed to show the panel when hidden), preventing conflicts there. It also means there is only one place providing the hint that a panel is there.

On the implementation level this uses an X11 property based protocol between KWin and Plasma to indicate when the panel should be hidden. KWin then handles the interaction to hide it and to show it again on screen edge activation.
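If you are curious, the hint can be observed in a running X11 session with xprop. The exact property name is an assumption on my part here and may differ between Plasma versions:

```shell
# Click on the panel after starting this; a property whose name
# contains SCREEN_EDGE carries the auto-hide hint (name assumed):
xprop | grep -i SCREEN_EDGE
```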

Unfortunately, from a visual perspective this created a regression. In Plasma 4 the auto-hiding panel was animated with our Sliding-Popups effect, but in Plasma 5 this doesn’t work any more. The reason is that our effect system can only animate when a window gets mapped or unmapped. With the new protocol the window doesn’t get unmapped when hidden and isn’t mapped when shown, so our effects are not able to react to it. Technically, our Client object is kept instead of being released on unmap.

Thanks to Wayland this will work again starting with Plasma 5.8. For Wayland windows KWin keeps the object around when a window gets unmapped and uses the same object when it gets shown again – KWin keeps more state for Wayland windows than for X11 windows. But this also meant that our animations were not working for Wayland windows. Last week I addressed this and extended the internal effects API to also support the hiding and showing of Wayland windows. As the effects API does not differentiate between Wayland and X11 windows, this change also enables the auto-hiding panel animation. All that was needed was emitting two signals at the right place.

Here Wayland shows another strength: not only does it help to bring features to X11, it also allows us to automate the testing. With a pure X11 system we would have had a hard time properly auto-testing this. But for Wayland we have a nice, isolated integration test framework which allowed adding a test case for auto-hiding panels.