Last year I started a blog post series about how input works in KWin/Wayland. This blog post resumes this series by talking about touch input.
Several people wondered why it took so long for this blog post. After all it’s more than a month since the last one. Of course there is a good reason for it. I was reworking parts of the input stack and wanted to discuss the changes with the next post of the input blog post series. Unfortunately there are still a few changes missing, so I decided to nevertheless do the touch input post first.
Touch input is the new kid on the block when it comes to input events. It’s a technology which came into existence long after X11 was created and thus it is not part of the X11 core protocol. On X11 this makes touch a weird beast. For example there is always an emulation to pointer events: applications which do not support touch can still be used because the touch events generate pointer events. This is a huge compromise in the API and means that touch feels – at least to me – like a second class citizen on X11.
On Wayland the situation is way better. Touch is part of the core input protocol and does not emulate pointer events. Applications need to support touch in order to get touch events. If an application does not support touch, the touch events won’t trigger any actions. This is a good thing as it means applications need to do something sensible with touch events.
Like the other events, touch events are reported to KWin by libinput. They are quite straightforward: we get touch down events (when a finger touches the screen), touch up events (when a finger is lifted again) and touch motion events (when a finger moves on the screen). This is fully multi-touch aware, meaning we can follow multiple touch points individually.
The events are sent through KWin’s internal filter architecture like all other events. Currently KWin does not really intercept touch events yet. We do support touch events on window decorations and KWin’s own internal windows, but in those cases we emulate mouse events. We don’t have any UI elements which would benefit from multi-touch events, thus emulating mouse events internally is sufficient for the time being. If in future we add multi-touch aware UI elements, that would require changes.
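To make that a bit more concrete, here is a minimal sketch (not KWin’s actual code) of a filter in that chain which handles touch on decorations or internal windows by emulating mouse events. The InputEventFilter interface shown here is a stand-in modeled on the description above; KWin’s real signatures and helpers differ.

#include <QPointF>
#include <QtGlobal>

// Stand-in for KWin's filter base class: returning true stops the event
// from reaching later filters. Arguments: touch point id, position, timestamp.
class InputEventFilter
{
public:
    virtual ~InputEventFilter() = default;
    virtual bool touchDown(quint32, const QPointF &, quint32) { return false; }
    virtual bool touchMotion(quint32, const QPointF &, quint32) { return false; }
    virtual bool touchUp(quint32, quint32) { return false; }
};

class DecorationTouchFilter : public InputEventFilter
{
public:
    bool touchDown(quint32 id, const QPointF &pos, quint32 time) override
    {
        if (m_trackedId != -1 || !hitsDecorationOrInternalWindow(pos)) {
            return false; // not ours, let later filters or the forwarding handle it
        }
        m_trackedId = int(id); // only the first touch point drives the emulated pointer
        emulateMousePress(pos, time);
        return true;
    }
    bool touchMotion(quint32 id, const QPointF &pos, quint32 time) override
    {
        if (int(id) != m_trackedId) {
            return false;
        }
        emulateMouseMove(pos, time);
        return true;
    }
    bool touchUp(quint32 id, quint32 time) override
    {
        if (int(id) != m_trackedId) {
            return false;
        }
        emulateMouseRelease(time);
        m_trackedId = -1;
        return true;
    }

private:
    // Hypothetical helpers standing in for KWin's decoration/internal window handling.
    bool hitsDecorationOrInternalWindow(const QPointF &) const { return false; }
    void emulateMousePress(const QPointF &, quint32) {}
    void emulateMouseMove(const QPointF &, quint32) {}
    void emulateMouseRelease(quint32) {}
    int m_trackedId = -1;
};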
In case KWin does not intercept the touch sequence the events are passed on to the KWayland Server component which forwards the events to the Wayland window which is currently receiving touch events. KWin determines the window by using the window at the first touch down of the sequence. While a sequence is in progress the window cannot change.
The touch events are then processed by the application and can provide sensible functionality. E.g. our Plasma calendar supports a pinch-zoom gesture to switch to an overview of all months. This was developed under X11 and just works on Wayland without any adjustments. Good job, Qt devs!
At last week’s Plasma sprint touch gestures were an important discussion point. We decided which global gestures we want to support in Plasma. We hope to be able to deliver this for Plasma 5.10 on Wayland and will also look into getting the same on X11 by reusing the architecture written for Wayland, but that might land in a later release.
Global touch gestures have an interesting and useful property. When a sequence starts KWin does not know whether it will be a global gesture or a gesture which needs to be forwarded to the application. Thus all events must be sent to the application. Once KWin knows that this is a global gesture it can send a cancel event to the application, informing it that the touch sequence got canceled. This prevents conflicts between global and application touch gestures. On X11 this is not so comfortable, so we will have to see how we can support it there.
In the last blog post I discussed keyboard input. This blog post will be all about pointer devices – mostly known as “mouse”. Like my other posts in this series, this post only discusses the situation on KWin/Wayland.
Different hardware types
There are different kinds of devices which are recognized as pointer devices. We have the classic mouse/trackball-like devices, and on notebooks we find touchpads. Furthermore there are absolute positioning pointer devices, which are sometimes found on touch screens.
Given the differences between these devices there are quite a few configuration options available in libinput for pointer devices. There is for example pointer acceleration and many options defining how a touchpad should behave. We are currently working on a touchpad KCM for Wayland, so it looks like this will return with Plasma 5.9. As explained in the first blog post of this series, the configuration options are set as soon as the device is created.
Pointer motion
Pointer devices generate various events and one of them is the motion event. In general there are two kinds of motion events: absolute and relative. Most devices, like a mouse, generate relative motion events, which is why this blog post focuses only on them.
Determining new position
A relative motion is a distance vector with an x and y coordinate. It describes how the cursor position should be moved.
So once the event is read from the queue inside KWin the new position needs to be determined. It’s not as simple as taking the last position and adding the motion vector – that could result in the cursor leaving the visual area.
Instead the pointer motion gets validated and constrained. We ensure that the cursor doesn’t leave the visual area and also apply any pointer constraints an application window has set.
This is a new protocol KWin supports in Plasma 5.9. It allows a Wayland window to either lock the pointer to a position or to confine the pointer to an area. In the first case the pointer doesn’t move at all, in the second case it’s only allowed to move within a certain region of the window.
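As a rough illustration (a minimal sketch, not KWin’s code), determining the new position from a relative motion could look like this, with the lock and confinement cases from the pointer constraints protocol folded in:

#include <QPointF>
#include <QRectF>
#include <QtGlobal>

enum class Constraint { None, Locked, Confined };

QPointF applyPointerMotion(const QPointF &current, const QPointF &delta,
                           const QRectF &visualArea,
                           Constraint constraint, const QRectF &confineRegion)
{
    if (constraint == Constraint::Locked) {
        return current;                       // pointer lock: the position never changes
    }
    QPointF pos = current + delta;            // naive new position
    const QRectF &bounds = (constraint == Constraint::Confined) ? confineRegion : visualArea;
    // Clamp to the allowed area so the cursor cannot leave it.
    pos.setX(qBound(bounds.left(), pos.x(), bounds.right()));
    pos.setY(qBound(bounds.top(), pos.y(), bounds.bottom()));
    return pos;
}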
Processing new position
Even if the pointer motion is constrained in a way that the cursor doesn’t move, the event is processed further. An application might be interested in the relative motion and react to it, even if the cursor doesn’t move.
For the further processing a QMouseEvent is generated and sent through KWin’s input filters just like described for the keyboard case. The pointer motion might be handled inside KWin, e.g. the active screen edges need to know the current position. Or the pointer motion might be forwarded to a window through KWayland.
Updating the focused window
If the pointer moves it might be that the cursor moves from one window to another or from a window to its server-side decoration. This means that for every pointer position change KWin needs to evaluate which window is at the current position.
Compared to keyboard input, where KWin only needs to consider the active window, this is a rather complicated task. We need to consider input transformations applied to the screen or window, apply input masking on the window, consider the window decoration, check whether the screen is locked, work around issues in Xwayland prior to 1.18, and so on.
In the end the method might have determined a new Surface which gained pointer focus. KWin uses KWayland to update the focused pointer surface which ensures that the surface leave and enter events are emitted.
Of course the focused surface should not be updated on every mouse move: if a grab is active (a pointer button is pressed), it won’t be updated.
Updating the cursor image
If the pointer moved to a new position it might be that the cursor icon changed. Unfortunately this might require a roundtrip. One doesn’t know which icon a window wants to use until the motion has been sent to the window. A window might react in two ways: it updates the cursor image, or it doesn’t. In the first case KWin gets notified through KWayland that the image changed, in the second case there is no notification at all.
This means KWin doesn’t know what cursor icon is really valid when moving the cursor. So on pointer motion KWin updates the position of the cursor with the current cursor icon, but it might be that a frame later the client updates it. This is so far the only element in Wayland where I have to say that not every frame is perfect. The cursor could show the wrong icon.
When entering a window the cursor is undefined: until the window sends a cursor image, none is set. KWin doesn’t render a cursor in that case, which means that when entering a frozen window we don’t have a cursor at all. Something we have to improve on. Currently we don’t detect hung applications yet, and I think we cannot detect them at all, because clients use a dedicated Wayland event thread and thus always happily reply that everything is OK, even if it isn’t.
But it might also be that KWin needs to set a cursor image itself, e.g. when hovering a window decoration or when entering a special mode for selecting a window the cursor image is provided by KWin. KWin loads the cursor image from the theme. Internally KWin tracks the source the cursor image should be taken from: whether it’s a Wayland window, the window selection, or an effect setting a specific image.
Updating the actual cursor
The actual update of the cursor position and icon happens through KWin’s internal platform API. Every platform sets the cursor image in a different way. For our primary platform we use the DRM API to update the position and to update the image – if a new one is available.
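For the DRM case this boils down to two libdrm calls; the following is a simplified sketch (KWin’s real code handles buffers, hotspots and hardware capabilities more carefully):

#include <xf86drmMode.h>

void updateDrmCursor(int drmFd, uint32_t crtcId,
                     int x, int y,
                     uint32_t newImageHandle /* buffer handle, 0 = image unchanged */,
                     uint32_t width, uint32_t height)
{
    if (newImageHandle) {
        // Upload the new cursor image to the CRTC's cursor plane.
        drmModeSetCursor(drmFd, crtcId, newImageHandle, width, height);
    }
    // Move the cursor plane to the new position (in CRTC coordinates).
    drmModeMoveCursor(drmFd, crtcId, x, y);
}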
For the nested platforms like X11 and Wayland this happens through the windowing system specific calls. The nested platforms don’t allow to update the cursor position – this happens by the windowing system through the pointer motion. The cursor image, though, can be updated.
The virtual platform only knows the concept of a software cursor. That is the cursor gets rendered in the compositor rendering pass. Currently that is only implemented in the QPainter compositor and not yet available in the OpenGL compositor.
Button events
The next events supported by libinput for pointer devices are pointer button press/release events. These events carry the pointer button for which they were triggered.
Compared to pointer motion the event processing is way more straight forward. The pointer event is either intercepted by one of our event filters (e.g. Alt+Tab) or forwarded to the window currently having pointer focus.
This is a huge improvement over X11. On X11 if KWin wants to exclusively process pointer events it needs to grab the pointer. That implies that the screen cannot be locked. So if e.g. Present Windows is active the screen doesn’t lock. On Wayland this doesn’t matter any more. If Present Windows is active, the screen will lock and get all pointer events. Once unlocked Present Windows is still active. I was quite happy when I was able to add an auto test for that situation.
Axis events
Many pointer devices have one or two axes. The X11 core protocol implemented axis events as pointer buttons. With Wayland and libinput we now have dedicated axis events telling us which axis got scrolled and the delta. This is a big improvement compared to X11 as it means that “smooth scrolling” is part of the standard and not something added later on through an extension.
The handling is of course very similar to the other events. KWin creates a QWheelEvent and passes it through the various event filters and if no filter intercepted the event, it will be forwarded to KWayland::Server which in turn sends it to the focused pointer surface.
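To illustrate, here is a sketch of extracting the deltas from a libinput axis event; the conversion factor to Qt’s angle-delta units is my assumption of the usual wheel mapping, not necessarily what KWin does:

#include <libinput.h>
#include <QPoint>

QPoint axisEventToAngleDelta(libinput_event_pointer *event)
{
    double dx = 0.0;
    double dy = 0.0;
    if (libinput_event_pointer_has_axis(event, LIBINPUT_POINTER_AXIS_SCROLL_HORIZONTAL)) {
        dx = libinput_event_pointer_get_axis_value(event, LIBINPUT_POINTER_AXIS_SCROLL_HORIZONTAL);
    }
    if (libinput_event_pointer_has_axis(event, LIBINPUT_POINTER_AXIS_SCROLL_VERTICAL)) {
        dy = libinput_event_pointer_get_axis_value(event, LIBINPUT_POINTER_AXIS_SCROLL_VERTICAL);
    }
    // For wheel scrolling libinput reports degrees; Qt's QWheelEvent expects
    // eighths of a degree, so one classic wheel detent (15 degrees) becomes 120.
    return QPoint(int(dx * 8), int(dy * 8));
}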
Touchpad gestures
For touchpads we have further events. Libinput does not only recognize motion and press events on touchpads, but is also able to recognize multi-finger gestures. It supports two kinds of gestures: swipe and pinch/rotate. KWin gets the gesture events and forwards them through the normal event system. There is a special Wayland protocol for this, for which we added support in Plasma 5.9. This allows forwarding the pointer gestures to Wayland applications.
Unfortunately we do not really use these gestures yet. QtWayland doesn’t implement the protocol, so the forwarding doesn’t reach any application, and we don’t make use of it internally yet, e.g. for global gestures. We are still working on defining the gestures we want to support. I hope we have something for Plasma 5.9, but no promises.
Happy holidays
This is my last blog post for this year. Next year I will continue this series with a post about touch screen events and maybe also Wacom tablets.
I wish everyone happy holidays and a great start into 2017.
In the last blog post I explained how input devices are opened and handled in KWin. In this blog post I’ll have a closer look on keyboard devices and events.
Keyboards are not keyboards
Keyboards on Linux are weird. You don’t have one keyboard but many of them. Many devices announce themselves as a keyboard while supporting just one key. A good example is the power button, or an external headset which provides mute and volume up/down keys. From an input perspective such devices are also keyboards.
For us in KWin it is important to figure out what the keyboard really supports. If there is no “real” keyboard attached (or enabled), our virtual keyboard should be activated automatically. E.g. if you detach the keyboard from a convertible, it should turn into tablet mode by showing a virtual keyboard; when attaching the keyboard, the virtual keyboard should be disabled as the primary text input device. libinput provides a function to test which keys are supported and we use it to differentiate the classes of keyboards.
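A sketch of such a check with libinput (the concrete set of keys tested here is made up for illustration; KWin’s actual heuristic differs):

#include <libinput.h>
#include <linux/input.h>

bool isRealKeyboard(libinput_device *device)
{
    if (!libinput_device_has_capability(device, LIBINPUT_DEVICE_CAP_KEYBOARD)) {
        return false;
    }
    // Check a representative set of keys; a headset which only has
    // KEY_VOLUMEUP and KEY_VOLUMEDOWN will fail this test.
    const uint32_t requiredKeys[] = { KEY_Q, KEY_W, KEY_E, KEY_R, KEY_T, KEY_Y, KEY_SPACE, KEY_ENTER };
    for (uint32_t key : requiredKeys) {
        if (libinput_device_keyboard_has_key(device, key) != 1) {
            return false;
        }
    }
    return true;
}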
Keyboard events
Keyboards are the simplest input devices out there. Libinput only emits one event of type LIBINPUT_EVENT_KEYBOARD_KEY, and it only contains the key which was either pressed or released. KWin reads events from libinput in a dedicated thread, so each event only gets queued and our main thread is notified about the new event. Once the main thread processes the event, the event gets translated into our input redirection classes. All input events go through the input redirection, no matter from which source they are delivered. KWin does not only support events from libinput, but also the nested setups (KWin running on top of X11 or on top of another Wayland server) and the fake events used in our integration tests. This means that once the event reaches the input redirection we in general lose the information which device created it. Though recently we extended the internal API to optionally include the device in the event handling. This is used by the Debug Console to show on which device an event was generated. But more on that later.
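A rough sketch of that dedicated reading loop (locking, error handling and the hand-over to the main thread are left out):

#include <libinput.h>

// Called from the dedicated input thread whenever libinput's fd is readable.
void readLibinputEvents(libinput *context)
{
    libinput_dispatch(context);
    while (libinput_event *event = libinput_get_event(context)) {
        switch (libinput_event_get_type(event)) {
        case LIBINPUT_EVENT_KEYBOARD_KEY:
        case LIBINPUT_EVENT_POINTER_MOTION:
        case LIBINPUT_EVENT_TOUCH_DOWN:
            // In KWin the event would be wrapped in an own type, queued, and the
            // main thread notified; here we only show the draining of the queue.
            break;
        default:
            break;
        }
        libinput_event_destroy(event);
    }
}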
xkbcommon
Now the key press/release event has reached our central dispatching method KeyboardInputRedirection::processKey. The first (and most important) task is to update the keyboard state in xkbcommon. xkbcommon is used to translate a hardware key with a layout to the actual key symbol depending on the state of the keyboard (e.g. active modifiers). To give an example: if I press the “y” key (key code 21) while the “Shift” key is held, it will produce a “Z” with the German keyboard layout, but a “Y” with the English layout. Simplified, that’s the job of xkbcommon.
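Reduced to its core, that translation step looks roughly like this sketch (the evdev-to-XKB key code offset of 8 is standard; KWin’s real state handling is much richer):

#include <xkbcommon/xkbcommon.h>

xkb_keysym_t processKey(xkb_state *state, uint32_t evdevKeyCode, bool pressed)
{
    // Evdev key codes are offset by 8 to become XKB key codes.
    const xkb_keycode_t keycode = evdevKeyCode + 8;
    // Update the modifier/LED state first, so that e.g. a held Shift is taken
    // into account when translating the key.
    xkb_state_update_key(state, keycode, pressed ? XKB_KEY_DOWN : XKB_KEY_UP);
    // With the German layout and Shift held, key code 21 ("y" on US) yields XKB_KEY_Z.
    return xkb_state_key_get_one_sym(state, keycode);
}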
In KWin we have wrapped all functionality for xkbcommon in a dedicated class called Xkb. This class tracks for us the active layout and performs the layout switching (including showing the OSD when the layout changes). It knows the last composed key symbols, the currently active modifiers and the modifiers relevant for shortcut activation.
When updating the state of xkb we also check what changed. Did the user activate num lock? If yes, we need to announce that the LEDs changed so that our libinput code can update the LEDs on the physical keyboard. Did a modifier change? If yes, we need to inform our Wayland windows about the new modifier set. In Wayland this is tracked on the server, although the actual translation from key to symbol happens in the client. So why does KWin also do the translation? Because KWin needs the keysym in various places itself, e.g. for the filter in Present Windows or in general for triggering global shortcuts.
Our Xkb state updating functionality is also responsible for handling modifier only shortcuts. Actually it’s the wrong place for it, but our input filtering code does not guarantee that a filter sees all input events. For the modifier only shortcuts it’s essential to see all events, though, and the only place is directly in Xkb. Not the most elegant solution, but it works. This functionality is also used by X11 as I explained in an older blog post.
Filtering through KWin
Now KWin has enough information to process the key event. For that it creates a customized QKeyEvent and sends it through a chain of input filters. Each filter can perform an operation based on the event and decide whether the event should be processed further or whether processing should end.
For example, pretty early in the chain we have the lock screen filter. If the screen is locked this filter intercepts the event processing and ensures that the event is only sent to the screen locker and not to any window. There is also a filter ensuring that Ctrl+Alt+F1 works even if the screen is locked. Another filter is responsible for handling global shortcuts, and one passes events to our effects system (such as Present Windows).
The last filter in the chain is our forwarding filter. The task of this filter is to forward the events to a window. It passes the event to KWayland::Server from where it is sent to the currently focused Wayland surface.
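The chain itself can be pictured with a small sketch (not the real KWin classes): each filter sees the event and can consume it, and the forwarding filter is simply the last entry.

#include <QKeyEvent>
#include <QVector>

// Stand-in for KWin's filter base class.
class InputEventFilter
{
public:
    virtual ~InputEventFilter() = default;
    // Return true to consume the event and stop the chain.
    virtual bool keyEvent(QKeyEvent *) { return false; }
};

class InputDispatcher
{
public:
    // Filters are processed in installation order; the forwarding filter, which
    // passes the event to the focused Wayland surface, is installed last.
    void installFilter(InputEventFilter *filter) { m_filters.append(filter); }

    void processKeyEvent(QKeyEvent *event)
    {
        for (InputEventFilter *filter : m_filters) {
            if (filter->keyEvent(event)) {
                return; // e.g. the lock screen or a global shortcut consumed it
            }
        }
    }

private:
    QVector<InputEventFilter *> m_filters;
};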
Focused Keyboard surface
The Wayland server needs the focused keyboard surface for that. For keyboard focus this is relatively trivial in KWin: KWin has a concept of an “active” window. Before forwarding the event KWin verifies which window has keyboard focus; if there is an active window, the surface of that window is marked as the focused keyboard surface in KWayland::Server.
Our KWayland::Server library takes care of sending a keyboard leave and keyboard enter event to the respective windows, so that KWin doesn’t have to care about this. This is one of our advantages by having an abstraction with KWayland::Server – everything that is not of relevance to the compositor is handled directly in the library.
Key event processing in Wayland
The forwarding input filter has updated the keyboard surface and now sends the key event to the Wayland client. For that, all the processing into a key symbol is not needed; only the key code is sent to the client.
The client gets the key event through a callback and now also sends it through xkbcommon. In Wayland the keymap is sent from the server to the client, so that both server and client have the same keymap. The client can now do a translation from key code to key symbol, just like KWin did before.
The further event processing is handled inside the client. E.g. in Qt this will generate a QKeyEvent which is then sent to the focused widget.
Key Repeat
Keyboard input also has a special mode: repeating keys. When a key is held down, some keys should generate repeats. KWin uses the configuration from the keyboard module to decide when and how often a key should repeat. A repeating key is not forwarded to the Wayland clients. Instead KWin announces the key repeat settings through the Wayland keyboard protocol and the repeating is then handled directly in the client.
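A hypothetical sketch of what announcing those settings could look like – the config keys and the KWayland setter are assumptions for illustration, not verified API:

#include <KConfigGroup>
#include <KSharedConfig>

void announceKeyRepeat(/* KWayland::Server::SeatInterface *seat */)
{
    // Read the keyboard module's settings (key names assumed here).
    const KConfigGroup keyboard = KSharedConfig::openConfig(QStringLiteral("kcminputrc"))->group("Keyboard");
    const int rate = keyboard.readEntry("RepeatRate", 25);    // characters per second
    const int delay = keyboard.readEntry("RepeatDelay", 600); // ms before repeating starts
    // seat->setKeyRepeatInfo(rate, delay); // hypothetical call; ends up as wl_keyboard.repeat_info
}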
Unfortunately in Qt this is broken and a hardcoded value is used. So currently in a Plasma Wayland session key repeat is rather broken as it’s handled differently depending on the used application. KWin is correct, X11 applications are correct, GTK applications are correct, Qt applications are incorrect, if run on Wayland.
Recently I did some work on the input stack in KWin/Wayland, e.g. implemented pointer gesture and pointer constraints protocol, and thought about writing a blog series about how input events get from the device to the application. In the first blog post I focus on creating and configuring an input device and everything that’s related to get this setup.
evdev
Input events are provided by the Linux kernel through the evdev API. If you are interested in how evdev works, I recommend reading the excellent posts on that topic by Peter Hutterer. For our purposes the evdev API is too low level and we want to use an abstraction instead.
libinput and device files
This abstraction exists and is called libinput. It allows us to get notified whenever an input device gets added or removed and when an input event is generated. But not so fast. First of all we need to open the input devices. And that’s a challenge.
The device files are normally not readable by the user. That’s a good thing as otherwise every application would be able to read all key events. Getting a key logger would be very trivial in that case.
But if KWin runs as a normal user and the user is not able to read from the device files, how can KWin read them? For this we need some support. Libinput is prepared for the situation and doesn’t try to open the files itself, but invokes an open_restricted function the library user has to provide. KWin does so and outsources the task to open the file to logind. Logind allows one process to take control over the current session. And this session controller is allowed to open some device files. So KWin interacts with logind’s dbus API to become the session controller and then opens the device files through the logind API and passes them back to libinput.
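Sketched in code the dance looks roughly like this; the libinput_interface callbacks are the real API, while the logind session path used here is a simplification of how KWin resolves its session:

#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <libinput.h>
#include <QDBusConnection>
#include <QDBusMessage>
#include <QDBusUnixFileDescriptor>
#include <QtGlobal>

static int openRestricted(const char *path, int flags, void *userData)
{
    Q_UNUSED(flags)
    Q_UNUSED(userData)
    struct stat st;
    if (stat(path, &st) != 0) {
        return -1;
    }
    // Ask logind for the device instead of opening it ourselves.
    QDBusMessage call = QDBusMessage::createMethodCall(
        QStringLiteral("org.freedesktop.login1"),
        QStringLiteral("/org/freedesktop/login1/session/self"),
        QStringLiteral("org.freedesktop.login1.Session"),
        QStringLiteral("TakeDevice"));
    call.setArguments({uint(major(st.st_rdev)), uint(minor(st.st_rdev))});
    const QDBusMessage reply = QDBusConnection::systemBus().call(call);
    if (reply.type() == QDBusMessage::ErrorMessage || reply.arguments().isEmpty()) {
        return -1;
    }
    const auto fd = reply.arguments().first().value<QDBusUnixFileDescriptor>();
    // Duplicate the fd: the QDBusUnixFileDescriptor closes its copy when destroyed.
    return fcntl(fd.fileDescriptor(), F_DUPFD_CLOEXEC, 0);
}

static void closeRestricted(int fd, void *userData)
{
    Q_UNUSED(userData)
    // A full implementation would also call ReleaseDevice on logind here.
    close(fd);
}

// Handed to libinput_udev_create_context() so libinput never opens device files itself.
static const struct libinput_interface s_interface = {
    openRestricted,
    closeRestricted,
};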
This is the reason why for a full Wayland session KWin has a runtime dependency on logind’s DBus interface. Please note that this does not mean that you need to use logind or systemd. It only means that one process is required which speaks logind’s DBus interface.
Devices in KWin
Now libinput is ready to open the device files and emits a LIBINPUT_EVENT_DEVICE_ADDED event for each device. KWin creates a small facade class for each device type and applies the configuration options for it. KWin supports reading the configuration options set by Plasma’s mouse configuration module and has its own device-specific configuration file, which will soon allow the touchpad configuration module to configure touchpads on Wayland. Also, as part of setting up the device KWin enables LEDs – if the device supports them – for Num Lock and Caps Lock.
All the input devices created by KWin can be investigated in the Debug console (open KRunner, enter “KWin”). KWin reads a lot of information about the device from libinput and shows those in the Debug console. In the input event tab each of the events include the information which device generated the event.
All devices are also exported to DBus with the same properties as shown in the Debug Console. This means the configuration can be changed at runtime through DBus. KWin saves the configuration after it has been successfully applied and thus ensures that your settings are restored correctly when you restart your system or replug your external device. This is also an important feature to support the touchpad configuration module.
Last week I concentrated most of my development work on screenshot support through spectacle in a KWin Wayland session. Now I am happy to announce that we merged support for capturing a screenshot of a window with the help of an external application like spectacle.
To explain why this is a great achievement we first need to look at X11. On X11 taking a screenshot of a window is easy: it’s part of the X protocol to read the pixmap data of the root window, and you get the position and size of each window. Thus one is able to cut out the window and have a screenshot of it. That’s the simplest variant; spectacle, and previously ksnapshot, do it differently. More on that later on.
This is the way to screenshot the active window. If one wants to screenshot an arbitrary window the user needs to select it. Also for that the X protocol contains everything one needs: grab the mouse cursor, get the click and query the X window tree to figure out which window got clicked. Afterwards the window is screenshot the same way as explained above.
When bringing that to Wayland we hit some “problems”. Wayland is designed in a sane and secure way matching the security requirements of 2016 and not those of the 1980s. An application taking a screenshot of another window or of the complete system is nowadays not acceptable any more. And there is no built-in way to take a screenshot – neither of the full screen nor of a window.
And even if there were, it wouldn’t help much. Information about other windows is not available. One cannot find out which window is active or which window is under the current mouse position. Also an application is not able to grab the mouse and get a click anywhere on the screen; pointer input is only delivered to the window it is currently over.
So overall there are quite some obstacles to taking a screenshot, and we can see that it needs support from the Wayland compositor:
Selection of the window
Taking the screenshot
Communication protocol with the application
Luckily KWin is partially prepared for this already. Even on X11 KWin provides screenshot functionality to spectacle. A few years ago we wanted to have something better than the standard X screenshots: a window captured completely, without overlapping windows, and with the decoration and shadows included. Shadows are, in the case of KWin, rendered by the compositor and are not part of the X window. So in order to screenshot them we needed support from the compositor, just like on Wayland.
Unfortunately we didn’t think of something like “a successor to X11” back then and designed the interaction in a way suited more for KSnapshot than for a world without X. We used a DBus API which took the window id of the window to screenshot as argument and returned an X pixmap to ksnapshot.
Overall not suited for Wayland, but a very good starting point as the screenshot functionality is already available. So what we needed was an X-free DBus protocol and a way to select the window from within KWin.
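Purely as an illustration from the application side, such an X-free protocol can be driven over DBus; the interface name below matches KWin’s existing screenshot interface, while the method name and the idea of handing over a pipe file descriptor for the image data are my assumptions, not a documented API:

#include <QDBusInterface>
#include <QDBusUnixFileDescriptor>
#include <QVariant>
#include <unistd.h>

void requestInteractiveScreenshot()
{
    int pipeFds[2];
    if (pipe(pipeFds) != 0) {
        return;
    }
    QDBusInterface kwin(QStringLiteral("org.kde.KWin"),
                        QStringLiteral("/Screenshot"),
                        QStringLiteral("org.kde.kwin.Screenshot"));
    // Hypothetical call: KWin lets the user pick a window and writes the result to the fd.
    kwin.asyncCall(QStringLiteral("interactive"),
                   QVariant::fromValue(QDBusUnixFileDescriptor(pipeFds[1])));
    close(pipeFds[1]);
    // ... read the image data from pipeFds[0] once KWin has written it ...
    close(pipeFds[0]);
}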
Just like spectacle, KWin also has functionality to select a window through mouse interaction: the kill window feature triggered through Ctrl+Alt+Escape. So far this functionality was only available for X11 and X11 windows; we were not able to do the same on Wayland.
For taking the screenshot I wondered whether we could use this functionality in a more generic way: a feature to interactively select a window. This required a slight refactoring. Of course the X11 way of selecting a window doesn’t help much, but the idea carries over. The X11-specific code got moved into the X11 standalone platform plugin and is now invoked through the internal platform API. It doesn’t directly kill the window any more, but only returns the window which got selected.
A similar interaction code got added for Wayland and now the kill window functionality can be triggered on both X11 and Wayland and can kill both X11 and Wayland windows.
Now all we needed was to make this functionality known to the screenshot code. With that we have an interactive way to take a screenshot. This keeps the user in control of the process: the user is informed that a screenshot is being taken and how to cancel it. This addresses the security concerns we had about taking screenshots: by making the user perform an explicit action we know that the user agreed to taking the screenshot.
Now all that was needed was adjusting spectacle. Spectacle is a rather new and modern application which had multiple windowing systems in mind when the implementation started. So far I had not done any work on spectacle and the code base was new to me. Nevertheless, in about an hour I had the screenshot selection working.
Not everything is supported though. Fullscreen or screen area screenshots are not yet supported. But given that the primary problem is solved now, this will also be addressed soonish.
If you like the work we are doing for Wayland consider participating in our end of year fundraiser.
Two weeks have passed since the Plasma 5.8 release and our Wayland efforts have seen quite some improvements. Some changes went into Plasma 5.8 as bug fixes, some changes are only available in master for the next release. With this blog post I want to highlight what we have improved since Plasma 5.8.
Resize only borders
KWin’s server side decorations have a feature that one can resize the window in the shadow area. With the Breeze window decoration this is available if one uses the border size “No Side Borders” or “No Borders”. For Wayland we just had to adjust the input area of a window slightly and honor it when evaluating the mouse pointer movements.
Global Shortcut handling
We found a few bugs related to global shortcut triggering. There is some unexpected behavior for shortcut triggering in xkbcommon, which will be addressed in the next release by adding new API. For now we had to work around it to support some shortcuts which no longer triggered. Of course for every kind of shortcut which did not trigger we added a test case, so that we can also ensure in future that this works once the new xkbcommon release is available. At the moment we are not aware of any non-working global shortcuts on Wayland. If you hit one, please report a bug.
Support for Keyboard LEDs through libinput
KWin did not enable the LEDs for num lock, caps lock, etc. This was mostly because I don’t have any keyboard which has such LEDs – neither my desktop keyboard nor my two notebooks have any LEDs. So I just didn’t notice that this was missing. Once we got the bug report we looked into adding this. I want to take this as an example of the “obvious bug” one doesn’t report because it’s so obvious. But if one doesn’t have such hardware it’s not so obvious any more.
Relative pointer support
A feature we added for Plasma 5.9 is support for the relative pointer protocol.
The protocol is implemented in KWayland 5.28 and KWin has been adjusted to support relative pointer events, as can be seen in the input debug console. This is a rather important protocol for supporting games on Wayland. We also plan to add pointer confinement for Plasma 5.9.
Move windows through the widget style
Our widget styles Breeze and Oxygen have a feature to move the window when clicking into empty areas. This is a feature which needs to interact with the windowing system directly as Qt doesn’t provide an abstraction for it. On X11 it uses NETRootInfo::moveResizeRequest; on Wayland, support for triggering a window move is built into the core protocol. But so far we were not able to provide the feature on Wayland as we just didn’t get enough information out of QtWayland. For example we lacked access to the wl_shell_surface on which we have to trigger the move. So some time ago I added support to QtWayland for accessing the wl_shell_surface through the native interface, and now, about a year later, we can start to use it. To support this feature we need to create our own wl_seat and wl_pointer objects and track the serial of the pointer button press. This we can then pass to the move request on the ShellSurface. The change is not KWin specific at all and will work on all Wayland compositors.
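Condensed into a sketch, the client-side part looks roughly like this; the native resource key is an assumption on my side, and seat and serial come from the widget style’s own wl_seat and wl_pointer as described above:

#include <QGuiApplication>
#include <QWindow>
#include <qpa/qplatformnativeinterface.h>
#include <wayland-client-protocol.h>

void startInteractiveMove(QWindow *window, wl_seat *seat, uint32_t lastPointerPressSerial)
{
    QPlatformNativeInterface *native = QGuiApplication::platformNativeInterface();
    // Resource key "wl_shell_surface" is assumed for illustration.
    auto *shellSurface = static_cast<wl_shell_surface *>(
        native->nativeResourceForWindow(QByteArrayLiteral("wl_shell_surface"), window));
    if (!shellSurface) {
        return; // not running on a wl_shell based compositor
    }
    // Ask the compositor to start an interactive move, the Wayland counterpart
    // to NETRootInfo::moveResizeRequest on X11.
    wl_shell_surface_move(shellSurface, seat, lastPointerPressSerial);
}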
Color scheme sync to decoration
A feature we added back in KWin 5.0 is the possibility to synchronize the color scheme of a window to the window decoration and the context menu on the decoration. On X11 this works through a property which our KStyle library sets. This was the best we could do back in the early days of the 5.x series as Qt didn’t expose enough information. It has the disadvantage that the sync only works with QWidget-based applications and only with widget styles inheriting KStyle. For Plasma 5.9 we improved that and moved the relevant code into plasma-integration. The restriction to QWidget is gone and it now works with all kinds of windows by listening to the QPlatformSurfaceEvent. This very useful event got added in Qt 5.5 and informs us when a native window is created for a QWindow. Thus we can add our own X11 properties on the native window directly after creation and before the window is mapped.
While adjusting this code for X11 we also added the relevant bits for Wayland. We use the Qt Surface Extension protocol to pass a property to the server. That’s a small and neat addition the Qt devs did to allow communication between a Qt-based client and a Qt-based Wayland compositor. The color scheme now updates for Wayland applications as well.
Window icons
Window icon handling on Wayland is different from X11. On X11 the icons are passed as pixmaps. That has a few disadvantages nowadays because the icons provided on the window might not have a high enough resolution to work well on high-dpi systems; the icon from the icon theme, though, provides higher resolutions. On Wayland there is no way to pass window icons around and the compositor takes the icon from the desktop file of the application. This works well except when there is no desktop file. For such windows we now use a generic Wayland icon as the fallback, just like we use a generic X icon as fallback for X11 windows which don’t have an icon.
That’s an icon one might have noticed when using a Plasma Wayland session, as every Xwayland window only had the generic X icon in the task manager. The communication between KWin and the task manager also passes the icon name around and not pixmap data. This works well for everything except Xwayland, where we normally just don’t have a name. For Plasma 5.9 we addressed this problem and extended our protocol to request pixmap data for a window icon which doesn’t have a name. Thus we are now able to also support Xwayland windows, which increases the usability of the system quite a lot.
Multi screen effect improvements
On Wayland several of our effects broke in a multi-screen setup. This is because rendering is different. On X11 all screens are rendered together in one rendering pass and we have one OpenGL window to render to. On Wayland we have one OpenGL window per screen and have one rendering pass per screen. That’s something our effects didn’t handle well and resulted in rendering issues. For Plasma 5.9 these issues are finally resolved.
Wobbly windows
One of the affected effects is Wobbly windows. A rather important effect given that this blog is subtitled “From the land of wobbly windows”. We experienced that in a multi-screen setup the effect was only active on one screen. If the window got moved to the other screen it completely vanished.
I was quite certain that this is not a problem with the effect itself, but rather with the way how we render. As we also saw other effects having rendering issues in multi-screen setups I was quite optimistic that fixing wobbly would fix many effects.
The investigation showed that the problem in fact was an incorrect area passed to glScissor, due to the general differences in rendering explained above: rendering on the other screens got clipped away. With the proper change we got wobbly windows working, and several other effects (Present Windows, Desktop Grid and Alt+Tab, for example) with it, without having to touch the effects at all.
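The essence of such a fix, as a sketch: the scissor rectangle has to be translated into the coordinate space of the screen currently being rendered (with OpenGL’s bottom-left origin) instead of using global layout coordinates.

#include <QRect>
#include <epoxy/gl.h>

void setScissorForOutput(const QRect &globalClip, const QRect &outputGeometry)
{
    // Translate the global clip rect into this output's local coordinates.
    const QRect local = globalClip.translated(-outputGeometry.topLeft())
                                  .intersected(QRect(QPoint(0, 0), outputGeometry.size()));
    glEnable(GL_SCISSOR_TEST);
    // Flip the y axis: Qt rects have their origin top-left, GL bottom-left.
    glScissor(local.x(),
              outputGeometry.height() - local.y() - local.height(),
              local.width(),
              local.height());
}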
Screenshot
With that knowledge in place we looked into fixing other effects, e.g. the screenshot effect which allows saving a screenshot in the tmp directory. A few examples of screenshots taken with this effect can be seen in this blog post. The problem with this effect was that when taking a fullscreen shot over all screens only one got captured. The assumption was that our glBlitFramebuffer code needed to be adjusted to work per output, and indeed with that change we can now screenshot every screen individually or all screens combined.
Blur and Background Contrast
Related to that are the blur and background contrast effects, as they also interact with the framebuffer, though they don’t use the glBlitFramebuffer extension. With those effects one of the biggest problems was that the viewport got restored to a wrong value after unbinding the framebuffer object. Due to that the rendering got screwed up and we had severe rendering issues with blur on multi-screen. These issues are now fixed: both screens render correctly even with blur enabled.
Panel improvements
Plasma’s panel got some improvements for Plasma 5.9. This started from bug reports about “Windows can cover” not working and auto-hide not working – another example of why it is important to report bugs.
Auto hiding panel
On X11 auto-hiding panels use a custom protocol with KWin to indicate that they want to be restored when the mouse cursor touches the screen edge. It uses low level X11 code, thus we also needed a low level Wayland protocol for it. We extended our plasma shell protocol to expose the auto-hide state and implemented it in both KWin and Plasma.
Search in widget explorer
We had a bug report that search in the widget explorer doesn’t work. The investigation showed that the reason is that the widget explorer is a panel window, and we designed panels on Wayland so that they don’t take any keyboard focus. This is correct for the normal panel, but not for this special panel. We adjusted our protocol to provide an additional hint that a panel takes focus and implemented it in kwayland-integration in a way that the widget explorer gains focus without any adjustments to it.
KRunner as a panel
Of course there are more potential users for this new feature, one being KRunner. Once we had the code in place we decided to make KRunner a panel on Wayland, which brings quite some improvements: it will be above other windows and on all desktops.
The announcement of KDE Neon dev/unstable switching to Wayland by default raised quite a few worried comments as NVIDIA’s proprietary driver is not supported. One thing should be clear: we won’t break any setups. We will make sure that X11 is selected by default if the given hardware setup does not support Wayland. Nevertheless I think that the amount of questions show that I should discuss this in more detail.
NVIDIA does support Wayland – kind of. The solution they came up with is not compatible with any existing Wayland compositor and requires patches to make it work. For the reference implementation, Weston, there are patches provided by NVIDIA, but those have not been integrated yet. For KWin such patches do not exist and we have no plans to develop such an adaptation as long as the patches are not merged into Weston. Even if there were patches, we would not merge them as long as they are not merged into Weston.
The solution NVIDIA came up with requires different code paths. This is unfortunate as it would require driver-specific adjustments and driver-specific code paths. This is bad for everybody involved: for us developers, for the driver developers and most importantly for our users. It means that we developers have to spend time on implementing and maintaining a solution for one driver – time which could be spent on fixing bugs instead. We could do such an effort for one driver, but once every driver requires its own adjustments it becomes unmanageable.
But adjustments for even one driver are problematic. The latest NVIDIA driver caused a regression in KWin: on Quadro hardware (other hardware seems not to be affected) our shader self test fails, which results in compositing getting disabled. If one removes the shader self test everything works fine, though. I assume that there is a bug in KWin’s rendering of the self test which is only triggered with this driver. But as I don’t have such hardware I cannot verify it. Yes, I did pass multiple patches for investigating and trying to fix it to a colleague with such hardware. No, please don’t donate hardware to me.
In the end, after spending more than half a day on it, we had to go for the worst option, which is to add a driver- and hardware-specific check to disable the self test and ship it with the 5.7.5 release. It’s super problematic for code maintainability to add such checks. We are hiding a bug we cannot investigate, and we are now stuck with an implementation where we will never be able to say “we can remove that again”. Driver specific workarounds tend to stick around. E.g. we have such a check:
// Broken on Intel chips with Mesa 9.1 - BUG 313613
if (gl->driver() == Driver_Intel && gl->mesaVersion() >= kVersionNumber(9, 1) && gl->mesaVersion() < kVersionNumber(9, 2))
    return;
It's nowadays absolutely pointless to have such code around as nobody is using such a Mesa version any more. But the code is still there, makes things more complex and has a maintenance cost. This is why driver-specific implementations are bad and are nothing we want in our code base.
People asked to be pragmatic, because NVIDIA is so important. I am absolutely pragmatic here: we don't have the resources to develop and maintain an NVIDIA specific implementation on Wayland.
Also some people complained that this is unfair because we do have an implementation for (proprietary) Android drivers. I need to point out that this does not compare at all.
First of all our Android implementation is not specific for a proprietary driver. It is written for the open source hwcomposer interface exposed through libhybris. All of that is open source. The fact that the actual driver might be proprietary is nothing we like, but also not relevant for our implementation.
In addition the implementation is encapsulated in a platform plugin and significantly reduced in functionality (only one screen, no cursor, etc.). This is something we would not be able to do for NVIDIA (you would want multi-screen, right?).
For NVIDIA we would have to add a deviation in the DRM platform plugin to create the OpenGL context in a different way. This is something our architecture does not support and was not created for. The general idea is that if creating the GBM-based context fails, KWin terminates. Adding support there for a different way to get an OpenGL context up and running would add lots of complexity to a very important code path. We have to ensure that KWin terminates if OpenGL fails, and at the same time we have to make sure that llvmpipe is not picked if NVIDIA hardware is used. This would be a horrible mess to maintain – especially if developers are not able to test it without huge effort.
From what I understand from the patch set it would also require significant changes to how a frame is presented on an output and by that make our lower level code more complex as well. This code is currently able to serve both our OpenGL and our QPainter based compositors, but wouldn't be able to support NVIDIA's implementation. Changes there would hinder future development of the platform plugin. This is an important area we are working on: KWin 5.8 contains a new additional implementation making use of atomic mode setting, and we want atomic mode setting used everywhere in the stack to have every frame perfect. NVIDIA's implementation would make that difficult.
EGLStreams bring another disadvantage, as the code to bind a buffer (what a window renders) to a texture (what the compositor needs for rendering) would need changes. Binding the buffer is currently performed by KWin core and is not part of the plugin infrastructure, so new additional code would also be needed there. We don't need that for any other platform we currently support; e.g. for hwcomposer on Android, libhybris takes care of allowing us to use EGL the same way as on any other platform. I absolutely do not understand why changes would be needed there – existing code shows that it can be done differently. And here we see again why I think the situation with EGLStreams does not compare at all to supporting hwcomposer.
Overall we are not thrilled by the prospect of two competing implementations. We do hope that at XDC the discussions will come to a positive end and that there will be only one implementation. I don't care which one, I don't care whether one is better than the other. What I care about is only requiring one code path, the possibility to test with free drivers (Mesa) and support for atomic mode setting. Ideally I would also prefer not having to adjust existing code.
At Desktop Summit 2011 in Berlin I did my first presentation on Wayland and presented the idea of Wayland to the KDE community and explained to the KDE community how we are going to port to Wayland. This year at QtCon in Berlin I was finally able to tell the KDE community that the port is finished and that our code is ready for testing.
In 2011 I used a half hour slot to mostly present the differences between X11 and Wayland and why we want Wayland. In addition I presented some of the expected porting steps and what we would have in the end. This year I only used a 10 min lightning talk slot to give the community an update on the work done over the last year.
Of course the work on Wayland is not yet finished and Wayland is not yet fully ready for use. There are missing features and there must be bugs (new code base, etc.). But we are in a state to start the public beta.
What is interesting is comparing the slides from 2011 to what we have achieved. The plan presented there was to introduce “Window Manager Backends” in KWin. We wanted to identify windowing-system-independent areas, make our two most important classes Toplevel and Workspace X11-free and add a window manager abstraction. During the port this wasn’t really an aim; nevertheless we got there. We do have a window manager abstraction which would allow adding support for further windowing systems. Toplevel is (at runtime) X-free. Workspace, though, is not yet X-free, but that moved onto my todo list.
Also we thought back in 2011 that this might be interesting for other platforms, naming Android, WebOS and Microsoft Windows as examples. Android we kind of achieved by having support for Android’s hwcomposer and being able to run Wayland on top of an Android stack; support for Android’s surfaceflinger is something we do not aim for. The example of WebOS doesn’t really fit any more as WebOS uses Wayland nowadays. And Windows is only in the area of the theoretically possible (though with the new Linux support it would be interesting to try to get KWin running on it).
KWin nowadays has a platform abstraction and multiple platform plugins. This allows us to start a Wayland compositor on various software stacks. Currently we support:
DRM
fbdev
hwcomposer (through libhybris)
Wayland (nested)
X11 (nested)
virtual
Adding support for a new platform is quite straightforward and doesn’t need a lot of code. The main tasks of a platform are to create the OpenGL context for the compositor and to present each frame on the platform-specific output. All platforms together are less than 10000 lines of code (cloc) and a single platform is around 400-3000 lines of code.
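To illustrate the shape of that abstraction (not the real KWin API, just the idea described above):

// Hypothetical interface sketching what a platform plugin has to provide.
class AbstractPlatform
{
public:
    virtual ~AbstractPlatform() = default;
    // Create whatever is needed to get an OpenGL context for the compositor
    // (GBM/EGL on DRM, a window on the nested X11/Wayland platforms, ...).
    virtual bool createOpenGLContext() = 0;
    // Hand the rendered frame to the platform specific output, e.g. a DRM
    // page flip, a buffer swap on a nested window, or nothing at all for the
    // virtual platform.
    virtual void presentFrame(int screenId) = 0;
};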
In order to add support for a new windowing system more work would be needed. It is very difficult to estimate how much code would be required as it all depends on how well the concepts can be mapped to Wayland. Ideally adding support for a new windowing system would be done by creating an external application which maps the windowing system to Wayland, just like Xwayland maps X11 to Wayland. But as we can see with Xwayland this might not be enough: KWin also needs to be an X11 window manager to fully support X11 applications. So it really depends on the windowing system how much work is needed.
One could also add a new windowing system the same way as we added support for Wayland. This would require implementing our AbstractClient to have a representation for a managed window of that windowing system and adding support for creating a texture from the window content. In addition various places in KWin would need to be adjusted to also consider these windows. Not a trivial task, and going through a mapping to Wayland is always the better solution. But still it’s possible, and this makes KWin future-proof for possible other windowing systems. In general KWin doesn’t care any more about the windowing system of a window: we can have X11 windows on Wayland and Wayland windows on X11 (experimental branch only, not yet merged).
This brings me back to my presentation from 2011. Back then we expected to have three phases of development. The first phase: adding Wayland support to the existing X11-based KWin. That was what we experimented with back then and, as I just wrote, still experiment with. As it turned out, that was not the proper approach for development.
As a second phase we expected to remove X and have a Wayland-only system. At the moment we still require Xwayland to start KWin/Wayland. During the development it showed that this is not really needed: it was easier to keep the existing X11 code and have it interact through Xwayland – we could keep the X code and move faster.
The third and final phase was about adding back Xwayland support, so that KWin can support both X11 and Wayland windows. That’s the phase we developed directly, which is kind of interesting: we went straight to the final step although we thought we would need easier intermediate steps.
During this year’s Akademy we had a few discussions about Wayland, and the Plasma and Neon team decided to switch Neon developer unstable edition to Wayland by default soonish.
There are still a few things in the stack which need to be shaken out – we need a newer Xwayland in Neon, we want to wait for Plasma 5.8 to be released, we need to get the latest QtWayland 5.7 build, etc. etc.
This is really exciting. It’s probably the biggest step towards Wayland by default the KDE community has ever taken. I hope that other continuous delivery systems will follow so that we can get many enthusiastic users to try Wayland.
For a family celebration I wanted to create a “Photo-Box” or “Selfie-Box”: a place where the guests can trigger a photo of themselves without having to use tools like a selfie-stick.
The requirements for the setup were:
Trigger should be remote controlled
The remote control should not be visible or at max hardly visible
The guests should see themselves before taking the photo
All already taken photos should be presented in a slide show to the guests
The camera in question supported some but not all of the requirements. Especially the last two were tricky: while it supports showing a slide show of all taken photos, the slide show ends as soon as a new photo is taken. But the camera also has a USB connector, so the whole power of a computer could be brought in.
A short investigation showed that gphoto2 could be the solution. It allows one to completely remote-control the camera and download the photos. With that all requirements can be fulfilled. But by using a computer a new requirement got added: the screen should be locked.
This requirement created a challenge. As the maintainer of Plasma’s lock screen infrastructure I know what it is good at, and that is blocking input and preventing other applications from being visible. Thus we cannot just use e.g. Digikam with gphoto2 integration to take the photo – the lock screen would prevent the Digikam window from being visible. Also there is no way to have a slide show in the lock screen.
Which means all requirements must be fulfilled through the lock screen infrastructure. A result of that is that I spent some time on adding wallpaper plugin support to the lock screen. This allows reusing Plasma’s wallpaper plugins and thus also the slide show plugin. One problem solved, and all Plasma users can benefit from it.
But how to trigger gphoto2? The idea I came up with was using KWin/Wayland. KWin has full control over the input stack (even before the lock screen intercepts it) and also knows which input devices it is interacting with. As a remote control I decided to use a Logitech Presenter and accept any clicked button on that device as the trigger. The code looks roughly like the following sketch:
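(The snippet below is a reconstruction along those lines, not the original code: the base class is a stand-in for KWin’s InputEventFilter and the device check is illustrative.)

#include <QMouseEvent>
#include <QProcess>
#include <QtGlobal>

// Stand-in for KWin's InputEventFilter base class.
class InputEventFilter
{
public:
    virtual ~InputEventFilter() = default;
    virtual bool pointerEvent(QMouseEvent *, quint32) { return false; }
};

class PhotoTriggerFilter : public InputEventFilter
{
public:
    bool pointerEvent(QMouseEvent *event, quint32 nativeButton) override
    {
        Q_UNUSED(nativeButton)
        if (event->type() != QEvent::MouseButtonPress || !eventFromPresenter(event)) {
            return false; // not the remote control: continue normal processing
        }
        // Capture a photo and download it into the directory the lock screen
        // slide show is watching.
        QProcess::startDetached(QStringLiteral("gphoto2"),
                                QStringList{QStringLiteral("--capture-image-and-download")});
        return true; // swallow the event, nothing else gets to see it
    }

private:
    bool eventFromPresenter(QMouseEvent *event) const
    {
        Q_UNUSED(event)
        // Placeholder: KWin can check the libinput device which generated the
        // event (e.g. by comparing the device's name or vendor/product id).
        return true;
    }
};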
And in addition the method InputRedirection::setupInputFilters needed an adjustment to install this new InputFilter just before installing the LockScreenFilter.
The final setup:
Camera on tripod
Connected to an external screen showing the live capture
Connected to a notebook through USB
Notebook connected to an external TV
Notebook locked and lock screen configured to show slide show of photos
Logitech Presenter used as remote control
The last detail which needed adjustment was the lock screen theme. The text input was destroying the experience of the slide show, so a small hack to the QML code was needed to hide it and reveal it again on pointer motion.
What I want to show with this blog post is one of the advantages of open source software: you can adjust the software to your needs and turn it into something completely different which fits your use case.