How we could use Bugzilla for User Support

On the kde-core-devel mailing list we are currently having a very interesting discussion about how we use our Bugzilla installation and about user support in general. The first interesting thing to notice is that there are different developer groups who have different expectations of Bugzilla and who see it being used in different ways.

  1. library developers: the users of libraries are themselves developers. Bug reports are useful and have a high level of entropy, that is, a high information content. In general these developers want reporting bugs to be as easy as possible. Example: the KDE frameworks
  2. developers of advanced applications: the user group is very specific and the user base is quite experienced. The number of bug reports is still rather small and the reports are very useful to the product. Examples: Krita and, to a certain degree, KDE PIM
  3. developers of end-user applications: a large number of bug reports, most of which are user support issues. The entropy is very low and it is very difficult for the developers to find useful information and actual bugs to fix. Examples: Plasma Desktop and KWin

From these three developer groups we can see that there are different expectations of what Bugzilla is used for: a developer tool versus an end-user support tool. From the discussion it seems so far that all groups want Bugzilla as a developer tool, but that the first two groups also consider it a user support tool.

Bugzilla is no end-user support tool

For KWin, our Bugzilla installation is in fact abused as an end-user support tool, and given what we see there I don’t think that Bugzilla is suited for end-user support.

  • Successfully solved problems cannot be marked as such: a solved support issue is not a “fixed” bug, but in fact an “invalid” one
  • Users cannot find their issue, as successfully solved issues are marked as “invalid”
  • The user interface is too technical for user support, e.g. “severity”, “priority”, etc.
  • Useful information is hidden in a mess of useless information, e.g. duplication notices
  • Moderation is not possible
  • The user interface is tailored towards a defect management system
  • Crash information hides the workarounds that prevent the crashes
  • Release announcements state how many bugs have been resolved

Given that, I think we should make sure that user support issues do not end up in the bug tracker. It is not a tool users want to use. Looking at the bugs we have in KWin, I see that users think they have found a bug when in fact it is just a user support issue. We have to bring users to the right tool to solve their problems without creating more work for the developers.

First, Second and Third level support

As the first stage of end-user support we need (and have) a working first level support system. This can and should be distributed over many channels such as forums, mailing lists, IRC channels and maybe even phone support. First level supporters take care of the issue and help the user. Many problems can be solved directly – e.g. by telling the user which option to set.

In case the first level support is unable to solve the problem, the second level support has to be involved. The second level support verifies that the first level support tried everything to solve the problem. If that is the case, the second level support investigates whether there is a real issue (aka a bug) and whether it is already known in the bug tracker. If it is not, the second level support informs the third level support about the issue by opening a bug report.

The third level support are the actual developers. They have a look at the issue and verify whether it is a bug or not. If it is a bug, they accept the bug report and inform the user, through the first level support tool the issue started in, that the development team has taken over the issue.

If it is not a bug, the third level supporters provide the required information and pass the issue back to the second level support. These supporters use the provided information to update the documentation for the first level support, so that if the problem arises again it is well documented and the first level support knows how to handle it without escalating it to the second level again.

How could this be implemented?

In my opinion the best tool for first level support is a forum, which we already have and which works quite well. The second level support is mostly missing, so a way to inform the second level support needs to be introduced. This could be done with a tool that creates a bug report from a user support thread and assigns it to the second level support.

In this bug report the second level support can handle the work in a task oriented way. If they need to involve the third level support, they just inform them by setting a flag. The third level support can then investigate the issue and, in case there is a real bug, open a bug report for this specific task and have it block the support report.

Advantages of the approach

The biggest advantage is that the developers only get real bug reports, and each bug report has all the required information, as the report is in fact created by developers. Only valid bug reports would enter the system.

In general the bug tracker would be used in a more task oriented way: one task for the investigation, one task for fixing the bug, and so on. Bugzilla is well suited for such work, as it is possible to add flags, introduce bug dependencies and, if required, duplicate bugs. It would allow us to use Bugzilla to manage our work, which is currently not possible.

The users would always be informed about the state of their support request. They would get the feedback that their problem is under further investigation, that a defect report has been created, and that it has been fixed. It would be a much more transparent process for the user.

As feedback always flows back to the second level support, the end-user and first level support documentation would get updated, increasing our quality not only in the software but also in the documentation.

Disadvantages of the approach

The biggest disadvantage is clearly that this would result in a closed bug tracker. For many people involved in free software an open bug tracker is the ultimate tool. Here I think the advantages for all users are overwhelming, and a closed bug tracker is a small price to pay.

Another disadvantage is that it requires creating multiple bug reports and re-entering data for the same issue. Again I think it is a small price to pay, especially if we move to a task oriented approach and have everything nicely linked.

I hope to see a nice discussion in the comment section of my blog post. Hopefully many people will see the advantages of this approach, and I would love to try such a new issue tracking workflow for KWin as an experiment for the complete KDE community.

The Costs of Supporting Legacy Hardware

The interesting IT news of last week is probably that the next Mac OS X version will drop support for some legacy hardware. Looking back at the history of Apple we see that this is not the first time: the company has dropped support for old hardware quite regularly by changing the CPU architecture. This time it is different, as the GPU is the reason for dropping support.

This made me wonder: what are the actual costs of supporting legacy hardware in KWin? While it is completely acceptable that new Windows and Mac OS X versions do not run on legacy hardware, there seems to be a demand that free software support all kinds of legacy hardware. So what does supporting legacy hardware cost KWin?

What’s important to remember when asking this question is that supporting legacy hardware has nothing to do with supporting low-end hardware. Supporting OpenGL ES 2.0 hardware, for instance, is in fact an improvement, as most of that code is shared with the OpenGL 2 backend, while supporting OpenGL 1.x hardware really requires different code paths. So optimizing for low-end hardware improves the overall system, while optimizing for legacy hardware might in fact decrease the quality of the overall system.

So what is the legacy hardware we are facing with KWin? Basically anything not supporting at least OpenGL 2.0 and not supporting non-power-of-two (NPOT) textures. The latter really causes headaches, as we are unable to develop for it and have shipped broken support for it during the last release cycles.

Up until recently Mesa did not support OpenGL 2, but nowadays this is no longer a problem, so we can be sure that if OpenGL 2 is not supported, the hardware is lacking features. On ATI/AMD, OpenGL 2 has been supported since the R300 (with limitations in NPOT support), which was released in 2002. On NVIDIA, OpenGL 2 has been supported since the NV40, released in 2004. On Intel, OpenGL 2 has been supported since the i965, released in 2006. (All this information is taken from KWin code.)

This means that when I talk of legacy hardware, I mean hardware which is at least six years old. Supporting such hardware comes at a cost. On Intel, for example, you cannot buy the GPUs separately, so you have to get a six year old system just to test. With ATI I faced the problem that even if I wanted to test an old R200, I could not install it in my system, because my system no longer has an AGP slot – and the same holds true for old NVIDIA cards.

So the only way to test on real hardware which is not OpenGL 2 capable is to use a complete system as old as the GPU. There is a saying that free software development is about “scratching your own itch”, and I must admit that I cannot scratch any itch by running legacy hardware – especially not when I want to develop. I have a rather powerful system so that I do not have to wait for compile jobs to finish. Developing on a several year old single core system with maybe a gigabyte of RAM is nothing I want to do.

So in fact we cannot develop for the legacy hardware. Now let’s have a look at what legacy hardware support costs us. In KWin it comes in the form of the OpenGL 1.x backend and, to a certain degree, the XRender backend. XRender has some usefulness beyond legacy hardware, as it provides translucency on virtual systems; nevertheless we should consider it a legacy support system. According to SLOCCount the XRender backend is about 1000 lines of code, but I can only count the discrete XRender source files – this does not include the branches in other code paths and so on.

Getting an overview of the OpenGL 1.x related code is rather difficult. SLOCCount cannot help, as the code just uses different branches in the same files. Looking at the code it is clear that several hundred lines are dedicated to OpenGL 1.x, unfortunately scattered across many files. Overall about 5% of our core code base is dedicated to supporting legacy hardware. All OpenGL 1.x related code is also ifdefed to hide it from the GLES backends, so each dedicated OpenGL 1.x call comes with an additional ifdef, and many effects contain one branch for OpenGL 1 and one for OpenGL 2.

To sum it up: we have increased complexity, increased maintenance costs and lots of code just for OpenGL 1.x capable hardware which we cannot really test – a rather bad situation. Additionally, it is nothing we can continue to support in the future: neither Wayland nor Qt 5 will make sense on such hardware (XRender based compositing might still make sense with Qt 5, but, as the name says, not with Wayland).

Given this, the logical step would be to remove the OpenGL 1.x related code completely. That would of course clash with the demand of some user groups who think we have to run on old legacy hardware. In the case of Intel GPUs it might in fact be true that a larger number of such users is still around – this is of course difficult to judge.

Another real issue with removing the code is that the proprietary ATI driver (aka Catalyst/fglrx) only provides decent compositing performance with indirect rendering, which restricts the available API to OpenGL 1.x. So removing OpenGL 1.x support would mean removing OpenGL compositing support for all fglrx powered systems, even if the GPU supports OpenGL 4. But to be honest: given that the radeon driver has no problems with OpenGL 2 on the same hardware, I would not mind removing support for proprietary drivers.

What might be a nice solution to this problem are the llvmpipe drivers in Mesa, which will hopefully provide a good enough experience without hardware support. At least Fedora wants to use the llvmpipe drivers for GNOME Shell. As soon as Mesa 8.0 hits my Debian testing system I will evaluate using the llvmpipe drivers with KWin, as this will hopefully also improve our experience on virtual machines. If I am satisfied with the performance, I will be tempted to remove the OpenGL 1.x based legacy code…

Enabling Others to do Awesome

Recently Aaron wrote a blog series introducing Spark; I want to cite a central thought from one of the posts:

I want the things which I help make to become opportunities for others in their turn to participate with their own voice, their own movement and their own passion.

If I reflect on my own motivation for my work on KWin, and especially on the areas I decided to work on in the 4.9 cycle, I see quite some parallels to the paragraph cited above. But personally I would phrase it slightly differently:

I want to help others to do awesome.

This describes perfectly what I am currently working on. I no longer want to write fancy effects – in fact I stopped adding effects quite some time ago. I don’t want to extend KWin into the window manager which incorporates every other window manager’s concepts no matter how useful they are – we stopped doing that as well.

But I still want our users to get the window manager they want. Whether it is fancy effects, a tiling window manager or new concepts for virtual desktops, KWin should be the base for it.

At the heart of all these efforts are the scripting bindings. While we have had bindings since 4.6, they never really got used – this will change 🙂 But JavaScript bindings are just one part of the game; the other part is declarative bindings, that is, QML.
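
To give an idea of what such a script looks like, here is a minimal sketch. The global workspace object and print() are part of the scripting environment; the exact signal and property names used below (clientActivated, caption) are my assumptions about the 4.9 bindings, not a frozen API:

    // Minimal KWin script: log the title of every window that gets activated.
    // "workspace" is the global object exposed to KWin scripts; the signal
    // and property names here are assumptions based on the 4.9 bindings.
    workspace.clientActivated.connect(function (client) {
        if (client) {
            print("Activated: " + client.caption);
        }
    });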

In 4.8 we already experimented with QML for the window switcher layouts to test whether QML fits our needs, and so far I am very happy with it. But 4.8 does not yet include the possibility to really write your own layouts. While it is technically possible to do so, there is no way to configure a custom layout or to distribute it.

For 4.9 the installation of window switchers has been adjusted to use the Plasma Package structure, which makes it possible to install window switchers through the plasmapkg tool. But that is only part of it: plasmapkg has been extended to support any kind of scriptable KWin content, be it window switchers, KWin scripts or KWin effects written in JavaScript (decorations are to be added soon).

For this, KWin scripts also follow the Plasma Package structure, which is quite awesome as it finally allows us to dynamically enable and disable scripts through our KCM:

This dialog seems to tell me: “I want Get Hot New Stuff integration” to directly download new scripts from the Internet. And sure enough it needs that, and it will be added for the 4.9 release.

The screenshot shows two scripts I’m currently working on – both will be shipped with KWin in 4.9. Videowall is a script which scales fullscreen video players across all attached screens to create a video wall; a sketch of the idea follows below. This is the first script which actually replaces functionality from KWin core. KWin had an option to disable multiscreen support for fullscreen windows, which cluttered both code and UI, and it just did not make any sense: there are no valid use cases for fullscreen windows spanning all screens in general – except for video walls. And that is what we have here: a small KWin script that makes an obsoleted piece of functionality sane. We improved KWin for everyone by removing a difficult and confusing control module without taking the functionality away from expert users.
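
This is only a hedged sketch of the idea, not the shipped Videowall script: the clientFullScreenSet signal, the resourceClass property, the writable geometry property and the player list are all assumptions about the bindings.

    // Sketch of the video wall idea: when a known video player goes
    // fullscreen, stretch its window across the combined screen area.
    var videoPlayers = ["mplayer", "vlc", "dragonplayer"]; // hypothetical list

    workspace.clientFullScreenSet.connect(function (client, fullScreen, user) {
        if (!fullScreen) {
            return;
        }
        var isPlayer = false;
        for (var i = 0; i < videoPlayers.length; ++i) {
            if (client.resourceClass == videoPlayers[i]) {
                isPlayer = true;
                break;
            }
        }
        if (!isPlayer) {
            return;
        }
        // displayWidth/displayHeight describe the overall X screen,
        // i.e. all attached monitors together.
        client.geometry = {
            x: 0,
            y: 0,
            width: workspace.displayWidth,
            height: workspace.displayHeight
        };
    });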

But even more interesting than the video wall is the other script. It is not a JavaScript based KWin script, but a declarative QML script. It will replace the existing C++ based implementation of the desktop change OSD without any loss of functionality. Which means less code and much more readable code 🙂

And what’s really awesome: JavaScript and QML have the same bindings – the API is exactly the same. So going from QML to JavaScript and vice versa is extremely simple. But that’s not all: with QML based KWin scripts you can include live window thumbnails, so this is a technology that allows you to build awesome new switchers like Present Windows or Desktop Grid.
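
Because the bindings are identical, a function like the following sketch could be used unchanged in a plain JavaScript script and inside the JavaScript blocks of a QML script (currentDesktop as a writable property and desktops as the desktop count are, again, my assumptions about the bindings):

    // Works the same in a KWin script and inside a QML script's JavaScript:
    // switch to the next virtual desktop, wrapping around at the end.
    function nextDesktop() {
        workspace.currentDesktop = workspace.currentDesktop % workspace.desktops + 1;
    }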

As you can see, I’m currently dogfooding the bindings to myself. This helps me to see what is needed and what can be improved, and to ensure that the bindings work for any use case we can imagine. My wish is, for example, to see the complete window tiling support in KWin rewritten as a KWin script. This should be possible, and it would prove that you can use scripting to get a completely different window management.

So if you want to do awesome things with your window manager, now is the time to get started. Don’t hesitate to contact me and demand further bindings if anything is missing.

Disclaimer: QML support has not yet been merged into master.

Having a look at the old/new Desktop Environments

2011 seems to have finally been the year of the Linux desktop. But not in the way people always anticipated it – Linux desktops finally dominating the desktop market (something which will probably never happen unless Microsoft fails badly) – but through the release of many new desktop shells.

Both large desktop communities have introduced a new shell: the GNOME community released GNOME Shell, the KDE community released the touch oriented Plasma Active, and Canonical decided to go it alone with Unity. All these shells introduce new user interaction concepts – and interestingly, 2011 was also the year of new classic desktops.

GNOME 2 got forked as the Mate desktop, and GNOME Shell got forked as Cinnamon to provide a classic user experience. In the Qt/KDE world Razor-Qt was introduced, and Trinity (a fork of KDE 3.5), although around a little bit longer, got some attention, probably thanks to the GNOME forks.

My personal expectation is that Mate will fade away in favour of Cinnamon. Cinnamon I expect to stay, as there seems to be a demand for a classic desktop experience in the GNOME world (by the way, Plasma could do all of that for you). My hope is that Cinnamon and GNOME Shell will rejoin, as I don’t like forks. GNOME Shell will of course also stay around, and I expect it to become quite successful as time passes by – I expect many former Ubuntu users to migrate to it when Canonical finally decides to go tablet only.

Trinity – Who needs it?

In the KDE world I find the Trinity project the most interesting, both from a social and from a technical point of view. I am very interested in understanding why developers decide to keep an outdated and orphaned code base alive. What motivates the developers to continue the development given that the original authors have moved on? And what are the targeted user groups – who needs an outdated system like Trinity? I tried to identify the possible user groups of Trinity and came up with three:

  1. Users searching for a “lightweight” desktop
  2. Haters of “KDE 4” technology
  3. Users wanting a “classic” desktop on KDE

The first user group mostly confuses “lightweight” with old hardware. Looking at the specs of Spark, I would not call it a high end system, so the reason users have trouble with KDE Plasma on “lightweight” systems is not that the hardware is lightweight, but that it is just old. But even KDE 3.5 was far from lightweight – the saying that KDE is bloated is probably as old as KDE itself. The users looking to Trinity as a lightweight alternative to KDE Plasma just fail to realize that five years ago they were screaming that KDE 3.5 was too heavy. Neither KDE 3.5 nor Plasma tried to run well on old hardware – that is a market very well served by e.g. LXDE. So keeping KDE 3.5 alive for such users seems to be the wrong decision.

The second user group is the most dangerous one for the project. My fear is that the project gets dominated by users and developers who hate the KDE 4 technology. Emotions – especially hate – are a very bad basis for decisions. If better technical solutions are discarded just because they might optionally pull in a “hated” KDE technology, the project will run into problems. And if the hating user group grows too big, there is the danger that they will block any progress of the project by proclaiming that they will leave (hate) the project as well. The result would be stagnation and a backward oriented project. My personal opinion is that Trinity is already dominated by haters – more on that later.

The last user group is the one demanding a “classic” desktop. This seems like a valid use case to me. But if I compare a recent KDE Plasma 4.8 with KDE 3.5, I don’t see much difference: with a few clicks you get a desktop which looks and behaves the same way as 3.5. There might be slight differences, but are they worth keeping millions of outdated lines of code alive? Wouldn’t it be better to just write another Plasma shell mimicking the KDesktop/Kicker behaviour? And even if you don’t want a Plasma powered classic desktop, there is Razor-Qt, which offers a modern approach to the problem instead of building on legacy solutions like Trinity.

Trinity – the desktop of duplication

One of the biggest issues with the development of Trinity is that they took over the KDE 3.5 code base without a plan for what to do with it. There are applications which got dropped in KDE 4 (e.g. Kicker) – for those I can understand that development is continued. There are applications which use a “hated” technology – e.g. KDEPIM using Akonadi. Unfortunately, continuing KDEPIM 3.5 will neither solve any of the issues which led to the hated technology nor improve the modern project. Last but not least, there are applications whose development continued and which offer a much better product nowadays – for example KWin, Kate, Okular, and all the games and edu applications. For those I don’t see any need to continue developing the KDE 3.5 version: it does not serve any of the identified user groups. Personally I find it very sad to see valuable FLOSS developer power wasted on such products instead of helping the current versions.

This is an unfortunate duplication of work – and it’s not the only one going on in the Trinity project. They migrated to git – again (KDE already did that). They migrated to CMake – again (KDE already did that). They started porting to Qt 4 – again (KDE already did that). They have to migrate away from HAL – again (KDE already did that). It’s a pity to see all this duplication with such limited manpower.

But it’s not only the duplication which is a real issue; there is also a lack of awareness of what is going on in KDE. For example, there is an enterprise branch of KDEPIM 3.5. I dared to pick one random commit from KDE’s enterprise branch and checked whether the change is present in Trinity’s version of the changed file. I think you can guess the result. I also checked one random change to kdelibs, and that one is not present in “tdelibs” either. Monitoring directories for changes is extremely simple in KDE.

This means that Trinity does not only duplicate the work already done by KDE; a user running Trinity also gets a worse product than the stock KDE 3.5 offered by some distributions using the enterprise branches.

Trinity – the haters’ desktop

Some time ago I decided to contact the Trinity development team with the aim of reducing some of the ongoing duplication. Seeing commits to “twin” really hurts: our current version of KWin is simply better in every area. We have not just integrated compositing, but also fixed many, many window manager related bugs. None of those fixes are present in the “twin” fork.

But there is a more dangerous issue. I took the time to look through the commits to “twin”, and they range from “plain wrong” to “might lead to a crash so that twin does not start any more”. Developing on a foreign code base without any help is very difficult, and nobody had ever contacted the KWin development team before (at least not me personally, the kwin mailing list, the bug tracker, or the IRC channel while I was around).

The reactions to my offer to help were mostly negative (there were a few positive exceptions). And unfortunately it resulted in duplicated work once again: the lead developer decided that it is important to allow multiple window managers. This is a great thing to do – but what a pity that the KDE developers thought about the same thing years ago and found a better place to put it than the window decorations KCM.

But I wanted to write about the hatred. One of the reactions to my proposal was:

Personally I would hate to have to install kwin, which relies on a bunch of other KDE4 libraries and automatically installs that akodani garbage scanner stuff, just to use TDE

After pointing out that KWin does not depend on Akonadi, I got:

I meant nepomuk, not akodanai.

I think the typos speak for themselves: not knowing what the technology is, what it does or what it is for – but obviously it is a “garbage scanner”. And this is not a random developer or user, but the project leader.

Another highlight from a recent thread:

This is another area that we should be able to do better than KDE4

Where I ask myself: why would anyone have that aim at all? Why not work together with KDE on a shared and improved code base? After pointing that out, I was informed that

This Email was not supposed to go to the public list

Unfortunately we see a pattern here, and I dare to quote another mail from the same thread:

I don’t have time or the desire to pick through KDE4 at the moment but KDE4 is still less efficient for my workflows then TDE, period. It takes more space on the screen to display less information in a harder-to-digest format for starters. Then there is the whole assumption that people have low-resolution or small screens and one-button mice (in TDE all three mouse buttons can be used to interact with on screen elements–much of that power is just gone in KDE4).

As I said I don’t have time to play with KDE4, and all of my prior attempts to use KDE4 as anything other than a shiny toy failed miserably. Please remember that there is a bit of a frog-in-the-pot syndrome when users are forced to use an inefficient interface for long periods of time; the best way to break this is to go back and use Windows XP or KDE 3.5 for a day or two, then go back to KDE4. If KDE4 is truly better then that will be obvious; if it is not then this fact will also be obvious. (I have done this and KDE4 still looks like a shiny toy).

I recommend reading this paragraph several times. Let it sink in…

… thanks. Now let’s see: someone claims that KDE 4 (probably Plasma) is still less efficient, although he has not tested it. This person is in charge of the development of Trinity. How is he supposed to make any sane decision with such a world view? I don’t get it.

It’s pure hatred against KDE technologies, and it dominates the development of Trinity. And this is not a single developer issue; it is unfortunately a common pattern. I have one more mail to point to (only click the link if you have not eaten anything and have a bucket ready next to your seat). The worst part is the reactions to such a mail, like “100% On the nail ! Well said Luke…” or “I am stupid enougth to imagine that the leader of the KDE developpment team could receive two salaries: with one coming from MS. I see KDE as something self destroying!”
Nobody called these people to order. Spreading pure hatred against KDE technologies and their developers seems to be perfectly acceptable inside the Trinity project.

Trinity – where will it go?

Given what I have seen over the last months following the Trinity mailing list, and partly the development, it is quite clear that Trinity is a project for haters of KDE 4 technology. This is the only focus of the project, and it is the reason for the ongoing duplication of work, as they are unable to even check whether KDE has already solved an issue.

Overall this results in pretty severe and dangerous design decisions. The hatred leads to statements like:

At one time there was a desire to port to Qt4, however months of solid work showed that Qt4 cannot provide the features needed to create a fast, efficient desktop geared towards mouse/keyboard interaction and high on-screen information content.

Well, I’m personally not surprised that you get this result if you add a wrapper around Qt so that you can use Qt 3 and Qt 4 side by side. I’m also not surprised that this is the result if you have a mindset as illustrated above. If you want Qt 4 to be “worse” than Qt 3 in your project, you will be able to achieve that. Don’t trust any benchmark you didn’t manipulate yourself 😉

So Trinity will stay stuck on Qt 3, which reached its end of life on July 1st, 2007. Trinity thinks they can continue to maintain this orphaned code base – alongside the millions of lines of code from KDE.

Can this work out? I doubt it. No developer knows the code and there is nobody to ask (and, as I noticed myself, they don’t ask anyway). Given my personal experience of offering help, I understand every KDE developer who does not want to have anything to do with the Trinity project and who would refuse to help them.

Looking over the mailing list there seem to be only a handful of active developers; looking at the git repository it seems to be a one-man show. One developer to continue a development previously carried out by hundreds? While at the same time facing legacy issues like the deprecation of HAL, having to take care of packaging for all distributions, and tracking down rare issues like the kernel now being 3.x instead of 2.6.x? I rather doubt that this can succeed.

I have read quite often that “Qt 3 is now maintained by Trinity”. This is a pure lie. The development team is magnitudes too small to maintain this code base. Anybody running Trinity runs the risk of being exposed to newly discovered security vulnerabilities. It is quite clear that security issues discovered and fixed in enterprise distributions (e.g. RHEL, SLED) would stay open in the Trinity project (cf. the missing commits from the enterprise branches), probably without even a CVE being issued.

Is there hope?

So what about those users who want a modern, classic, Qt based desktop? Trinity will never be the desktop solution for them. The good news is that there is a sane project providing a classic Qt based desktop: Razor-Qt. It is built on Qt 4 and has a clear project aim: instead of forking everything and the kitchen sink, Razor concentrates on providing a desktop shell. And the code is clean and well written, following modern approaches.

So my recommendation to all Trinity users is to either try KDE Plasma 4.8 again or to give Razor a try. Trinity is not a project anybody could seriously recommend, and even a stock KDE 3.5 is most likely a better solution than Trinity.

About Compositors and Game Rendering Performance

Today Phoronix published (again) test results comparing game rendering performance on various desktop environments. As it “seems” like the game rendering performance under KWin got worse compared to last year, I want to point out a few things.

First of all: keep in mind that this is a test of game rendering performance and tells us nothing about the performance of the tasks that matter. The development focus of KWin is clearly not on being great for games. We want to be (and are) the best composited window manager for desktop systems. That is what matters, and that is what we fine tune the system for.

Another important fact is that last year’s test was seriously flawed, and I think this year’s test is flawed in a similar way. Last year KWin by default used unredirection of fullscreen windows, while no other composited window manager in the test did that. With 4.7 we adjusted the default and turned unredirection off. At the same time, Mutter – to my knowledge – gained unredirection of fullscreen windows. If it is enabled in Mutter, we have the explanation for last year’s bad results and this year’s good ones.

If I were to perform such a test, I would not benchmark KWin just once, but once with the OpenGL 2.x backend, once with the OpenGL ES 2.0 backend, once with the OpenGL 1.x backend, once with the XRender backend, the same set with unredirection of fullscreen windows turned on, and once without compositing at all. There are so many things influencing game rendering performance that a single run of the benchmark is simply not enough.

Still, we would recommend turning compositing off when playing games – at least that is what I would do. A window manager mostly just gets in the way of games, which is one of the reasons why gaming on the desktop is by far not as good as playing on a gaming console. So if you want to game with KWin, use the feature to specify a window specific rule that blocks compositing as long as the game is running. This will yield the best game rendering performance.