KWin: a solution for non-KDE based Desktop Environments?

Recently I have seen more comments about using KWin as a stand-alone window manager in other desktop environments. It seems quite a few users are looking for a replacement for Compiz nowadays. But of course, especially among users of the lightweight desktops, there is the perception that one cannot use KWin because it is made by KDE.

So I thought I'd spend some time explaining what that actually means. Of course KWin is the window manager of the KDE Plasma workspaces, and as such it is part of the KDE source code module called “kde-workspace”. Most distributions provide one package, or a set of packages which depend on each other, for this workspace module. This means that to install KWin one has to install what people consider to be “KDE”. But it doesn't mean that one has to run any other part of kde-workspace. KWin is a standalone application which only depends on the KDE libraries and requires a few runtime modules (e.g. the global shortcuts daemon or kcmshell4). One does not have to run the Plasma desktop shell or systemsettings or any other application provided by the KDE community.
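For those who want to try this, a minimal sketch of swapping the window manager of an already running, non-KDE X session for KWin (binary names from the KDE 4 era; adjust for your distribution):

```shell
# Sketch: start KWin as the window manager of the current session.
# Falls back to a message when KWin is not installed, so the snippet
# is safe to run anywhere.
if command -v kwin >/dev/null 2>&1; then
    echo "replacing the running window manager with KWin"
    kwin --replace &
else
    echo "kwin not installed - install your distro's kde-window-manager package"
fi
# To go back to e.g. Openbox later:
#   openbox --replace &
```

Most lightweight window managers understand the same `--replace` convention, so switching back is just as easy.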

So installing KWin requires installing a few more applications, but all they will do is take up some space on your hard disk. I know that people are sometimes very concerned about this, so I ran “du -h” on my KDE install directory. This includes not just the kde-workspace module with its dependencies, but more or less everything we have in KDE's git repository, including an office suite, an IDE, a web browser, artwork and many other things one doesn't need to run a window manager 😉 The result of all that is just 13 GB of disk usage. Given current storage costs (0.06 EUR/GB) this amounts to less than 1 EUR, which is less than a cup of coffee where I live. And remember, KWin alone needs far less storage: the bare kde-window-manager package in Debian is only around 10 MB.

I understand that people care about the dependencies and think this is important. I just don't think it's of any importance in a world where a single movie needs significantly more storage. Still, we care about the dependencies and we are working on breaking down the dependency chain as part of the frameworks modularization efforts. One of the results of this is that we have documented dependencies nowadays. And we are working on turning the dependency on the Plasma framework into a runtime-only dependency via QtQuick, so that people can put together theming for KWin which does not pull in any bits of Plasma. Help on that is appreciated 🙂

A more relevant issue is the question of memory usage caused by running KWin. Unlike disk storage, memory is still rather constrained. Unfortunately it's very difficult to provide correct measurements of the memory usage of a single KDE application. KDE applications share many libraries (e.g. Qt). So if KWin is the only Qt application, its relative memory usage is higher than when several Qt applications are running, as for example in LXDE on Qt.

Now a few highly non-scientific numbers: according to KSysGuard my self-compiled KWin (Qt4) uses around 40 MB of private memory and 38 MB of shared libraries (Qt, kdelibs, XLib, xcb, etc.). The memory usage also depends on what you use: if you activate the desktop cube effect with a 10 MB wallpaper in the background, you will see this in the memory usage 😉 Just as another value for comparison: the iceweasel instance I'm writing this blog post in has a private memory usage of more than 700 MB. Of course this puts KWin in a different league than the minimalistic window managers, but one has to remember that KWin provides more features and is both a window manager and a compositor. If one needs to run two applications to get close to the same feature set, it's quite likely that the same amount of memory is needed. KWin has many features, and there is no such thing as a free lunch in IT. It's certainly possible to trim KWin down by not loading the KWin effects, ensuring that no scripts are loaded and using simplified graphics.
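For the curious, here is roughly how such per-process numbers can be gathered by hand on Linux. The snippet sums private and shared pages from /proc/&lt;pid&gt;/smaps, using the shell's own pid as a stand-in for KWin's, so treat it as a sketch rather than a proper measurement tool:

```shell
# Sum private vs. shared memory of a process from /proc/<pid>/smaps.
# $$ (the shell itself) stands in for KWin's pid here; substitute
# "pidof kwin" on a real system.
pid=$$
awk '
    /^Private_(Clean|Dirty):/ { priv += $2 }
    /^Shared_(Clean|Dirty):/  { shr  += $2 }
    END { printf "private: %d kB\nshared: %d kB\n", priv, shr }
' "/proc/$pid/smaps"
```

Tools like KSysGuard essentially aggregate the same kernel data, which is why numbers differ between tools depending on how shared pages are attributed.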

Given that, I can only recommend giving KWin a try and not discarding it because it is from KDE and might pull in some dependencies. Evaluate it by the features we provide and you want to use, not by some random number on your hard disk or in your memory usage.

KWin/5, QtQuick 2 and the caused changes for OpenGL

In the KWin/4 days our OpenGL implementation could be based on some assumptions. First of all, only KWin creates an OpenGL context; none of the libraries KWin depends on uses OpenGL. From this it is possible to derive further assumptions, for example that our compositing OpenGL context is always current. This means that it's possible to run any OpenGL code in any effect, no matter how the code path got triggered. Also, we can link KWin against either OpenGL or OpenGL ES without having to check other libraries to prevent conflicts.

With KWin/5 these assumptions no longer hold. KWin uses QtQuick 2 and with that we pull in the Qt OpenGL module. One of the direct implications of this change is that we are no longer able to provide kwin/OpenGL and kwin/OpenGLES at the same time. The compilation of KWin has to follow the compilation of the Qt OpenGL module. So compiling against OpenGL ES is only possible if Qt is compiled for OpenGL ES, which means that the option to run KWin on OpenGL ES is probably non-existent on classic desktop computers, and the option to run KWin on OpenGL is probably non-existent on ARM systems such as the Improv. Given that it's no longer possible to compile both versions at the same time, the binary kwin_gles got removed. A KWin compiled against GLES is just called kwin.
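To illustrate the coupling: which GL flavour KWin gets is decided when Qt itself is configured, not when KWin is built. A hedged sketch of the two relevant Qt configure calls (flag names as of Qt 5; verify against your Qt version's "configure --help"):

```shell
# Illustrative only - which GL flavour KWin ends up with follows Qt:
#
#   ./configure -opengl desktop   # desktop GL build of Qt -> KWin on OpenGL
#   ./configure -opengl es2       # GLES 2 build of Qt, e.g. for ARM boards
#                                 # like the Improv -> KWin on OpenGL ES
#
# In both cases the resulting KWin binary is simply called "kwin".
```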

With QtQuick our assumption that only KWin creates an OpenGL context and makes it current doesn't hold any more either. In fact Qt may at any time create an OpenGL context and make it current in the main GUI thread. Now, people probably know that QtQuick 2 uses a rendering thread, so that should be fine, right? Well, not quite. To decide whether to use a rendering thread, QtQuick creates a dummy context in the main GUI thread and makes it current. So our assumption that our context is created once and then kept current doesn't hold any more. The first solution to this problem was to make our context current whenever we enter the repaint loop. That worked quite well, until I tested KWin/5 on my SandyBridge notebook.

The problem I stumbled upon is that Qt doesn't use a rendering thread in all cases. For some hardware it prefers to use the main GUI thread; one example is SandyBridge Mobile, the hardware in my notebook. This showed that the initial solution was not thorough enough. With another context rendering in the same thread, it became quite likely that we hit conditions where our compositing context is not current, resulting in, for example, unrendered window decorations, effects unable to load their textures, and so on.

These problems are hopefully solved now. The effects API has been extended with calls to make the context current, and I tried to hunt down all effects which do OpenGL calls outside the repaint loop. Unfortunately, given the large number of effects, it's still possible that some effects use it incorrectly. It will be difficult to track those down: so please test.

The case where QtQuick uses the main GUI thread for rendering illustrates that we in KWin are not the only ones holding incorrect assumptions about OpenGL. QOpenGLContext assumes that every OpenGL context in a Qt application has been created through QOpenGLContext and that a context is only made current on a thread through the QOpenGLContext API. In particular, if one uses glx or egl directly to make a context current, QOpenGLContext doesn't notice and assumes that its own context is still current, which can result in nastiness. This is circumvented in KWin now by making sure that QOpenGLContext has correct information once the compositing context is made current. Nevertheless we are still hitting a bug causing a crash. This one is currently worked around in the development version by enforcing XRender based compositing on the hardware which uses the main GUI thread for rendering. On SandyBridge one can also use the environment variable QT_OPENGL_NO_SANITY_CHECK to force QtQuick to use a dedicated rendering thread, as the reason why Qt uses the main GUI thread there is not relevant to KWin's usage of QtQuick. KWin also checks for this environment variable and doesn't force XRender if it is set.
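In practice the SandyBridge workaround looks like this; the snippet only prepares the environment, with the actual KWin restart shown commented:

```shell
# Ask QtQuick to skip the check that forces main-thread rendering on
# SandyBridge; KWin then keeps OpenGL compositing instead of falling
# back to XRender.
export QT_OPENGL_NO_SANITY_CHECK=1
echo "QT_OPENGL_NO_SANITY_CHECK=$QT_OPENGL_NO_SANITY_CHECK"
# then restart the compositor, e.g.:
#   kwin --replace &
```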

Obviously one could question why we are not using QOpenGLContext, given that this seems to conflict. We haven't used Qt's OpenGL module mostly for historic reasons. Of course I evaluated the option of using QOpenGLContext when investigating this issue, and right now in Qt 5.2 it would not be an appropriate solution for KWin. It's not possible to create a QOpenGLContext from a native context, and even if it were possible, our implementation is more flexible: KWin can be compiled against both egl and glx, allowing the user to switch through an environment variable. Qt on the other hand supports either egl or glx, but not both at the same time. If I find time for this, I intend to improve the situation for Qt 5.3, so that we can consider the usage of QOpenGLContext once we depend on Qt 5.3. Switching to Qt's OpenGL context wrapper would allow us to get rid of a considerable amount of code. I'm especially interested in QOpenGLFunctions. Obviously that will only be a solution if KWin uses the same windowing system as Qt's platform plugin. But that's a problem for another day 😉
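As a sketch of that runtime flexibility (the variable name is an assumption based on my memory of the KWin code of that era; verify against your KWin version before relying on it):

```shell
# Select the native GL platform interface at runtime - KWin supports
# both egl and glx in one binary, unlike Qt which is fixed to one at
# build time. (KWIN_OPENGL_INTERFACE is an assumed name; check your
# KWin version's documentation.)
export KWIN_OPENGL_INTERFACE=egl    # or: glx
echo "interface requested: $KWIN_OPENGL_INTERFACE"
# kwin --replace &
```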

KWin/5 open for bug reports

Today I added a new version number to our bugtracker: 4.90.1. This is the version number currently used by KWin on Qt 5 and this means that I consider KWin/5 to have reached a quality level where I think it makes sense to start reporting bugs.

On my system KWin/5 has become the window manager I use for managing the windows needed for developing said window manager. The stability is already really good and today I fixed one last annoying crash when restarting KWin. So I’m already quite happy. Also the functionality looks good. Some of the problems I had seen and been annoyed by are fixed and this means that my normal workflow is working. But KWin supports more than the “Martin workflow” and this means you have to test it and report bugs! Grab the latest daily build packages for your distro and enjoy the power of a Qt 5 based window manager.

Color Scheme syncing between a window and its decoration

Some time ago I started Krita and I had the thought: well we can do better. The window decoration is just looking out of place. Everything is a nice dark color, but the window decoration isn’t. So I asked a few people what they think about making it better and the reaction was overall very positive and I started to investigate.

Right now I have a solution for KWin/5 which allows the window decoration to follow the color scheme of the window. Have a video:

At the moment it's implemented mostly inside Oxygen, but I intend to move most of it into KColorScheme and KStyle directly, so that any KDE/Qt application can benefit from this new feature.

And of course I need to point out that all of that is possible without opening Pandora’s box of Client Side Window Decorations as Aaron said today.

How Code flows – about Upstream and Downstream

The FLOSS ecosystem consists of a large number of independent projects, each developing their own software. In the end this software should reach users, and we have the “distributions” to provide it to them. As the name implies, it's about distributing the software. A distribution takes software from a large number of independent projects and integrates it to provide a working set of applications. A huge and impressive task, and that's the main task of a distribution: they are software integrators.

Various software products depend on each other and there is the chance of conflicts. The distributions need to ensure that it all works together. The best matching metaphor for this is a river: it flows from the spring to the sea and takes on the water of many other rivers. The code flows from one project to the next until it finally reaches the user.

[Diagram: code flowing downstream from upstream projects to the user]

Although that looks rather linear, it is not. In truth each independent project has many upstream and many downstream projects. It's an n:m relationship at each position of the stack. For example, for KWin it looks something like this:
[Diagram: KWin's many upstream and downstream projects]

For each project it's important to remember that they are just one of the many, many downstreams of their upstream projects. They are not the most important piece of software around, but just one of many. This is important for a working relationship, and it also influences how code should flow. Code should flow up the stream. Nobody is helped if each downstream of a given project fixes the same bug in its own code base; it should be fixed in the project upstream of them. To put it simply: I should not work around a bug in Qt, but fix it in Qt directly.

Nevertheless, not all code should be upstreamed. If the code is needed for the downstream integration task, it needs to be kept downstream. For example, openSUSE used to have a small geeko in the Oxygen window decoration. This should not be upstreamed, because it would cause other downstreams (e.g. Kubuntu) to remove it again. Or it would give other downstreams the idea to have their branding feature integrated, too. So we as the upstream would have to accept the Debian logo, the Kubuntu logo, the Gentoo logo – you get the idea. The basic idea is that an upstream project should not try to integrate with its downstreams, as it's the downstream's task to do the integration. This is true for any step of the graph, not just the last one. Circular dependencies are not a good idea.

There are exceptions to the circular dependency rule, but they are very rare. An example is the relationship between the KDE and Qt projects, which are both upstream and downstream of each other. But the integration of KDE inside Qt is done through plugins, allowing this integration code to be kept downstream.

Problems can also arise if a project starts to become its own upstream, replacing some of its upstreams. It might want to see its new code exposed to more projects, but other projects at the same level might not pick it up, as they consider it specific to that one project. It might also harm the relationship with other upstream projects: they might think that this downstream is no longer interested, and fear that their other downstreams might start to replace them, too. In the end this is not in the interest of the user.

An example of how this can look when it goes wrong could be observed this year with Cinnamon and Cinnarch. Cinnamon is too close to the Linux Mint project, in fact it started as part of Mint and depended on the GNOME version shipped with Mint. This made it impossible for Cinnarch to provide the latest of Arch and Cinnamon. It resulted in Cinnarch dropping support for Cinnamon, but also in Cinnamon trying to get more independent from Linux Mint. Whether this step came in time will be seen in the future.

So in summary: downstreams integrate their upstreams and not the other way around.

Thoughts about the Open Source Tea Party

I have been thinking for a long time about whether I should write a blog post on the following topic. After all, Jono asked us to calm down and not pour more oil into the fire. But on the other hand I have asked Jono several times to make sure that there are no personal insults and attacks against me, and unfortunately on the Ubuntu Planet there is still a blog post which attacks me personally, without any sign that this will change. As I have been attacked by the Ubuntu community quite a lot over the last half year, and as I had to ask Jonathan to tell Jono that I'm not the scapegoat for Ubuntu, I think it is important that I stand up against this and point out the abusive behavior we get from the Ubuntu community.

First of all I want to verbatim quote the Ubuntu Code of Conduct:

Be respectful

Disagreement is no excuse for poor manners. We work together to resolve conflict, assume good intentions and do our best to act in an empathic fashion. We don’t allow frustration to turn into a personal attack. A community where people feel uncomfortable or threatened is not a productive one.

And now I’m going to quote verbatim what Mark Shuttleworth wrote:

Mir is really important work. When lots of competitors attack a project on purely political grounds, you have to wonder what THEIR agenda is. At least we know now who belongs to the Open Source Tea Party 😉 And to put all the hue and cry into context: Mir is relevant for approximately 1% of all developers, just those who think about shell development. Every app developer will consume Mir through their toolkit. By contrast, those same outraged individuals have NIH’d just about every important piece of the stack they can get their hands on… most notably SystemD, which is hugely invasive and hardly justified. What closely to see how competitors to Canonical torture the English language in their efforts to justify how those toolkits should support Windows but not Mir.

Mark took care to write it so generically that it would fit Intel, Wayland, KDE, GNOME, Enlightenment, Red Hat, systemd and everybody else who criticized the Mir decision. Nevertheless I'm convinced that the primary target of that attack is the KDE community and especially me personally. This is something I derive from a comment Mark posted below his blog post:

When a project says “we will not accept a patch to enable support for Mir” they are saying you should not have the option. When that’s typically a project which goes to great lengths to give its users every option, again, I suggest there is a political motive.

If we combine all of this, it becomes clear that he is addressing the KDE community. Who else has support for Windows and is known for lots of options? Of all the communities, projects and companies listed above, only KDE offers Windows components (well, Intel as well, but I assume that Mark is not going to blame Intel for that). Thus I'm assuming that Mark intended those comments only against the KDE community. I asked him in a comment on his blog post to clarify; unfortunately, at the time of this writing Mark has not yet replied and the comment is still awaiting moderation. I also copied the same comment to Google+ and included Mark and Jono, but still no clarification.

Now people could say that what Mark wrote is not that bad. But his claims are factually wrong and need to be corrected. After all, we don't want his followers to repeat the false claims over and over again to attack the KDE community. I'm now going to reply to the claims without going down to the level of personal attacks, simply showing that all those claims are factually wrong if they are directed against the KDE community, KWin and me in person.

So let’s look at the claims one by one.

When lots of competitors attack a project on purely political grounds

together with

When a project says “we will not accept a patch to enable support for Mir”

I said that I will not accept a patch for Mir, but this is not a political decision, but a pure technical one. I’m now going to quote myself from my very first blog post on the subject of Mir:

Will KWin support Mir? No! Mir is currently a one distribution only solution and any adjustments would be distro specific. We do not accept patches to support one downstream. If there are downstream specific patches they should be applied downstream. This means at the current time there is no way to add support and even if someone would implement support for KWin on Ubuntu I would veto the patches as we don’t accept distro-specific code.

Maybe Mark thinks that this is a political decision. But not for me: this is a pure technical decision as we would not be able to maintain the code. And Mark should know about the costs of maintaining code. After all at the podium discussion about CLA at Desktop Summit 2011 Mark told us that the CLA is needed because of the maintenance costs.

Furthermore, I had dedicated a complete blog post to the technical reasons why we do not want to and cannot support Mir. Mark should have been aware of this blog post given that Jonathan re-blogged it to Planet Ubuntu. In summary, I cannot understand how Mark could think that these are political decisions given that I clearly outlined the technical reasons.

So let’s look at the next part:

you have to wonder what THEIR agenda is

Well yes, one has to. As I showed above, I gave a technical reason less than 24 hours after the Mir announcement. I wonder how Mark can seriously think that we could have come up with an agenda against a product we didn't know of before, or that we are that fast. So to make it clear: there is no agenda. My only agenda is to correct false claims, as in this blog post.

Personally I’m wondering what Canonical’s agenda is with the strong lobbying for us to support Mir and these constant attacks against my person. Mark is not the first one to directly attack me since the announcement of Mir.

The next part would be the NIH part. I do not know how that would fit in with KDE as I’m not aware of anything we NIH’ed recently. Also Lennart already commented on that. I think there is nothing more to add to what Lennart wrote.

And last but not least there is:

What closely to see how competitors to Canonical torture the English language in their efforts to justify how those toolkits should support Windows but not Mir.

I would be very interested in seeing where anybody from the KDE community justifies Windows support while rejecting Mir. This just doesn't make any sense, so let's look at it in more detail. As Mark states himself, most applications do not have to care about Mir at all, as the toolkit (in our case Qt) takes care of that. That's exactly the reason why KDE can offer Windows ports of applications: it's more or less just a recompile and it will work. In some cases X11 dependencies had to be abstracted, and exactly that will ensure that the applications will also work on Mir. So, thanks to the Windows port, the applications will work on Mir (and on Wayland). Side note: as Aaron explains on Google+, Mark is of course wrong in saying that applications do not have to care; the technological split affects all applications.

As Mark also states what will need adjustments are the desktop shell programs. In case of KDE that would be mostly KWin. I’m now quoting the “mission statement” for the KWin development:

KWin is an easy to use, but flexible, composited Window Manager for Xorg windowing systems on Linux.

As one can see, we do not consider Windows a target for our development. It even goes so far as to exclude non-Linux based Unix systems. I'm quite known for thinking that support for anything except a standardized Linux stack (this includes systemd) is a waste of time. One can find my opinion on that in blog posts, mailing list threads, review requests, or just by talking to people from the KDE community who know my opinion.

There is an additional interesting twist in this claim about Windows vs. Ubuntu. KWin, as explained, currently works on Kubuntu and not on Windows, and this will stay so as long as Kubuntu is able to offer either Xorg or Wayland packages. If the Kubuntu community were no longer able to offer such packages, it would be due to changes in the underlying stack introduced by Ubuntu. So it can only be Ubuntu removing support for KWin, not KWin removing support for Kubuntu. Furthermore, it's of course the task of a distribution to integrate software, and not our task to integrate with a distribution.

What's more, some years ago one was able to use KWin in Ubuntu. But then Canonical decided to introduce Unity and implement it as a plugin to Compiz. Since then it is no longer possible to run KWin in Ubuntu – a decision made by Canonical. I'm not blaming them for that, don't get me wrong. I'm just pointing it out to show how wrong it is to try to blame us for not supporting Ubuntu. It was Ubuntu which decided to no longer offer the possibility to run our software in Ubuntu. This behavior over time made me think that I'm being made a scapegoat, that Canonical tries to blame me for them moving away from the rest of Linux.

In summary we can see that all the claims put up by Mark to attack the KDE community are false.

Last but not least I want to say something about a very common claim: I neither hate Mir nor Canonical. I can hardly give proof of it, but I will just point out that I attended the German Ubuntu community conference last weekend and also last year. If I were generally against Canonical I wouldn't do something like that, would I?

Porting a KControl Module to KF5

Over the last days I ported a few KControl Modules (KCMs) to frameworks 5. As it's a rather simple task I decided to document the needed steps. Yesterday I ported over KInfoCenter with all its modules:

[Screenshot: KInfoCenter running on frameworks 5, 2013-10-15]

First some preparation tasks. I highly recommend configuring kdevelop to not stop on the first compile error. This helps to find patterns in the errors during the porting, and it also allows starting with the easiest tasks rather than a difficult one, which might turn out to be a non-issue once the other errors are corrected. I also recommend having the KDE5PORTING.html from kdelibs-frameworks open in your browser.

Adjust CMakeLists.txt

First of all you need to re-enable the directory containing the KCM in CMakeLists.txt. Then one should make small adjustments in the KCM's own CMakeLists.txt.

Drop any kde4_no_enable_final line – that macro got removed:

kde4_no_enable_final(foo)

Remove a few definitions to get fewer compile errors – in particular you don't want a compile error for each cast from const char* to QString:

remove_definitions(-DQT_NO_CAST_FROM_ASCII -DQT_STRICT_ITERATORS -DQT_NO_CAST_FROM_BYTEARRAY -DQT_NO_KEYWORDS)

Search for the target_link_libraries of the KCM and remove all variables referencing Qt4 and KDE4. Instead add a few frameworks you can be sure you'll need:

target_link_libraries(kcm_foo
    KF5::KCMUtils
    KF5::KI18n
    ${KDE4Support_LIBRARIES}
)

This should be enough for getting most of a KCM to compile. Further frameworks should be added once you hit linker errors.

Adjust desktop file

The next step is rather simple. Look for the KCM's desktop file, most often called foo.desktop, find the Exec line and change kcmshell4 to kcmshell5:

[Desktop Entry]
Exec=kcmshell5 foo

Common Compile Problems

Now it's time to start the compile-and-fix loop until you hit linker errors. I just want to present the most common errors I have hit so far and how to fix them.

Remove QtGui/ from includes

/home/martin/src/kf5/kde-workspace/kcontrol/keys/kglobalshortcutseditor.cpp:37:32: fatal error: QtGui/QStackedWidget: No such file or directory
 #include <QtGui/QStackedWidget>
                                ^
compilation terminated.

This one is rather simple: just remove the QtGui/ or QtCore/ prefix from the #include. There's also a small helper application in the Qt source tree, but for a small code base it might be easier to fix it manually. Remember to use block selection mode if the includes are all nicely one below the other.
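For a larger number of files, the manual fix can also be scripted. A sketch with GNU sed, demonstrated on a scratch file; in a real port you would run the sed over the KCM's *.cpp and *.h files after reviewing the matches:

```shell
# Demonstrate the include fix on a scratch file.
cat > /tmp/kcm_port_demo.h <<'EOF'
#include <QtGui/QStackedWidget>
#include <QtCore/QString>
EOF

# Strip the module prefix: <QtGui/Foo> and <QtCore/Foo> become <Foo>.
sed -i 's,#include <Qt\(Gui\|Core\)/,#include <,' /tmp/kcm_port_demo.h

cat /tmp/kcm_port_demo.h
# -> #include <QStackedWidget>
# -> #include <QString>
```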

Q_SLOTS

A common problem is that the code uses slots and signals and that should be Q_SLOTS or Q_SIGNALS. A compile error looks like this:

/home/martin/src/kf5/kde-workspace/kcontrol/keys/select_scheme_dialog.h:38:9: error: expected ‘:’ before ‘slots’
 private slots:
         ^
/home/martin/src/kf5/kde-workspace/kcontrol/keys/select_scheme_dialog.h:38:9: error: ‘slots’ does not name a type

Easy to fix: just replace slots with Q_SLOTS. I recommend recompiling directly after fixing one of these, as they cause many follow-up errors.
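This one is also mechanical enough to script (GNU sed syntax; review the matches, since e.g. a variable named "signals" would be rewritten too). Again demonstrated on a scratch file:

```shell
# Demonstrate the keyword fix on a scratch file.
cat > /tmp/kcm_port_demo2.h <<'EOF'
private slots:
    void save();
signals:
    void changed();
EOF

# Replace the Qt keywords with their macro variants.
sed -i 's/\bslots\b/Q_SLOTS/g; s/\bsignals\b/Q_SIGNALS/g' /tmp/kcm_port_demo2.h

cat /tmp/kcm_port_demo2.h
# -> private Q_SLOTS:
# ->     void save();
# -> Q_SIGNALS:
# ->     void changed();
```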

i18n

In KDELibs4 the include of KLocale was needed for i18n, in KF5 it’s KLocalizedString. So this is bound to fail:

/home/martin/src/kf5/kde-workspace/kcontrol/keys/globalshortcuts.cpp:67:89: error: ‘i18n’ was not declared in this scope
                     i18n("You are about to reset all shortcuts to their default values."),
                                                                                         ^

The fix is really simple. Look for

#include <KLocale>

and replace by

#include <KLocalizedString>

In the uncommon situation that something from KLocale is used you of course need to keep it and add ${KDE4Attic_LIBRARIES} to the target_link_libraries in the CMakeLists.txt. Another possibility is to directly port to QLocale.

KGlobal::config()

In case you get a compile error about missing KGlobal when using KGlobal::config(), do not just add the missing include, but port over to the new way:

KSharedConfig::openConfig();

KComponentData

Each KCM I ported so far failed with the following error in the ctor:

/home/martin/src/kf5/kde-workspace/kcontrol/keys/globalshortcuts.cpp: In constructor ‘GlobalShortcutsModule::GlobalShortcutsModule(QWidget*, const QVariantList&)’:
/home/martin/src/kf5/kde-workspace/kcontrol/keys/globalshortcuts.cpp:37:13: error: ‘componentData’ is not a member of ‘GlobalShortcutsModuleFactory’
  : KCModule(GlobalShortcutsModuleFactory::componentData(), parent, args),
             ^

The solution is again very simple: just drop the call to componentData():

  : KCModule(parent, args)

KAboutData

KAboutData changed in frameworks with the old one being moved to K4AboutData. Luckily the changes are rather simple and a pattern can be used:

  • ki18n -> i18n
  • KLocalizedString() -> QString()
  • 0 -> QString()
  • wrap normal strings in QStringLiteral()
As an example the old code:

 KAboutData *about =
     new KAboutData(I18N_NOOP("kcmfoo"), 0,
                    ki18n("KDE Foo Module"),
                    0, KLocalizedString(), KAboutData::License_GPL,
                    ki18n("(c) 2013 Bar FooBar, Konqui"));

 about->addAuthor(ki18n("Bar FooBar"), KLocalizedString(), "foobar@kde.org");
 about->addAuthor(ki18n("Konqui"), KLocalizedString(), "konqui@kde.org");

becomes:

 KAboutData *about =
     new KAboutData(I18N_NOOP("kcmfoo"), QString(),
                    i18n("KDE Foo Module"),
                    QString(), QString(), KAboutData::License_GPL,
                    i18n("(c) 2013 Bar FooBar, Konqui"));

 about->addAuthor(i18n("Bar FooBar"), QString(), QStringLiteral("foobar@kde.org"));
 about->addAuthor(i18n("Konqui"), QString(), QStringLiteral("konqui@kde.org"));

Common other problems

Usages of KUrl can in most cases just be switched to QUrl and it will work as intended. The same is true for KAction, though if it sets global shortcuts you need to properly port it following the steps in the porting guide. A usage of KFileDialog can be changed to QFileDialog – be aware that the order of arguments differs in QFileDialog.

Linker errors

If everything went fine you should reach a point where you hit linker errors. Now we need to add the required frameworks to the target link libraries. I try to locate the header file of a class which produced a linker error to find its framework. E.g.

$ locate kiconloader.h
/home/martin/src/kf5/kdelibs-frameworks/tier3/kiconthemes/src/kiconloader.h

So the file is part of KIconThemes, and thus one needs to add KF5::KIconThemes to target_link_libraries.

Testing

Once all linker errors are fixed, install the KCM and run in your KF5 environment:

$ kbuildsycoca5
$ kcmshell5 foo

kcmshell5 is part of kde-runtime. In case you don't have it installed you can also use systemsettings for a KCM that is listed there. Otherwise just build kde-runtime.

And that's it. Not much work and hardly any code to change.

How Desktop Grid brought back SystemSettings

One of my personal adjustments to KWin is using the top right screen edge as a trigger for the Desktop Grid effect. With the switch to KWin on Qt 5 this no longer worked for me, as we don’t have code in place to transfer the old configuration to the new system.

Last week I had enough of it. It was breaking my workflow. So I had to do something about it and had the following possible options:

  • Change the default: bad idea as we don’t want to use the upper right corner as that’s where maximized windows have their close button
  • Carry a patch to change the default on my system: bad idea as that breaks my git workflow
  • Just change the value in kwinrc
  • Bring back the configuration module

As I didn’t consider just modifying the config value in kwinrc a durable solution, I decided to go for bringing back the configuration module (KCM).

After a little bit of hacking, and after facing the problem that it used a not-yet-ported library inside kde-workspace, I got the code compiled and installed. But when I started it using kcmshell5 from kde-runtime I hit an assert.

So either my code was broken or kcmshell5. I didn’t really want to investigate the issue of the assert and decided to try the most obvious thing to me: use another implementation to test the kcm. And so I started to port systemsettings.

After about one hour of work I had systemsettings compiled, linked and installed and could start it, but alas my KCM was still hitting the assert. So what to do? Was it my KCM, or was something more fundamental broken that needed fixing in a different layer? To find out (I still didn’t want to investigate the assert itself) I started to port a few very simple KCMs from kde-workspace, as systemsettings is at the moment still rather empty. And look there: they worked in both kcmshell5 and in systemsettings.

So I started to port another KWin module, and that one also hit the same assert. After some discussion on IRC with Marco I learnt that he had hit the same assert in yet another KCM, which meant there was a pattern. And finally I started to investigate. Very soon I had a testcase with a slightly reduced KCM which I could get to hit the assert by adding one additional element to the ui file. A good start to find a workaround, and now my KCM loads in systemsettings and I have my screen edge back:

Screenshot featuring both systemsettings and KWin running on Qt 5 and KDE Frameworks 5!

Now all I need to do is to extract my minimum “make it crash” code as a testcase and submit a bug report to Qt.

kde-workspace: frameworks-scratch is dead, long live master

This is a public service announcement: the frameworks-scratch branch in kde-workspace is no more and has been merged into master. This means master depends on Qt 5 and KDE Frameworks 5!

If you used to build from master you will need these dependencies now. In case you don’t have Qt 5 or KF 5 installed, the build will obviously end in a CMake error. Please be aware that kde-workspace on Qt 5 is still pre-alpha quality and probably not suited for everyday usage. If you used to build kde-workspace from master for everyday usage, you want to switch to the KDE/4.11 branch, which will be kept alive for a longer time.

We are sorry for any inconvenience this change might cause you. Happy Hacking!

Generating test coverage for a framework

Over the last week I was trying to add unit tests to a rather old piece of code in frameworks. Of course I wanted to know how good these tests are, so I looked into how one can generate test coverage for our tests. As I consider this rather useful, I thought I’d share the steps.

For generating the test coverage I used gcov, which is part of gcc, so there is nothing to install there. As a frontend I decided on lcov, as it can generate some nice HTML views. On Debian-based systems it is provided by the package lcov.

Let’s assume that we have a standard framework with:

  • framework/autotests
  • framework/src
  • framework/tests

The first step to enable test coverage is to modify the CMakeLists.txt. There is the possibility to set CMAKE_BUILD_TYPE to “Coverage”, but that would require building all of the frameworks with it – interesting for the CI system, but maybe not for developing just one unit test.

In your framework toplevel CMakeLists.txt add the following definition:

add_definitions(-fprofile-arcs -ftest-coverage)

And also adjust the linking of each of your targets:

target_link_libraries(targetName -fprofile-arcs -ftest-coverage)

Remember to not commit those changes 😉
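If you are worried about committing them by accident, one possible refinement (the option name BUILD_COVERAGE is my own invention, not a frameworks convention) is to guard the flags behind a CMake option that defaults to off:

```cmake
# BUILD_COVERAGE is a made-up option name; off by default so
# normal builds are unaffected
option(BUILD_COVERAGE "Instrument the build for gcov test coverage" OFF)
if(BUILD_COVERAGE)
    add_definitions(-fprofile-arcs -ftest-coverage)
    target_link_libraries(targetName -fprofile-arcs -ftest-coverage)
endif()
```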

Now recompile the framework, go to the build directory of your framework and run:

make test

This generates the basic coverage data which you can now process with lcov run from your build directory:

lcov --capture --directory ./ --output-file coverage.info

And the last step is to generate the HTML output:

genhtml coverage.info --output-directory /path/to/generated/pages

Open your favorite web browser, load the index.html from the specified output directory and enjoy.