How Desktop Grid brought back SystemSettings

One of my personal adjustments to KWin is using the top right screen edge as a trigger for the Desktop Grid effect. With the switch to KWin on Qt 5 this hasn't worked for me any more, as we don't have code in place to transfer the old configuration to the new system.

Last week I had enough of it; it was breaking my workflow. So I had to do something about it, and saw the following possible options:

  • Change the default: bad idea, as we don't want to use the upper right corner, which is where maximized windows have their close button
  • Carry a patch to change the default on my system: bad idea as that breaks my git workflow
  • Just change the value in kwinrc (see the sketch after this list)
  • Bring back the configuration module
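For reference, option three would have been a quick edit. A sketch of what the relevant kwinrc entry looked like in the 4.x series; the group and key names are from memory, so treat them as an assumption rather than gospel:

[Effect-DesktopGrid]
# the numbers map to KWin's ElectricBorder enum; top right should be 1
BorderActivate=1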

Since I didn't consider just modifying the config value in kwinrc a durable solution, I decided to go for bringing back the configuration module (KCM).

After a little bit of hacking, and after facing the problem that it used a not-yet-ported library inside kde-workspace, I got the code compiled and installed. But when I started it using kcmshell5 from kde-runtime I hit an assert.
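For reference, kcmshell5 loads a single module by name; the module name below is just a placeholder:

kcmshell5 --list
kcmshell5 somemodule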

So either my code was broken or kcmshell5 was. I didn't really want to investigate the assert and decided to try the most obvious thing: use another implementation to test the KCM. And so I started to port systemsettings.

After about one hour of work I had systemsettings compiled, linked and installed and could start it, but alas, my KCM was still hitting an assert. So what to do? Was it my KCM, or was something more fundamental broken which needed fixing in a different layer? How to test without actually investigating? I started to port a few very simple KCMs from kde-workspace, as systemsettings is at the moment still rather empty. And look there: they worked in both kcmshell5 and systemsettings.

So I started to port another KWin module, and that one hit the same assert. After some discussion on IRC with Marco I learnt that he had also hit the same assert in another KCM, which meant there was a pattern. And finally I started to investigate. Very soon I had a testcase with a slightly reduced KCM which I could get to hit the assert by adding one additional element to the ui file. A good start for finding a workaround, and now my KCM loads in systemsettings and I have my screen edge back:

The screenshot features both systemsettings and KWin using Qt 5 and KDE Frameworks 5!

Now all I need to do is to extract my minimum “make it crash” code as a testcase and submit a bug report to Qt.

kde-workspace: frameworks-scratch is dead, long live master

This is a public service announcement: the frameworks-scratch branch in kde-workspace is no more and has been merged into master. This means master depends on Qt 5 and KDE Frameworks 5!

If you used to build from master you will need these dependencies now. In case you don't have Qt 5 or KF 5, the build will obviously end in a CMake error. Please be aware that kde-workspace on Qt 5 is still pre-alpha quality and probably not suited for everyday usage. If you used to build kde-workspace from master for everyday usage, you want to switch to the KDE/4.11 branch, which will be kept alive for a longer time.
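If you have an existing kde-workspace clone, switching is a one-liner:

git checkout KDE/4.11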

We are sorry for any inconvenience this change might cause you. Happy Hacking!

Generating test coverage for a framework

Over the last week I was trying to add unit tests to a rather old piece of code in frameworks. Of course I wanted to know how good these tests are, and looked into how one can generate test coverage for them. As I consider this rather useful, I thought I'd share the steps.

For generating the test coverage I used gcov, which is part of gcc, so there is nothing to install there. As a frontend I decided on lcov, as it can generate some nice HTML views. On Debian-based systems it is provided by the package lcov.

Let’s assume that we have a standard framework with:

  • framework/autotests
  • framework/src
  • framework/tests

The first step to enable test coverage is to modify the CMakeLists.txt. There is the possibility to switch to CMAKE_BUILD_TYPE “Coverage”, but that would require building all of frameworks with it. Interesting for the CI system, but maybe not for just developing one unit test.
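For completeness: enabling coverage globally would be just a matter of the build type when configuring, assuming an out-of-source build:

cmake -DCMAKE_BUILD_TYPE=Coverage /path/to/framework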

In your framework's top-level CMakeLists.txt add the following definition:

add_definitions(-fprofile-arcs -ftest-coverage)

And also adjust the linking of each of your targets:

target_link_libraries(targetName -fprofile-arcs -ftest-coverage)
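As an aside, gcc also understands the shorthand --coverage, which implies both flags at compile and link time, so the following should be equivalent:

add_definitions(--coverage)
target_link_libraries(targetName --coverage)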

Remember to not commit those changes 😉

Now recompile the framework, go to the build directory of your framework and run:

make test
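make test is just a wrapper around ctest, so you can also restrict the run to your new test; the test name here is a placeholder:

ctest -R myframework-sometest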

Running the tests generates the basic coverage data, which you can now process with lcov from your build directory:

lcov --capture --directory ./ --output-file coverage.info
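The captured data usually also contains system and Qt headers. lcov can strip those out before generating the report; the pattern is just an example:

lcov --remove coverage.info '/usr/*' --output-file coverage.info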

And the last step is to generate the HTML output:

genhtml coverage.info --output-directory /path/to/generated/pages

Open your favorite web browser and load index.html in the specified path and enjoy.

Generating a private key I can trust

Given last week's news about the state of cryptography and the influence of the NSA on standards, I decided to enter paranoid/tinfoil-hat mode. The result is that I no longer consider my asymmetric keys long enough, so I need to regenerate them. This should be an easy task, but I'm in paranoid mode.
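The mechanical part itself would be simple enough. A sketch using GnuPG's unattended key generation, with name, e-mail and expiry as placeholders:

cat > keyparams << EOF
Key-Type: RSA
Key-Length: 4096
Subkey-Type: RSA
Subkey-Length: 4096
Name-Real: John Doe
Name-Email: john@example.org
Expire-Date: 2y
%ask-passphrase
%commit
EOF
gpg --batch --gen-key keyparams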

The big problem is: can I trust my systems to generate a safe key? I decided no, I cannot. Not without investing some time. Normally I would trust my distribution, but I once had to regenerate my SSH key because they got random numbers wrong:
Random Number by Randall Munroe (Creative Commons Attribution-NonCommercial 2.5 License)

So whom do I actually trust? This list is short: the Linux kernel (Linus' tree) and FSF/GNU.

Whom do I not trust? That gets more complicated. First of all, I do not trust any hardware. It's impossible to verify that the hardware doesn't have a backdoor, and randomness looks random even if it has been tampered with.

Of course I do not trust the US, nor any US-based company or company which has interactions with the US. As we had to learn, the NSA is putting backdoors into products of US companies, and the companies are not allowed to talk about it. This means I do not trust the Linux kernel of any distribution based in the US or with relations to the US. Obviously I extend this to all distributions from companies based in other spying countries (e.g. the Five Eyes). This makes the list rather short.

I also do not trust binaries, as there is no way to ensure that a binary reflects the source code of the package[1]. This further reduces the list. I'm basically left with Linux From Scratch or Gentoo, two distributions I do not have any experience with. Or I use a binary distribution and build the required packages myself (Linux kernel, GPG). Obviously there is still the risk of a tampered compiler, but I consider that risk rather academic.

Last but not least, I do not trust my systems or myself. If I keep the key on the hard disk, the security is basically reduced to the strength of the chosen passphrase. Hard disk encryption can add some security, but I prefer to keep my system in suspend, so the key might be in memory, and there is always the risk of cold boot attacks. In summary, I do not think that hard disk encryption is a solution for protecting the key. There is also always the risk of an application attacking the system: getting the passphrase through an X11 keyboard logger is unfortunately trivial.
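To illustrate how trivial: any client connected to the same X server can read input events, for example with xinput (the device id is whatever xinput list reports for your keyboard):

xinput list
xinput test <keyboard-id>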

A solution to this problem is keeping the key on an external dedicated device. But this of course conflicts with my "I do not trust hardware" requirement: if hardware random number generators were involved in creating the keys or doing the encryption, that would be problematic.

The requirement is therefore a hardware device which keeps the key secure, but neither generates it nor is involved in the session encryption. Today I ordered an OpenPGP Smartcard. This fulfills most of my requirements: it's trusted by the FSFE (Fellowship card) and developed by a company which also develops GnuPG and is based in Germany. I still do not trust hardware, but one can upload an externally created key.
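Moving an externally created key onto the card is done from GnuPG's edit-key prompt; KEYID is a placeholder, and a backup first is essential, since keytocard moves the key off the disk:

gpg --export-secret-keys --armor KEYID > secret-key-backup.asc
gpg --edit-key KEYID
# at the gpg> prompt: "keytocard" moves the primary key; "key 1" followed by "keytocard" moves a subkey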

So sometime soon I will blog the public key of my new key pair.

[1] I am aware that Bitcoin is experimenting in that direction. But this doesn't help me with the problem of verifying the Linux kernel.

Next step: dogfooding

Almost a month since my last blog post, and of course lots of work in KWin in the frameworks-scratch branch since then: about 160 commits. Today I finally reached the next milestone: dogfooding. I dared to replace the KWin of my working session with the new version based on Qt 5, and it's usable. Of course it's not yet in a fully functional state, but there's still quite some time till our release, and being able to dogfood will make the work much easier, as I can spot regressions better than in the restricted Xephyr session.

The main remaining problem I was facing was regressions in the window decoration: once you resized a window, the decoration was broken. I spent quite some time in the debugger to figure out what was going wrong. As Qt switched from Xlib to XCB, some of the assumptions inside KWin didn't hold any more. These were fun debug sessions. In the end, a one-line change after each window resize step fixed the issue.

With that done the most important functionality is present. I would share a screenshot, but it doesn’t make much sense as there is visually no difference to notice at all. Hugo did an awesome job with Oxygen and that means everything looks the same.

As compositor I'm currently using the OpenGL 2 on EGL backend, as the GLX one is rather broken. I haven't investigated yet and won't any time soon. So if you want to test, better use the environment variable to force EGL, or just fix it 😉 I still hope that EGL support will be present in all major drivers by our next release, which would allow us to just drop the GLX backend.
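That is, assuming the variable name hasn't changed in the branch, something like:

KWIN_OPENGL_INTERFACE=egl kwin --replace &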

Of course there is still lots of work needed, and your help is always appreciated. And of course there is the chance to get a sneak preview and a development setup by using, for example, Project Neon.