KWin is one of the oldest KDE applications still in active development without having been rewritten from scratch. Development of KWin started in 1999 for KDE 2, but some parts of KWin even date back to KWM from KDE 1 times. The main software design of the Window Manager component hasn’t changed that much since the initial commit – there is a “Workspace” which manages “Clients”.
Personally I find it very interesting to see such old parts of the code and software design. If we think about it, we realize that development started just a few years after the Gang of Four published their book on design patterns. Development methods like test-driven development and unit testing were also first described around that time.
Given that, it is not surprising that unit tests were not added to the code base when development of KWin started. By the time QtTestLib was introduced, KWin had already been in development for six years and was not a code base which could easily be unit tested. Some classes inside KWin had grown to a size which makes it hardly possible to mock the required functionality.
Using QtTestLib also required the port to Qt 4. At the same time compositing support was added to KWin – an area which mostly concerns visual representation and which is hardly unit testable at all. It is quite understandable that at the time nobody considered it worth the effort to add unit tests to the existing code base.
It is also quite difficult to start with unit testing if you have been developing an application for a decade without unit tests. You might not see the need for tests, and in general adding the first test is the most difficult step. For an application like KWin, which depends heavily on interaction with an external application (the X-Server), it is easy to see why one might consider unit testing impossible at all.
Personally I believe in the usefulness of unit tests, especially during bug fixing. So I think each bug fix should come with a unit test illustrating the problem. Unfortunately there is still a long way to go before such a policy can be established in KWin, due to the difficulties of unit testing a window manager.
Nevertheless even KWin has parts which can be unit tested, and recently I added the very first test to KWin. In 4.8 the KConfig Update Script had a small bug which in very rare cases resulted in an incorrectly migrated configuration. When adding the KConfig Update Script for 4.9 I wanted to make sure that all possible upgrade paths are considered, and therefore added the very first unit test for the KConfig Update Script.
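Just to give an idea of what such a test can look like, here is a minimal sketch in the QtTestLib style. The migration function, names and values are invented for illustration; this is not the actual KWin test:

#include <QtTest/QtTest>

// Invented stand-in for the kind of value migration an update script performs.
static QString migrateValue (const QString &oldValue)
{
    // the old config stored "on", the new format expects "true"
    if (oldValue == QLatin1String ("on"))
        return QString::fromLatin1 ("true");
    return oldValue;
}

class UpdateScriptTest : public QObject
{
    Q_OBJECT
private Q_SLOTS:
    void testMigration_data ();
    void testMigration ();
};

void UpdateScriptTest::testMigration_data ()
{
    QTest::addColumn<QString> ("old");
    QTest::addColumn<QString> ("expected");
    // one data row per upgrade path we want to cover
    QTest::newRow ("legacy value") << "on" << "true";
    QTest::newRow ("already migrated") << "true" << "true";
}

void UpdateScriptTest::testMigration ()
{
    QFETCH (QString, old);
    QFETCH (QString, expected);
    QCOMPARE (migrateValue (old), expected);
}

QTEST_MAIN (UpdateScriptTest)
#include "updatescripttest.moc" // assuming this file is updatescripttest.cpp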
With that the most important first step is done: the KWin source tree contains a tests directory, and it now becomes easier to add further QtTestLib based tests. So when I recently started to refactor some code which can be unit tested, I decided to add a unit test together with the new implementation. And I must say it turned out to be very useful – some minor bugs I had introduced could be spotted easily without any code investigation. Just the way I like it.
Unfortunately unit testing with QtTestLib only allows testing a very small portion of KWin. Anything interacting with the X-Server cannot be tested that way. Many parts require being the window manager, and on a given X-Server only one window manager can be running. So a unit test would need to be a window manager itself and would interfere with the running system – nothing we want from a unit test.
Given that, we would basically have to start the full-blown KWin to perform tests which interact with the X-Server. Unit tests are out of scope and only integration tests seem feasible. But running tests against the user’s running KWin cannot work either, as they would interfere with the running system: how, for example, do you test that a window is set to keep above if the user has a window rule forcing the same window to keep below?
This is a difficult problem to solve. We basically need a dedicated testing framework which starts a (nested) X-Server, starts KWin, performs a test, and shuts down both KWin and the X-Server again – a framework which is decoupled from the running system.
I’m currently supervising a Bachelor thesis to evaluate the possibilities of such a framework and to implement a prototype tailored towards the needs of KWin. The current ideas look very promising and are based on KWin’s scripting capabilities: KWin scripts get injected to test certain functionality. I’m looking forward to this implementation as I hope it will make our life easier – even if it will not be possible to run the tests on a Jenkins installation.
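To sketch the idea, here is a rough illustration of such a harness. This is my own sketch, not the thesis prototype; the display number, the script path, and the D-Bus service, path and method used to inject the script are assumptions:

#include <QtCore/QCoreApplication>
#include <QtCore/QProcess>
#include <QtCore/QProcessEnvironment>
#include <QtCore/QStringList>

int main (int argc, char **argv)
{
    QCoreApplication app (argc, argv);

    // start a nested X-Server on its own display, decoupled from the session
    QProcess xephyr;
    xephyr.start ("Xephyr", QStringList () << ":5");
    if (!xephyr.waitForStarted ())
        return 1;
    // (omitted: waiting until the display actually accepts connections)

    // start KWin against the nested display
    QProcessEnvironment env = QProcessEnvironment::systemEnvironment ();
    env.insert ("DISPLAY", ":5");
    QProcess kwin;
    kwin.setProcessEnvironment (env);
    kwin.start ("kwin");
    if (!kwin.waitForStarted ())
        return 1;

    // inject a KWin script exercising the behaviour under test; service,
    // path and method are assumptions about KWin's scripting D-Bus interface
    QProcess loader;
    loader.setProcessEnvironment (env);
    loader.start ("qdbus", QStringList () << "org.kde.kwin" << "/Scripting"
                                          << "loadScript" << "/path/to/testscript.js");
    loader.waitForFinished ();
    // (omitted: collecting and evaluating the script's results)

    // tear down KWin and the nested X-Server again
    kwin.terminate ();
    kwin.waitForFinished ();
    xephyr.terminate ();
    xephyr.waitForFinished ();
    return 0;
}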
I also really like the idea of working together with students. We already do that during GSoC, but there are so many more possibilities for FLOSS to work together with academia. Be it a Bachelor thesis or just a seminar paper: FLOSS source code is a great opportunity to work with real-world software instead of the demo applications so often used at universities.
Considering the fact that this mostly revolves around X.org, will things change with Wayland?
With Wayland we won’t need the interaction with the X-Server any more, but I don’t expect to see an X-free KWin any time soon. But of course for the Bachelor thesis we discussed how to ensure that the tests will also work with Wayland.
I had a similar problem while writing tests for Nepomuk. We required a separate instance of the database to be running and a separate dbus session. Your case is, however, a lot more complex.
You can look over my code if you want. It’s fairly simplistic, but it helped me find a couple of bugs – http://vhanda.in/blog/2012/03/nepomuk-test-framework/
Right now, it starts a separate dbus session and KDE session and initializes the virtuoso db for the first run. It then starts with the tests.
Isn’t what you describe a system test?
According to Wikipedia, system testing is black-box testing, while what I describe would still be white-box testing.
We’re actually doing something similar (but rather simple) in Gentoo: we allow the test phase to start up its own framebuffer X server. See http://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/eclass/virtualx.eclass?view=markup
That’s something to look at. Thanks for the link.
I don’t think an X-server is really necessary for you.
You can write a simple dummy library with the same interface provided by X11, and load that library instead of X11 when doing the unit test.
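A minimal sketch of what such an interposed dummy could look like, loaded via LD_PRELOAD (the canned values are of course invented):

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <cstring>

// Overrides the real Xlib symbol when the library is loaded with
// LD_PRELOAD, returning canned attributes instead of asking a server.
extern "C" Status XGetWindowAttributes (Display *, Window,
                                        XWindowAttributes *attrs)
{
    std::memset (attrs, 0, sizeof (*attrs));
    attrs->width = 640;            // invented test values
    attrs->height = 480;
    attrs->map_state = IsViewable;
    return 1;                      // non-zero signals success
}

// build:  g++ -shared -fPIC dummy.cpp -o libdummyx11.so
// run:    LD_PRELOAD=./libdummyx11.so ./some_test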
If you give me that library, then yes. But I doubt anyone wants to write a mock of X11.
Have you considered what is done to test the XMonad window manager? It is way lighter than KWin, hence easier to test. However, the fact that Haskell clearly separates side-effecting code from pure code makes it easy to extract the ‘logic’ of the window manager and test it using their regular testing framework (QuickCheck).
TL;DR: XMonad’s programmers separate the logic from the interface with X (which is tiny) and test the former in a regular way with QuickCheck.
We tried having a full mock X Server when we were looking at how to test compiz. The idea was that we needed something that behaves like an X Server to get an accurate idea of how compiz is behaving. So lp:xig was born: an emulator of the X protocol with an internal model of what X’s state should look like. Xig allowed us to verify that certain requests were made in certain operations and that certain events were posted back.
What we found is that 90% of our time was spent fixing problems in Xig to do with how it communicated certain requests, especially tricky ones like property notifications and property updates.
Integration testing is important, but for now I’ve decided a higher priority is to get things under a sensible unit-test story. A large fallacy which I’ve been trying to “correct” in Canonical is the belief that if you have some external system dependency X, then you can’t test without X, because you don’t know what the behaviour looks like.
In reality, I’ve found that this is not true. If you have good code design and good mocking frameworks like Google Mock, it is easy to get almost anything under test. Recently I refactored our pixmap update code, which was dependent on XComposite, to get it 100% under test (have a look: https://code.launchpad.net/~compiz-team/compiz/compiz.fix_1002602_tests/+merge/108448). If you code to interfaces and use dependency injection sensibly, you can get the trickiest of interwoven complex dependencies under test.
Let’s say, for example, that you have some code that’s dependent on XGetWindowAttributes and XConfigureWindow. The first thing I’d always do is hide those two functions behind an interface, for example:
class X11WindowInterface
{
public:
    virtual ~X11WindowInterface () {}
    virtual void ConfigureWindow (const XWindowChanges &xwc, unsigned int mask) = 0;
    virtual bool GetWindowAttributes (XWindowAttributes &attrib) = 0;
};
Now I make my base window class implement this interface:
class MyBaseWindow :
    public X11WindowInterface
{
    …
};
(Let’s say, for example, that MyBaseWindow holds the actual window id and Display *.)
Now let’s look at some function in MyBaseWindow that I want to get under test:
void MyBaseWindow::doSomethingWithWindow ()
{
    XWindowAttributes attrib;
    if (XGetWindowAttributes (dpy (), win (), &attrib))
    {
        if (attrib.blah)
            bar ();
        if (attrib.foo)
            baz ();
        … // (xwc and mask get set up somewhere in here)
        XConfigureWindow (dpy (), win (), &xwc, mask);
    }
}
That code is pretty terrible for dependencies, so let’s refactor a bit:
void doSomething (Display *dpy, Window win)
{
    XWindowAttributes attrib;
    if (XGetWindowAttributes (dpy, win, &attrib))
    {
        if (attrib.blah)
            bar ();
        if (attrib.foo)
            baz ();
        …
        XConfigureWindow (dpy, win, &xwc, mask);
    }
}
Almost there – we still depend on bar () and baz (), so let’s hide them:
class BazzyBarCallInterface
{
public:
    virtual ~BazzyBarCallInterface () {}
    virtual void bar () = 0;
    virtual void baz () = 0;
};
And MyBaseWindow inherits that too, and forwards to the real bar () and baz () in our program.
Now we can express the code like this:
void doSomething (X11WindowInterface *x11, BazzyBarCallInterface *call)
{
    XWindowAttributes attrib;
    if (x11->GetWindowAttributes (attrib))
    {
        if (attrib.blah)
            call->bar ();
        if (attrib.foo)
            call->baz ();
        …
        x11->ConfigureWindow (xwc, mask);
    }
}
Now our code only depends on some interfaces. Let’s say we want to test it.
Google Mock is amazing at this. Have a look:
#include <gmock/gmock.h>
#include <gtest/gtest.h>

using ::testing::_;
using ::testing::Return;
using ::testing::Invoke;
using ::testing::AllOf;
using ::testing::Field;
OK, so we want to test that when GetWindowAttributes returns true and attrib.foo is set but attrib.blah is not, baz () is called and ConfigureWindow is called with x = 1, y = 2, width = 3, height = 4 and the matching mask.
So first we create “mock” classes which implement those interfaces:
class MockX11Window :
    public X11WindowInterface
{
public:
    MOCK_METHOD1 (GetWindowAttributes, bool (XWindowAttributes &));
    MOCK_METHOD2 (ConfigureWindow, void (const XWindowChanges &, unsigned int));
};
(ditto for MockBazzyBar)
Now we create a test fixture.
class DoSomethingTest :
    public ::testing::Test
{
};
We also need a fake X11Window which returns the attrib we want.
class FakeX11WindowAttribReturn :
    public X11WindowInterface
{
public:
    FakeX11WindowAttribReturn (XWindowAttributes &a) : mAttrib (a) {}
    bool GetWindowAttributes (XWindowAttributes &at)
    {
        at = mAttrib;
        return true;
    }
    // unused, but needed to make the fake instantiable
    void ConfigureWindow (const XWindowChanges &, unsigned int) {}
private:
    XWindowAttributes mAttrib;
};
TEST_F (DoSomethingTest, TestBazCalledOnAttribFoo)
{
    XWindowAttributes attrib;
    MockX11Window mx;
    MockBazzyBar mb;
    attrib.blah = false;
    attrib.foo = true;
    FakeX11WindowAttribReturn far (attrib);
    unsigned int mask = CWX | CWY | CWWidth | CWHeight;
    EXPECT_CALL (mx, GetWindowAttributes (_))
        .WillOnce (Invoke (&far, &FakeX11WindowAttribReturn::GetWindowAttributes));
    EXPECT_CALL (mb, baz ());
    // XWindowChanges has no operator==, so match the fields we care about
    EXPECT_CALL (mx, ConfigureWindow (AllOf (Field (&XWindowChanges::x, 1),
                                             Field (&XWindowChanges::y, 2),
                                             Field (&XWindowChanges::width, 3),
                                             Field (&XWindowChanges::height, 4)),
                                      mask));
    doSomething (&mx, &mb);
}
Now your doSomething is under test!
In terms of the argument that you expect certain events in response to certain requests: the reality is that the X Server is just a black box, and you can test the event-handling and request-making code independently. After all, all you care about is the particular small unit you’re testing. So you can synthesize the events by making your event handler call certain functions in the actual “event reactor” classes. Same idea basically.
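A minimal sketch of that last idea, with invented names: the event loop only dispatches, so a test can hand a synthesized XEvent straight to the handler without any server involved.

#include <X11/Xlib.h>

// The "event reactor" owns the reaction logic; the event loop only dispatches.
class EventReactorInterface
{
public:
    virtual ~EventReactorInterface () {}
    virtual void windowMapped (Window w) = 0;
};

void handleEvent (const XEvent &event, EventReactorInterface *reactor)
{
    if (event.type == MapNotify)
        reactor->windowMapped (event.xmap.window);
}

// In a test no server is involved – synthesize the event directly:
//   XEvent event;
//   event.type = MapNotify;
//   event.xmap.window = someWindowId;
//   handleEvent (event, &mockReactor); // EXPECT_CALL (mockReactor, windowMapped (someWindowId))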
Hope that helps 🙂 We have about 130 unit tests in compiz and that number is growing now thanks to this strategy.