As you might have seen in Jonathan’s blog post, we discussed Mir in Kubuntu at the “Mataro Sessions II”. It’s a topic I would have preferred not to discuss at all. But the dynamics in the free software world force us to discuss it, and obviously our downstream needs to know why we as an upstream do not consider Mir adoption a valid option.
This highlights a huge problem Canonical created with Mir. I cannot just say “Canonical sucks”[1] to discard Mir as an option; I have to provide proper technical arguments for why we won’t integrate Mir. I have to invest time to investigate the differences, advantages and disadvantages. Now that I have those arguments, I thought it might be a good idea to share them in a blog post.
The discussion started during a presentation about X11 and Wayland to my fellow team mates at Blue Systems. I decided to first explain X11, as I think one cannot understand the need for Wayland without understanding X11. I did not intend to discuss Mir at all, but somehow the discussion drifted in that direction, and valid questions were raised about the differences and advantages of Mir and Wayland. What followed was kind of a rant about Ubuntu and Canonical [2]. So later in the week we discussed “Mir in Kubuntu” in more detail to try to find answers to the many questions this raises for our downstream.
Introduction
Frustration and lost Motivation
Before I go into more detail I want to make one thing clear: Canonical is totally allowed to develop whatever they want. I’m totally fine with this and don’t care whether they develop another display server, their own OS kernel or yet another desktop shell. I couldn’t care less. It’s Canonical/Mark’s money and he can invest it in any way he considers useful. I wouldn’t even care if it were proprietary software; that’s all fine.
What is not fine is causing a major disruption in the free software ecosystem by giving false technical arguments and making bold statements about software Canonical does not contribute to. This is not acceptable. It was very frustrating and destroyed lots of the trust I had in Canonical. It will be difficult to rebuild this trust. Canonical can be glad that this is the free software world and not the normal corporate world. There were quite some statements which would have woken up the legal department in the normal corporate world[3]. It also cost lots of motivation, at least on my side, and I even questioned whether it’s still worth being a member of the free software ecosystem. Instead of working together we now have a situation where a member of the ecosystem becomes a competitor and badmouths part of the software stack. A very frustrating situation.
There certainly are valid reasons for developing Mir which also make sense. Unfortunately they have not been presented so far. I’m quite sure that I know the reasons, and if they would have been said straight away it would probably have been much easier for me and other projects. It would have taken away the frustration the announcement caused, and we would not need to discuss it at all, because those question marks would not exist. But apparently Canonical decided to give false technical arguments instead of the real ones.
Not ready yet
At the moment Mir is not there yet; this is important to remember. With the announcement we basically had four options for how to handle the situation.
- Continue with the Wayland plan and ignore Mir
- Switch to Mir and ignore Wayland
- Support Mir and Wayland
- Delay decision until Mir is ready
If I map our timeline for Plasma Workspaces 2 against the timeline of Mir, I see no overlap. We want to support Wayland before Mir is ready. So delaying the decision would be a rather bad idea; it would just throw us back. This also means that option 2 is not valid, especially as we would need to delay until Mir is ready for this to happen. So the only valid options are supporting both Mir and Wayland, or only Wayland. At the moment the code is not ready for us to properly decide whether supporting Mir in addition to Wayland is a valid approach or not. Last time I checked the source base I hit a few stubs and then obviously stopped looking at the code, as it’s not worth the effort yet. So we have to evaluate based on the knowledge we already have, and that doesn’t look good on the Mir side.
Wayland vs Mir
Possible Advantages of Mir over Wayland
The differences between Mir and Wayland are rather minimal. One of them is that Mir uses server-allocated buffers while Wayland uses client-side buffer allocation. I cannot judge whether this is an advantage or a disadvantage. But I trust Kristian and the Wayland team more on that topic.
Another difference is that Mir uses test-driven development (TDD). To me a development methodology is not a technical argument. I’d rather use a working system without unit tests than a system with unit tests that doesn’t work [4]. Also, KWin does not use TDD. If I would consider TDD superior I would have to question my own development methodology.
But that’s it. Those are the only differences I have found so far which could count as an advantage for Mir. But of course there is the advantage that Mir is going to be awesome. I will spend a complete section on each of the disadvantages.
Distro specific
So far Mir is a one-distribution solution: no other distribution has shown any interest in packaging Mir, even if it were to become a working solution. Unfortunately I don’t have the ability to see into the future, but I can use the past and the present to get ideas about it. The past tells me that there are other Canonical-specific solutions which are not available in other distributions. I do not know of any distribution which packages Unity, and from all I have heard it’s even impossible to package Unity on non-Ubuntu distributions. Given that, it is quite likely that Mir will go down the same road. It’s designed as a solution for Unity, and if distros don’t package Unity there is no need to package Mir.
This has quite some influence on a possible adoption. I do not know of any kde-workspace developer using (K)Ubuntu. I do not see how anyone would work on it, or how we would be able to review or even maintain the code. It would mean all the Mir support would have to go into #ifdef sections nobody compiles and nobody runs (sketched below). This is the best way to ensure that it starts to bit-rot. Even worse, our CI system runs on openSUSE, so not even the CI would be able to detect breakage. Of course a downstream like Kubuntu could develop the support and carry it as a patch on top of upstream, but I would highly recommend they not do this, as KWin’s source code churn is too high. Also, we all agree that downstream patches are evil, and we would no longer be able to help downstream’s users in any way from a support perspective.
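To make the bit-rot point concrete, the Mir support would end up looking roughly like this in the tree (HAVE_MIR is a made-up build flag, purely for illustration):

```cpp
#if HAVE_MIR
// Mir-specific backend code path. If no developer machine and no CI
// machine ever builds with HAVE_MIR enabled, any breakage in here goes
// unnoticed until somebody finally tries to compile it again.
void createMirSurface()
{
    // ...
}
#endif
```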
Architecture
Mir’s architecture is centered around Unity. It is difficult to really understand the architecture of Mir, as the specification is so full of buzzwords that I don’t understand it [5]. From all I can see and understand, Unity Next is a combination of window manager and desktop shell implemented on top of Mir. What exactly this is going to look like, I do not know. In any case it does not fit our design of having the desktop shell and window manager separated, and we do not know whether Mir would support that. We also do not know whether Mir would allow any desktop shell other than Unity Next, given that this is the main target. Wayland, on the other hand, is designed to allow more than one compositor implementation. Using KWin as a session compositor is an example in the spec.
License
Wayland is licensed, like X, under the MIT license, which has served us well for a display server. I think this is a very good choice and I am glad that the Wayland developers decided on this license. Mir is licensed under GPLv3-only with a CLA. I think this is very unsuited for such a part of the stack and would pose quite a risk for usage in KDE Plasma. KWin (and most KDE software) is GPLv2-or-later; this would no longer be possible, as it would turn our code into GPLv3-only, since KWin (or any other software which would depend on mir-server) would be a derived work of Mir. I do not consider GPLv3-only software a possible dependency of any core part of our application stack. It poses a serious threat for the future in case of a GPLv4 which is not compatible with GPLv3. I also dislike the CLA [6]. So from a licensing perspective Mir is hardly acceptable.
Unity Specific/No Protocol
One of the most important aspects of Wayland for us is the ability to extend the protocol. This has already been a quite important feature in X, where we use our own extensions on top of ICCCM and EWMH to implement additional functionality. Of course our workspace has its own ideas, and it is important for us to be able to “standardize” those and also make them available to others if they are interested. This is possible thanks to protocol extensions.
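As a small illustration of how such an extension reaches a client at runtime, here is a minimal sketch against the wayland-client API. The interface name “org_kde_example_shell” is invented for this example, but the wl_registry mechanism shown is how real extensions are announced:

```cpp
// Minimal sketch: how a Wayland client discovers protocol extensions.
// The compositor announces every global interface it supports through
// wl_registry; clients bind the ones they know and ignore the rest.
#include <cstring>
#include <wayland-client.h>

static void handle_global(void *data, wl_registry *registry, uint32_t name,
                          const char *interface, uint32_t version)
{
    // "org_kde_example_shell" is a made-up extension name; a real client
    // would call wl_registry_bind() here with the scanner-generated
    // interface descriptor for its own extension.
    if (std::strcmp(interface, "org_kde_example_shell") == 0) {
        // bind and use the extension ...
    }
}

static void handle_global_remove(void *data, wl_registry *registry, uint32_t name)
{
}

static const wl_registry_listener registry_listener = {
    handle_global,
    handle_global_remove
};

int main()
{
    wl_display *display = wl_display_connect(nullptr);
    if (!display)
        return 1;
    wl_registry *registry = wl_display_get_registry(display);
    wl_registry_add_listener(registry, &registry_listener, nullptr);
    wl_display_roundtrip(display); // wait until all globals are announced
    wl_display_disconnect(display);
    return 0;
}
```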
Mir doesn’t have a real protocol. The “inner core” is described as “protocol-agnostic”. This poses a problem for us if we wanted to use it. Our architecture is different (as described above) and we need a protocol between the desktop shell and the compositor. If Mir doesn’t provide that, we would need to use our own protocol. And one already exists: it is called “Wayland”. So even if we supported Mir, we would need the Wayland protocol?!? That doesn’t make any sense to me. If we need to run Wayland on top of Mir just to get the features we need, why should we run Mir at all?
But it gets worse: the protocol between the Mir server and Mir clients is defined as not being stable. In fact it’s promised that it will break. That’s a huge problem; I would even call it a showstopper. For Canonical that’s fine – they control the complete stack and can just adjust all the bits using the protocol, like QMir.
For us this looks quite different. Given that the protocol may change at any time, and given that the whole thing is developed for the needs of Unity, we have to expect that the server libraries will not be binary compatible or that old versions of the server libraries cannot talk with the latest client libraries. We would constantly have to develop against an unstable and breaking base. I know this sounds overly pessimistic, but I know of one case where a change got introduced in a Canonical protocol late in the release cycle, completely breaking an application in Kubuntu which wanted to use the protocol. Given this experience I would not trust that the protocol doesn’t change one day before the release, meaning that Kubuntu cannot ship.
This is not awesome, it’s awful. It means KWin will not work just fine on Mir.
I hope this shows that using Mir inside the KDE Plasma workspaces is not an option. There are no advantages which would turn Mir into a better solution than Wayland and at the same time there are several showstoppers which mean that we cannot integrate Mir – not even optionally in addition to Wayland. The unstable protocol and the licensing choice are clearly not acceptable.
What this means to Kubuntu
Question marks
For Kubuntu the Mir switch by Canonical created quite some questions. One of those questions is answered: upstream has no interest in supporting it and would most likely not accept patches for support. With upstream not using Mir, the question is how the graphics stack for Kubuntu will look once Ubuntu has switched to Mir. These questions cannot be answered right now, but it doesn’t look good.
Patches to the stack
Ubuntu has always had one of the worst graphics stacks in the free software world. I can see this in the bug tracker. The quality of the Mesa stack in Ubuntu is really bad. For Mir, Ubuntu will have to patch the Mesa stack even further. This is not something I would like to see. Also, Mesa needs to be packaged with Wayland support. But will Canonical continue to do this? If not, would Kubuntu (and other Ubuntu flavors) need to ship their own Mesa stack? What if the changes by Canonical are so large that a standard Mesa stack doesn’t run on top of the Ubuntu stack?
Switching Sessions
One of the advantages of free software is that one can select the desktop environment in the login manager. This looks like it will no longer be possible in a Mir world. Unity will run with a Mir system compositor with LightDM nested underneath. We will need either the X server or a Wayland system compositor. So from the login manager it will not be possible to start directly into a session using a different system compositor. How will it continue to be possible to use both Unity and KDE Plasma on the same system? Running a Unity and a KDE Plasma (or GNOME or Xfce or anything) session at the same time seems to no longer be possible.
System Compositor
How deep into the system is the system compositor going to be? Will it be possible to disable the Mir system compositor and replace it with X or Wayland? What if the packages start to conflict? Will it still be possible to install Kubuntu and Ubuntu on the same system? Will Canonical care about it? Will the system compositor mean that one has to decide in Grub whether to boot Ubuntu or Kubuntu?
Packages from Where
So far X, Wayland and Mesa have been packaged by Canonical. But what about the future? Will there still be packages for X? Will there be packages for Wayland? If not, where would we take them from? Debian unstable, most likely. But Debian might be frozen. Will it be possible at all to use the Debian packages for X and Wayland in the Ubuntu stack? Will they meet the requirements for KDE Plasma[7]? If Canonical doesn’t provide Wayland packages, they would drop to universe, so Mesa in main could not depend on them. How would we then get Mesa with Wayland support?
Only the future can tell
Those questions cannot be answered right now. It will have to wait until Mir is integrated into the Ubuntu stack. Then Kubuntu developers will see how badly the stack broke. I’m not really optimistic that it will still be possible to provide the Ubuntu flavors once the transition to Mir is done. I don’t think that Canonical has any interest in the community provided distributions on top of Ubuntu any more. There are many small changes pointing in that direction. But we will see, maybe I’m too pessimistic.
[1] Given how Canonical introduced Mir with incorrect information about Wayland, I consider this a valid approach to dismiss the technology.
[2] I was very fed up with Ubuntu at the time anyway because our bug tracker once again exploded after the Ubuntu release.
[3] I do admit that I thought about asking KDE e.V. to send an Abmahnung (a formal cease-and-desist letter) after the statement that KWin would just work fine on Mir.
[4] In fact I consider TDD as utter nonsense and a useless methodology, though some aspects of it are useful.
[5] “with our protocol- and platform-agnostic approach, we can make sure that we reach our goal of a consistent and beautiful user experience across platforms and device form factors”
[6] Yes I know that Qt also has a CLA, which I have signed. But for Qt there is also the KDE Free Qt Foundation agreement.
[7] Last week a feature hit KWin which I cannot test/use because the X server is too old in Debian testing.
Firstly, I cannot respond with any technicalities, it’s not my expertise, but hopefully I can get my point across as a user.
I think Kubuntu is a great distro representing KDE, but the underlying software stack provided by Ubuntu has been far from great. I’ve had countless problems upgrading, and software packages becoming broken, although releases have improved compared to the past. A recent upgrade to Ringtail messed up the Mesa symlinks and source headers, which as a developer created problems for me working on another software project.
Getting to the point: using Mir seems a bad idea for both developers and users, and not in the interest of the overall community. To me there seems to be a divergence in the overall direction of Ubuntu’s derivatives, and questionably, is there any benefit for Kubuntu in using the Ubuntu stack since ties with Canonical are weaker than they used to be? I would say no, but it’s probably not an option worth considering.
Anyway, a well written post. Thank you for taking the effort to write in such detail.
Luke.
They’d be more than welcome to build on openSUSE, for example 😉
I strictly prefer a Debian-based distro with two flavours: testing and stable.
Stimmt. (True.)
What’s a good Debian-based distro with up-to-date KDE packages?
Siduction.
Yeah, but then there is still the shame of using SUSE to deal with, and lots of people still feel dirty just saying the name.
Not sure how you manage to get broken packages. Unless you do tons of funny things with PPAs, I guess.
Hey!
I know it’s not the main point of the post, but I read note #4 and I want to react to it. In my current work environment, the TDD method is trusted beyond reason (imho). People are almost religious about it. Not using TDD basically means you are a misinformed developer. Or a bad one. Or a lazy one. Or all three at once.
It turns out that these religious people never even tried to look for different opinions on the matter. I do, however, and there’s very little material.
My point is, if you wish to write about what you think of TDD, there’s at least one person who will find that very interesting. Also, if you have any good reading on the subject, I’d be glad to know.
Back on the topic at hand.
There’s a point that I think is missing: who is going to write video drivers, and what are they going to support? Wayland, Mir, both, none? And how might this affect the final decision? As of now, it looks like Wayland is getting much more support, but what happens if the corporate vendors decide to only support Mir?
It’s too early to tell, but I would say such systems will just have to stay on X. That’s totally fine with me. If there are enough users demanding Wayland support in the proprietary drivers, it will obviously happen.
Maybe the KMS kernel module is the solution; the NVIDIA and Catalyst drivers have already implemented kernel mode-setting.
It would make them independent of the graphics server.
Mir will either use Android drivers or patched Mesa drivers.
One Canonical employee sent a first patchset adding Mir support to Mesa for early review, but got zero replies. (No idea if there was an additional review request at some point later.)
Intel’s interest for quite some time has been to support Android on Intel platforms by adapting Mesa (see https://01.org/android-ia/ for details), so Canonical has no choice but Mesa to support Intel on Mir. Intel is also heavily invested in Wayland – in the short term for Tizen and in the long term maybe via a Wayland-powered Android platform.
The only open question is whether they will use a Mir-patched Mesa or Android-Mesa.
AMD has increased the number of Radeon Mesa driver developers in a clear sign that AMD wants to follow Intel’s path here to get APU support into Android.
I’m not aware of any moves by AMD to support their proprietary driver for either Android, Mir, or Wayland.
NVidia revealed early this year that they will abandon the Tegra custom GPU for cores based on desktop (Kepler) GeForce GPUs (see http://arstechnica.com/gadgets/2013/03/nvidias-next-tegra-chips-will-get-a-big-boost-from-new-geforce-gpus/ for reference).
I find it highly unlikely that NVidia will deliver a driver solely written for Mir. I’m confident that Canonical will just package the Android driver for newer-generation GPUs (Kepler and later) and use a patched Nouveau for older GPUs.
No plans by NVidia for Wayland have been revealed so far, as far as I’m aware. However a Jolla employee, building on work by Collabora and his older hobby project libhybris, has already developed support for loading Android GPU drivers under Wayland. (See http://mer-project.blogspot.fi/2013/04/wayland-utilizing-android-gpu-drivers.html )
Wayland could therefore go the same Android-Nouveau route I described for Mir.
I wish the Kubuntu distro was based directly on Debian. KDEbian might be a nice name. 🙂
The Kubuntu trademark is owned by Canonical and they won’t simply give it away. http://tanglu.org/ is probably what you’re looking for.
There is a project, Tanglu, that was started to bring a KDE distribution out of Debian. Whether they will succeed is too early to say.
Am I missing something? KDE 4.8 is in Wheezy, and for a full KDE Debian desktop experience with the latest KDE you can use “Neptune” (listed on DistroWatch), which has all the latest and greatest KDE/Qt apps and is virtually clean of any GTK cruft.
Neptune is perhaps the most _aesthetically_ cogent and concise distro which I have so far had the pleasure of using. I say this even though I prefer a much leaner interface (Openbox) for my day-to-day use.
I don’t even see why anyone bothers with Kubuntu any more, given its association with Ubuntu which is now veering off with Unity and Mir and the Amazon shopping lens. Wouldn’t it make more sense to just move over to Linux Mint KDE Edition? It’s based on Ubuntu (but with all the stupid stuff left out), and they also have a Debian-based version they’re working on. They’d probably be happy to have some help with a KDE Debian-based version.
> I don’t even see why anyone bothers with Kubuntu any more, given its association with Ubuntu
> Wouldn’t it make more sense to just move over to Linux Mint KDE Edition? It’s based on Ubuntu
Do you even bother reading what you write, or is this just stream of consciousness where you don’t care if it makes no sense?
Nothing like reading fanbois of one distro badmouthing the users of another because we all know that the distro we each use is the ‘bestest’ in the world.
While I share your feelings about Mir, the license part you wrote about is not true: if KWin is GPLv2+ and depends on something GPLv3-only, then KWin is not forced to go GPLv3. The only thing it needs is to be distributable under the terms of the GPLv3, i.e., GPLv2+ or any permissive license is okay. Even this is only necessary as long as you want to distribute the two software packages together; i.e., if you only distribute a GPLv2-only software, it may still link to a GPLv3-only package as long as the two use different distribution channels. (N.B. that’s how the proprietary NVIDIA driver stays legal.)
Finally, I suppose that not even Canonical is dumb enough to distribute the Mir client libraries under the GPL. (If they did not take extra measures, that would effectively prevent them from distributing apps from their store.) Rather, I would guess that they will use the LGPL, MIT or a BSD license for those.
the client libraries are LGPL, but the server library is GPLv3-only. We would need to use the server libraries and that’s causing a problem (not now, but might in future).
Concerning the NVIDIA driver: to my knowledge it is not clarified whether that’s legal or not.
> the client libraries are LGPL, but the server library is
> GPLv3-only. We would need to use the server libraries and
> that’s causing a problem (not now, but might in future).
not for KWin in any case 😉
> Concerning the NVIDIA driver: to my knowledge it is not
> clarified whether that’s legal or not.
as far as I understood it, the driver is perfectly legal from the NVIDIA perspective (as NVIDIA does not distribute the kernel). The problem is with the linux distributions: they cannot ship this driver and not enter legally murky waters (at least as long as they use the linux kernel)
Well, I as the KWin maintainer consider a dependency on a GPLv3-only library a problem. We do not depend on any GPL library (any more) and I think that’s a good thing. I don’t intend to change that.
> I don’t intend to change that.
I didn’t mean you should. My point was rather that your argument about the license was flawed. IMHO you should not use such arguments to reject Mir (that would be the same behavior as Canonical’s when it introduced it in the first place). Rather, just state outright that you do not want GPL(v3) dependencies (which is perfectly okay IMO)…
well it is a possible problem in case a future GPLv4 is incompatible with GPLv3. If it’s not possible to depend on GPLv4-or-later and GPLv3 library at the same time, we have a problem in case we want to depend on GPLv4-or-later library. This is not a made-up argument, just look at http://www.gnu.org/licenses/gpl-faq.html#AllCompatibility and the combination of GPLv2-only with a GPLv3-only library. We have to consider that a future GPL version is not backwards compatible, it happened before!
>If it’s not possible to depend on GPLv4-or-later and GPLv3 library at the same time
It might be possible: software that depends on projects using incompatible licenses must adhere to each of these licenses, but not to all at the same time. IANAL, though.
Anyway, all of this is highly hypothetical, and the point is moot because you chose not to depend on GPL projects. I still think that you should not use that argument against Mir. (Or at the very least, you should explain it much better.)
To me it is an important argument, as I care more about the future than the present. I have to consider the possible consequences of a dependency choice. KWin is licensed under GPLv2-or-later for a reason; it is my task as a maintainer to protect this choice for each of the developers who express, with each contribution, that they want to have the code under GPLv2-or-later. By depending on a GPLv3-only library I would limit that choice. This is clearly not what is intended by the developers. Yes, it is hypothetical, but I have to consider it. Because of that it is a valid concern from my side and I have to name it as a disadvantage. It’s not the reason why we won’t integrate Mir, but it’s one of the many small puzzle pieces which speak against Mir.
If you think it’s not a valid argument, that is fine with me. Then we have to agree to disagree on that point. For me as the maintainer it is a valid argument, and I think that is what matters 😉
Considering that the important components, i.e. the libraries, the parts where KWin would actually hook into any project derived from Mir, are LGPL (not GPL as you insist on FUDing), if support was implemented, then unless the contributions were written as GPLv3, KWin would not be affected, because it is only linking against those libraries.
No, they are GPL. I checked before writing the post.
In the License section you write “(or any other software which would depend on Wayland-server)”. Shouldn’t that be “Mir-server”?
If I understand your concern correctly, the compositor in a Mir-based system runs as a plugin or extension of the Mir server and not as a special client, thus being bound by the server’s license and not by the client library license?
Thanks, corrected. And yes we would have to use the mir-server libraries and would not be able to use the client libraries. (That’s quite similar to the Wayland situation).
Ah, thanks for clarifying!
I think you guys have to wait till Ubuntu 13.04 gets released. They said they are going to ship Unity on X, with Mir as an additional package for testing. As for Kubuntu, it will use whatever upstream (you guys) uses. Canonical is focused on Ubuntu, but supports KDE, GNOME, Xfce. Since they don’t control the development of any of these, they will use the stock packages (DE, compositor) as provided by upstream. There was a recent video (jonobacon on air) where he was asked a similar question about the Ubuntu GNOME Remix. I don’t know how much he is in sync with the developers, but his answer was “the Ubuntu GNOME Remix is going to use upstream Wayland, if GNOME does not support Mir”.
It is up to you guys to decide whether you are going to support Mir or not, because it does not look like it is their focus (at least for now). You might choose to wait till it matures, or simply ignore the downstream.
Wasn’t Ubuntu 13.04 released something like two weeks ago? So we are supposed to wait for what?
I am not sure whether you meant 13.10 or 14.04.
I read it as 14.04
Here’s the problem with what you are saying.
Tying the release model of a piece of framework infrastructure such as a display server directly to one distribution’s release model makes it impossible to work with that project as an upstream project.
For Mir to be considered a credible upstream project, and not a piece of Ubuntu-specific plumbing, Mir must have a release roadmap that stands on its own and is communicated. There have to be established tarball releases that show up and are announced, with the intention of those tarball releases being consumable by other distributions such as Debian. It can’t just linger in an amorphous state in an lp branch. And the Mir releases can’t just be whatever ships in whatever the next Ubuntu is. There has to be some structure in how Mir development is handled that shows an intent to run the project like an upstream piece of tech that is interested in being widely used and that other projects can depend on. Upstream releases of Mir can’t be after-the-fact code drops that fall out of the Ubuntu release cadence.
The absolute best thing Mir developers could do to establish Mir as an honest upstream project is to make Debian experimental a first-tier target alongside Ubuntu. If you want to be taken seriously for adoption by the other environment projects, make it a point to get an official released Mir version into Debian experimental before Ubuntu 14.04 is released.
No would in if sentences. At least I learned that in school, as it is a typical false-friend-like mistake we Germans like to make.
There are things in English I will always do wrong. That’s one of them 😉 The other one is that I have serious problems with the present perfect. But thanks for pointing it out; given the length of the article, though, I won’t correct it.
Sometimes teachers create grammar rules for their students that are probably good for their students at the time but aren’t actually rules. 😀
I call bullshit on your claim of “no would in an if sentence”. If you would care to cite the rule you claim, then I may consider it valid.
I think TDD is not overrated. It is important to cover your software with tests as well as possible, especially if it’s a long-term project and the codebase gets huge. Otherwise it’s not maintainable.
The point about TDD is the following: writing tests sucks. It does. It sucks even more if there are no tests yet or the software is written in a way that makes testing terrible. Because of this you write your tests in parallel to your code.
In the case of Mir I don’t think it’s an argument that justifies Canonical’s move. Still, if done right, it will ensure a higher quality of the software.
Which isn’t TDD. In TDD you write your test first. That I consider utter nonsense. Writing the test in parallel is a good thing; writing it first is just a waste of time if you have to change anything (and no, my code/architecture doesn’t pass review on the first try).
Nope, you write them in parallel 😉
It’s kinda hard to understand why you begin with the test; I only got it when I really started doing it:
You write it first to make it fail, because you want to know that the test is correct. That way you see the test both failing and passing, without writing it twice.
Another reason to write it first is to test your “API”. It makes you think about how others will interact with your code. Overall this leads to better code, because you think about the big picture rather than the implementation. You also see if it’s hard to use, because the test will be hard to write for it.
It also ensures that your code is fully working at every point in time (feels really good, believe me).
It’s still kinda hard for me, though, to follow it strictly. I’m working on it, but it’s really hard to write not more than what’s needed 🙂
Parallel as in: write only enough test code to make it fail, then just enough implementation to make it pass, then enough test code to make it fail again, and so on.
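A toy example of one such red/green iteration (a hypothetical clamp() function, nothing project-specific):

```cpp
#include <cassert>

// Red: only the declaration and the failing test exist at first - the
// build breaks, which is the "failing" state of the cycle.
int clamp(int value, int lo, int hi);

void testClamp()
{
    assert(clamp(5, 0, 10) == 5);   // value inside the range
    assert(clamp(-3, 0, 10) == 0);  // clamped to the lower bound
    assert(clamp(99, 0, 10) == 10); // clamped to the upper bound
}

// Green: write just enough implementation to make the test pass.
int clamp(int value, int lo, int hi)
{
    if (value < lo)
        return lo;
    if (value > hi)
        return hi;
    return value;
}

int main()
{
    testClamp(); // all asserts pass; the next iteration starts with a new failing test
    return 0;
}
```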
Yes, I know how TDD works; yes, I have worked with TDD; yes, I consider it stupid and not helpful if followed strictly.
I try to develop everything I do by writing tests first so that I know tests are in place and that they are doing their job, but I’ve not really heard a well-argued opposing view. So I’d be interested to hear a more comprehensive list of reasons as to why you believe this is the case, purely from an academic perspective.
I don’t say that there is anything wrong with using a test-first development approach. If it works fine for you, that is awesome and you should stick with it. What I don’t believe in is TDD as a development methodology, or even as the methodology for writing software. Like all methodologies it has advantages and disadvantages, and it might be that other methodologies work better for developing the software or achieving high quality.
TDD puts too strong an emphasis on unit testing, in my opinion. I do not believe unit testing is the key to high-quality software. It’s certainly an important aspect, but it’s not the only one. Good code should come with unit tests no matter which development methodology is used.
We should remember that it is impossible to prove that software does not have bugs. Testing can find bugs but cannot prove that there are none. A large number of tests by itself means nothing; it could be that they redundantly check the wrong things. Trusting in unit tests (and methodologies which emphasize unit tests) can be dangerous. One’s code can have a test coverage of 100 % and be developed in a perfect TDD world, and still fail in multi-threading or not scale.
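A contrived sketch of what I mean, with made-up names: the test below gives the class 100 % line coverage and passes reliably, yet the class breaks as soon as two threads use it:

```cpp
#include <cassert>
#include <thread>

// A counter whose only code path is fully covered by the test below -
// and still racy, because ++value is not an atomic operation.
struct Counter {
    int value = 0;
    void increment() { ++value; }
};

void testCounter()
{
    Counter c;
    for (int i = 0; i < 1000; ++i)
        c.increment();
    assert(c.value == 1000); // passes, and yields 100 % line coverage
}

int main()
{
    testCounter();

    // The same fully "tested" class used from two threads is a data race:
    // increments get lost and the final value typically ends up below
    // 200000 - something no amount of single-threaded unit tests reveals.
    Counter c;
    auto work = [&c] { for (int i = 0; i < 100000; ++i) c.increment(); };
    std::thread t1(work);
    std::thread t2(work);
    t1.join();
    t2.join();
    return 0;
}
```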
I think there are better methodologies around for writing good code. For example, having the tests written by a different developer in a black-box way. This is especially important if the developer is making a systematic error: if he does, he will most likely introduce the same systematic error in the test code and thereby not find the issue – TDD doesn’t protect against such problems. For that I find code review much more important.
TDD is a methodology to fix a social problem. Writing unit tests sucks. It’s dumb work, and most developers are over-qualified for writing stupid and repetitive test code. They don’t want to do it, so they don’t do it. The idea of TDD is to force developers to write the unit test before they write the code. As a goodie the developers get a “you have to think more about your API, so it will be better” and a “you see directly whether it works”. But let’s face it: there are other methodologies which can provide the same, and writing test code still sucks. Now it sucks even more, because you have to adjust your test code whenever you change the code during the implementation.
There have been other methodologies to fix social problems in the past: they all failed. Especially if something is about metrics. Lines of comments per line of code? Good idea till developers start adding // incrementing i by one. You cannot fix social problems with a methodology. If your developers don’t want to write tests, they won’t. One can easily do TDD and produce nonsense tests if one wants to. Quantity doesn’t matter, quality does. TDD is a methodology ensuring quantity of tests, not quality. There are other methodologies which work better at ensuring quality.
I don’t say TDD doesn’t work; certainly it can work quite well in a developer group where everybody believes in it. But if there are developers who don’t believe in it, it can fail. So saying that your software is super-awesome because it’s TDD and another one isn’t is just failing to see that TDD is not a silver bullet (there are no silver bullets in IT). If one has to emphasize that one uses a certain methodology, it sounds very much like something is going wrong. One wants to have high-quality software; the way to get there should not matter. It’s like saying one is “agile”. Nice buzzword, but meaningless. TDD cannot guarantee high-quality software, so why emphasize it?
TDD certainly has aspects which I find very useful. For example, bug fixing is much easier in a TDD setup. But for feature development I think only parts of it should be used, combined with different, better-working methodologies.
There are probably almost as many reasons people claim as the ‘one true’ justification for doing TDD as there are people doing it, but the one I find most convincing has nothing to do with amassing a collection of unit tests per se; it’s all about using the tests to guide the actual *design*. Testing tightly coupled code, or functions that have side effects, or objects with more than one responsibility, is painful, and because we get to experience that pain so much earlier than when we eventually write the actual production code that interacts with the system under test, we are more likely to write it with more modularity and looser coupling in the first place, instead of having to go back and refactor it *and all its callers* later – because in the real world ‘later’ is often ‘never’.
There are other ways to achieve the same thing. Interactive languages (Lisp and SmallTalk) with a tradition of developing at the REPL tend not to hit the problem to the same extent. Just having Really Good Taste as a designer will often work too.
Of course, whether this is what the Mir folk are doing or not, I have no idea
Interesting points, thanks for mentioning them.
I should probably mention here that I agree that comprehensive test coverage and test-driven development are not methods used to prevent problems from occurring in software – they’re more of an instrument used to assist in maintenance. Writing the tests first is effectively a mind-hack to ensure that the code one writes can be run inside a limited-scope testing scenario, and having tests is there to save work on manual quality validation.
Both depend on the competency of the team that is actually writing the tests and doing the design of the project. It’s very easy to test irrelevant things or to miss testing important interactions between components. It’s also very easy to write fragile tests which often need to be changed, or tests which are too imprecise and don’t catch problems when they should.
All that being said – I think projects that make a significant upfront investment in long-term quality measures, no matter which way it is done (automated testing, stringent code and functionality review, extensive user testing), tend to fare a lot better than those which don’t. To that extent, both wayland/weston and Mir seem to do the right thing in this regard, but they each employ a different methodology in doing it.
And therein lies the problem…
I have yet to meet a TDD developer that does not follow this pattern:
1) write test
2) write code that passes test
3) call it done and commit
What if the test is wrong or does not actually have coverage? They do not consider the design of their code before writing it. They just shit out the first thing that comes to mind, and if it passes the test, then it must be correct, right? WRONG!
TDD developers are like children playing with a multiple choice test that scores each answer along the way and allows infinite changes. They can find what appears to be the right answer (according to the test) but they don’t know why it’s right (or even if it actually is right as that depends on the correctness of the test).
To me, TDD means “I can’t architect a proper solution so I’ll write something that passes a test”. I guess it shouldn’t be surprising when most of these people were taught in school that all that matters is answering the test correctly, not understanding why.
Since it was mentioned (albeit briefly), “agile development” to me means “we can’t plan our way out of a paper bag”. Quality engineering requires planning, regardless of whether you’re designing software, hardware, a house, a bridge, or whatever else. Every form of “agile” development boils down to “plan nothing, shoot from the hip, and if you miss, say that was your intent, because you are agile enough to intend to be wrong”.
Can you imagine any other engineering discipline using “agile” methods or test driven development? If a mechanical engineer applied TDD to building a bridge, do you think he’d live to see it finished, or be murdered by the families of his testers?
P.S. Indenting replies to ease readability makes sense in theory, but indentation is not normally done from both sides. You’ve already committed the sin of forcing all page content into an absurdly narrow column without regard to the browser window size. Constraining each reply further is madness!
case in point
I wonder how the tech press will misread this blog.
So far “the” tech press has ignored Martin’s blog post ;).
” I don’t think that Canonical has any interest in the community provided distributions on top of Ubuntu any more. ”
From the recent news maybe it is true…
I still don’t understand the reasoning about server-allocated buffers. I read the blog post by Christopher Halse Rogers, and I can see what’s different, but I still can’t figure out why client-allocated buffers are a no-go for ARM-based devices. Just because Android does it that way (with its own display protocol/server, if I’m not mistaken), is it the only sane thing to do?
Maybe somebody can enlighten me on that.
I would put it quite simply: if you want to find a technical difference and sell it as a reason why you cannot use Wayland, you will find one.
I am sure that Canonical had a true reason to start a new project from scratch. “Not being able to control Wayland” could justify forking but not Mir.
I am also sure that Canonical has a true reason, which they did not tell us. Instead they present reasons like this one here. That’s what I wanted to say with this comment 😉
It certainly is a competitive advantage to have the most knowledge about a software solution and to be able to control its development up to a certain point*. Even more so if the solution is as integral to a desktop OS as the graphics stack is.
That said, I don’t _know_ what Canonical has in mind. Also, I’m not an expert in the field of development politics. But what I do know is: coming up with a defensible reason why Canonical would do something like this is not difficult at all.
* Why up to a certain point? Because you can simply fork GPLed software and continue on your own. But even if having the software satisfy your demands is great for you, you have to bite some bullets when forking. The worst of them is probably that know-how and development power will move to your fork only slowly (at the risk of not at all), and as a latecomer you will always be the one who is just a bit less proficient with the codebase and the current development network (e.g. the people contributing to it). Add to that that your rival will be the number one address when dealing with other parties (e.g. third-party driver providers).
As an example, consider the current situation with Steam on Linux: only Ubuntu is officially supported. Graphics card providers – if at all – are particularly interested in providing drivers for Ubuntu. If you are the one holding the threads of power (even if they are thin), it can be advantageous to break away and sail off to your own island – under certain circumstances the rest of the world will follow, with your rivals trailing along after you.
All I’m saying is: there are less technical explanations for this behavior which are much too plausible to be discarded instantly.
IIRC the server-side allocation is crucial to energy consumption optimisations according to Mir developers.
Wayland supports both server-side and client-side buffer allocation: it’s _totally_ agnostic as to which one you choose. Mesa implements client-side allocation because the platforms it targets can use that just fine. At the time Mir was first announced, I was on-site with a client implementing a Wayland EGL platform for them which used server-side buffer allocation, so you can imagine my amusement when Mark was telling everyone that Wayland was strictly client-side.
Mir, on the other hand, strictly enforces server-side allocation always. Which kinda sucks, because you want to be client-side unless your buffer is a potential full-screen scanout candidate, which is the only time server-side buffer allocation is ever helpful (again, this is just an ARM thing, not x86).
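For reference, a rough sketch of that client-side path using core Wayland’s wl_shm route (assuming an already-bound wl_shm global; fd creation, error handling and cleanup are elided):

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <wayland-client.h>

// Client-side allocation via wl_shm: the client creates and owns the
// backing storage and merely hands the compositor a handle to it.
wl_buffer *createShmBuffer(wl_shm *shm, int fd, int width, int height)
{
    const int stride = width * 4;  // ARGB8888: 4 bytes per pixel
    const int size = stride * height;
    // 'fd' refers to a shared-memory file of at least 'size' bytes that
    // the client created itself, e.g. via shm_open() and ftruncate().

    void *pixels = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    (void)pixels; // the client renders into this mapping whenever it wants

    wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
    wl_buffer *buffer = wl_shm_pool_create_buffer(
        pool, 0, width, height, stride, WL_SHM_FORMAT_ARGB8888);
    wl_shm_pool_destroy(pool); // the buffer keeps the storage alive
    return buffer;
}
```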
Maybe the easiest solution would be to have something like WaylandMir (similar to XMir), to nest a Wayland server inside Mir. Then all the community DEs should run without problems.
But I think it is not your task to do such stuff, but Canonical should do it. They were the ones deciding that a new display server is needed.
I thought about integrating that into the blog post. I would consider that a “second-class citizen”. It would go well with the “blue-headed stepchild” 😉
Martin, you wrote:
“Ubuntu has always had one of the worst graphics stacks in the free software world. I can see this in the bug tracker. The quality of the Mesa stack in Ubuntu is really bad.”
Would you elaborate on that a bit, please?
I recommend looking at the bug tracker; it shows the problem. But you want examples? What about shipping a DRM module which is too new for the shipped kernel?
I hope this all pans out well for Kubuntu…. We need Kubuntu to continue to be the awesome flavor that it is!
As a Kubuntu user, does this mean my current distro of choice has a bleak future? If so, you hinted at what the KDE devs use more (openSUSE), but is that what most of the devs use? Is maybe Fedora or Debian even more popular, or are they mostly on openSUSE?
What I like about Kubuntu, BTW, are the huge Ubuntu/Debian repositories that seem much bigger than the equivalent SUSE/Fedora ones; it’s also simpler to get NVIDIA drivers running than with Fedora, IMHO.
BTW I wish you had an edit button on here… I’d clean up some of the spelling mistakes.
“There certainly are valid reasons for developing Mir which also make sense. Unfortunately they have not been presented so far. ”
Wayland development being slow, Xorg not going anywhere, the need to make something work and push it to users in one or two years’ time and not in 10 to 20 years’ time…
Technically speaking it probably comes down to Android driver support and proprietary driver support through EGL in a next-generation display server that provides that. It comes down to the driver model, in my opinion, and probably Canonical talked to different vendors and made a choice based on feedback.
Reading this:
http://www.phoronix.com/scan.php?page=news_item&px=MTM3MTc
They did what others in the FOSS world refuse to do, namely focus on the future; Unity 8 will not use/work on Xorg anymore, for example. But regarding the part where you expressed concern about Ubuntu flavors: users will probably still be able to choose the desired session, just like it’s planned for Ubuntu 13.10. As for Xorg being available in the repositories, I think it will be, and maintained too, for the foreseeable future, because it will run on top of Mir for a lot of different purposes.
And about KDE and Mir: in the end it will probably come down to the driver support strategy.
If Mir would enable KDE to run stable and fast on platforms/devices it can’t right now then it would probably make sense to support Mir…
Which platform should that be? Android? Well without Wayland Mir wouldn’t be working on Android in the first place.
True but now comes the big BUT!
BUT this just doesn’t work with Wayland, does it? You can’t run KDE or Unity… on Android with Wayland, can you? And it will probably take years to get there.
Who would want to run on Android? Nobody, not even Canonical wants that. One wants to use Android graphics drivers, and that problem is solved and pretty much independent of the windowing system.
Yes, and that is exactly what I said in the original post. It’s about the driver strategy and nothing else.
Canonical needs Android drivers NOW, not a few years down the road. And EGL probably does make sense for proprietary drivers on the desktop, and Mir will probably do this before Wayland.
But Wayland on Android drivers works now; you are just badly informed.
In terms of “proof of concept code”, yes, but not in a way that you could start pushing the whole user experience onto Android hardware…
You can’t do that on a regular desktop PC at the moment either, to be able to test something like GNOME, KDE or Unity using Wayland and not X11… Unity 8, on the other hand, if I understand correctly, doesn’t support X11 anymore, and it does run using Mir on top of Android devices and desktop PCs.
http://www.phoronix.com/scan.php?page=news_item&px=MTM3MTg
This is what I expected Wayland would do when it was announced and when it was decided it would replace X11. Back then I was confident GNOME and KDE would be in the state Unity 8 is in on the video by the end of 2012…
You know, we also have things going on, like Plasma Workspaces 2. We just decided that we support both X11 and Wayland. We have already shown demos of the new Plasma Workspaces 2 and we will continue to do so, if there is something to show. But I’d rather have working code than hype 😉
Yes, some working code, a proof-of-concept UI, anything… and the hype was started by developers! A few years back Wayland was announced, and back then EVERYBODY agreed this was it. How often does it happen in the FOSS world that EVERYBODY puts their weight behind something?
And now we are in the middle of 2013 and we can’t do even a limited test of running KDE on top of Wayland… on any device…
There is two-year-old proof-of-concept code in the kde-workspace repository. What else do you want to have?
oh and you could of course just run E18. AFAIU they have a working Wayland port.
“There is two-year-old proof-of-concept code in the kde-workspace repository. What else do you want to have?”
KDE using Wayland and working OK on my desktop PC in the next 2 years. Working on Android devices would be nice, but at least a working and useful desktop session.
“oh and you could of course just run E18. AFAIU they have a working Wayland port.”
Yes, I like how they accepted the future in the form of Wayland and implemented it, no questions asked. Considering their long, long E17 dev cycle one would not expect them to take the lead!
Ah, an Ubuntu Fanboy. Wow, that took long till the first one came and put on the FUD.
Why does somebody who disagrees with you deserve the “Fanboy” tag? I just do not understand.
OK, someone comes here with the name “Mir Server” and repeats the “oh, Wayland took so long, that’s why you need TO DO SOMETHING!” line. It has been shown quite often that this is not the case. Mir builds on top of what Wayland has achieved; they even take credit for the work of others (just look around the Internet). Somebody with such a wrong opinion and such a name is clearly a fanboy or a troll.
Be a true Wayland fanboy and ditch Xorg support, or at least don’t focus on it anymore, and build KDE support on top of Wayland by the end of 2014. This is the only important thing, nothing else matters.
Do this and then you will prove to me that Wayland isn’t taking (too) long to become something we can use and not just talk about.
We don’t want to drop X11 support (saying “Xorg support” doesn’t make any sense given that Wayland is also an Xorg project). We did not want to a few years ago and we will not want to in a few years.
Concerning Wayland support: my timeline tells me that before the end of 2014 we will run on it. As I wrote in the blog post: there is no overlap between the Mir timeline and our porting timeline.
Yes, this is the only important part. By the end of 2014 support should be there, and distros should be comfortable offering it, at least on top of FOSS drivers, BY DEFAULT.
“Wayland took so long” is the most stupid reason to start a new project in the open source ecosystem. If Wayland development is too slow you just need to put more developers to work on it instead of starting something completely new. It isn’t smart at all to start everything from scratch again instead of adding the missing pieces to Wayland. To be honest, I didn’t expect any ready-to-use X11 replacement in less time than this.
http://www.phoronix.com/scan.php?page=news_item&px=MTM3MzY
One thing that I note as an outsider is that it seems like Wayland [Waitland ;-)] is picking up speed due to the Mir project – for a long time Wayland had a lot of promises but nothing really seemed to happen.
Disclosure: I use Ubuntu and like Unity. I also think Mir looks interesting, and I think that quite a few of the reasons for developing it are rooted in mobile. And I think that Canonical deserves much more credit than they are getting.
That’s not the case. I already explained it for the case of KDE: we had a clear timeline for Wayland adoption prior to the Mir announcement. Mir didn’t change anything; it just threw us back a few weeks, because we had to invest time to de-FUD, and we lost motivation.
That you as a user have not seen anything about Wayland is a good thing, not a bad one. We don’t need a hype around it. We need a working solution where no user notices that he got transitioned to Wayland. It’s a technical detail, not interesting for the user. But for interested people there was work going on, like the Wayland 1.0 release.
For the end user (like myself) it is interesting to get a great graphical experience – as promised by Wayland and delivered by Android and OS X. It is not delivered by X, and that is why Wayland looked so promising but just seems to take forever (I know it’s a big technical challenge and it’s free software, but that is not the point – the longer it takes to deliver superior graphics, the more users are lost to other platforms than the one we use and love).
We, the users, will push you, the developers, to pick up speed, and Mir was exactly what was needed to get wind under our wings again. 😉
You will see, in the end Wayland will be available faster because of Mir, and if you think it won’t be, then probably Mir will be used as the standard Linux display server. Here is your motivation, not loss of motivation; embrace it and push Wayland harder, or you will have to use Mir in KDE. 😉
Rebase Kubuntu on Debian, guys… It is the only logical solution… I’m sure there would be dozens of questions about how best to do this etc.; maybe an open discussion would be a good idea to create a plan for how best to achieve this…
If they were to work with the Debian Qt/KDE group it would really be the perfect solution. The problem is that the amount of work would probably increase considerably, since they’d have to work around the age of X and Qt when the freeze comes.
As much as the migration to Debian would be good, maybe it’s not as plausible as one could think. I, for one, would love it if Fedora were to become a possibility, since it’s one of the best distros out there, and openSUSE is already well built around KDE.
KDE normally works fine on Debian testing. That’s kind of our minimum requirement. For KWin it might also be related to the fact that I use Debian testing (though we currently have two features in master (one in 4.10) which I cannot run due to outdated stack in Debian testing).
“GPLv2-or-later; this would no longer be possible, as it would turn our code into GPLv3-only”
And that would be the end of the world as we know it.
It clearly restricts the freedom of the user to choose a later version or just GPLv2. It might not sound like a big deal to you, but for others it might be a reason not to depend on something which restricts user freedom. After all, we are developing free software.
I am sorry for saying this, but I think it is your fault. Canonical is free to decide what to do with Ubuntu, because they are the ones investing money and time. Distributions such as Kubuntu, elementary OS, Mint… decided to base their distributions on Ubuntu; Ubuntu did not force you guys to do that or to keep your distros depending on Ubuntu. It is like if I were taking a test, copied all the stuff from another guy, and then blamed him if I failed the test because I did not copy the right stuff or he did not have the right answers.
Canonical has been brave enough to face the new challenges that come with tablets, smartphones and the like getting more and more popular. They might be late for that, but at least they are trying. What other distros are planning to support smartphones and tablets? Fedora? openSUSE? Debian? I do not think so. Fedora can barely keep supporting workstations, and instead of getting better it just gets worse. Ubuntu cannot keep considering other distros if they are not willing to move forward in the same direction; that is why they have made so many changes to their distribution. If others are not willing to move in the same direction, then they have to find their own way, create their own software or patch their system so that it works with other devices besides laptops, desktops and servers.
You do not like Ubuntu anymore because it makes your job of supporting a distro like Kubuntu harder? Then just base it on something else and move along. Canonical has done a lot for the Linux world; it has pushed other distros to do better and stop being afraid of evolving and improving their technologies. If it wasn’t for Ubuntu, Linux would still be a shadow in the Windows world when it comes to desktops and laptops. If you guys think Canonical is moving too fast to keep up with, then just move back to Debian and stay there. In the end, the only distributions that will survive will be those that adapt to new technologies, and I don’t really see many distros doing anything about adapting to new technologies such as smartphones and tablets, which are here to stay with us for a long time.
It’s truly a shame you didn’t take the time to RTFA.
Martin does an amazing job, in the three paragraphs of the section “Introduction: Frustration and lost Motivation”, of explaining why developers feel betrayed.
But no, rather you jump to the same Pavlovian conclusion and add the generic tripe you find on Gizmodo defending your precious.
Your childish defence of Canonical at all costs, WITHOUT taking into consideration what Martin talked about in those three paragraphs, shows that your ability to carry on a conversation is rather limited to what you want to spout out, while adults who engage in debate usually bother to read and understand what is being said.
You fail on all counts.
In the real world, none of this matters at all. In the end the only server remaining will be the one with better driver support than the competition. Nobody likes a platform that’s not supported by the major graphics companies.
“It renders a serious threat for the future in case of a GPLv4 which is not compatible with GPLv3”
Complete FUD. No GPL license will ever be incompatible with the previous ones.
So GPLv3 is not incompatible with GPLv2? That’s news to me.
Nice post, but would you stop talking about cash registers (“till”)? The word you want is “until” (or possibly ’til).
Thanks for pointing that out. I hate it when words are similar; the spell checker doesn’t catch those 😉
And corrected.
No problem.
I’ve mostly given up on spell checkers; dictionaries are much more reliable.
BTW, “quite some” isn’t often appropriate in English. “Quite a few (countable things)”, “quite a number of (countable things)” or “quite a bit of (some uncountable thing)” is generally better.
I use dictionaries, too. But mostly only if I have the feeling that my English is wrong 🙂
You should not attempt to correct someone if you are not sure you are correct yourself. Try consulting a dictionary first…
While “till” might be a cash register in England, you will not here it used that way anywhere else in the world. The first meaning of “till” that comes to mind is “to work, toil”, but the second and also very much correct meaning of till is synonymous with until, whereas ’til is a degenerate short form barely better than (yo)u or (yo)ur.
s/here/hear
(thanks auto-spellcheck, now where is the edit post button?)
Personally I don’t see what all the fuss is about. Mir is an Ubuntu-only toy which they decided to use God knows why. Wayland is and probably will be used by everyone else.
The only advantage I saw with Mir is being able to use Android drivers but a Jolla developer proved that’s possible with Wayland too.
I think KDE should be firm in supporting only Wayland and just ignoring Mir. There just aren’t enough resources, IMO, to support both, and Ubuntu shouldn’t be encouraged to make their own toys whenever they feel like it. If they were to sponsor someone to port KDE to Mir, fine, but I don’t think the community should do this.
Then again, my only contribution to KDE is a bug fix, so in the end you decide what to work on. As a user of a distro that won’t adopt Mir or any other Ubuntu toys, I plead for Wayland and only that.
First of all, thank you for taking the time and effort to further clarify the current situation with Wayland and Mir. As far as I can tell, you tend to be unbiased when presenting information, which is rather nice to see.
What caught my eye were the comments about Wayland taking too long to develop, and the claim that Mir’s development presumably urged the Wayland developers to pick up the pace.
I personally don’t see that to be the case (I loosely followed news about Wayland development over the years). Wayland is a complex system which was designed from scratch, taking many things into consideration, so it’s only natural for it to take a certain amount of time to develop.
What some call a recent speedup in development looks like natural progress to me: every large system will at some point rapidly start to gain usability, once most of the pieces are done and they start to fit together. I’ve worked on a couple of large software systems exhibiting the same behavior during development.
It bugs me that people who seem to have no experience (or very little experience) in software design/development can boldly state that developing a project of this magnitude is taking too much time. One shouldn’t make firm statements about something without experience in that field.
This is a bit of an off-topic question, but could you tell me how time/resource planning is done for Wayland development, and how the predictions about the time needed for development have turned out?
I quite agree with your points about the “speedup”. I once read a wonderful comment highlighting how ridiculous it is: “If Intel would remove all devs from Wayland and start a competition project, Wayland development must speed up even more, because competition helps Wayland!”. I think the key factor was the 1.0 release, which meant that toolkits could start implementing it.
Concerning your question: I do not have any time/resource planning for the Wayland adjustments in KWin. It’s nothing one can measure, as there has hardly been any pure Wayland work (OK, since Monday I have only been working on Wayland). Most of it is refactoring and Qt 5 porting which also takes Wayland into consideration, that is, designing the API so that it can also support Wayland. But that’s not Wayland-specific.
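To give an idea of what such windowing-system-agnostic design might look like, here is a minimal C++ sketch (hypothetical class names, not KWin’s actual code): the compositor core only talks to an abstract interface, and each windowing system supplies its own backend.

// Minimal sketch: the core depends only on AbstractPlatform, so adding a
// Wayland backend later does not touch any X11-specific code.
#include <memory>

class AbstractPlatform
{
public:
    virtual ~AbstractPlatform() = default;
    virtual void createCompositor() = 0; // set up rendering for this backend
    virtual void processEvents() = 0;    // drain backend-specific events
};

class X11Platform : public AbstractPlatform
{
public:
    void createCompositor() override { /* XComposite/GLX setup */ }
    void processEvents() override { /* handle X11 events */ }
};

class WaylandPlatform : public AbstractPlatform
{
public:
    void createCompositor() override { /* EGL/Wayland setup */ }
    void processEvents() override { /* dispatch wl_display events */ }
};

// The core picks a backend at startup and never looks behind the interface.
std::unique_ptr<AbstractPlatform> createPlatform(bool onWayland)
{
    return onWayland ? std::unique_ptr<AbstractPlatform>(new WaylandPlatform)
                     : std::unique_ptr<AbstractPlatform>(new X11Platform);
}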
Actually, I was hoping that you might have some info on the Wayland development process, but I guess this is not the right place to ask about that 🙂
As for KWin, I figured that you can’t plan that much (if at all). I guess you depend on following upstream Wayland development and then adjusting to it.
Thank you for your answer nevertheless.
Nobody is asking you to support Mir. Mir is not being developed with KWin in mind.
Someone from Canonical stated that KWin will support Mir, or something like that.
I understand that some of Canonical’s decisions have caused trouble for some developers, but blaming them for seeking new directions and solutions is hardly productive for anyone.
When I started using GNU/Linux, everyone emphasized how great it is that there are many options to choose from, not just one. I believed, and still believe, that it is one of the most important things the Linux world has to offer. However, as time passed and I gained more knowledge about Linux and the communities around it, I realised that only a certain amount of freedom is on offer. Yes, Canonical can develop what they want, but if they, or anyone else for that matter, create something innovative and new, they will be accused of being selfish and of causing disruptions to the free software ecosystem.
The point is rather simple: they are part of a certain ecosystem, and their decisions may damage it.
And because they are part of the ecosystem, they should only make decisions that are 100% safe and not innovate one bit?
Some people seem to think that Canonical’s greatest obligation to people is not to upset anyone in any way. If you’re an Ubuntu user and feel somehow betrayed because of this Mir thing, why don’t you install some other distribution and let it go.
Which innovation?
I’m not a prophet, but let me tell you what I think will happen in the near future:
1. Wayland, in slow but strong steps, will become the best graphics server to ever exist.
2. Mir will quickly become the best Minehunt game ever.
If Mir becomes the better choice, then it would be foolish for the Kubuntu developers to skip it. I see more politics of late than concern for what’s best for the users.
Kubuntu developers don’t work on these parts of the stack. If KDE developers don’t implement Mir support, nobody will. If the situation as of today (Ubuntu-only, GPLv3-only, no protocol stability) doesn’t change, it doesn’t matter whether Mir is better or not. We would be fools to implement support.
And rightly so. Again Ubuntu is taking stuff others have achieved and forking it so they can slap their own name on it, take a disproportionate part of the credit, and make it harder for the competition to benefit from new features. Then they create a feel-good story to harness parts of the FOSS community into getting behind their fork/name/brand.
It really is a corporate parasite.
You probably missed underlining how this whole thing is a huge waste of time for everyone. It started being a waste of time when you were forced to write blog posts about it, and it would be a huge waste of time to implement Mir support instead of developing something different. It will also be a waste of time for the Mesa developers, who will have to deal with this somehow.
Canonical had the chance to be involved in the freedesktop.org Wayland development, so they cannot complain about any technical problem: they had the chance to fix it (and probably there isn’t anything that needs fixing).
Mir is just damaging freedesktop.org, and Canonical doesn’t care at all. If they had the chance they would probably have replaced D-Bus with a Canonical bus.
Just one last thing: I’m using Kubuntu, and when I am forced to switch to Mir I think I will switch to another distribution and continue workspace development from there.
Ask your questions on: https://lists.ubuntu.com/mailman/listinfo/Mir-devel
And it seems like the OP doesn’t understand “In a disagreement, in the first instance assume that people mean well.”
Why should I ask on the Mir mailing list? The Mir devs didn’t send a mail to our mailing list to inform us that Canonical won’t help us with the Wayland adaption…
“The Mir devs didn’t send a mail to our mailing list to inform us that Canonical won’t help us with the Wayland adaption…”
Please re-read it and make it logically sane and spelling error free.
> Why should I ask on the Mir mailing list?
Because you do have questions about Mir. Do you expect your blog readers to answer these questions for you, or do you just want to spread FUD?
“The Mir devs didn’t send a mail to our mailing list to inform us that Canonical won’t help us with the Wayland adaption…”
“adaption”? I guess you mean “adoption”.
Mir developers won’t help you do Wayland stuff, obviously. Would you expect Firefox developers to fix WebKit for you?
Well you know Mark Shuttleworth promised us that Canonical would help us with Wayland…
So what?
Firstly, it’s from a personal blog IIRC, if you like to make such a distinction. Mark isn’t an Ubuntu developer, after all.
Secondly, how old are you? Haven’t you grown up enough not to take such a promise seriously? Google promised to cease H.264 support in Google Chrome. Have you seen that happen? Was your heart broken, did you switch to DuckDuckGo or Bing because of that?
Therefore, what I suspect is that you are just making dishonest excuses for dismissing Mir. You have every right to hate Mir; it’s just that you never stated the real reasons.
You know what: if Mark was not honest about helping us with Wayland, why should I trust anything else written by Canonical? Why should I trust him when he writes that “KWin will just work fine on Mir”? Why should I trust him that “Mir will be awesome”? I hope you get it. In German we have a saying that you don’t trust a person who has lied once.
I don’t know what kind of strawman you are pulling with Google. I don’t care about any video codecs.
Seriously, patented codecs are a bigger issue for FOSS adoption. No matter how fancy your desktop shell is, it has to play videos flawlessly.
Sorry, but this comment doesn’t make any sense.
“Mark is not honest about helping us with Wayland”? It’s merely a change of plan. The so-called promise was made in the context of “Unity on Wayland”. I’m confident I could find many unaccomplished plans within the KDE project.
I’m pretty sure “KWin will just work fine on Mir” if its developers weren’t mentally anti-Ubuntu.
I’m not sure “Mir will be awesome”, though.
Well, if you cannot stick to a plan you should not make grand announcements about other projects 😉 And if you do, you should communicate with them.
I think I made it clear in my blog posts that we will not block any code contributed to us as long as Mir is not an Ubuntu-only solution. You cannot expect that I or anyone else from the existing KDE community will work on it; we don’t have the time for that. And no, given the problems I outlined, KWin will not work just fine on Mir. The API/ABI stability problems are preventing a “works just fine”.
> The API/ABI stability problems are preventing a “works just fine”.
https://lists.ubuntu.com/archives/mir-devel/2013-June/000155.html
I see nothing really Ubuntu-only. If no downstream packages something, it appears Ubuntu-only. I hope you can correct me here, as I have no way to get notified about all this stuff.
What I have really experienced is that GNOME and IBus are becoming increasingly Fedora-only (probably GNOME OS-only in the near future). KDE seems quite good, as the BSD people are also willing to take it.
Yes, the requirement is that it gets packaged in other major distributions, and by that I don’t mean an AUR package for self-compiling. Let’s say at least Debian, Fedora, openSUSE, Arch and Gentoo.
http://tech.slashdot.org/story/11/08/07/1325209/kde-plans-to-support-wayland-in-2012
What a lie!
That’s slashdot 🙂
Now, I did give that talk and I gave that timeline, yes. But I said it would only be possible if other developers joined in. That didn’t happen. Then, shortly after the presentation, I was informed that Wayland would soon see a stable release, which meant it was a good idea to wait for that stable release first. And last but not least there was the Qt 5 announcement, which I could not have anticipated back when the presentation was done.
> So far Mir is a one-distribution solution. So far no other distribution has shown any interest in packaging Mir even if it would become a working solution.
What’s wrong with Mir then? It’s the other distributions’ problem.
> I do not know of any distribution which packages Unity and from all I have heard it’s even impossible to package Unity on non-Ubuntu distributions.
https://github.com/chenxiaolong/Unity-for-Arch
> Yes, the requirement is that it gets packaged in other major distributions, and by that I don’t mean an AUR package for self-compiling. Let’s say at least Debian, Fedora, openSUSE, Arch and Gentoo.
Why keep YaST in openSUSE when Debian doesn’t package it?
Sorry, I don’t get that comment about YaST.
Distro-specific features are just so common.
Yes, and there is nothing wrong with distro-specific stuff. But it’s not a solution to depend on it. KWin doesn’t depend on YaST, so I don’t get what you want to tell me with that comment in regard to Mir. In fact it rather proves my point.
What’s the sense of “Let’s say at least Debian, Fedora, openSUSE, Arch and Gentoo”, then?
Many useful things are distro-specific.
Yes, many useful things are distro-specific. But we as an upstream don’t depend on distro-specific stuff. In order to have Mir as a dependency it needs to be possible for every developer to use it. A dependency which ends up as dead code doesn’t help us. A dependency which our CI cannot compile doesn’t help us.
So it seems you are making yet more excuses (from “Mark told lies” to “distributions don’t package Mir”).
Sorry, but I don’t get that. Just to make it clear: we cannot depend on distro-specific packages, because that would mean:
* no non-Ubuntu developer could test it
* our CI would not be able to compile it
This has nothing to do with Mir; it’s a general requirement.
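As a sketch of what “dead code” means in practice, consider a hypothetical HAVE_MIR build flag (the mir_connect_sync/mir_connection_release calls are my recollection of the Mir client API, so treat the details as an assumption):

// Hypothetical guard: HAVE_MIR would only ever be defined on a distribution
// that actually packages Mir.
#ifdef HAVE_MIR
#include <mir_toolkit/mir_client_library.h> // header unavailable elsewhere
#endif

void connectToDisplayServer()
{
#ifdef HAVE_MIR
    // Never compiled, run, tested, or reviewed outside Ubuntu: dead code
    // for every other developer and for the CI.
    MirConnection *connection =
        mir_connect_sync("/run/mir_socket", "kwin-sketch");
    // ... talk to the Mir server here ...
    mir_connection_release(connection);
#else
    // The only path non-Ubuntu developers and the CI can actually exercise.
#endif
}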
This merely shows how under-tested your software is; well, you don’t believe in TDD at all.
Maybe a better solution for Kubuntu is to replace KWin with something else.
Please stop commenting on my blog. Your comments don’t make any sense.
No need to waste my time either.
Why should an upstream project bother to care whether distributions package it?
I just got to know a new piece of openSUSE-specific software today: hwinfo. It has a Debian package, but the Debian package is outdated and broken; it is orphaned and no one has adopted it yet.
And can anyone tell me whether I can package “One-Click Install” for Debian/Ubuntu?