The Costs of Supporting Legacy Hardware

The interesting IT news of last week is probably that the next Mac OS X version will drop support for some legacy hardware. Looking back at Apple’s history we see that this is not the first time; the company has dropped support for old hardware quite regularly by changing the CPU architecture. This time it is different, as the GPU is the reason for dropping support.

This made me wonder: what does it actually cost to support legacy hardware in KWin? While it is completely acceptable that new Windows and Mac OS X versions do not run on legacy hardware, there seems to be a demand that free software support every kind of legacy hardware. So what does supporting legacy hardware cost KWin?

What’s important to remember when asking this question is that supporting legacy hardware has nothing to do with supporting low-end hardware. Supporting OpenGL ES 2.0 hardware is in fact an improvement, as most of that code is shared with the OpenGL 2 backend, but supporting OpenGL 1.x hardware really requires different code paths. So optimizing for low-end hardware improves the overall system, while optimizing for legacy hardware might in fact decrease the quality of the overall system.

So what is the legacy hardware we are facing in KWin? Basically anything not supporting at least OpenGL 2.0 and not supporting non-power-of-two (NPOT) textures. The latter really causes headaches, as we are unable to develop for such hardware and have broken support for it during the last release cycles.

Up until recently Mesa did not support OpenGL 2, but nowadays this is no longer a problem, so we can be sure that if OpenGL 2 is not supported, the hardware is lacking features. On ATI/AMD, OpenGL 2 has been supported since the R300 (with limitations in NPOT support), which was released in 2002. On NVIDIA, OpenGL 2 has been supported since the NV40, which was released in 2004. On Intel, OpenGL 2 has been supported since the I965, which was released in 2006. (All this information is taken from KWin code.)

This means that when I talk of legacy hardware, I mean hardware which is at least six years old. Supporting such hardware comes with costs. E.g. on Intel you have the problem that you cannot buy the GPUs separately, that is, you have to get a six-year-old system just to test. With ATI I faced the problem that even if I wanted to test an old R200, I cannot install it in my system because my system no longer has an AGP slot – the same holds true for old NVIDIA cards.

So the only way to test on real hardware which is not OpenGL 2 capable is to use a system as old as the GPU itself. As there is the saying that free software development is about “scratching your own itch”, I must admit that I cannot scratch any itch by running legacy hardware. Especially not when I want to develop. I have a rather powerful system so that I do not have to wait for compile jobs to finish. Developing on a several-year-old single-core system with maybe a gigabyte of RAM is nothing I want to do.

So in fact we cannot develop for the legacy hardware. Now let’s have a look at the costs of having legacy hardware support. In KWin this comes as the OpenGL 1.x backend and, to a certain degree, also as the XRender backend. XRender has some usefulness beyond legacy hardware, as it provides translucency for virtual systems. Nevertheless we should consider it a legacy support system. According to SLOCCount the XRender backend has a size of about 1000 lines of code. But I can only count the discrete source files for XRender; this does not consider the branches in other code paths and so on.

Getting an overview of the OpenGL 1.x related code is rather difficult. SLOCCount cannot help, as the code just uses different branches in the same files. Looking at the code it is clear that there are several hundred lines dedicated to OpenGL 1.x, unfortunately scattered across many files. Overall about 5% of our core code base is dedicated to supporting legacy hardware. All OpenGL 1.x related code is also ifdef’d to hide it from the GLES backends. So each dedicated OpenGL 1.x call comes with an additional ifdef, and in many effects there is one branch for OpenGL 1 and one for OpenGL 2.

To sum it up: we have increased complexity, increased maintenance costs, and lots of code just for OpenGL 1.x hardware which we cannot really test. A rather bad situation. Additionally, it is nothing we can continue to support in the future. Neither Wayland nor Qt 5 will make sense on such hardware (XRender-based compositing might still make sense with Qt 5, but as the name says, not with Wayland).

Given this, the logical step would be to remove the OpenGL 1.x related code completely. This would of course clash with the demand of some user groups who think we have to run on old legacy hardware. In the case of Intel GPUs it might in fact be true that there is still a larger number of users around – this is of course difficult to judge.

Another real issue with removal is that the proprietary ATI driver (aka Catalyst/fglrx) only provides decent compositing performance with indirect rendering, restricting the available API to OpenGL 1.x. So removing OpenGL 1.x support would mean removing OpenGL compositing support for all fglrx-powered systems, even if the GPU supports OpenGL 4. But to be honest: given that the radeon driver has no problems with OpenGL 2 on the same hardware, I would not mind removing support for proprietary drivers.

What might be a nice solution to this problem are the llvmpipe drivers in Mesa which hopefully provide a good enough experience without hardware support. At least Fedora wants to use llvmpipe drivers for GNOME Shell. As soon as Mesa 8.0 hits my Debian testing system I will evaluate the usage of llvmpipe drivers for KWin as this will hopefully improve our experience on virtual machines. If I am satisfied with the performance, I will be tempted to remove the OpenGL 1.x based legacy code…

135 Replies to “The Costs of Supporting Legacy Hardware”

  1. The NVIDIA binary blob for legacy systems (NV30 and before) is stuck at Xorg ABI 1.10, so in any case we are forced to switch to nouveau and llvm-software stuff. Besides, it seems that NVIDIA is reluctant to support Wayland any time soon, so every NVIDIA user will be forced to change to free drivers or live with X forever.

    1. “Xorg ABI support is stuck at 1.10 so in any case we are forced to switch to nouveau and llvm-software stuff.”

      If this means breaking the OEM drivers, be it nVidia or otherwise, then I am against it. I use nVidia and I use ONLY the OEM drivers. That’s your choice to use drivers which are lacking, and it’s MY choice to use ones that work.

      “Besides it seems that NVIDIA is reluctant to support wayland in a time so every NVIDIA user will be forced to change to free drivers or live with X forever.”

      I don’t have any interest in this project and will be using X, period. Namely XDMCP – yes, it’s used.

      Basically the carnage that this would cause is just not a good idea.

      1. You have misunderstood me: the NVIDIA OEM legacy drivers are currently “broken” because they lack support for the latest Xorg ABIs. If you haven’t noticed, it is because you have an old X version. If you have a “second class support old card” like me, the best solution is to move to the open source drivers because:
        – You get KMS support
        – You push hardware vendors to improve/open their drivers
        – You encourage and help people working on free drivers by reporting bugs or spreading their use.

        You say that you are happy with X – good for you. The future of the Linux desktop is Wayland for a long list of reasons, but don’t worry, X will stay with us forever, so you don’t need to change if you don’t want to… BUT keep in mind that NVIDIA is lagging on or stopping the work on legacy drivers, so be ready to hold X at an old and unsupported version.

  2. What driver would you recommend for Radeon HD 4530? Currently I can’t really tell you which driver is used (I don’t have access to my home PC at the moment), but as far as I know I have an open driver that comes by default on openSUSE 12.1. The problem that I have with it is that the compositing is not as fast as it could be and running a flash video in full screen with enabled compositing was always a big problem on my PC … do you have any suggestions for me?

      1. What an awesome answer: don’t use it. Man, if only the rest of the internets had thought of that. I’ll let my friends at YouTube know, along with the millions (an underestimate) of sites that still use Flash. I’m left wondering, with such an answer, why widespread desktop Linux adoption just hasn’t been kicking off. It’s really sad you’re just the KWin maintainer and not the president of the interwebs.

        1. it’s unfortunately not our fault that Flash has a bad implementation. If someone complains about performance in combination with Flash, there is nothing we as the open source community can do about it. Sure, “don’t use it” is the only solution to that problem. My smartphone does not have Flash; nevertheless I can watch lots of videos on it.

        2. Try the YouTube HTML5 Beta.

          There are also plenty of downloaders for videos on Flash-using sites, which bypass Flash and allow you to view the video in any local media player.

  3. from a very subjective point of view, supporting legacy hardware is a waste of time.
    doing stuff you don’t want to do on hardware you don’t have will eventually result in you quitting, and we who have enjoyed using kde/kwin for years will lose.
    i would push this issue even more and focus on what is currently happening in open source graphics space and what will eventually happen.

  4. In fact you already do not support legacy hardware because of proprietary legacy drivers. I tried to run GNOME 3 (it crashed to fallback mode), and KDE 4.7 with an NVIDIA GeForce 4800 SE was not running fluently even with desktop effects turned off. If someone wants legacy support he can fork older branches of your project, which is a great benefit of Open Source. You are free to move on. No need to justify yourself 🙂

  5. From what you mention, removing OpenGL 1 support makes sense at some point. Not having KWin-OpenGL with the fglrx driver is a big ouch, however… It would be sad if that happened (at least to me ;)). Is that still the case with the latest driver version?

  6. Unfortunately, the old 945 GPU that is still bundled with most Intel Atom systems (when they are not bundled with the even worse PowerVR-based GPU) still doesn’t do OpenGL 2, and there are *a lot* of systems out there like this, and linux/kde works great on them currently.

    Also I cannot say I am happy about dropped support for fglrx: the open drivers have worse power management, worse performance, and no support for the built-in hardware video decoder.

    I understand that one developer can’t do it all, but it’s a bit sad to see all this support being dropped, especially after all the blog entries where the kwin architecture is praised for being modular and easily able to support features across multiple GPU APIs…

    1. “But to be honest: given that the radeon driver has no problems with OpenGL 2 on the same hardware, I would not mind removing support for proprietary drivers.”

      On a system with the radeon driver (HD4250, a not-so-old onboard chip, r600g driver) I experienced that KWin runs a lot faster if I disable the “Use OpenGL 2 Shaders” option, even with Mesa 8.0. I have to mention that I did not test this with KDE 4.8 so far, only 4.7, and that I don’t know what this option does exactly.

      “Also I cannot say I am happy about dropped support for fglrx: the open drivers have worse power management, worse performance, and no support for the built-in hardware video decoder.”

      Sadly this is true. Fglrx is a binary blob and buggy as hell, but you have no choice in some situations.

      “Unfortunately, the old 945 GPU that is still bundled with most Intel Atom systems (when they are not bundled with the even worse PowerVR-based GPU) still doesn’t do OpenGL 2, and there are *a lot* of systems out there like this, and linux/kde works great on them currently.”

      True, but they also should work well with XRender.

      “I understand that one developer can’t do it all, but it’s a bit sad to see all this support being dropped, especially after all the blog entries where the kwin architecture is praised for being modular and easily able to support features across multiple GPU APIs…”

      I definitely agree here. You have done very good work so far.
      Especially if you look at GNOME Shell which, in contrast to the KDE desktop, fully relies on compositing, and on how many systems it fails. 🙂
      But if llvmpipe is fast enough it may be the better solution.

  7. At first, when I started reading, I thought: “Wait, probably he wants to remove something … that much explanation and emphasis on the obsolescence of OpenGL 1.x …”

    All good, OpenGL 1 usage should be killed with fire these days …
    probably using it a couple of years ago made sense … now it doesn’t

  8. Well, I think the real question we should ask is what options are available to the people who do still rely on OpenGL 1.x (the only people I know with this issue are using embedded Intel chips on desktops- most Intel laptops I’ve seen use OpenGL 2).

    So, if most systems requiring this functionality are desktops, it should be simple to buy a 25 dollar OpenGL 2.x graphics card and install it. Most people who are planning to run a composited environment will have bought this hardware already, due to poor performance.

    On the other hand, with the proliferation of low-powered, cheap devices with GLES support, if a company needs new devices to run Linux on, there is plenty of opportunity for cheap upgrades on hardware (this is better than the alternative). Even to this end, llvmpipe is there to accelerate Qt 5, so worst-case scenario, older machines run with no effects, or very low effects (basic transparency, I guess).

    When Unity came out on the most-used Linux distro, needing more than OpenGL 2 support, a lot of people just upgraded. I think KWin’s target audience probably won’t care all too much, especially if it means a better KWin in the near future.

    1. For really old systems that only have AGP slots, there’s no such thing as a cheap card that has OpenGL 2.0 support. The highest OpenGL support I can find for AGP is OpenGL 1.5.

  9. I just got a new sandy bridge workstation and kwin with the intel drivers is silky smooth. It is an absolute pleasure to use KDE.

    In contrast, my ATI card with FGLRX (radeon doesn’t do multi-monitor on the 4650) is nowhere near as smooth. Anything that isn’t making KWin stronger should just get dropped; just let us know which graphics cards/drivers work correctly. It’s a shame I can’t buy an Intel graphics card for my desktop 🙁


    1. Ivy Bridge and up will be strong competitors even when it comes to gaming GPUs. Hopefully it won’t be so uncommon to buy a desktop with Intel inside, moving ahead.

  10. P.S. Dropping fglrx support really isn’t ideal. Of course, if ATI won’t provide any ability to run on OpenGL 2, I guess that’s not something we should have to deal with. ATI will most likely never contribute directly to the open drivers for their recent hardware, meaning KDE users won’t be able to rely on highly powerful, new ATI hardware.

    The obvious solution is to get ATI to resolve the problem keeping us from using direct rendering. If not, I can see a lot of users getting upset by this, including a few of my friends who actually quite enjoy the performance on their fglrx drivers.

    We could always put our foot down and let ATI know it’s our way or the highway, but I wonder if we really have that much weight to push around.

    1. In case you haven’t noticed yet: ATI has not existed for 6 years now. It got bought by AMD in 2006.

      And AMD’s employees are the main developers of the free radeon drivers.

      If one doesn’t have the time to read some news every couple of years, one should not waste other people’s time by posting completely outdated and wrong statements.

  11. Your post is a little disingenuous in that you can still compile on your powerful system while testing on legacy hardware. It’s best not to cut corners when trying to make a point.

    1. Well, last time I actually tried to do that, I failed at cross-compiling from an amd64 system to i386. This might be better now with multi-arch. Oh, and then there is the space limitation on my desk 🙂

  12. If, in the case of fglrx, the driver is faulty, then I see no reason not to drop the legacy support.
    Maybe the fact that fglrx will no longer work with KWin in composited mode will finally force AMD devs to fix their own s#*t and make a properly functioning driver that does not require ugly legacy code or workarounds on the DE’s side to work correctly. OpenGL 2.x/ES isn’t a big requirement, and it’s quite astonishing to see that fglrx still doesn’t support it correctly…

    So I completely agree here for dropping support for legacy and broken drivers/hardware. I see no reason why a DE should care of drivers flaws in the case of fglrx. Compositing is not that essential anyways so these users can survive without it. KDE works just as great without compositing.

      1. Basically their direct rendering path sucks for compositors. We don’t really support indirect rendering in mutter / gnome-shell, so fglrx is forced to use direct rendering, which causes all sorts of weird corruptions. It improved a bit in recent releases but is still not on par with the radeon drivers. We recommend users use the radeon driver instead, which just works ™.

        AMD should really fix that it has been broken for *years*.

  13. Assuming that llvmpipe provides a decent software solution, I am in favor of this as well. fglrx users can live with no acceleration until AMD actually decides to fix their drivers. Old intel users can live with llvmpipe (assuming it’s actually fast enough – if not, then I would be against the change). Other users benefit by the developers no longer having to worry about those old code paths.

    However, I’d be disappointed if you dropped support for r300 hardware (by removing NPOT support). That was really common hardware, is still in wide use, and I think you could continue making sure at least some minimal amount of compositing works. If you don’t want to maintain blur or other advanced effects on it, then that would be fine as long as a certain baseline was present.

    1. I don’t plan to drop support for r300 – in fact I fixed a few issues recently. I have an r300 based card which I can still install in my system.

  14. > At least Fedora wants to use llvmpipe drivers for GNOME Shell.

    Actually, we also want to allow KWin desktop effects with llvmpipe in Fedora 17, we are already shipping the F17 Alpha (to be announced really soon) with this enabled.

    Note that this currently affects only hardware with no accelerated OpenGL at all or setups where the user has manually set LIBGL_ALWAYS_SOFTWARE=1. The basic patch we’re carrying isn’t implementing any automatic or GUI-checkbox-driven fallback to LIBGL_ALWAYS_SOFTWARE. I think it should be pretty simple to do, given the already existing support for LIBGL_ALWAYS_INDIRECT. If you come up with some code we can test in that direction, we’ll be happy to try shipping it in Rawhide and (if ready soon enough) the Fedora 17 branch and reporting any feedback.

    (That said, as long as the OpenGL 1 code is still there, I don’t think using LIBGL_ALWAYS_SOFTWARE by default on OpenGL 1 hardware would be a win. Having a checkbox might make sense though. Oh, and one thing to keep in mind is that legacy hardware also has legacy CPUs, so don’t expect the llvmpipe to run as fast on it as on your fast multi-core machine.)

    You can find us on #fedora-kde on Freenode IRC or on the kde mailing list at

    Oh, by the way, I happen to have a Radeon 9200SE (rv280) on my primary desktop machine (a Pentium 4 Northwood with 1 GiB RAM), so in principle I can do some testing on OpenGL hardware, though I can only easily test stuff which we can pull into the Fedora 17 branch (or Rawhide after Fedora 17 gets released) and thus on our nightly live CD images. (Testing a KWin built directly from git on my production system is something I’d rather not do, sorry.) My machine fits your description of “legacy” exactly, but it works fine for me, so I’m reluctant to replace it. I have a notebook which is actually faster (Core 2 Duo, Intel GM965, 4 GiB RAM), but I’m too used to the desktop form factor to switch to it as the primary machine (and soon you’re going to label even that notebook as “legacy” anyway 😉 ), though sometimes I (ab)use it as a server for CPU-intensive tasks (builds mainly). 😉 Oh, and my previous notebook was 10 years old when I replaced it with that Core 2 Duo one in 2008. (Pentium II 266, S3 Virge GPU, RAM upgraded from 32 MiB to the maximum of 160 MiB, anno 1998, ran KDE 3.5 just fine.) 🙂 I intend to keep the new one in use until at least 2018. :-p

    1. Personally I find it really bad that you patch KWin without contacting the development team. You should have dropped a mail to to ask for advice. Doing random changes on an unknown code base doesn’t sound sane and if KWin runs amok on llvmpipe it will be KDE which gets the blame and not Fedora.

      I kindly ask you to not ship any llvmpipe support before the kwin team gave the OK for it.

      1. FYI, it looks like we’re not going to ship that patch in Fedora 17 Alpha after all, because the llvmpipe improvements are not actually in yet. (We thought they were, oops.) We’ll be trying to go through the Alpha blocker process to get the patch dropped for now, but we still DO intend shipping this in Beta and Final if the llvmpipe improvements land.

        But talking to the KWin team is exactly what I’m doing here! Fedora 17 isn’t going to be officially released for another 2½ months.

        Your reaction is very unhelpful; we are offering to help you with testing, and instead of working with us, you want us to do only what you dictate.

        1. Sorry, but I don’t consider adding comments to a blog post which matches the topic by chance as communicating with the KWin development team.

          That’s how I would like to see this happening:
          1. Write a mail to with “hey GNOME Shell will be using llvmpipe in next Fedora, we had been thinking that this would be awesome for KWin, too. Can you help us there”
          2. Wait for our answer
          3. All patches upstream first (Reviewboard) – downstream patching is evil.

          If downstream uses patches which we don’t know about we have serious issues. Just consider us investigating bugs because we see a crash with software rendering in a code path which should not be reachable. Just think about what troubles it would cause us if we don’t know that Fedora modified the definition of what is a software renderer.

          This is absolutely a no go for a downstream.

          1. I have a great idea: talk about how maintaining a huge code base is hard and you need to drop things that are still needed (despite your own opinions) because it’s too hard to maintain; then, when a rather large contributor to KDE and Linux in general says they’re going to try and implement something with the help of another KWin developer, you slap them in the face without even knowing the full story. Wow! The awesomeness abounds here. Between learning that I can stop using Flash today, and how to look completely like an elitist tyrant, or better yet how to show I’ve lost an argument by bringing up the past, I think I’ve had my fill of you. I guess your only saving grace for me is working with Razor-Qt for a standalone KWin version. If that ever happens, they’re not going to be happy with support for just the newest hardware. It kinda defeats the purpose of KWin on Razor-Qt to get the best out of old hardware, which is what many light environments attempt to do, but which KWin will now fail horrifically at. Yay for YOU! Come on, really?

            1. this has nothing to do with KWin development. It is a general thing about how I see upstream and downstream collaboration. All software development should happen upstream. I am very disappointed if I see development happening downstream without upstream even knowing about it. That one developer knew about it in this case is great, but the team did not know.

              You know we developers do all our development in the open and do Review Requests for patches we ship. There’s a reason for it. I just don’t like having code there not going through the same review process.

            2. Just FYI, I stopped using Flash years ago. I tried Gnash for a while, but stopped using that months ago because it caused more problems than it solved, and went back to no Flash at all. These days at least YouTube mostly works (HTML5 Beta, not glitch-free in Konqueror/KHTML, but works).

              But this has nothing to do with KWin. 😉

        2. An update on this: Fedora 17 Alpha will NOT ship this patch: We got it removed through the Alpha blocker process, because llvmpipe turned out not to work well enough yet, see bug #794835.

          What will happen for Beta and Final and for future Fedora releases will depend entirely on the progress of the llvmpipe.

        1. that’s great that the patch comes from someone knowing the code; nevertheless, why didn’t you drop a mail on or discuss it on #kwin or #plasma, where there is the chance that I and others become aware of it?

          1. There have been quite some misunderstandings that lead to the fact that this patch landed in the fedora 17 kde-workspace repo. I should have made more clear that it was only for testing purposes, to gain a better understanding of how far the TFP enhancements in llvmpipe have come. A short conversation with ajax then showed me that the performance improvements are just not in place yet and probably need some time. Once Mesa, the kernel etc. have proper support for these enhancements we can reconsider this again.

            1. looking at that patch I thought that it was for testing purposes, because that’s what my patch to just test whether it works would have looked like 🙂

            2. Well, to us, to “test” things means we stuff it in the Alpha and let our users see for themselves (and file bugs if it doesn’t work). 😉

              Unfortunately, this turned out to not even match Fedora Alpha standards (which are fairly low, e.g. it is not release-critical for an Alpha to have working sound, only for Beta and Final, and in fact, sound doesn’t work at least on the GNOME spin of Fedora 17 Alpha) and as such had to be pulled out.

              Are you in contact with ajax over this already? Or should I send out some e-mails? I gather that KWin OpenGL effects need more power than gnome-shell and thus more work to work acceptably with llvmpipe, so when do you think it will be ready? In 2 months (Fedora 17)? 8 months (Fedora 18)? 14 months (Fedora 19)? Or just “when it’s ready”? 😉

              Oh, and for Martin, the patch I would have come up with as a permanent distro patch if tasked with it would probably have been identical. 🙂 We like minimal changes in distro patches, even if it means the function name “lies”.

              1. We like minimal changes in distro patches, even if it means the function name “lies”.

                which is seriously bad. It might have caused us in bug reports to invest time because we don’t know about it. Think about the implications for upstream if you change such semantics.

                The patch I had already in my mind to add support for llvmpipe included at least a huge warning to stdout that llvmpipe is used. Something we can easily see when we ask the users to provide command output.

        1. You know that there are people giving full blame for the 4.0 PR problems to Fedora, right? All this bad press because one distro decided to drop 3.5 before 4.x was usable, right? Better never mention that as an achievement ever again if you want me to take your comments seriously 😉

            1. no, they kept KDE 3.5 around and only offered 4.0 as an additional environment, not as the default one.

                  1. This was communicated way too late to impact our decision process for Fedora 9. It was just not feasible to revert everything so close to the Fedora 9 release, after having spent the whole release cycle on upgrading things and making applications work with the new environment.

                    Back when we made the decision, you were all boasting about the great new features to expect in the next stable release and there was no talk at all about 4.0 being essentially a beta branch.

                    In addition, those messages implied (or at least didn’t clearly deny) that 4.1 would be the end user release, and we knew that we were going to ship 4.1 as an update to Fedora 9 as soon as it became available. (It actually took a bit longer, partly because the Fedora infrastructure was broken into by some cracker right when we were finally ready to push the update to the stable updates, which was not foreseeable either.) Fedora 8 was still supported then, so nothing was forcing users to upgrade to Fedora 9 before 4.1. You only started talking about waiting for 4.2 weeks after Fedora 9’s release.

                    Blaming Fedora for your poor communication is ridiculous.

                    And in addition, somebody has to bite the bullet and do the testing. If we hadn’t done it, who would have? 4.2 wouldn’t have been any less buggy than 4.0 if it hadn’t been for the many bug reports from Fedora users, and also a few patches from Fedora developers. Software doesn’t magically test itself, only getting it to the users will get it tested.

    2. nice to see others who see that you don’t always need the newest hardware. Last but not least, this also affects nature/pollution.

      I read an article a few weeks ago that for many devices, the electric power they consume only exceeds the energy spent on their manufacturing (the whole chain) after 3-5 years!

  15. Any backend that does not provide functioning VSync is not really acceptable. Do XRender and softpipe do VSync? If not, it’s not even worth thinking about, IMO.

    As for the binary drivers, isn’t OpenGL ES now included in OpenGL 4.2? AFAIK, the NVidia driver is fully OGL 4.2 compliant and therefore provides full support for GL ES through If fglrx also claims 4.2 support, then it also should support ES. Currently though, KWin does not seem to be aware of this; it’s impossible to run kwin_gles on nvidia or fglrx.

      1. It’s likely an extension that’s present for all hardware their driver supports, although I don’t know that for certain or how well it works. But requiring a recent version of fglrx to work wouldn’t be much of a burden.

      2. “OpenGL 4 requires new hardware.”

        Requiring new hardware is better than not supporting anything at all.

        1. the problem is not GLES – that works fine with the NVIDIA blob – but EGL, which does not work at all (last time I tested, at least)

  16. I saw this day coming. Indeed the fact that wayland will eventually replace the xserver and the fact that it requires opengl es 2.0 means that keeping the opengl 1.x renderer doesn’t make sense. Even if I’m one of those affected by this I must admit that this is a sane decision.

    P.S.: Now that Mesa 8.0 is out, do you plan to develop an OpenGL 3.x backend for KWin?

    1. We don’t need to develop an OpenGL 3.x backend. After dropping OpenGL 1 support, KWin will have a fully core profile compatible rendering path, so using OpenGL 3 is just a different call to GLX.

      I really would like to experiment with geometry shaders in KWin but I doubt I find the time for it.

  17. Martin,

    It’s true that the i915/945-class hardware doesn’t support OpenGL 2, but the driver actually does expose support for GLSL shaders via extensions. It’s limited, of course, but ought to be sufficient for what you need. The only reason it doesn’t advertise GL 2 is because the hardware can’t do occlusion queries. But I doubt you’re using those anyway.

    i915 also supports OpenGL ES 2.0 with GLSL, if that’s helpful.

    So I say go for it! I’m always happy to see fixed function OpenGL 1.x code die.
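The distinction made above – GLSL exposed via ARB extensions rather than an OpenGL 2.0 version string – could be detected roughly like this. A minimal sketch: the extension names are the real ARB GLSL extensions, but the helper function and the sample string are made up for illustration.

```python
# Sketch: detect GLSL support advertised via ARB extensions even when the
# driver does not report OpenGL 2.0 (as described for i915/945-class chips).
GLSL_EXTENSIONS = {
    "GL_ARB_shader_objects",
    "GL_ARB_vertex_shader",
    "GL_ARB_fragment_shader",
}

def has_glsl_via_extensions(extension_string: str) -> bool:
    """Return True if all GLSL-related ARB extensions appear in the list."""
    return GLSL_EXTENSIONS.issubset(extension_string.split())

# Made-up extension string of the kind glGetString(GL_EXTENSIONS) returns:
exts = ("GL_ARB_shader_objects GL_ARB_vertex_shader "
        "GL_ARB_fragment_shader GL_ARB_texture_non_power_of_two")
print(has_glsl_via_extensions(exts))  # → True
```

In a real client the extension string would come from glGetString(GL_EXTENSIONS) on a live context; this sketch only shows the set check.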

  18. The reason that I’m still using fglrx is the power … It’s so important for a laptop. (Even leaving power aside, the temperature simply goes up more than 10 degrees Celsius higher…)

    As you can see, even if the power profile is manually set to low (which means performance for KWin will be unbearable), it still consumes more power. I would guess a large group of ATI users will use Catalyst because of this.

    I don’t know whether there is a way for KWin to do something about it, or whether the driver must be fixed. I would be willing to donate some hardware for this if the KWin guys don’t have an r600 series card.

  19. Apple also by default will disable the ability to run non “App Store” programs on users’ own machines… All I’m saying with this is that just because Apple are j*rks doesn’t mean FOSS devs should be as well; after all, most of us don’t try to “force” users to move to the “latest and greatest” hardware (which they should buy from us), simply because most of us don’t sell laptops/computers/phones 😉 Btw, any machine which cannot run Mac OS or Windows is pretty much one more machine that will run Linux (eventually KDE).

    Anyhow, IMHO OpenGL ES is much more important than OpenGL 1, just like OpenGL 3 is much more important than OpenGL 2. And FOSS being FOSS, if someone wants legacy support, that someone should be able to read something like “how to join KDE”, “easy tutorial for starting KWin development”, etc.

  20. Sounds like a good plan to me. To be quite honest, I don’t think KDE is the right DE for users with old hardware. There’s Razor-qt now if you want a lighter DE.
    KDE should just move on and improve the experience for the majority of users, who have modern hardware.

  21. I can see the point: KDE developers can’t support old hardware, for the reasons you said. This doesn’t apply to fglrx though; you can use it on modern and fast systems. This might (but I don’t think you’ll like it) be a way to maintain the OpenGL 1.x code. And please remember the FOSS radeon driver is *not* a full replacement for fglrx. I have an AMD hd515v on my laptop; no way of using radeon on that system. Just to say: kwin performance is better with fglrx with OpenGL 1.x than with radeon with OpenGL 2.x, but the main reason is the heat generated by radeon (even with the low profile, which is a bit ugly for me since I’m used to the dynamic one of fglrx).

    Of course I’m OK with using kwin via llvmpipe (I already have it as my default software renderer) *if* everything else on my system uses fglrx (in)direct rendering. In other words, my system is configured to use fglrx by default and kwin automatically falls back to software rendering.

    And Martin, if fglrx is buggy (and we know it is), there is the unofficial AMD bugzilla; fglrx devs read it, and they also answer! Really, they do. You might try reporting the problems you and your team find in fglrx to said bugzilla. You lose very little time, and if AMD ignores the issue, well, you tried – drop support for them ;). Worth a try.

    Anyway, I really hope Intel IGPs will be powerful enough for basic gaming so I will never buy AMD again (as I have already been doing with NVIDIA since 2001).

  22. Fglrx sounds like a blocker until you realize that it is a driver issue, and that another workable driver is available. So go ahead, it’ll only put pressure on AMD to move in the right direction.

    Llvmpipe may be the fastest software renderer yet, but it’s pointless if you’re trying to run on legacy hardware, which will have a slow CPU.

    For me, as long as non-composited kwin remains an option, it’s fine. On legacy hardware that’s how I’d configure kwin anyway, for speed and reliability reasons.

    So I’d say go ahead removing OpenGL 1, but keep a close eye on non-composited. At least you don’t need old hardware for this one 🙂

    1. Another driver is available indeed. But as I use this other driver, I can tell you that I still have to keep desktop effects disabled because this driver’s performance is simply unacceptable. Even without any effects or animations, it introduces huge lag when switching windows or desktops as soon as compositing is enabled.

  23. I think that dropping support for OpenGL 1 is the right move, if it will make maintaining KWin simpler.

    Problems with the fglrx drivers are problems of the fglrx drivers, so it’s their job to support OpenGL 2 properly.

    Do you plan on dropping XRender support? Is it used by anyone? Won’t llvmpipe be nicer than that (with some effects auto-disabled on llvmpipe by default and tuned performance settings)?

    Thanks for your work :-).

    1. No I don’t want to drop XRender. As I mentioned in the post it provides other benefits than just supporting legacy hardware.

  24. So long then, KWin.

    I currently own 3 machines, all with integrated Intel graphics. My main desktop has an i845G (OpenGL 1.3), while my 2 netbooks use the i945GME (OpenGL 1.4). I’m not planning on updating any of those any time soon; they work like a charm. And of course I’m not going to single any of them out: if I cannot get KWin to run on one of them, none will.

    1. First of all: I did not present any plans to remove anything. Second of all: KWin will still be working on your system even if we remove the OpenGL 1.x code paths. Third of all: we would have replacement solutions like llvmpipe or XRender. And last but not least: nobody forces you to upgrade to a new version. You can always stay with the last version which provided OpenGL 1.x support if we remove the OpenGL 1.x code.

      1. At least you *have* replacement solutions. Consider that Google Chrome has bondage-and-discipline GLIBC dependencies that mean the latest 6 versions won’t run on a bunch of stable production machines I don’t want to upgrade the OS on, and then have to re-build/test a *lot* of custom software for. I can’t afford the kind of down-time Google wants me to waste.

  25. Unfortunately, the open-source radeon driver is not a working replacement for the fglrx blob. People keep repeating that for some reason, but it is plain wrong – performance in simple games like Neverball or Tux Racer (or however that is called currently) goes down noticeably, but the games are still playable. More complex Windows games run through Wine, however, completely fail to work. So, even though fglrx is a pain, uninstalling it is not an option with the current state of the open-source drivers. My card is an AMD Radeon Mobility 3200 (so putting in some card for 25€ which is properly supported by open-source drivers is not an option either).
    However, I am disabling compositing anyway because it makes the system feel slower, so I don’t rely on the OpenGL 1 backend either. Not sure if that also counts as “legacy support”, but I kindly ask you to keep in mind that there are still people using a totally uncomposited desktop 🙂 (And yes, I am aware that this is just some rambling of yours, not a plan to remove anything.)

    1. Well, the question here is whether KWin has to care about fglrx doing better for games than radeon. For desktop usage it is quite clear: radeon gives better performance than fglrx, which is why our recommended driver is radeon. In general the two just have nothing to do with each other. It is totally fine that users want to use fglrx for gaming, but that might in the future come at the cost of having to use llvmpipe for compositing.

      1. “radeon gives a better performance than fglrx”

        Except when it doesn’t. At least for me, radeon does not give nearly acceptable performance for desktop compositing on an r600 generation card. And it’s not that I run outdated software – I use Mesa git and the latest rc kernel.

        Apart from that I can understand people not wanting to waste 100W of power (leading to noise from the coolers) because of totally unusable power management in the radeon driver.

      2. That’s okay. I understand the best desktop experience is out of reach with the binary driver, and I am willing to live with that. As long as that does not mean that KDE will be unusable when using fglrx, I’m happy 🙂

        Thanks for all the time and hard work you spend on KDE – I think one cannot repeat that often enough. It is definitely appreciated!

  26. Would removing indirect rendering have any effect whatsoever with compositing disabled/suspended?

    The open source radeon drivers may work well enough for desktop use, but that’s not enough for some. There’s the minority that stubbornly wants to play games on linux (o/), and stability issues notwithstanding the Catalyst driver pushes better numbers than the open radeon one, in most cases. You’re obviously free to form your software as you see fit; I just want to highlight the existence of this minority.

    Obviously the optimal solution would be for AMD to — in a stroke of genius — assign (more?) paid developers to work on the open drivers and help us bring them up to par with the proprietary ones (assuming there aren’t “software patents” in the way), delivering OpenGL 2+ to the masses. I’m not holding my breath, though.

    1. Would removing indirect rendering have any effect whatsoever with compositing disabled/suspended?

      sorry, I don’t get the question

      1. >”Another real issue for removing is that the proprietary ATI driver (aka Catalyst/fglrx) only provides a decent compositing performance with indirect rendering”

        Nevermind; you mentioned compositing performance there and I just read performance.

  27. With the limited resources (regarding both time and money) available to the respective developers, I think it would be appropriate to strive for providing the best user experience possible to about 80% – 90% of potential KDE users and forgetting about the remaining ones if that really speeds up development and gives us a better hold in the desktop world. Keeping backwards compatibility with hardware that is 6-10 years old shouldn’t consume the resources needed to be competitive now and in the years to come.

  28. Just to be clear: This doesn’t mean that OpenGL 1.x hardware can’t run kwin at all, but that just the desktop effects won’t work, right?

    If that is the case, then there should be no holding back in dropping the old code.

    1. No, desktop effects could still be enabled thanks to XRender support, though not all effects are available in XRender mode.

  29. Don’t throw the baby out with the bath water. Linux has long been used as a means to “rejuvenate” older hardware into a usable state (and not necessarily as CLI servers).

    The laptop I just got is about 5-6 years old (Thinkpad T42), which is a move up from what I had previously, and was the first one I had which could handle Gnome-shell and desktop effects and otherwise look pretty good.

    One side of supporting legacy hardware is accessibility. Those developing don’t have it, and those that need it usually aren’t involved with the development. Part of that is the user’s technical know-how, and part of that is that if testing requires any compiling or resources it is evidently more painful on a slower, weaker machine.

    On the plus side, small improvements are more noticeable on older, slower machines which may not be a drastic enough change on a more powerful system.

    So if we can connect-the-dots between people with older machines and developers, we might be able to find a “perfect” balance?

  30. I’m using fglrx with my notebook, and in comparison to the radeon driver it gives me nearly 1.5h more battery time. The moment the battery times are equal, you can drop whatever you want. 😉


  31. Why are you not using the OpenGL ES 2 implementation from fglrx? Since 10.8 or 10.9 AMD ships it with the monthly package.

      1. Right, but then the fglrx users can use the desktop effects and you can remove the old render path without getting insulted.

        How do you mean your last statement?

  32. “Up until recently Mesa did not support OpenGL 2, but this is nowadays no problem any more”

    Well, now it supports OpenGL 3, so I hope we’ll get some new, fancy and more efficient code paths using that 🙂

  33. It’s not about hardware support, it’s about how you deal with users. If you just drop support for some hardware without any warning or a fancy pop-up saying “your hardware doesn’t support effects/this effect”, you’ll just upset people. Right now I can choose all effects, enable compositing, and get a half-screen warning that they couldn’t be enabled AFTER I do this. It should be the other way around: effects that are not supported should be grayed out with a nice “Achtung – this shit is not supported by your hardware” sign.

    About dropping hardware accelerated compositing for older graphics cards, it’s the same thing. If you decide that you can’t maintain kwin for them, that’s fine, but give people a polished non-composited desktop without glitches or black corners – yeah, you are not a Plasma developer, but users don’t care about that. Users just use – and most of them don’t even know what a window manager is.

    [personal opinion]
    People nowadays are just easy on resources. They forget that a couple of years ago, hardware with less power than today’s cell phone was enough to do serious work, play multimedia and 3D games too. Now to display just a text editor you need 1GB of RAM and a graphics card that can handle Crysis…

  34. Is it the OpenGL shader based compositing backend that’s not working well with fglrx?
    Is it a driver problem or a kwin problem, i.e. developed on nvidia and surprised it didn’t work well with fglrx?
    If I compile kwin to use OpenGL ES (KWIN_BUILD_WITH_OPENGLES), does fglrx work with compositing then?

    This is what you said about OpenGL 1.1 a year ago.
    “With GLES KWin has a forward compatible code path, though we still have OpenGL 1.x code for legacy systems and that won’t be dropped in the near future. Considering deprecated OpenGL 1.x code it’s looking quite good, e.g. glBegin/glEnd is removed in the GLES branch”

    1. Is it a driver problem or a kwin problem, i.e. developed on nvidia and surprised it didn’t work well with fglrx?

      fglrx – I myself have used fglrx for ~ one year.

      If I compile kwin to use OpenGL ES (KWIN_BUILD_WITH_OPENGLES), does fglrx work with compositing then?


      This is what you said about OpenGL 1.1 a year ago.

      As my post nowhere mentions that I intend to drop the OpenGL 1.x based code, what I wrote there still holds true. It’s now one year ago and the current code will stay at least for another year – nothing which I consider “near future”.

  35. > This means if I talk of legacy hardware it means hardware which is at least six years old.
    This is actually not true. It means that if you talk of legacy hardware, it means hardware which was _designed_ at least six years ago. But OpenGL 1.x hardware was sold well after that. I bought a netbook with a GMA950 graphics core in June 2009, and I think it shouldn’t be thrown on the scrap heap yet.

    1. and I bet you can run KWin with OpenGL 2 shaders on it – give it a try 🙂 Easy test: try the invert effect.

          1. I have upgraded to KDE 4.8 (with the KDE 4.8 repository offered by openSUSE) on my netbook.
            Using non-GLES kwin, the invert effect still doesn’t work. With kwin_gles, it does work, but everything is extremely slow. The system takes tens of seconds to react to anything when using kwin_gles. I think I’ll file a bug about this.

              1. Hi Thomas,

                I had the blur effect enabled, and it worked fine with kwin. I tried to disable it, and I now also get a satisfying performance with kwin_gles. This is probably another bug worth reporting. Thank you for the hint.

                1. Try to reduce the blur strength. That chip will certainly hit some limit, and the driver seems quite sloppy about those and potentially falls back to some SW implementation.

                  But if this:
                  glxinfo -l | grep GL_MAX_VARYING_FLOATS

                  is smaller than 32, the GLES code is too generous.

                  1. Hi Thomas,

                    When I set the blur strength to the lowest possible setting, the blur effect works fine. As soon as I set it higher than that, it is unusably slow again. Also, kwin_gles prints messages like this to the terminal: “i915_program_error: Exceeded max nr indirect texture lookups (6 out of 4)” (or “8 out of 4” etc. for higher blur strengths).

                    glxinfo -l doesn’t print a GL_MAX_VARYING_FLOATS variable.

                    1. > glxinfo -l doesn’t print a GL_MAX_VARYING_FLOATS variable.

                      … because you don’t have OpenGL 2.0 shader support – sorry, I’m stupid 😉

                  2. Perhaps it is possible to query the maximum number of indirect texture lookups and disallow any setting exceeding that?

                    1. GL_MAX_TEX_INDIRECTIONS_ARB is not part of GLES (but eg. used in the ARB shader variant of the legacy path this blog entry is all about)

                      The corresponding value would be GL_MAX_VARYING_VECTORS but
                      a) it’s reported to crash on nouveau
                      b) therefore the specified minimum “8” is used, leading to that software fallback / error report.

                      Maybe one could read out the OpenGL 1.x ARB limit (in an external process, in doubt by grepping glxinfo), assume the hardware limit holds for GLES as well, and use that as a “weak” config limit. No idea.
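Such an external glxinfo check might be sketched like this (the parsing helper is hypothetical, and the exact `glxinfo -l` output format is an assumption):

```python
import re

def parse_varying_floats(glxinfo_output: str):
    """Extract the GL_MAX_VARYING_FLOATS limit from `glxinfo -l` output.

    Returns None when the driver does not report the limit (e.g. no
    OpenGL 2.0 shader support, as seen above on the i915).
    """
    m = re.search(r"GL_MAX_VARYING_FLOATS\s*=\s*(\d+)", glxinfo_output)
    return int(m.group(1)) if m else None

# Made-up sample line in the shape glxinfo -l is assumed to print:
sample = "GL_MAX_VARYING_FLOATS = 44"
print(parse_varying_floats(sample))  # → 44
```

The returned value (or None) could then cap the configurable blur strength instead of relying on the GLES-specified minimum of 8.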

  36. When I finished reading your post, I tried using kwin with direct rendering again (KWIN_DIRECT_GL=1 kwin --replace) with the fglrx drivers, on a Radeon 4650. To my surprise, the performance was actually pretty good – if you disable VSync, that is. (The tear-free option in Catalyst doesn’t seem to work when using direct rendering.) There are still bugs like flickering and frozen images, but it seems better than what it used to be (at least from what I remember).

  37. Martin,

    your decision to remove support for the Catalyst drivers would leave me in the dark. Let me explain: for traveling I bought an AMD C-60 based notebook. While the overall CPU performance is low, its video decoding abilities are great and watching 1080p material works fine – even 15000 kbps material which comes directly from my diving video cam.

    While the Catalyst driver supports the hardware video decoder, the open source radeon drivers don’t. So they are completely useless to me and would make my notebook useless.

    The decision to remove Catalyst support would mean that I need to return to Win7. What a shame for KDE, which has always presented itself as a freedom desktop. What good would this freedom be if I were forced to return to Win7 just because one of the most prominent use cases doesn’t work: video playback?!

    1. You misunderstood. You can still use KDE even if we remove the legacy OpenGL 1.x code path. OpenGL based compositing is no requirement to run the KDE Plasma Workspaces. You can run it without compositing, you can use the XRender backend, or you can use llvmpipe to do OpenGL based compositing if the fglrx driver has not improved by the time we remove the code (if we do so).

      1. Martin,

        switching off compositing is a bad joke. You might as well tell me to install XP SP3, right?

        Alas, how much will performance suck when moving to llvmpipe or XRender?

        1. Well, unfortunately it is not always possible to suit all needs. We as the developers might have the need to remove the OpenGL 1.x code path. It is not our fault that the proprietary driver does not work properly with OpenGL 2. There are also users demanding that we should promote free and open solutions. This is what we do here: we recommend each user of ATI hardware to use the free radeon driver for compositing, as it does offer better performance there. As you can see, we have conflicting user and developer demands.

          We as the developers have to find solutions giving our users the best possible mix. That is if we decide to remove the OpenGL 1.x codepath we have to ensure that the advantages for most of our users are higher than the disadvantages for a minority of users. For the users of fglrx there are several possible solutions:

          • switch to the radeon driver – there is at least one year for the development of radeon to improve
          • lobby AMD to make the driver better. Best organize with GNOME Shell users who CANNOT use fglrx at all
          • switch to XRender
          • switch to llvmpipe
          • Turn off compositing

          Given the amount of possible options with several of them providing a solution for a fully functional composited desktop, I don’t consider it as a bad joke to point out that there is the option to turn off compositing completely. This is a quite important advantage compared to the situation with GNOME Shell in that regard.

          Concerning performance with llvmpipe I cannot say anything yet, because – as you already know from my blog post – I have not investigated it yet. But also from my blog post you know that a working llvmpipe implementation will be a precondition to consider dropping the OpenGL 1.x code. So yes, if we do drop the OpenGL 1.x code, it implies that llvmpipe is a proper solution.

          XRender is already as of today a better performing compositing mode depending on the driver and Qt graphicssystem in use. Of course it does not support Cube and other fancy stuff.

    2. KWIN_DIRECT_GL=1 kwin --replace &

      That, with the latest Catalyst set.

      Try that, and report any bug you find to the unofficial AMD bug tracker. fglrx needs to stop force-feeding OpenGL 1.x to KWin and fix their stuff; that’s the REAL solution to this.

  38. Ok. You want to drop support for hardware you don’t have. However, as is usual for many developers, you fail to consider those who don’t have access to your type of system – multicore, new GPU, gigs of RAM, etc.

    That’s one of the reasons I use KDE3 over 4. It WORKS on my older hardware. My Pentium 3/Pentium M based Thinkpads have ATI Mobility or Radeon Mobility chips. So, even if I WANTED to use the new bling, I couldn’t.

    Then people say, well buy a newer machine. Ok, sure. Buy it for me. I can’t afford a newer machine. And many others can’t as well. Or I’m told to upgrade the RAM – My Thinkpad A22p maxes at 512MB. It costs about $20 for a PC100 256MB stick. I could buy 4GB DDR3 for that.

    Those of us who use older hardware understand the tradeoffs and limitations. You and other devs obviously are only concerned with what new features you want to implement, and not making your code efficient.

    That’s why we have choice. I choose to use my older machine because it’s paid for and works. It does what I need it to do. Others prefer to spend money they have on newer hardware. That’s life.

    1. After dropping support for this old hardware, KWin of KDE 4 will behave exactly like KWin of KDE 3. So where is the big deal for you?

      1. Right. So why bother with KDE4 when it will just start working more like KDE3? I wasn’t specifically speaking of myself. I was speaking of others who are in my position who may want to use KDE4 and the newest stuff.

        It’s always easier to drop support than to make stuff work. If you guys were really that concerned, you could pick up older hardware off craigslist(in the US), eBay, or whatever local system you have. And you really don’t have to compile it on the older hardware. Just test it. You made this comment:

        “I have a rather powerful system to not have to wait for compile jobs to finish. Developing on a several year old single core system with maybe a gigabyte of RAM is nothing I want to do.”

        That’s something I take issue with. I compile kernels for old PPC systems on my dual core x64 server. I don’t compile them on the 300MHz G3.

        Anyway, when you get right down to it, you are gonna do what you want (which is the beauty of FOSS). It would probably just be better to decide, then let everyone know and give them a period of time so that if they want to step up to take care of it, they can (like Trinity has done for KDE3). That’s pretty much how it went with KDE4 anyway.

        Since I’m not a programmer, I can’t.

        1. > Right. So why bother ..
          Because it has random bugfixes and improvements regardless of some compositor?
          Because it’s actively maintained.

          But you’re right. It’s a shame that you cannot use GL compositing on a Riva TNT because it can only handle 512x512px textures. (Who cares about GL_texture_from_pixmap – kwin should re-introduce the shm conversion which never worked anyway, maybe even XGetImage, and then use texture tiling so you can have a 1fps desktop.)

          Oh, oh – and wtf, I can’t use my Voodoo2 chip – it was horribly expensive … then.
          OK, it could hardly do more than VGA and not in a 2D context (i.e. OpenGL), but – oh wait: the entire driver seems about to be dropped.

          But at least the compositor should be tested with all plugins on every piece of HW available and claimed to be supported. So please go, find some ancient hardware, hopefully get it to run, and test n^m cases. And on all kernels/drivers shipped with the major 100 distros during the last 5 years. Such is a MUST.
          If that time (can only be months per change) and money (for what other people consider junk) isn’t worth it to Martin, he should rather pass maintainership to somebody who cares (and has the time, and money … which are usually mutually exclusive).

          Do you volunteer?

          PS: you maybe want to tell that to random game developers. Setting minimum requirements is sooo 2000. Consoles and the Atari didn’t have such either!

          /baaaad sarcasm

          Now, here’s a serious rule of thumb:
          if your HW cannot do OpenGL 2.0 you’re *very* likely better off with the XRender compositor backend anyway, since it’s a far thinner layer on X11, drains less VRAM and doesn’t require pixmap/texture conversion (given you’ll have a matching CPU and no GPU conversion assistance for limited shader programming abilities)

          1. As a former user of Kompmgr with a Voodoo3 card, I fully agree: XRender is the solution for old hardware.

  39. If the software was developed correctly, the problem shouldn’t exist at all; the problem is how software is developed in open source (these should be “projects”, if they have projects at all). The ad hoc method is not good for program development. The hardware should not change the code; if it does, this should be verified using programs written for that purpose, not the device as usual. Imperative languages have their cost too, with or without abstract data types (objects).

    1. So you know what our code should look like without knowing our code. You know what: that’s awesome. Why don’t you come around and make it better?

  40. It actually *is* a bad joke! – Not that you guys stop supporting ever older hardware; that is understood.
    But I just recently bought an AMD triple core with APU, and I don’t want to run anything but KDE – and necessarily composited. All my desktops are highly configured and all the same on all my machines.
    Now fglrx will be toast, as I understand it. Then AMD – which I have supported for more than 20 years, because we need choices – can close shop as far as I am concerned, if they don’t manage to finally correct their Catalyst driver. I don’t need to play games, I don’t need all sorts of shaders; but a fully composited KWin – yes, that I do need.
    (And don’t tell me to plug a cheapo Nvidia card next to it. It is exactly why I bought the AMD with APU: to have a green PC; and even a lowest-end Nvidia card screws up the quiet and cool green PC!)

    1. Then complain to AMD. It’s unfortunately not possible for us to fix their proprietary driver. It’s a pity that the proprietary driver uses the legacy code path, so that you don’t get features which are available with other drivers on hardware capable of providing them.

      Luckily there is enough time for AMD to fix that till we will do anything 🙂

  41. I can’t stalk you, Martin; you should have had a wiki about you a long time ago… Just wanted to find out how old you are 🙂
    If young, then KDE is in big luck.

  42. I spent 100+ hours trying to make an open source project work with ‘legacy hardware’, i.e. OpenGL 1.0, and it used pretty simple stuff: framebuffer objects, stencil buffers, depth buffers, and then simple stuff like gltriangle() and friends.

    I can very well understand KWin’s desire to just chuck the whole OpenGL < 2 path and get on with the code.

    One problem I found was very incorrect OpenGL documentation. GLX in particular has blatantly wrong entries on the website, and it also doesn't list the GLX version number in the documentation pages that describe its functions.

    There are also many confusing references to the FBO stuff, since it recently switched from an EXT to an ARB extension. Well, this is the way for a lot of extensions: they switch from EXT to ARB, and you have no guarantee the names or the API will be the same between OpenGL versions. The documentation just seems to expect you to throw away all the old stuff and only use the new stuff, and tell your users to 'suck it' and upgrade to OpenGL 3.0 or 4.0 or whatever is the latest thing. Unfortunately, a lot of the documentation has not been upgraded to 3.0 or 4.0 or the latest thing. And as noted for GLX, the documentation sometimes doesn't even tell you which version it refers to.

    Also, most releases of Linux and BSD in 2010/2011 still ship with OpenGL < 2.0 drivers by default. Yeah, OpenGL 2.0 came out many, many years ago, but that doesn't mean the distro maintainers updated their drivers or upgraded Mesa. So, in theory, it would be great to just force everyone to upgrade, and in a year or two everyone would be upgraded. In reality, it doesn't take a year or two; it is taking 5 or 7 years. The recession of 2008 did not help matters; people are using what they've got and fixing it with duct tape until the economy picks up again and they can afford new hardware.

    If it were an open wiki, it would be easy to spend an hour or three fixing a lot of these documentation issues. It's not. Instead, we can just watch various OpenGL discussion forums write about the same bugs over and over again, and try to copy the multi-version kludges they use to make OpenGL work 'in the real world'.

    When I was a young user I might have faulted KWin. But now that I have personally seen the horror of OpenGL version issues up close, I have total and full empathy for them.

    If people really, truly want KWin to support OpenGL 1.0, they will start a Kickstarter project or something, so that the developers can actually buy this old hardware to work and test on. I know developers who could never afford to spend even $100 on their open source project. They have to choose between food and the project, and food usually wins that argument because being hungry is kind of a bummer. It is, basically, impossible to write portable code for funky stuff like OpenGL without access to the actual machines that you are porting to. It is not like ordinary C or Python where you can just follow portability guidelines and have it work most of the time – the portability guidelines for many OpenGL problems seem to be documented as 'buy a new computer'.

    1. Personally I don’t see this documentation issue. The core profile is clearly documented (at least in the OpenGL SuperBible) and there is also the GLES documentation. The GLX functions and everything else are documented in the extension specifications, so it just requires knowing the right places to look – and some time to look 🙂

Comments are closed.