27 August, 2013

Silverstone FT03mini, Intel Haswell and Linux

I'm a hardware geek. Not really into gadgets, but I like the tech behind workstation-type systems: CPUs (realworldtech is amazing, so is Anandtech), GPUs and so on. Aside from reading, it is of course also nice to build something every now and then. So this summer, just after Intel released their new Haswell CPUs, I decided to build a new desktop system around one in a Silverstone FT03 Mini. The goals were to have it as quiet as possible (cooled by a single 140mm fan) and of course to have very decent performance.

FT03 Mini

As you might have guessed from the name, the case isn't very big: mini-ITX. And it is a rather unusual design: a standing tower with a single 140mm fan on the bottom and solid aluminum walls. Quite a beauty, if you ask me. A review here.

Specs

I ordered a Core i5 4670 - not a K model, as those miss out on some of the new instruction extensions. Yes, that is actually called Product Sabotage. Well, Intel is not a charity and there simply is no competition in the space above Core i3 performance levels, so they have to differentiate their products to extract the 'consumer surplus'.
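For reference, a quick way to see which instruction set extensions your CPU actually exposes under Linux; the specific flags grepped for below (AVX2, TSX's hle/rtm, VT-x's vmx) are just examples of things a Haswell may or may not report:

  # Print the CPU feature flags the kernel reports, one per line, and pick out a few
  grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E -w 'avx2|hle|rtm|vmx'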

Aside from the CPU, there's a 256GB Samsung 840 Pro SSD (2.5"), 2x 8GB RAM, and it is all connected to an ASRock Z87E-ITX motherboard. I also have a passively cooled Zotac GeForce GT 640 Zone Edition graphics card, but more on that later.

Cooling

For cooling I used the recommended all-in-one water cooling - a simple Corsair H60. I contemplated going for an H80 (double-thick radiator) but that would simply be overkill for the platform and make building it all harder. I've replaced the 140mm Silverstone fan with a quieter Noctua (NF-A14 FLX), which, at its lowest speed, is indeed very quiet. The whirring of the water pump is actually the noisiest thing about the case, although it is barely audible.
Going to put it together

Power supply

I'll MAKE it fit!
I did do a few unusual things with the case: I took out the HD cage (I only need one 2.5" SSD) and the slimline DVD drive cage to make room for a full-ATX, modular, Platinum-rated, fully passive Seasonic power supply. The SS-400FL2 required some creativity in attaching it and the bending of a single metal bar to fit it in - and of course, the tiny space gets even more cramped. But it isn't a huge stretch, and it seems Silverstone could easily add a screw hole or two to make this officially supported.

Building

The basics were not hard, although this case was the smallest I've ever built and I had never played around with AIO water cooling before. I didn't yet have the power supply so I used an old one I still had lying around - that's the blue thing on the box in the pic ;-)

Put together - first round, with the passive Zotac
I fired the thing up and lo and behold, it all worked fine. I did play around a bit with the cooling and after some burn-in tests I ended up putting the pump on a variable voltage and the fan on a low, continuous one. That turned out to be the quietest configuration. Not total silence, but I'll survive the noise...

Graphics card - u no fit?

Unfortunately I got in trouble after my new power supply arrived: it didn't fit with the video card. The video card has a huge cooling block and its cooling fins protrude from the back of the card. Unfortunately, that is exactly where the power supply had to go - the price of not opting for the recommended small form factor variant.

Now I'm stuck with a passive Zotac GeForce GT 640 (anybody interested?). It turns out the integrated graphics in the i5 are plenty fast for what I need, and I don't have to fight with proprietary drivers. So for the time being I'll stick with this... When I decide I want to play more modern games than Warcraft III or have a third screen, I'll find a solution that's not only silent but also has no fins on the back ;-)

It's busy in there, yes...
No place for the Zotac card.

Bleh, Broadcom

I also replaced the badly supported Broadcom wireless mini-PCI-Express card that came with the motherboard with a decent Intel one (Ultimate-N 6300). I couldn't get the Broadcom to work under Linux; the Intel card of course worked out of the box. It is a three-stream, dual-band card, but unfortunately I still can't get decent speed out of it - more than 50 Mbit/s won't work and I've got a 100 Mbit connection, so that's a bit sad. The problem might be that I'd like it to use 5 GHz and it is near impossible to find decent antennas for that. The antennas that ASRock included seem to be less than perfect. But honestly, I don't understand antenna tech very well... Maybe I'm doing something wrong.
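For what it's worth, a rough way to check what the link actually negotiated; the interface name wlan0 below is just a guess, yours may differ:

  # Show the current association: the frequency tells you 2.4 vs 5 GHz,
  # and the tx bitrate shows what the link negotiated
  iw dev wlan0 link
  # See which bands and signal levels the access point offers (run as root)
  iw dev wlan0 scan | grep -E 'freq:|signal:|SSID:'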

In the end, I'm fairly happy with the setup. The goal of having only one fan cool the entire case did work out, and CPU core temperatures hover consistently between 30°C (idle) and 50°C (under stress).

Haswell and Linux

Getting the system to work with Linux (that'd be openSUSE, of course) was easy-peasy. After my power supply arrived I had to rely on the integrated graphics, and for that, upgrading to the latest kernel from kernel.opensuse.org as well as the latest stable Xorg from the X11:Xorg repo made sure it is now all smooth. Boringly little to say: everything works great and the system is butter smooth - something you'd expect from your new workstation.
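In case anyone wants to do the same, this is roughly what that boils down to on the command line; the exact repository paths and aliases below are assumptions, so check download.opensuse.org/repositories/ for the current layout and your openSUSE version:

  # Add the kernel and X11 repositories (paths and aliases are examples)
  zypper addrepo --refresh http://download.opensuse.org/repositories/Kernel:/stable/standard/ kernel-stable
  zypper addrepo --refresh http://download.opensuse.org/repositories/X11:/XOrg/openSUSE_12.3/ x11-xorg
  zypper refresh
  # Switch the installed packages over to those repositories
  zypper dup --from kernel-stable
  zypper dup --from x11-xorg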

25 August, 2013

On Distributions, Numbers and Breaking Prism

A week or two ago I noticed the prism-break.org site. It's quite cool to see a site offer some concrete links on how to opt out of global data surveillance. Granted, I already use Linux and I cryptographically sign (not encrypt) almost all my mails because Kontact makes it so easy. Otherwise, I'm not particularly careful - my main mail account is still GMail (although work mail is on SUSE's servers) and I extensively use Google Plus and, to a lesser extent, Facebook and Twitter.

What distro is popular and easy to use?

The site has many interesting tools listed, but obviously my eye was drawn to the Operating Systems at the top. I notice that Ubuntu hasn't made the cut due to some of the stuff they've pulled recently. And I see Fedora appointed as 'most popular', with Linux Mint Debian Edition as 'easiest to use'. I tweeted to Peng Zhong that I find it hard not to disagree with these choices...

Fact or fiction

We all prefer our technology to speak for itself. Yet better technology often loses out. And so I feel I should at least try to correct common misconceptions.

Statistics presented at the openSUSE Conference show over twice as many unique IPs connecting regularly to the openSUSE update servers compared to Fedora. We know that these numbers are not terribly reliable: a more reliable method has shown the IP addresses to over-estimate our user base by about a factor of 10. But assuming both are equally biased, the numbers should be comparable.

That is not to say that Fedora isn't doing awesome stuff - they're the ones pushing a lot of technology like systemd and GNOME Shell. But many end users did pick a more conservative OS like openSUSE. And they probably appreciate our work on improving YaST or technologies like One Click Install!

More fiction?

About the ease of use - the recommended distribution there is Mint Debian Edition, and I would find it hard to argue that it is easier to use than openSUSE (the Mint team itself talks about 'rough edges' and 'less easy to use'). But I'd leave that to random choice, as facts seem to bear very little on what is considered 'easy to use'...

Of course, if by 'popular' you mean "the number of articles on LWN" or a metric like that, well, Fedora and Mint probably win.

19 August, 2013

Basic Usability Testing at Home - notes from the workshop at Akademy 2013

bad usability bad

In the weeks before Akademy 2013 I convinced KDE Usability Guru Björn Balazs (from User Prompt) to lend his experience to a usability workshop I wanted to organize at the conference. The goal was to teach developers how to do 'basic usability testing at home' by guiding users through their application and watching the process. To help developers who didn't make it (and those who did but could use a reminder), I hereby share a description of the process and some tips and notes.

User Application Testing

The goal of this exercise is to get input from a user about the applications being tested. The test works by putting the user in front of the application, giving them a task and letting them execute it, guided by the developer asking questions.

Process

The process works as follows: the user is put in front of the opening screen of the application/tool. The developer or usability expert guiding the process gives them a goal, something like 'play a song by Coldplay' or 'connect to the wireless network'. You can use this nice guide by the Canonical usability team on picking tasks for the users to execute.

Then the following protocol is executed by the person guiding the process, asking the user:
  1. What is your first impression?
  2. What do you think you can do here?
  3. What would you do to reach your goal?
  4. What do you expect to happen when you do it?
  5. Please do it... (let user execute action)
  6. Did it do what you expected?
  7. If not: was it better or worse?
  8. If the task is not completed yet, go back to 1.
Important:
  • Make clear that the user is ONLY to execute a SINGLE action in step 5. This means that most of the time, you are checking the expectations and ideas of the user instead of watching him/her click around.
  • As the guide, you're not supposed to give any hints as to what button the user should click or what he/she should do, other than perhaps reminding him/her of the end goal he/she has been given.
  • Despite the above point, there is of course no reason to let the user helplessly waste his/her time. If the user gets stuck, help them get unstuck.
  • Try to stick closely to the protocol, not skipping questions. Of course, if a user has already answered question two in his/her answer to question one, there is no need to ask that question again...
If the task is finished, ask the user what they thought of it all, discuss improvements, etc.

Easy Cheese

Note that this kind of testing is very good for finding grave issues in applications; it is not very good at fine-tuning the interaction with your application. You also have to be a bit wary of the feedback from a single user: repeat the process at least once with somebody else to make sure that what is an issue for one person really is a problem. Ideally, run the test a few times with various people to increase the reliability of the feedback.
Extra tip: it is very valuable to record the sessions. Often you see things later on that you were not aware of during the session...

Results

In practice, it can be a bit hard to stay disciplined, not giving hints or telling the user what to do... and to stick to the protocol instead of just quickly letting the user click to the end. Try to keep to the structure anyway; it works best.

This process works incredibly well at showing which things are unexpectedly non-obvious for users. For your enjoyment, here is a YouTube video of the Network Manager session with myself as subject:
Part 1:

Part 2:

I think the results speak for themselves, and you can probably imagine the frustration on the side of the developers.

I've also put the notes of the workshop on the wiki.

Thanks to User Prompt for supporting the travel of Björn and Heiko Tietze, who ran the actual workshop; I was just there as a participant ;-)

16 August, 2013

Help KDE Communicate Frameworks 5 and the Future of KDE!


With the 4.11 release out of the door (yay!), there is more and more focus on what is coming from KDE in the future. The Workspaces (and only the Workspaces) are out with a long-term support release: for the next two years, the Plasma team will keep churning out bugfix and tiny-feature releases for the Workspaces 4.11 release.

Meanwhile, the libraries have essentially already been in long-term support since their 4.9 version, with only minor changes coming in (with a few exceptions like Nepomuk). Most Platform developers are focusing on Frameworks 5, which will bring some exciting changes to our libraries and the way they are laid out. This will open up our libraries to users outside of the KDE world, potentially widening the 'KDE ecosystem' significantly. And we need to communicate this!

The applications, on the other hand, are expected to do a couple more releases before a (probably gradual?) transition to Frameworks 5. And who knows what will happen then?

How confusing!

All this means some confusion. We've already seen the first articles where elements of the above are misunderstood. It is time we do some work on clarification and get a dot article out on where we are going. I have made a 'skeleton' article with the most important facts (like the above!) and links and hints on where to learn more.

It might sound like a terribly complicated thing, and to some extent - it is. But at the same time, it has all been explained in various places already; most of the writing is just bringing things together. So you don't need to know that much about the whole thing to do a good job! Who feels like doing this? You can contact me (I can't believe my mail address is hard to find on the web, considering how much spam I get :D) or ask on kde-promo@kde.org, our mailing list.

Who's up for it?

06 August, 2013

using software.opensuse.org

A frequently asked question: should I grab packages from software.o.o, and should I add the repositories offered by One Click Install? Read on for an explanation of what is going on with One Click Install and what is wise.
software.opensuse.org in action

What does it all mean?

A basic explanation of repositories, packages and software.openSUSE.org.

On Linux, software usually comes from a central location: your distribution. Apple used this model for its App Store, and so did Google with the Play Store - they are the same thing, except that a payment system is included. Every piece of software comes in a package (.rpm on openSUSE, .apk on Android). As with Android app stores, there are various locations where you can get 'additional software'. As we all know, Apple prefers to keep you in your golden cage: no additional legal application stores for you there.

Thanks to the unique Open Build Service, openSUSE has also been offering a central place to get additional software outside the distribution: software.opensuse.org. This web-based application store allows you to search not only in the official, released openSUSE packages but also offers access to the numerous (200K+) packages on build.opensuse.org. These are built as part of the process of developing openSUSE (in the 'devel projects', often offering newer versions of official software) or by private users for their own purposes (in their 'home projects', offering newer versions, obscure packages or special builds).

OBS is not all about obscure or updated software: some software is built by software vendors or otherwise 'officially blessed'; other things are there because they are too big for openSUSE Factory (games, for example) or require very frequent updates (some software development tools and libraries).

The web interface works with openSUSE's One Click Install, a tool which makes it easy to install packages while adding the originating devel or home projects to your software sources ('repositories').
Software vendors integrate OBS for official packages
When your openSUSE checks for updates to your packages, it will not only look in the official openSUSE sources, but will also go over these additional repositories added when you installed extra software, making sure you have the latest security, feature and stability updates.
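As an illustration, this is roughly what a One Click Install does behind the scenes, expressed as zypper commands; the repository path and package name here are made-up examples:

  # Add the devel or home repository the package lives in, with auto-refresh enabled
  zypper addrepo --refresh http://download.opensuse.org/repositories/KDE:/UpdatedApps/openSUSE_12.3/ kde-updatedapps
  zypper refresh
  # Install the package; future updates will now also come from this repository
  zypper install digikam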

Adding repositories or not?

So, should you use software.opensuse.org and should you add the repositories which One Click Install will offer you? Installing software from software.opensuse.org should be fine as long as you follow the rules below.
  • Add the repo. Yes, you should. Your package gets updated/fixed, giving you security and stability improvements and preventing future system updates from getting stuck on a single obsolete package.
  • Make sure you pick repos for your specific openSUSE version. Don't mix repos of Factory with 12.2 and 12.3 or such...
  • Pick 'devel projects' over home repos; devel projects are maintained by a group of people and are thus safer from a stability and security point of view. Home projects start with 'home:'.
  • When upgrading to a new openSUSE, make sure all repos are modified for the new version, or remove those which are not available (yet). Use the Apper or YaST repository management tool for this (a command-line sketch follows this list).
  • Don't use zypper dup after adding repositories unless you keep a very close eye on what is changing repositories and know what it means. Zypper up should serve you fine (tools like Apper and YaST are safe by default).
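When moving to a new release, the command-line equivalent roughly comes down to re-adding the extra repositories for the new version; the repository path and alias below are examples:

  # List the extra repositories and their URLs
  zypper lr -u
  # Re-add a repository for the new release (13.1 here is an example), or drop it if it has no build yet
  zypper removerepo kde-updatedapps
  zypper addrepo --refresh http://download.opensuse.org/repositories/KDE:/UpdatedApps/openSUSE_13.1/ kde-updatedapps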
The biggest potential issue with additional repositories is that besides the package(s) you wanted, they might contain more. And a 'zypper dup', in which zypper will pick the newest updates no matter which repository they're from, can thus break things. That chance is relatively small with special-purpose software sources like the games repository. But in some home projects a wide variety of things is built. The devel projects for KDE and GNOME are an interesting case: you can indeed grab ONE updated application from there, but usually it is wise to make it an either/or deal: grab them all or none of them. You can grab them all by doing a 'zypper dup --from REPOSITORY' on the command line, and the YaST UI similarly allows you to switch packages to a specific repository.
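A minimal sketch of the 'grab them all' route; the repository alias below is an example, use whatever 'zypper lr' reports on your system:

  # List repositories to find the alias or number of the devel repository
  zypper lr
  # Switch every package that repository provides to that repository's versions
  zypper dup --from KDE_Release_410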
What and how to pick on software.o.o
In the image above, you see the choices presented after clicking 'show other versions' and then 'show unstable packages'. Unstable, here, refers of course to anything not available as an official update. Here, the official update is as new as anything else and, unless you want a testing version (2.7.9XXX), you should stick with it. But IF there were other options, these would be the options (a command-line way to inspect what each repository offers follows the list):
  • KDE:Distro:Factory - Factory is where we develop openSUSE. The packages are often available for older openSUSE releases, but be careful: Factory is usually not very stable!
  • KDE:Release:410 - the KDE team maintains repositories with KDE releases. You can pick one of these and be sure to have quite stable, yet up-to-date software.
  • KDE:Unstable:Playground - guess... No, it is not very stable; as a matter of fact, this is a checkout directly from the source repositories at KDE - no guarantees, applications might not even start up.
  • KDE:UpdatedApps - this is a repository with updated applications. It is by far the best, safest choice.
  • devel:ARM:AArch64:12.3 - this is where the openSUSE ARM team develops their software. Only interesting if you run ARM hardware!
  • home:XX - these are home projects from individual users. If they are your only choice, you should proceed with caution. Note that SOME home projects have tens of thousands of users and are very reliable - it just isn't visible on software.o.o (something we should indeed fix; help is welcome).
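As referenced above, a quick command-line check of what each repository offers for a given package; the package name is just an example:

  # Show all available versions of a package and the repository each one comes from
  zypper search --details amarok
  # Show details, including the repository, of the version that would be installed
  zypper info amarok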

Zypper power

In dealing with multiple repositories, there are a few more advanced things you can do to keep things from breaking; a short combined example follows the list.
  • locking - if a package tends to break often and you want to keep total control over what happens to it, you can lock it. Command line: "zypper addlock PACKAGENAME"
  • repository priorities - you can set priorities on repositories. "zypper dup" will always prefer packages from the highest priority (lowest number). Command line: "zypper mr -p PRIORITY REPOSITORY"
  • zypper help - learn more about any command from zypper by typing "zypper help COMMAND"
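Putting those together, a hedged example; the package name, priority and repository alias are made up:

  # Lock a package so no update touches it until you remove the lock
  zypper addlock flash-player
  # Give a repository priority 90 (lower number means higher priority; the default is 99)
  zypper mr -p 90 packman
  # Learn more about a command, for instance dup
  zypper help dup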

You can find more information, as well as some more tips, in this article on news.opensuse.org.

02 August, 2013

oSC13, Strategy and Stable


At the openSUSE conference there were discussions about the future of openSUSE. Last Wednesday I summarized some of the ideas around Factory. Today I blog about the openSUSE releases.

Note: this is a combination of stuff I heard (not just at oSC but also earlier - even from the first strategy discussion, 3 years ago) and ideas I have. I just attempt to put it into text so it is easier to shoot at, comment upon and think about.

Stable - big picture

On the future of the Stable release, things are much more in the air. In his keynote, Ralf hinted that SUSE has some ideas about how the future of openSUSE should look:
  • Open Governance - SUSE wants openSUSE to be successful and independently strong. Ralf noted that SUSE does not expect or aim to make money on openSUSE directly; other parties, however, are very much welcome to build a business around openSUSE.
  • Filling the gap - SUSE and openSUSE have a great win-win relationship. But SUSE notices a gap between openSUSE and SLE in the market: SLE is oriented towards large enterprises, while the current openSUSE has too wide a market to serve, from newbies to hard-core developers to professionals.
This is, from the SUSE side, the big picture; SUSE wants to provide room in the openSUSE ecosystem for initiatives, both from volunteers and from the commercial side. And SUSE would like openSUSE to move up a little and offer solutions for people serious about computing in a professional capacity. But what does this mean exactly?

Concrete changes

Let's start again with what came out of the keynote. Ralf essentially summarized some of SUSE's and openSUSE's history and talked about our current situation. openSUSE Stable, our final release coming out every 8 months, is a compromise between stability and being up-to-date. But this compromise was made in a time when far fewer people used the many repositories on OBS offering more up-to-date software, and before Tumbleweed and Evergreen saw the light of day. It is 2013 now and Ralf hinted at a possible re-evaluation of the choices that were made back then: perhaps we should limit our scope, improve quality and increase the life cycle of openSUSE?

Improve quality

Let's talk about quality first. This is difficult, harder than it seems. Factory has some automated testing and openQA got better thanks to some recent work, but nothing can replace human testing and there is not enough of it. There is room for improving our processes here, but there is also the fact that some people in our community care more about Stable and others more about moving forward with Factory. We need to find a way to serve the needs of both. Having Factory more stable, and hence more usable on a day-to-day basis, would be a great first step here.

Limit the scope

If you want to do something well, you should focus on it. It's the Unix philosophy many of us share: do one thing, and do it well. openSUSE is well known for doing a bit of everything - making everybody a little happy, but nobody REALLY happy. Part of why this model still works reasonably well is OBS - which has grown into this role. Many, many users these days use the openSUSE release simply as a base for their favorite OBS repositories. If we want to improve quality and lengthen the life cycle, we can use this model.

For example, parts of openSUSE which can be and are maintained longer get into the stable releases; other parts with a shorter life cycle (Firefox, for example) stay on OBS. This means that being part of Stable depends on commitment from both upstream and the openSUSE maintainers of the packages, as well as on testing capacity and our quality standards.

Lengthen life cycle

A portion of the openSUSE users would like to have a longer life cycle - the interest in Evergreen and questions on our mailing lists and forums make that clear. SUSE has dedicated a set amount of resources to maintaining openSUSE releases: checking security and entering bug fixes. The maintenance process has now been opened up and some teams in openSUSE are contributing to the maintenance, lowering the load on the SUSE maintenance team. A smaller scope (resulting in a smaller release) would do the same, as Ralf mentioned in his keynote. This will have to be part of the solution we're looking for to increase our maintenance window.

In practice

To put it in perspective, here are three examples of how we could do this. Keep in mind that sticking to the openSUSE-has-all-and-comes-every-8-months model is of course also an option...

Rings, rings, and rolling!

Presume we will have rings in Factory (see the blog from Wednesday). Let's put them in the distro. There's the core with ring 0 and ring 1 ("everything up to X"): it is released every 2 years and maintained for 3. Then there is ring 2, with the basic desktops and their devel frameworks, released every year and maintained for 2 years after release. Then ring 3, the applications and development tools: they are updated from OBS, essentially rolling.
It would of course be possible to keep ring 3 stable as well - for, say, 1 year. Update it when a new release comes out - you can keep the core and desktop from the previous version but have the new apps.

Split it up into components

As Robert proposed on the factory ML this week, it would be possible to split openSUSE Factory not per se into rings, but into smaller components, dictated by their dependencies. Some would be one package, others would be multiple. Each component is developed by a team in a Factory-like model. Rob states: "Each component advances based on it's own release cycle and the component team decides which release of it's dependent components to build against. Some tooling will be needed to help sort out the combinatorial problem."

And "when a release approaches, the release team picks a version of the core component that will be the base for the next openSUSE release. This version is used to populate factory and may be the only component that is in factory for a little while." This scheme should make the job of the release team easier but would still allow us to release the same openSUSE. Perhaps faster, perhaps slower - or perhaps in pieces, giving users an unprecedented level of flexibility!

Core, selection + OBS

How about we take the core of openSUSE (ring 0 and 1 from Coolo?), add a few components which have a long-term commitment from their teams, and let OBS take care of the rest! So we'd have, say, the Core, our default desktop with a base set of applications for it, LibreOffice and Firefox. That is the release. The rest you grab off OBS. The core is maintained for a longer time (this very much depends on community input!) and is much more tested than what we ship today.

To limit the impact of the change and for the convenience of our users, the installer would still be able to offer the same choices as it always has; but besides adding the packages, it would also enable the repositories they are found in!

Initially, the release would be a lot smaller than it is today - but in time, as people step up for long(er)-term maintenance, it will grow again. The idea is similar to what was proposed on the factory ML this week, but might lead to fewer clashes between components as there are fewer of them (much is in the core, presumably).

The real discussion

The issue with all the above scenarios is that while we can technically do them (some are harder than others, of course), the choice doesn't just depend on what we can do but also on what we should do. That makes the discussion a lot more interesting. We have to fix some things in our process and adjust to reality, that much is clear. But do we want to shift direction, too? What is our real goal?

Our previous strategy discussion hinted at the fact that most of our users are professionals, people used to computers. We decided not to focus on newbies and on making things simple at the expense of the flexibility of our distribution. Should we go further down that path and make harder choices to focus on people who use computers for a living?

Conclusion

What I tried to illustrate with the three examples is that we can go in various directions. It will be up to some of our core developers as well as the dev teams to comment on what makes the most sense and what is the least amount of work for the biggest benefit (to users). But there's also the question of who those users are!