Donating your idle computer time to a good cause

Introduction
I have been feeling altruistic again, so I thought I could donate my spare computing power to some scientific project with a laudable goal.

I have tried the BOINC platform in the past and got disappointed, and this time was no different, except that I decided to write an article about it.

Why contribute
Even if you are watching an HD video on the Internet, your computer is so insanely fast that it is actually just sitting idle most of the time.

Millions of PCs, video consoles and mobile phones are wasting computational time right now. The prospect of using those combined global resources for some good cause at virtually zero cost is indeed tantalising. Wouldn't you want to help those poor scientists full of good ideas but starved of cash?

Is it worth contributing now?
Probably not.

Computing power is still increasing exponentially, and, most importantly, efficiency keeps improving at a great rate. That means you are constantly getting more processing speed for less money and with lower electricity consumption. Therefore, the year 2014 is not the most efficient point in time to get started, at least if you consider the current environmental impact of producing energy.

Contributing now helps the project of your choice be the first to achieve its goals. Being first does have some value, especially for the researchers. For some biological projects, having the results earlier could mean new treatments become available faster, which may be crucial for potential patients. However, these kinds of computing-intensive projects are becoming easier all the time, so getting there is just a matter of time.

An increasing number of contributors should raise the general perception that distributed computing is important, which should make funding platform software improvements easier. However, it does not look like BOINC has been improving much in the past years (see below for more information on this). Therefore, participating now probably does not make a difference anymore, at least for that platform.

At a personal level, contributing is going to cost you some time. You will have to install software, choose a worthy project, create an account somewhere (yet another password to remember), and maybe ask your employer for permission first. You will have to learn a new program and, at least when using a PC, deal with the quirks described below. And there is also the increased electricity bill.

It is your call at the end of the day. I decided I wanted to contribute, but ended up writing this article instead. And the way it has turned out, I am not sure that this really counts as a contribution.

Does it make sense to use volunteered distributed computing?
This is a human organisational issue. For most projects, the most economical solution at a global level would be to design hardware specifically for the task at hand. This does not have to be completely new hardware (ASICs), as FPGAs will often suffice.

Even if a project uses commodity hardware, a centralised computing centre that buys CPUs (or graphics cards with computational functions) in bulk will probably achieve a much better value for money overall, especially regarding the electricity consumption. With the advent of cloud computing, this is getting easier and cheaper all the time.

The problem is getting the necessary funding. Society does not think as a whole; each nation, government, institution and so on has a separate budget. Using volunteered processing power may help an individual project overcome funding difficulties. It is probably not efficient in global terms, but it does get the job done. After all, there is no good solution in sight for this kind of organisational problem.

Choosing your good cause
You will be giving your processing time away for free, so I would choose a project that makes its computing results public. I would also support projects with open-source computing software. Finally, I would favour non-profit institutions.

For example, the Folding@home project run by Stanford University states on its FAQ page "following the publications of these scientific articles, we will make the raw data of the folding runs will be available to other researchers upon request" [ sic ]. It also has a "Why don’t you post the source code?" FAQ entry which states "we stress that the vast majority of our code is already open source". Personally, I would expect all data to be public for everybody to use freely and all the code to be open-sourced from the start. Therefore, I would NOT choose a project with such restrictions.

There are many projects available on the BOINC platform, but little help with choosing one. The website only mentions which hardware and software platforms a project can run on, which falls short in my opinion. There should be a way to filter projects by the criteria mentioned above. For example, they could offer an option to sort the project list based on whether the computational results are kept private, made public, or published with restrictions (only on demand, just for certain scientist groups, or with some usage limitations).

If a project is listed in BOINC, it has probably passed some selection criteria, so it should be trustworthy. This is important, because you will be installing management software on your PC that automatically downloads binary executables from the Internet.

There are of course other distributed computing projects that do not use the BOINC infrastructure. In fact, Folding@home is one of them.

I went for SETI@home, which is what got BOINC started. It may not be the best use of your computing time though.

First impressions with BOINC
I tested version 7.2.42 (the latest as of October 2014) under Kubuntu Linux 14.04 and under Microsoft Windows 7. Note that the Linux system cannot be considered an exotic platform, as BOINC's documentation states that its software releases are tested under Ubuntu.

You need to accept a software license
The first thing you'll notice when installing BOINC is that you need to accept the LGPL license. As an anonymous volunteer, you are probably not in the mood to accept any license. In fact, you do not actually need to accept this license in order to use the software. Section "9. Acceptance Not Required for Having Copies" of the GPL (which the LGPL is based on) starts with "You are not required to accept this License, since you have not signed it".

I am not a lawyer, but I suspect that the additional clause "Restrictions: You may use this software on a computer system only if you own the system or have the permission of the owner", while a valid warning about a potential, common-sense issue, is probably formally incompatible with the GPL, which admits no additional restrictions about how the software may be used. I guess the BOINC project could place this kind of restriction in their service usage policy, but if you think about it, it does not actually make sense.

Replacing your screensaver by default
On Windows, the default installation will replace your screensaver. The BOINC marketing department is probably hoping to get some free advertisement space on your PC monitor. They could just suggest that you use the BOINC screensaver, because it is cool, or mention that it is "good" advertising and you may want to help spread the platform too. But making it the default, and having to press the "Advanced" button first so that you can untick that option, is rather cheeky. Definitely not the right way to treat your volunteers.

Unnecessary system restart under Windows
On Ubuntu Linux, you do not need to restart your PC after installing BOINC. On Windows, restarting the PC should not be necessary either. Nowadays you can install and remove Windows services without a restart.

Why you need a user account
The first time BOINC starts, you are given the chance to add a computing project. I selected SETI@home, and I was prompted to create a new account or use an existing one for that particular project. I do not know yet whether this applies to all projects, or if it is specific to SETI@home, but the log-in dialog box looked generic to all BOINC projects.

True altruism is anonymous. Besides, I do not want to remember yet another login and password, and give my e-mail address away in the process. My guess is, once more, that this "requirement" is mainly for marketing purposes.

There is virtually no user-identification mechanism in place, so you can make up your name and other details when creating an account. The BOINC infrastructure does not really trust you anyway: every piece of data is computed twice on separate volunteer computers, effectively halving the available processing power. This way, faulty computers or even malicious participants stand out quickly.
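This double-computation scheme can be illustrated with a toy sketch. This is my own illustration, not BOINC's actual validator logic, and the work unit and host names are made up:

```shell
#!/bin/bash
# Toy sketch of validation by redundancy (NOT BOINC's real validator):
# the same work unit is "computed" by two simulated hosts, and the
# result is only accepted when both copies agree.
compute() {   # $1 = work unit, $2 = host; a "faulty" host returns garbage
  if [ "$2" = "faulty" ]; then echo "garbage"; else echo "result-for-$1"; fi
}

r1=$(compute unit-42 host-a)
r2=$(compute unit-42 host-b)
if [ "$r1" = "$r2" ]; then
  echo "unit-42 validated: $r1"
else
  echo "unit-42 mismatch, the work unit must be re-sent to a third host"
fi
```

A mismatch simply triggers another round on a different host, so no trust in any individual volunteer is required.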

Account scoring
Distributed computing platforms tend to implement some sort of scoring, so that your own contributions are visibly acknowledged. Some platforms try to promote a sense of "community" and offer different rewards as an incentive, like your name appearing on the "top 100" contributors list, or some special mention or prize if your computer happens to be the one that finds an interesting computational result. When SETI@home introduced a competitive aspect, it even prompted attempts to 'cheat' the system.

There are usually statistics per hardware and software platform. Some companies have used these in the past to advertise their hardware, along the lines of "look at BOINC's statistics, we sell the fastest and most reliable computers". I would not trust this data anyway. These projects tend to be starved of money, and it must be hard to resist the temptation to cook the books in order to favour a particularly generous hardware manufacturer in the statistics.

If you want to promote yourself or your company, you can claim that you are doing something good for society by helping some laudable project in this way. You can also add a link to the particular project, so that, when a visitor clicks on the link, he can see how much computing power you donated and that you are still contributing today.

While I do not mind this kind of corporate marketing, I do not see the need to participate myself. The computing software could just create some random ID on each PC, and display a message window if it really needs the user's attention. Having a user account and playing such social games should be entirely optional.

Surprise! Surprise!
After choosing a project, computation begins. The first thing you will notice is that your CPU usage goes up to 100 %, which is a good sign.

The trouble is, the CPU fan will ramp up to full throttle, permanently raising the noise level to its maximum. And your electricity bill will increase more than you probably expected. As a volunteer, this is probably not what you bargained for. To top it all, your system performance may drop noticeably. Under Kubuntu Linux 14.04, I was getting compilation times twice as long as usual.

Of course, your mileage may vary. You may have an Atom-based motherboard without any fans that always consumes little electricity, no matter the load. Or you may be running the Android client on a modest mobile phone processor only while charging overnight. Or maybe your OS copes better with the background load. But I would venture that the picture I painted above is the most common scenario.

Once you have become aware of the problems, if you look carefully at BOINC's documentation and start searching around on the Internet, the underlying issues will start to emerge. You will eventually come across a page titled "Heat and energy considerations". That page's first subtitle is "energy cost and environmental impact of running BOINC", and it should have been the main title. The "frequently asked questions" page also has an entry titled "BOINC makes my laptop's fan run all the time" that may be of interest. Unfortunately, the advice on those pages does not really help in practice; see the sections below in this article for more information.

In my opinion, the BOINC documentation does not feature these issues prominently or thoroughly enough. New users are presented with the buzzwords and the hype first, straight from BOINC's homepage: "Use the idle time on your computer (Windows, Mac, Linux, or Android) to cure diseases, study global warming, discover pulsars, and do many other types of scientific research. It's safe, secure, and easy". Contributors are given no initial advice about what to expect, namely increased power bills, higher fan noise levels and a negative impact on system performance. Again, this is not the proper way to treat your volunteers.

"Computing preferences" issues
BOINC has a "Computing preferences" dialog which shows a few more settings in the "Advanced view" mode.

There is a "While computer is in use" option. "In use" here means recent mouse or keyboard activity, which is confusing, because the processor usage setting below refers to another kind of "in use". It is also a rather silly option. First of all, if you are not using the computer, you will probably turn it off or let it sleep to save energy. I do not see the point in stopping background computations while you are typing a letter. Besides, you could be using your computer without actively moving the mouse around: you could be connected over SSH, watching a movie or waiting for a compilation to finish. To top it all, this option does not work under Kubuntu Linux 14.04: if you turn it off, computation never starts.

Then there is an "Only after computer has been idle for x.xx minutes" option. You can enter fractions of a minute, so 0.01 means 0.6 seconds, which is not particularly user-friendly. In any case, one minute is far too long for a CPU to idle around. You probably want to monitor CPU loads at millisecond intervals, which is not supported.

I tried that option together with "While processor usage is less than xx percent" set to 15 %. The trouble is, BOINC is too slow to react. It takes at least 8 seconds to stop computations, sometimes more than 12, and somewhere between 0 and 6 seconds to restart them. As a result, BOINC does not back off quickly enough when user load appears, potentially slowing the user's tasks down during the first seconds, and it does not resume calculations quickly enough either, wasting CPU time in many scenarios. There is no adaptive logic, that is, BOINC does not learn whether short CPU peaks are frequent on a given system.
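For comparison, sampling the overall CPU load at sub-second granularity is cheap, at least on Linux. A minimal sketch that reads /proc/stat twice over a 200 ms window (Linux-specific, and not something BOINC offers):

```shell
#!/bin/bash
# Measure overall CPU usage over a 200 ms window by sampling the first
# line of /proc/stat twice. The fields after "cpu" are cumulative tick
# counts: user, nice, system, idle, ...
read -r _ u1 n1 s1 i1 rest < /proc/stat
sleep 0.2
read -r _ u2 n2 s2 i2 rest < /proc/stat

busy=$(( (u2 - u1) + (n2 - n1) + (s2 - s1) ))
total=$(( busy + (i2 - i1) ))
if [ "$total" -gt 0 ]; then
  echo "CPU usage over the last 200 ms: $(( 100 * busy / total )) %"
else
  echo "Interval too short to measure"
fi
```

A daemon polling like this in a loop could react to user load within fractions of a second, instead of the 8 to 12 seconds observed above.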

Option "Use at most xxx.xx % CPU time" does not smooth the load. Instead, CPU utilisation jumps to 100 % one second and to 0 % the next. Your CPU usage graph ends up looking like a sawtooth wave.

There is a way to limit the number of CPU cores to use, but you have to enter a percentage, and there is no cue about how many cores your CPU has. Sure, you can find out somewhere else, but that is not very user-friendly, is it?
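Until then, you can work out the percentage yourself. A small sketch that computes the value which leaves one logical core free (the arithmetic is mine, not BOINC's):

```shell
#!/bin/bash
# Compute the "use at most X % of the CPUs" value that leaves exactly
# one logical core free, based on the number of online logical cores.
cores=$(getconf _NPROCESSORS_ONLN)
if [ "$cores" -gt 1 ]; then
  percent=$(( 100 * (cores - 1) / cores ))
else
  percent=100   # single-core system: nothing to spare
fi
echo "Logical cores: $cores -> enter $percent % to leave one core free"
```

On a 4-core machine this prints 75 %, which is the kind of hint the dialog itself could easily provide.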

Even if you have one or more capable graphics cards (which may be integrated in your CPU), enabling the GPU may have no effect if you do not have the right driver installed. There is no easy indication of whether the GPU can be used or is being used at the moment. You can only guess by looking at the process names.

There are no GPU-related settings at all, so you cannot limit the GPU load in any way.

No help deciding what computation load is worth running
Some CPU architectures, like Intel's Hyper-Threading or AMD's Bulldozer, do not have symmetric cores. It would be interesting to know how the number of parallel processes contribute to overall performance for a given project.

You may want to always leave one core free for faster reaction times to user loads. Perhaps you wonder if using the GPU is worth it in terms of performance per watt. Or maybe it is worth using your system's GPU only, and the CPU not at all.

In any case, BOINC offers no help in this area. There is a benchmark menu option, but no benchmark settings, and when you run it, the results do not come up and are not shown anywhere obvious. Later on I realised that, if you open the event log window and look around, you eventually find the benchmark results. The results are not project-specific anyway, and there is no GPU benchmark at all.

The process priority is not quite right
BOINC's SETI@home processes run with a nice level of 19 and a scheduling policy of SCHED_BATCH, which should actually be SCHED_IDLE.
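Assuming the chrt tool from util-linux is available, you can inspect and demote the processes yourself. In this sketch the process name "setiathome" is a guess; adjust it to your project's actual executable names:

```shell
#!/bin/bash
# Inspect the scheduling policy of a project's compute processes and
# demote them to SCHED_IDLE. Lowering the policy of your own processes
# does not require root. The name "setiathome" is an assumption.
for pid in $(pgrep setiathome); do
  chrt -p "$pid"              # show the current policy and priority
  chrt --idle -p 0 "$pid"     # switch the process to SCHED_IDLE
done
```

Under SCHED_IDLE, the kernel only runs these processes when nothing else wants the CPU, which is exactly what a background donation workload should ask for.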

The "simple view" is not simple enough
BOINC's "simple view" displays projects and tasks. The "suspend" button is at the bottom, so, if you want to pause processing for a moment, you may be tempted to click on "Task Commands", and then choose "Suspend". Then you will probably wonder why processing continues. The reason is, the current task is paused, so the project just goes on to the next one.

My guess is, if you are using the "simple view", you do not care which tasks a project is made of. You just want to see a reassuring message that all of your projects are running well (and not just the one current task in the one current project). And you want a big "Pause" button at the top.

User interface quirks
The ESC key works in dialog "Tools / Computing preferences", but not in "Tools / Options".

Under Kubuntu Linux, closing the window does not actually close it; it just minimises it to the taskbar. The "simple view" does have an "Exit BOINC Manager" entry in the File menu. Right-clicking on the taskbar icon displays a pop-up context menu that immediately goes away; you have to hold the right mouse button down. This non-standard behaviour is annoying.

The pause icon and the project status column
The "Status" column in the project list does not say why the BOINC icon on the taskbar is currently displaying the "Pause" icon. There are apparently 2 possible scenarios:


 * The system is busy with higher-priority processes. BOINC seems to realise that the project is not getting any CPU time at the moment. The Status column should then say "computer busy". On the "simple view", you do see an overall "Suspended - CPU is busy" message.
 * The user chooses menu option "Activity/Suspend". In case the user forgets later on, the Status column should say "all BOINC activity suspended by the user". This is different from pausing just the one project, which does display "suspended by the user".

"Another instance of BOINC Manager is already running" prompt
You can close BOINC by right-clicking on the taskbar icon and choosing "Exit". But then, if you start it again, it will complain that another instance is already running, at least under Kubuntu Linux. You are then prompted to enter a host name and a password.

The dialog does not mention it, but if you leave those fields empty and click on the "OK" button, you will connect to the local client and carry on as usual. It is not clear whether you should manually find and kill the previous instance, or whether it is OK to have 2 connected managers at the same time.

About processor efficiency
Modern computer processors are like cars: they consume very little when idle and are most efficient at a certain speed. Above that speed, energy consumption increases uneconomically. Fan noise and heat increase much faster than the corresponding speed gain too.

If your computer is under load, the CPU and GPU are the components that consume the most. For example, an Intel Core i7-2600 processor has a thermal design power of 95 W, and some other models go as high as 130 W. GPUs are much worse. Under heavy load, performance per watt becomes important.

Time is money, so, if you are waiting for the computer to finish a certain task, you may not mind the extra cost and the higher fan noise level. The same applies if you want the best possible gaming experience. But you probably do not want the extra burden if you are donating your "spare" computing power. Actually, there is no such thing as "spare" processing power anymore, for your processor will always consume much more electricity doing some actual work than idling in sleep mode.

Global considerations
Electricity prices have increased in the past years, especially in Europe, mainly due to new regulation responding to environmental concerns. If you were to run your consumer CPU at full throttle all the time, it would probably be more efficient to donate the increased electricity costs directly to your project of choice instead. For example, making your CPU consume an extra 90 W for 6 hours a day in Germany will add around 50 € to your yearly bill (as of 2014).
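That figure is easy to reproduce. A small sketch of the arithmetic, where the 0.26 €/kWh price is my assumption, roughly matching German household rates around 2014:

```shell
#!/bin/bash
# Estimate the yearly electricity cost of an extra, constant load.
# The price per kWh is an assumption; adjust it to your own contract.
watts=90            # extra consumption while computing
hours_per_day=6
price_per_kwh=0.26  # EUR per kWh, rough German household rate (~2014)

awk -v w="$watts" -v h="$hours_per_day" -v p="$price_per_kwh" \
  'BEGIN { kwh = w / 1000 * h * 365
           printf "Extra energy: %.1f kWh/year, cost: %.2f EUR/year\n", kwh, kwh * p }'
```

90 W for 6 hours a day is about 197 kWh a year, which at that price lands just above 51 €.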

Europe has been passing legislation recently in order to increase power supply efficiency and limit the maximum amount of power a consumer PC is allowed to consume, but laws alone do not help much when running software like BOINC. The only places where the ratios between processing power, electricity consumption and heat generation are taken seriously are dedicated computing centres, and they get to buy special server processors that are not available on the consumer market. Some people run Bitcoin miners at home, and they do look at those ratios as well, but they tend to use special hardware too (mostly purpose-made GPUs).

Wikipedia's page on SETI@home mentions criticism that the project is (indirectly and effectively) contributing to global warming. It also states: "However, this assertion that distributive computing projects like SETI@home are equivalent to data centers may not bear up under scrutiny". That might have been true in the past, but, in my personal opinion, SETI@home is nowadays even worse than data centres, because data centres have been focusing on efficiency for years in order to cut operating costs in the face of increasing power bills. Sure, a particular user can re-configure his system for efficiency, but, as this article shows, it is not an easy task, and you could argue that big, professional organisations have better chances in this respect. You cannot expect such deep technical knowledge from normal users, so what actually counts is BOINC's default installation settings on standard PCs. Of course, power companies are the ones that should ultimately be held responsible for the environmental damage they cause, but there is still the issue of your personal electricity costs.

Operating system shortcomings
Computer processors were not always so frugal when idle, but they have been for many years now. However, BOINC, Linux and Windows apparently have not caught up yet. Power management has seen many advancements for battery-powered devices like laptops and mobile phones, but donating spare CPU processing power does not seem to rank very high on the agenda.

The root of the problem is the so-called "race to sleep" or "race to idle" model of computation used by most consumer devices. This model optimises power consumption for consumer workloads and is not suitable for sustained background loads.

Microsoft Windows offers some power-management settings buried in a big options tree under "control panel", "power management", "advanced". Linux has more internal flexibility, but tends to lack a user interface for mere mortals. If you want to adjust even the most basic CPU power settings, you have to edit configuration files as the root user.
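For example, just listing the current CPU frequency governors already means digging into sysfs. A sketch using the standard Linux cpufreq paths (whether they exist at all depends on your kernel, driver and CPU):

```shell
#!/bin/bash
# List the active frequency scaling governor of every CPU core.
# The paths are the standard Linux cpufreq sysfs interface.
for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
  [ -f "$gov" ] || continue
  echo "$gov: $(cat "$gov")"
done
# As root, you could then switch a core to a more frugal governor, e.g.:
#   echo powersave > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
```

Compare that with flipping a switch in a settings dialog, which is what a normal user would expect.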

Finding the optimal CPU speed
Finding the optimal CPU speed at home is all but impossible. There are just too many factors to consider. Energy consumption depends on the CPU family, model and exact submodel, and is normally not well documented. It also depends on the kind of load, on the current ambient temperature, on the system configuration (other components on the mainboard), and on your operating system's version number.

Even if you can determine the point of maximum efficiency, it may still increase your electricity bill more than your personal absolute limit, so you may want to reduce your contribution based on that anyway.

Here is some practical advice:


 * If the fan noise bothers you, limit the load until the fan quiets down.
 * Use an appliance energy meter to monitor consumption when idle and when under load. Adjust the load according to the extra electricity costs you are willing to pay.
 * If in doubt, choose the lowest CPU frequency available. That is, assuming you can control it; see below for more information.
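On Linux, assuming your driver exposes the standard cpufreq sysfs files, the last point can be sketched like this (the actual write needs root):

```shell
#!/bin/bash
# Report each core's supported frequency range; as root you could then
# cap the core at its minimum. Uses the standard Linux cpufreq sysfs
# files, which not every kernel/driver exposes.
for dir in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
  [ -f "$dir/cpuinfo_min_freq" ] || continue
  min=$(cat "$dir/cpuinfo_min_freq")
  max=$(cat "$dir/cpuinfo_max_freq")
  echo "$dir: $min - $max kHz"
  # As root: echo "$min" > "$dir/scaling_max_freq"
done
```

Capping scaling_max_freq at the minimum keeps the CPU at its most frugal operating point no matter what load BOINC generates.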

Measurements on my home computers
For illustration purposes, I did a quick measurement on my home computers with an inexpensive, not-properly-calibrated meter:


 * Laptop:
    * Specs:
       * Kubuntu Linux 14.04
       * Intel Core i3-380M CPU with 2 cores and Hyper-Threading, 2.53 GHz (min 933 MHz)
       * 8 GiB RAM DDR3-1066
    * Consumption:
       * In standby (sleep mode): 2 W
       * Idle: 17 W. The fan normally runs every now and then for a little while.
       * With SETI@home and ignore_nice_load (933 MHz): 30 W (13 W or 76 % more than idle). The fan runs permanently at half speed, but makes almost as much noise as at full speed.
       * With SETI@home: 38 W (21 W or 123 % more than idle, 8 W or 26 % more than with ignore_nice_load). The fan runs at full speed and is therefore louder.


 * Desktop PC:
    * Specs:
       * Microsoft Windows 7
       * AMD Phenom II X4 910e with 4 cores, 2.60 GHz (min 800 MHz), TDP 65 W
       * 4 GiB RAM DDR2-800
       * ATI Radeon HD 7700 series, passively cooled (no fan)
       * No case fan, just one CPU fan for the whole system
       * 26'' full-HD flat screen
    * Consumption:
       * PC only, in standby (sleep mode): 74 W
       * PC only, idle: 87 W
       * With the monitor turned on: 134 W (117 W or 688 % more than the laptop's idle consumption). Part of the increase is probably due to the graphics card now doing some work.
       * With SETI@home, CPU only: 187 W (53 W or 40 % more).
       * With SETI@home, using the GPU too: 208 W (21 W or 11 % more). The CPU fan got louder.
       * With SETI@home, GPU only: 168 W (34 W or 25 % more than idle with the monitor on). You cannot use the GPU only, so I set a 1 % CPU limit.
       * Gaming (GTA IV): 194 W (60 W or 45 % more than idle with the monitor on).

Dynamic overclocking rendered useless
Running a constant background load renders dynamic overclocking technologies like Intel's Turbo Boost or AMD's Turbo Core useless.

If dynamic overclocking is looking at the CPU temperature, it will always see a warm CPU, as it is constantly doing some work. If it is looking at the number of cores currently in use, they will all be busy all the time.

As a result, you may notice a small decrease in your PC's responsiveness.

You can certainly limit the background load in BOINC's "computing preferences" dialog, but then you will reduce your contribution to the project, and you will probably lose most of the dynamic overclocking advantages anyway.

Shared processor resources
Some CPUs share resources between cores. For example, Intel's Hyper-Threading architecture shares most of the CPU resources between 2 logical cores, and AMD's Bulldozer architecture has one floating-point unit for every two integer cores.

Let's say a normal process is running on the first core, and a low-priority process runs on the second core, which would otherwise be idle. If the low-priority task is using a shared resource at the moment, and the normal process wants to use it too, it will have to wait, because the CPU hardware does not know anything about thread priorities, and most hardware-based operations cannot be interrupted anyway.

Cache pollution
Nowadays, most CPU cores share a common on-die cache. Any background load, even if it is using just one core, will pollute the cache, as far as normal-priority threads are concerned.

Cache pollution is just another case of sharing a common resource, but it can have far more dramatic effects on performance, because a single CPU core can claim shared cache space very quickly. Say a normal-priority process is waiting for some I/O event or just accessing the same memory region in a tight loop for a short time. When this process attempts to access other memory regions later on, it will have to wait longer than usual, as the other concurrent, low-priority threads will have flushed most of the shared cache out in the meantime. Cache write-back buffers will also be busy with background processing data, and they have no knowledge of thread priorities either.

How much this affects system performance depends on the current load and on the OS scheduling policy. If the foreground tasks do not run continuously for long periods of time, but make short I/O pauses and/or trigger many context switches, the effects of cache pollution will be more noticeable.

If your CPU does not support partitioning its shared cache, there is not much you can do to prevent this issue. I haven't seen a modern consumer CPU with this feature yet.

For normal consumer loads, the performance impact should not be too high.

Impact on foreground CPU tasks
BOINC tasks run with the lowest-possible process priority, sometimes called 'idle' priority. Under Kubuntu Linux, the System Monitor shows for those processes a 'nice' level of 19 and a scheduling policy of 'batch'. Under Microsoft Windows, the Task Manager displays a "Low" priority.
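You can check these values from a shell yourself. A one-liner sketch for Linux, where the "setiathome" name pattern is a guess; adjust it to your project's executables:

```shell
#!/bin/bash
# List PID, nice level and scheduling class (TS = normal, B = batch,
# IDL = idle) for a project's compute processes. The name pattern
# "setiathome" is an assumption.
ps -e -o pid,ni,cls,comm | awk 'NR == 1 || /setiathome/'
```

If the class column shows B rather than IDL, the processes are running under SCHED_BATCH instead of the even more polite SCHED_IDLE.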

In theory, BOINC processes should not have any effect on foreground tasks, because the operating system scheduler will never let a low-priority process run while a higher-priority one is waiting. At least that is what I have read in most documentation. In practice, life is a little more complicated: there are I/O waiting times, variable scheduling time slices and temporary priority boosts. Therefore, a process' priority has some built-in dynamic components and does not always remain at its initial absolute value.

I already mentioned that I was getting compilation times twice as long as usual under Kubuntu, so I decided to investigate further. The compilation task consisted of a full rebuild of a smallish C++ project which included the regeneration of its autoconf configuration script. The autoconf part runs sequentially many small processes, and the compilation part runs in parallel, spawning one GCC instance per source file. Both build phases felt slower than usual when BOINC was running in the background.

Benchmarking an autoconf/GCC project is difficult and inflexible, so I wrote a simple script to simulate that kind of load. In order to minimise the effects of cache pollution, all processes run the same synthetic tasks, which consist of either an empty loop or a loop that echoes a fixed string to /dev/null. Each test finishes after a fixed number of iterations, and the elapsed wall-clock time is displayed at the end.
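The script itself is not reproduced in this article, but based on the description and the invocations further down, a reconstruction could look roughly like this (this is my own sketch, not the original script):

```shell
#!/bin/bash
# synthetic-task.sh: hypothetical reconstruction of the load generator.
# Usage: ./synthetic-task.sh <processes> <children> <iterations> <task>
#   <processes>  number of parallel worker processes
#   <children>   short-lived child processes each worker spawns in
#                sequence (0 = run the loop directly, a "stable" task)
#   <iterations> loop iterations per task
#   <task>       empty_loop or echo_dev_null_loop

empty_loop() {
  local i
  for (( i = 0; i < $1; i++ )); do :; done
}

echo_dev_null_loop() {
  local i
  for (( i = 0; i < $1; i++ )); do echo "some fixed string" >/dev/null; done
}
export -f empty_loop echo_dev_null_loop

worker() {
  if (( CHILDREN == 0 )); then
    "$TASK" "$ITERATIONS"
  else
    local c
    for (( c = 0; c < CHILDREN; c++ )); do
      bash -c "$TASK $ITERATIONS"   # one short-lived subprocess per call
    done
  fi
}

PROCESSES=${1:-2} CHILDREN=${2:-0} ITERATIONS=${3:-1000} TASK=${4:-empty_loop}

SECONDS=0
for (( p = 0; p < PROCESSES; p++ )); do worker & done
wait
echo "Elapsed wall-clock time: $SECONDS seconds"
```

The `bash -c` call is what makes the "many short-lived subprocesses" variant expensive: each invocation forks and initialises a whole new shell, just like autoconf does.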

Factors to consider are:
 * Number of parallel background processes.
 * Background process priority, which is a combination of nice level and scheduling policy. SCHED_IDLE (chrt --idle) seems to be slightly better than SCHED_BATCH (chrt --batch).
 * Type of background task (empty loop, "echo >/dev/null" loop, SETI@home/BOINC).
 * Number of parallel foreground processes.
 * Type of foreground task.
 * Number of sequential child-process invocations in the foreground processes.
 * OS scheduler, like Linux O(1) or CFS.
 * OS power manager configuration, like the Linux ondemand governor with optional setting ignore_nice_load.
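The ignore_nice_load setting mentioned above can be queried and changed through sysfs. A sketch using the standard Linux path for the ondemand governor, which is only present while that governor is active:

```shell
#!/bin/bash
# Check the ondemand governor's ignore_nice_load flag. When set to 1,
# niced background load no longer makes the governor raise the clock
# frequency, keeping fan noise and consumption down.
f=/sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load
if [ -f "$f" ]; then
  echo "ignore_nice_load is currently: $(cat "$f")"
  # As root: echo 1 > "$f"
else
  echo "The ondemand governor is not active on this system."
fi
```

The laptop measurements earlier in this article show what this flag buys you: 30 W instead of 38 W under the same SETI@home load.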

I did no proper scientific research, but just looked at a few scenarios. This is what I learnt: if you are running stable foreground processes, you do not lose too much performance to low-priority background processes. However, if your foreground tasks start many short-lived child processes, then performance suffers, as the low-priority processes end up getting a sizeable amount of CPU time, even though they actually should not. You can observe this behaviour with your system's process monitor tool. I do not yet know of any easy system configuration tweak to prevent it.

Check out section "Measurements on my home computers" above for the system configuration. Under Microsoft Windows, I was using the Cygwin environment.

Foreground task with many short-lived subprocesses
The following foreground task starts as many parallel processes as logical CPU cores. Each process executes many short-lived subprocesses sequentially:

Linux:

./synthetic-task.sh $(getconf _NPROCESSORS_ONLN) 1000 30 echo_dev_null_loop

Cygwin under Windows (slower fork support, so fewer iterations):

./synthetic-task.sh $(getconf _NPROCESSORS_ONLN) 100 30 echo_dev_null_loop

Running the following long, parallel background task at the same time causes the workload above to take 140 % longer (that is, it takes more than twice the time) to complete on my Linux laptop, and 60 % longer on my Windows PC:

Linux:

chrt --idle 0 nice -n 19 ./synthetic-task.sh $(getconf _NPROCESSORS_ONLN) 0 100000000 empty_loop

Cygwin under Windows (no support for chrt):

nice -n 19 ./synthetic-task.sh $(getconf _NPROCESSORS_ONLN) 0 100000000 empty_loop

Running a parallel SETI@home BOINC project in the background makes the foreground task take 155 % longer to complete on my Linux laptop and 70 % longer on my Windows PC.

Stable foreground task
The following foreground task does not run any child processes:

./synthetic-task.sh $(getconf _NPROCESSORS_ONLN) 0 30000 echo_dev_null_loop

With this workload, the performance impact is much lower. On the Windows PC, the difference is small, but not so on the Linux laptop, where running the low-priority synthetic task or SETI@home in the background yields 70 % longer execution times. You can see with KDE's System Monitor that processes with a "Niceness" of "(Batch) 19" are still getting their fair share of CPU time, even though other processes with a "Niceness" of "0" are permanently ready to run.

Interestingly enough, this did not happen on a second Linux PC with a 4-core Intel Core i5 CPU running the same operating system version. On that second system, short foreground tasks often (but not always) take longer to run, but long-running foreground tasks do get all of the available CPU time, and background low-priority tasks no longer run, as expected.

I did one more test on that platform with both the synthetic task and SETI@home as concurrent background loads, and I noticed interesting effects. First of all, Emacs stuttered while I was typing this text. Then, I realised that the SETI@home processes were running with a scheduling policy of SCHED_BATCH, instead of SCHED_IDLE, so I restarted the synthetic task with the same SCHED_BATCH priority. Emacs then stuttered less, which does not make sense. However, CPU time was still not fairly allocated across all processes. The SETI@home tasks were getting most of the CPU time, and just one of the synthetic tasks was getting about the same percentage of CPU time every now and then, even though all processes had the same scheduling priority and nice level.

After starting the foreground task, it initially shared CPU time fairly with the SETI@home processes, even though the latter had a much lower priority. After a few seconds, SETI@home stopped getting any CPU time. The BOINC manager seemed to notice this and displayed the "pause" icon, as if I had paused the project manually with menu "Activity/Suspend". However, the "Status" column did not say why the project was paused. Stopping the foreground task made the pause icon go away within a few seconds.

The upshot is: if you are running background low-priority tasks, your normal processes will execute more slowly than expected during the first seconds. Exactly how long depends on some scheduler heuristics. The behaviour of the Linux CPU scheduler is definitely too unpredictable for my liking.

Tweaking the OS scheduler
If you do not like the behaviour of your OS scheduler, you are out of luck. I know of no OS yet that offers an easy scheduling configuration dialog.

Linux has switched to different schedulers in the past, but it does not look like you can choose one at runtime. You can change the CPU frequency governor, but that is no easy task either (see below for more information).

Impact on GPU loads
The BOINC FAQ entry "Nvidia CUDA & ATI Stream (CAL) FAQ" mentions stutters and slow-downs when watching DVDs, using a TV tuner, running a 3D screensaver or playing games. I guess desktop visual effects may be affected too. The advice is then to stop using the GPU for BOINC calculations.

There is no automatic detection of normal-priority graphics card activity in order to back off and let foreground tasks run faster.

CPU usage meters rendered useless
None of the CPU usage meters I know of, whether stand-alone applications or taskbar widgets, have an option to ignore low-priority processes. After installing BOINC they all show 100 % CPU usage all the time, so that I cannot tell at a glance whether the system is busy with normal processes or not.
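There is, however, enough information in /proc/stat to script a rough meter by hand, because the kernel accounts time spent in positively-niced processes in a separate field of the aggregate "cpu" line. The following sketch ignores the iowait, irq and softirq fields, so it is only an approximation:

```shell
# Print the percentage of CPU time spent in normal-priority processes
# over a 1-second window, ignoring positively-niced (low-priority) ones.
# The first line of /proc/stat is: cpu <user> <nice> <system> <idle> ...
cpu_usage_ignoring_nice() {
  local discard u1 n1 s1 i1 u2 n2 s2 i2 rest
  read -r discard u1 n1 s1 i1 rest < /proc/stat
  sleep 1
  read -r discard u2 n2 s2 i2 rest < /proc/stat

  # Time spent in normal-priority user and system code:
  local busy=$(( (u2 - u1) + (s2 - s1) ))
  # Approximate total, deliberately counting niced time as "not busy":
  local total=$(( busy + (n2 - n1) + (i2 - i1) ))
  (( total > 0 )) || total=1  # avoid division by zero

  echo $(( 100 * busy / total ))
}

echo "CPU usage excluding niced processes: $(cpu_usage_ignoring_nice) %"
```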

If you know a better CPU meter, please drop me a line.

Conclusion: BOINC should back off more quickly
BOINC should back off more quickly when user processes want to run. If the computer is being used at the moment, BOINC should not attempt to use all remaining CPU resources, as that unduly increases reaction time for important tasks because of the resource contention and scheduling issues described above.

Unfortunately, as described in section ["Computing preferences" issues] above, BOINC is very sluggish to react. In the case of the GPU, I believe this feature is not implemented at all.

Taming your CPU frequency
For normal users, the best solution for the energy consumption, heat and fan noise problems is rather simple: do not let the CPU ramp up its core clock frequency when running background calculations. The CPU should only run full speed when the user really needs the processing power right now.

Note that halving the CPU core speed may not actually mean halving your BOINC contributions, because not all system components run at the core speed. For example, RAM is often a performance bottleneck and almost always runs at a fixed, lower frequency.

Other unsatisfactory workarounds
If you find your energy consumption or fan noise levels excessive, the BOINC documentation suggests a few ways to deal with this problem, and so do other similar platforms, but I found most of their advice unsatisfactory.


 * Cleaning your fan. I would not risk opening my PC for this reason. On laptops, this often means taking the whole thing apart, which can take a skilled technician more than one hour of work.


 * Controlling CPU speed based on the temperature. BOINC runs all the time, so, in practice, this will permanently reduce your system's performance. You probably do not mind your laptop getting hot and loud if you are waiting for a compilation to finish. And this will not help much with your electricity bill either.


 * Limiting the processor speed. This will permanently reduce your system's performance in all situations. Besides, on some systems, it is not so easy. You can usually limit your processor's maximum speed in the BIOS, but you will often be confronted with arcane settings like "frequency multipliers". On Linux, you will have to edit weird kernel configuration files. On Microsoft Windows, you can lower the maximum CPU speed to some percentage value. This setting is hidden under "power management", "advanced", and so on. The situation is worse with GPUs. At the very least, you will have to learn manufacturer-specific tools.


 * Limiting the CPU usage in BOINC. In BOINC, you can set a percentage limit to the CPU usage. However, CPU usage management is not fine-grained enough, so this option makes the CPU bounce between 0% and 100% in one-second intervals. Modern CPUs change power states very quickly, so the processor will probably switch between maximum efficiency (sleeping) and maximum inefficiency (full speed). In the end, this does not yield an optimal electricity consumption/performance ratio.


 * Limiting the number of CPU cores in BOINC. I guess most scientific projects in BOINC benefit greatly from parallelism, so setting a limit here will severely cut your contribution. Besides, some cores will run at maximum efficiency (sleeping) while others run at maximum inefficiency (full throttle), which is not an ideal combination.

Controlling the CPU frequency under Microsoft Windows
In order to see the current CPU frequency, you can use the Open Hardware Monitor tool from openhardwaremonitor.org. It looks like you do not need elevated privileges to display the current CPU frequency.

When idle, the CPU frequency will probably drop below 1 GHz, and will immediately increase when under load. If the frequency never drops, your computer will consume more electricity than necessary. Check the energy settings in the Windows Control Panel. If they look alright, the corresponding hardware support may have been disabled in the BIOS.

Power saving support often gets neglected, especially in older PCs, so there may be a reason why it is disabled on your computer. Make a note that you have recently enabled it, in case your PC starts to misbehave over the next few days.

Unfortunately, and as far as I know, there is no way under Windows to dynamically adjust the CPU frequency based on the current process priority. You can only lower the maximum frequency under load.

Controlling the CPU frequency under Linux
Modern Linux systems normally use the ondemand CPU frequency governor, which offers more flexibility. Unfortunately, I have not yet seen any easy-to-use user interface that lets you adjust the settings with the mouse. Beware that some of the file paths under /sys in the examples below have changed in the last years. I have tested the following code snippets under Kubuntu 14.04.

First of all, check out what governor your system is using:

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

If it is not ondemand, you are on your own. I have heard that modern Intel chips use the intel_pstate driver, which has its own built-in governor that may not support this kind of frequency control. If you know better, please drop me a line. You could switch to the ondemand governor, but it may not be so efficient for your particular processor.
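If you want to try the ondemand governor anyway, the switch can be done through the same sysfs files. In the sketch below, the helper function takes the sysfs root as an optional parameter purely so that it can be exercised against a test directory; in real use, run it as root against the default /sys:

```shell
# Show the active frequency scaling driver (for example acpi-cpufreq
# or intel_pstate), if the cpufreq interface is present:
[ -e /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver ] &&
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver

# Set the given governor on every core. The sysfs root parameter exists
# only to make the function testable against a fake directory tree;
# normally you call it as root with the default /sys.
set_governor_all_cpus() {
  local governor="$1" sysfsRoot="${2:-/sys}" f
  for f in "$sysfsRoot"/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    [ -e "$f" ] || continue
    echo "$governor" > "$f"
  done
}
```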

Next step is to find out what the minimum and maximum frequencies are:

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq

Leave the following command running on a separate window in order to monitor the current frequency in realtime:

watch --interval 0.2 cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq

You need to set configuration option ignore_nice_load to 1, so that low-priority processes do not make the CPU frequency increase:

sudo bash -c "echo 1 > /sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load"

Start your BOINC project and check whether the CPU frequency stays low.

Making the change to ignore_nice_load permanent
Making the change to ignore_nice_load permanent is rather tricky under Ubuntu. The governor changes from 'performance' to 'ondemand' one minute after the system starts (see /etc/init.d/ondemand), so any settings modified on start-up (for example, with /etc/rc.local) will not work.

The easiest way is probably with package sysfsutils. The steps are:


 * Disable the ondemand script in /etc/init.d with:

sudo update-rc.d ondemand disable


 * Install package sysfsutils with:

sudo apt-get install sysfsutils


 * Edit file /etc/sysfs.conf as root. For example:

sudoedit /etc/sysfs.conf


 * Add the following lines to /etc/sysfs.conf:

# One line per CPU here:
devices/system/cpu/cpu0/cpufreq/scaling_governor = ondemand
devices/system/cpu/cpu1/cpufreq/scaling_governor = ondemand
devices/system/cpu/cpufreq/ondemand/ignore_nice_load = 1


 * Optionally, revert the change to ignore_nice_load, and then test the permanent method with:

sudo /etc/init.d/sysfsutils start

System-wide consequences of ignore_nice_load
Setting ignore_nice_load has consequences. Linux has three priority mechanisms that do not quite fit together:


 * The nice method is a POSIX standard and affects both CPU and disk priority.
 * The scheduling policy (see chrt) is Linux specific and only affects CPU priority.
 * The ionice method is Linux specific and only affects disk priority.

Interaction among these methods is not properly documented for the standard end-user.

The trouble is, setting ignore_nice_load only works with the nice mechanism and completely ignores the scheduling policy. Any nice level greater than 0 will keep the CPU frequency low, although a better threshold would probably have been the lowest-priority nice level of 19.

Anything running with the slightest positive nice level will run more slowly. Some system-wide processes typically run with lower-than-normal priorities. For example, pulseaudio usually starts background processes with a nice level of 10. Depending on the software you use, you may not notice any difference, but it is easy to forget that ignore_nice_load has been set in the past, and then you may wonder why your background compilation is taking longer to run. You have to remember to use chrt and ionice instead of nice, but that may not be feasible with some precompiled software packages that are hard-coded to the POSIX standard.
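For example, a background compilation can be given low CPU and disk priority without touching its nice level. In the sketch below, make -j4 is just a placeholder for whatever you want to run:

```shell
# Instead of: nice -n 19 make -j4
# use the Linux-specific mechanisms, which ignore_nice_load does not affect:
#
#   chrt --batch 0 ionice --class 3 make -j4
#
# chrt --batch selects the SCHED_BATCH CPU policy (the nice level stays
# at 0), and ionice --class 3 selects the "idle" disk I/O class.
# The same invocation with a trivial command, to show that it works
# without elevated privileges:
chrt --batch 0 ionice --class 3 echo "running with low CPU and disk priority"
```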

At first, I thought that the fixed nice level was an oversight in the ondemand governor, so I took a look at its source code, namely file cpufreq_ondemand.c. I then followed the call tree down to kernel/sched/cputime.c and found the following code snippet:

if (TASK_NICE(p) > 0) {
	cpustat[CPUTIME_NICE] += (__force u64) cputime;
	cpustat[CPUTIME_GUEST_NICE] += (__force u64) cputime;
} else {
	cpustat[CPUTIME_USER] += (__force u64) cputime;
	cpustat[CPUTIME_GUEST] += (__force u64) cputime;
}

It looks like the kernel is not recording CPU usage separately per nice level. If you look at the manpage for /proc/stat you will find a similar situation. If the kernel is not discriminating between nice levels, then there is no way a governor can act differently just for nice level 19.

I guess the Linux kernel developers have not yet woken up to BOINC's energy-consumption issues.

A possible work-around would be to write a userspace governor that monitors all running processes and their priorities, in order to make smarter decisions. However, constantly running a more complex governor algorithm without suitable kernel support would probably mean paying a higher performance penalty.

Controlling your GPU load
Apparently, there is no generic way to control the GPU load or operating frequency. Graphics card manufacturers will probably offer their own tools for the most popular operating systems. I haven't seen any easy guide on this subject yet, and I must admit that I haven't got the time or the inclination to do any further research.

If you have any tips on this, please drop me a line.

The Android BOINC client
I haven't actually got any experience yet with the Android BOINC client, but the situation there looks much more promising.

If you have an Android mobile phone, charge it every night and have WLAN at home, contributing to BOINC should be a much more pleasant experience.

Smartphones are quite fast and very power efficient. Many people update their smartphones every few years, so their efficiency does not lag many years behind. And smartphones do not have noisy fans.