Why does sleeping increase the execution time of an independent piece of code that is executed before/afterwards?
I noticed something really strange that I haven't seen before. The basic setup is described in this pseudo code:
TARGET_LOOP_TIME = X
loop forever:
    before = now()
    payload()
    payload_time = now() - before
    sleep(TARGET_LOOP_TIME - payload_time)
This setup is fairly common, e.g. for keeping a loop at 60 FPS. The interesting part is: the payload_time depends on the sleep duration! If TARGET_LOOP_TIME is high, and the program thus sleeps a lot, the payload_time is way higher than when the program does not sleep at all.
To measure this, I wrote this program:
use std::time::{Duration, Instant};

const ITERS: usize = 100;

fn main() {
    // measurement loop (see the sketch below)
}

/// Dummy function: counts primes below `up_to` by naive trial division.
fn count_primes(up_to: u64) -> u64 {
    (2..up_to).filter(|&n| (2..n).all(|d| n % d != 0)).count() as u64
}
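The loop in main is not shown in full above; a minimal sketch that matches the pseudocode and the printed output (reusing ITERS and count_primes from above; the 10_000 workload size is only an assumed value) would look something like this:

fn main() {
    // Try target loop times from 1ms to 30ms, as in the output below.
    for target_ms in 1..30u64 {
        let target_loop_time = Duration::from_millis(target_ms);
        let mut total = Duration::new(0, 0);

        for _ in 0..ITERS {
            let before = Instant::now();
            count_primes(10_000); // payload; the argument is an assumed value
            let payload_time = before.elapsed();
            total += payload_time;

            // Sleep away whatever is left of the target loop time.
            if payload_time < target_loop_time {
                std::thread::sleep(target_loop_time - payload_time);
            }
        }

        let avg = total / ITERS as u32;
        println!(
            "loop_time {:.2}ms => {:.2}ms",
            target_loop_time.as_secs_f64() * 1000.0,
            avg.as_secs_f64() * 1000.0,
        );
    }
}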
I iterate over different target loop times (1ms to 30ms) and repeat each measurement ITERS times. I compiled this with cargo run --release. On my machine (Ubuntu), the program outputs:
loop_time 1.00ms => 3.37ms
loop_time 2.00ms => 3.38ms
loop_time 3.00ms => 3.17ms
loop_time 4.00ms => 3.25ms
loop_time 5.00ms => 3.38ms
loop_time 6.00ms => 4.05ms
loop_time 7.00ms => 4.09ms
loop_time 8.00ms => 4.48ms
loop_time 9.00ms => 4.43ms
loop_time 10.00ms => 4.22ms
loop_time 11.00ms => 4.59ms
loop_time 12.00ms => 5.53ms
loop_time 13.00ms => 5.82ms
loop_time 14.00ms => 6.18ms
loop_time 15.00ms => 6.32ms
loop_time 16.00ms => 6.96ms
loop_time 17.00ms => 8.00ms
loop_time 18.00ms => 7.97ms
loop_time 19.00ms => 8.28ms
loop_time 20.00ms => 8.75ms
loop_time 21.00ms => 9.70ms
loop_time 22.00ms => 9.57ms
loop_time 23.00ms => 10.48ms
loop_time 24.00ms => 10.29ms
loop_time 25.00ms => 10.31ms
loop_time 26.00ms => 10.82ms
loop_time 27.00ms => 10.84ms
loop_time 28.00ms => 10.82ms
loop_time 29.00ms => 10.91ms
I made a plot of those numbers; the sleep_time in the plot is max(0, loop_time - avg_duration).
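For example, at loop_time 20.00ms the measured average is 8.75ms, so the plotted sleep_time is max(0, 20.00 - 8.75) = 11.25ms; for the first rows, where the average payload time exceeds the target, the sleep_time is 0.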
When the program does not sleep at all, the payload needs approximately 3.3ms (as the first three measurements show). As soon as the loop starts to sleep after the payload, the payload duration rises! In fact, it increases to roughly 10.5ms where it stays. Sleeping even longer does not increase the payload time.
Why? Why does the execution time of this piece of code depend on something I do afterwards (or before)? That does not make sense to me! It looks like the CPU says "I'm gonna sleep afterwards anyway, so let's take it slowly". I thought about caching effects, especially the instruction cache, but loading instructions from main memory does not take 7ms! Something else is going on here.
Is there a way to fix this, i.e. to make the payload execute as quickly as possible, regardless of the sleep time?
performance rust
Couldn't the compiler optimize out count_primes? It seems to be a pure function and is called with the same parameters every iteration. It should also be possible to compute x once at compile time.
– Jens
Jul 7 at 13:25
@Jens That's a good point. It doesn't seem like it's optimized out, but yeah, I should fix that.
– Lukas Kalbertodt
Jul 7 at 13:29
Note for everyone: it seems like this is due to CPU frequency scaling, and it doesn't happen on all machines. This is still not a full answer, however, and I hope someone can provide more information.
– Lukas Kalbertodt
Jul 7 at 13:29
1 Answer
I am quite sure that this is caused by CPU frequency throttling. When the OS scheduler detects that there is little work to do, it lowers the CPU frequency to save power.
When you sleep a lot, you are telling the scheduler that you are not in a hurry, and the CPU can take it easy.
You can see that this is the case by running a CPU-intensive task in another window with low priority. For example, on Linux you can run:
$ nice bash -c 'while true ; do : ; done'
And at the same time, in another window run your program:
$ cargo run --release
loop_time 1.00ms => 3.13ms
loop_time 2.00ms => 3.17ms
loop_time 3.00ms => 3.19ms
loop_time 4.00ms => 3.13ms
loop_time 5.00ms => 3.16ms
loop_time 6.00ms => 3.22ms
loop_time 7.00ms => 3.14ms
loop_time 8.00ms => 3.15ms
loop_time 9.00ms => 3.13ms
loop_time 10.00ms => 3.18ms
loop_time 11.00ms => 3.14ms
loop_time 12.00ms => 3.17ms
loop_time 13.00ms => 3.15ms
...
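Another way to check this is to sample the current core frequency from sysfs while the loop runs. A minimal sketch, assuming Linux with the cpufreq interface exposed (the cpu0 path and the commented call sites are just illustrations):

use std::fs;

/// Current frequency of cpu0 in kHz, read from the cpufreq sysfs interface (Linux only).
fn current_freq_khz() -> Option<u64> {
    fs::read_to_string("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")
        .ok()?
        .trim()
        .parse()
        .ok()
}

// Inside the benchmark loop, e.g.:
// println!("freq before payload: {:?} kHz", current_freq_khz());
// count_primes(10_000);
// println!("freq after payload:  {:?} kHz", current_freq_khz());

If the reported frequency is much lower for the long sleep times than for the short ones, frequency scaling is the culprit.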
Avoiding this depends on your OS. For example, on Linux you can fiddle with the /sys/devices/system/cpu/* options. I think that UPower provides some functions to manage this from a non-root application. It would be nice if there were a crate that managed this across systems, but I don't know of any.
An easy but hacky way to fix this, if you don't mind the wasted power, is just to run an idle thread with a busy loop.
std::thread::spawn(|| {
    use thread_priority::*; // external crate thread-priority
    let thread_id = thread_native_id();
    set_thread_priority(
        thread_id,
        ThreadPriority::Min,
        ThreadSchedulePolicy::Normal(NormalThreadSchedulePolicy::Idle),
    )
    .unwrap();
    loop {} // spin forever at idle priority, just to keep the CPU busy
});
Naturally, if you just want to avoid throttling in this piece of code, you can do a busy wait:
// if loop_time > elapsed {
//     std::thread::sleep(loop_time - elapsed);
// }

// Busy-wait the remaining time, to avoid CPU throttling
while loop_time > before.elapsed() {
    // you may want to try both with and without yield
    std::thread::yield_now();
}
Thanks! The CPU frequency thing makes sense. Regarding the solution: instead of a busy thread I could probably just busy wait instead of sleeping in my main thread.
– Lukas Kalbertodt
Jul 7 at 13:59
I have an additional question though (in case someone can answer this): according to this paper (if I understand it correctly), the latency to change between two CPU frequencies is fairly low (<100µs). So I would expect the CPU to increase the frequency fairly quickly after my payload has started. Why isn't that happening?
– Lukas Kalbertodt
Jul 7 at 14:00
@LukasKalbertodt: It is not so easy. You seem to think that the CPU goes from 100% power to 0% and then back to 100%, and that the time you lose is the latency of that switch. The switch latency is definitely a thing, but your main issue is frequency throttling: the OS scheduler sees that your code is busy only, say, 30% of the time, so it can reduce the CPU frequency to 30% of its maximum without losing overall performance, while saving a lot of power.
– rodrigo
Jul 7 at 14:05
@LukasKalbertodt: In fact, in your graph you can see that the OS is actually trying to predict how long you are going to sleep, and goes to the lowest power that still lets you sleep just a bit. Only up to a point, of course: the CPU frequency has a minimum limit.
– rodrigo
Jul 7 at 14:07
@LukasKalbertodt These settings are OS-specific. On Linux, you can echo performance > /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; on Windows, you select something in the power saving settings. There is also Turbo Boost, which dynamically increases the maximum frequency of a core if there is thermal budget. I would recommend disabling power scaling and Turbo Boost in the BIOS for benchmarking.
– Jens
Jul 7 at 14:53