scx_bpfland: improve cpufreq awareness #600

Merged
merged 4 commits into main from bpfland-cpufreq
Sep 5, 2024

Conversation

arighi (Contributor) commented on Sep 3, 2024

Provide basic cpufreq control based on the selected EPP and the scheduler's performance profile.

Improve turbo boost CPU selection logic.

Add hints for the cpufreq governor based on the selected scheduler's
performance profile and the current energy performance preference (EPP).

With this change applied, the scheduler works as follows (a small sketch of the auto-mode mapping is included after this commit message):

scheduler profile (--primary-domain option):
  - default:
    - use all cores
    - cpufreq: use default scaling factor
  - powersave:
    - use E-cores
    - cpufreq: use min scaling factor
  - performance:
    - use P-cores
    - cpufreq: use max scaling factor
  - auto:
    - EPP: power, powersave
      - use E-cores
      - cpufreq: use min scaling factor
    - EPP: balance_power (typically battery-powered systems)
      - use E-cores
      - cpufreq: use default scaling factor
    - EPP: balance_performance, performance
      - use P-cores
      - cpufreq: use max scaling factor

Signed-off-by: Andrea Righi <[email protected]>
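
A minimal sketch of the "auto" mode mapping described in the list above, written as plain C for illustration. All names and numeric values here are assumptions (the actual scx_bpfland code is split between the Rust user-space part and the BPF scheduler and may use different values); SCX_CPUPERF_ONE is the maximum performance level in the sched_ext cpuperf scale.

```c
#include <string.h>

/* Maximum cpuperf level in the sched_ext scale (1024 in the kernel). */
#define SCX_CPUPERF_ONE		1024

/* Hypothetical hint derived from the current EPP in "auto" mode. */
struct profile_hint {
	int use_pcores;		/* 1: prefer P-cores, 0: prefer E-cores */
	unsigned int cpuperf;	/* scaling factor hint for the cpufreq governor */
};

static struct profile_hint epp_to_hint(const char *epp)
{
	/* power, powersave: E-cores + minimum scaling factor */
	if (!strcmp(epp, "power") || !strcmp(epp, "powersave"))
		return (struct profile_hint){ .use_pcores = 0, .cpuperf = 0 };

	/* balance_power (typically battery-powered): E-cores + default scaling
	 * factor (the mid-scale value here is an illustrative placeholder). */
	if (!strcmp(epp, "balance_power"))
		return (struct profile_hint){ .use_pcores = 0,
					      .cpuperf = SCX_CPUPERF_ONE / 2 };

	/* balance_performance, performance: P-cores + maximum scaling factor */
	return (struct profile_hint){ .use_pcores = 1,
				      .cpuperf = SCX_CPUPERF_ONE };
}
```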
When a task is changing its CPU affinity, it is pointless to try to find an optimal idle CPU. In this case, just skip the idle CPU selection step and let the task be dispatched to a global DSQ if needed.

Signed-off-by: Andrea Righi <[email protected]>
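
A hedged sketch of the idea behind this change. The shortcut condition shown here (prev_cpu no longer allowed) is an illustration, not necessarily the exact check used by scx_bpfland; scx_bpf_pick_idle_cpu() and bpf_cpumask_test_cpu() are standard BPF/sched_ext helpers.

```c
/*
 * Illustrative helper (not the actual scx_bpfland code): if the task's
 * affinity just changed and its previous CPU is no longer allowed, skip
 * the idle-CPU search entirely; the caller can then dispatch the task to
 * a global/shared DSQ and let any allowed CPU pick it up.
 */
static s32 pick_idle_cpu(struct task_struct *p, s32 prev_cpu)
{
	if (!bpf_cpumask_test_cpu(prev_cpu, p->cpus_ptr))
		return -ENOENT;

	/* ... normal idle-CPU selection (same LLC, turbo domain, ...) ... */
	return scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);
}
```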
Always consider the turbo domain when running in "auto" mode.

Additionally, when the turbo domain is used, split the CPU idle
selection logic into two stages:
 1) in ops.select_cpu(), provide the task with a second opportunity to
    remain within the same LLC
 2) in ops.enqueue(), perform another check for an idle CPU, allowing
    the task to move to a different LLC if an idle CPU within the same
    LLC is not available.

This allows tasks to stick more to turbo-boosted CPUs and to CPUs within
the same LLC.

Signed-off-by: Andrea Righi <[email protected]>
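
A hedged sketch of the two-stage logic above. llc_cpumask() is a hypothetical helper standing in for the scheduler's per-LLC cpumask and the struct_ops names are illustrative; the rest uses standard sched_ext kfuncs (scx_bpf_pick_idle_cpu(), scx_bpf_dispatch(), scx_bpf_kick_cpu()).

```c
/* Stage 1 - ops.select_cpu(): give the task a chance to stay in its LLC. */
s32 BPF_STRUCT_OPS(sched_select_cpu, struct task_struct *p,
		   s32 prev_cpu, u64 wake_flags)
{
	/* llc_cpumask() is a hypothetical helper: CPUs sharing prev_cpu's LLC. */
	s32 cpu = scx_bpf_pick_idle_cpu(llc_cpumask(prev_cpu), 0);

	if (cpu >= 0) {
		/* Idle CPU in the same LLC: dispatch directly to its local DSQ. */
		scx_bpf_dispatch(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL, 0);
		return cpu;
	}
	return prev_cpu;	/* no idle CPU in the LLC: decide later, in enqueue */
}

/* Stage 2 - ops.enqueue(): widen the search, allowing a cross-LLC move. */
void BPF_STRUCT_OPS(sched_enqueue, struct task_struct *p, u64 enq_flags)
{
	s32 cpu = scx_bpf_pick_idle_cpu(p->cpus_ptr, 0);

	/* Queue on the global DSQ and wake the idle CPU so it can pull the task. */
	scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
	if (cpu >= 0)
		scx_bpf_kick_cpu(cpu, 0);
}
```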
(Review thread on the code around `let energy_pref_path = ...` and `res.unwrap_or_else(|_| "none".to_string())`)
A reviewer (Contributor) commented:

At some point maybe we can pull some of the cpu frequency governor handling into the topology helpers.

arighi (author) replied:

Yes, that's a good idea. I'll send a separate PR for that.

BTW, unrelated but similar topic: it would be nice to also include amd_pstate_highest_perf in the turbo cores logic.

htejun (Contributor) commented on Sep 3, 2024

There's no "auto" mode when SCX scheduler is loaded and schedutil governor is used (there's no signal that the kernel side can use or even know). If you don't change anything, it'll just remain at the last setting.

arighi (author) commented on Sep 3, 2024

Ah, I was missing that; then I always need to set a value, I can't just ignore it (unless I actually want to keep the previous value). OK, thanks @htejun, I'll change the logic to take that aspect into account.

In auto mode, rather than keeping the previous fixed cpuperf factor,
dynamically calculate it based on CPU utilization and apply it before a
task runs within its allocated time slot.

Interactive tasks consistently receive the maximum scaling factor to
ensure optimal performance.

Signed-off-by: Andrea Righi <[email protected]>
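
A hedged sketch of what this could look like on the BPF side, assuming the hint is refreshed from ops.running(). is_task_interactive() and cpu_util() are hypothetical helpers (the actual utilization tracking in scx_bpfland may differ), while scx_bpf_task_cpu() and scx_bpf_cpuperf_set() are standard sched_ext kfuncs.

```c
void BPF_STRUCT_OPS(sched_running, struct task_struct *p)
{
	s32 cpu = scx_bpf_task_cpu(p);
	u64 perf;

	if (is_task_interactive(p))
		/* Interactive tasks always get the maximum scaling factor. */
		perf = SCX_CPUPERF_ONE;
	else
		/* Otherwise scale with the tracked CPU utilization
		 * (a value in the 0..SCX_CPUPERF_ONE range). */
		perf = cpu_util(cpu);

	/* Apply the cpufreq hint before the task starts running on this CPU. */
	scx_bpf_cpuperf_set(cpu, perf);
}
```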
arighi merged commit afc7b54 into main on Sep 5, 2024 (1 check failed)
arighi deleted the bpfland-cpufreq branch on September 5, 2024 at 05:32