Commit 429a64f

athira-rajeev authored and mpe committed
powerpc/perf: Only define power_pmu_wants_prompt_pmi() for CONFIG_PPC64
power_pmu_wants_prompt_pmi() is used to decide if PMIs should be taken
promptly. This is valid only for ppc64 and is used only if
CONFIG_PPC_BOOK3S_64=y. Hence include the function under a config check
for PPC64. This fixes the following warning seen in 32-bit compilation:

arch/powerpc/perf/core-book3s.c:2455:6: warning: no previous prototype for 'power_pmu_wants_prompt_pmi'
 2455 | bool power_pmu_wants_prompt_pmi(void)
      |      ^~~~~~~~~~~~~~~~~~~~~~~~~~

Fixes: 5a7745b ("powerpc/64s/perf: add power_pmu_wants_prompt_pmi to say whether perf wants PMIs to be soft-NMI")
Reported-by: kernel test robot <[email protected]>
Signed-off-by: Athira Rajeev <[email protected]>
Reviewed-by: Nicholas Piggin <[email protected]>
[mpe: Move inside existing CONFIG_PPC64 ifdef block]
Signed-off-by: Michael Ellerman <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
1 parent d37823c commit 429a64f

File tree: 1 file changed (+28, -30 lines)

arch/powerpc/perf/core-book3s.c

Lines changed: 28 additions & 30 deletions
@@ -776,6 +776,34 @@ static void pmao_restore_workaround(bool ebb)
 	mtspr(SPRN_PMC6, pmcs[5]);
 }
 
+/*
+ * If the perf subsystem wants performance monitor interrupts as soon as
+ * possible (e.g., to sample the instruction address and stack chain),
+ * this should return true. The IRQ masking code can then enable MSR[EE]
+ * in some places (e.g., interrupt handlers) that allows PMI interrupts
+ * through to improve accuracy of profiles, at the cost of some performance.
+ *
+ * The PMU counters can be enabled by other means (e.g., sysfs raw SPR
+ * access), but in that case there is no need for prompt PMI handling.
+ *
+ * This currently returns true if any perf counter is being used. It
+ * could possibly return false if only events are being counted rather than
+ * samples being taken, but for now this is good enough.
+ */
+bool power_pmu_wants_prompt_pmi(void)
+{
+	struct cpu_hw_events *cpuhw;
+
+	/*
+	 * This could simply test local_paca->pmcregs_in_use if that were not
+	 * under ifdef KVM.
+	 */
+	if (!ppmu)
+		return false;
+
+	cpuhw = this_cpu_ptr(&cpu_hw_events);
+	return cpuhw->n_events;
+}
 #endif /* CONFIG_PPC64 */
 
 static void perf_event_interrupt(struct pt_regs *regs);
@@ -2438,36 +2466,6 @@ static void perf_event_interrupt(struct pt_regs *regs)
 	perf_sample_event_took(sched_clock() - start_clock);
 }
 
-/*
- * If the perf subsystem wants performance monitor interrupts as soon as
- * possible (e.g., to sample the instruction address and stack chain),
- * this should return true. The IRQ masking code can then enable MSR[EE]
- * in some places (e.g., interrupt handlers) that allows PMI interrupts
- * though to improve accuracy of profiles, at the cost of some performance.
- *
- * The PMU counters can be enabled by other means (e.g., sysfs raw SPR
- * access), but in that case there is no need for prompt PMI handling.
- *
- * This currently returns true if any perf counter is being used. It
- * could possibly return false if only events are being counted rather than
- * samples being taken, but for now this is good enough.
- */
-bool power_pmu_wants_prompt_pmi(void)
-{
-	struct cpu_hw_events *cpuhw;
-
-	/*
-	 * This could simply test local_paca->pmcregs_in_use if that were not
-	 * under ifdef KVM.
-	 */
-
-	if (!ppmu)
-		return false;
-
-	cpuhw = this_cpu_ptr(&cpu_hw_events);
-	return cpuhw->n_events;
-}
-
 static int power_pmu_prepare_cpu(unsigned int cpu)
 {
 	struct cpu_hw_events *cpuhw = &per_cpu(cpu_hw_events, cpu);
