
Commit ae28d1a

Fenghua Yu authored and suryasaimadhu committed
x86/resctrl: Use an IPI instead of task_work_add() to update PQR_ASSOC MSR
Currently, when moving a task to a resource group the PQR_ASSOC MSR is updated with the new closid and rmid in an added task callback. If the task is running, the work is run as soon as possible. If the task is not running, the work is executed later in the kernel exit path when the kernel returns to the task again.

Updating the PQR_ASSOC MSR as soon as possible on the CPU a moved task is running on is the right thing to do. Queueing work for a task that is not running is unnecessary (the PQR_ASSOC MSR is already updated when the task is scheduled in) and wastes system resources with the way in which it is implemented: work to update the PQR_ASSOC register is queued every time the user writes a task id to the "tasks" file, even if the task already belongs to the resource group.

This could result in multiple pending work items associated with a single task even if they are all identical and even though only a single update with the most recent values is needed. Specifically, even if a task is moved between different resource groups while it is sleeping, only the last move is relevant, yet a work item is queued during each move. This unnecessary queueing of work items can waste significant system resources, especially for tasks that sleep for a long time. For example, as demonstrated by Shakeel Butt in [1], repeatedly writing the same task id to the "tasks" file can quickly consume significant memory. The same problem (wasted system resources) occurs when moving a task between different resource groups.

As pointed out by Valentin Schneider in [2], there is an additional issue with the way in which the queueing of work is done: the task_struct update is currently done after the work is queued, resulting in a race where the register update may run before the data it needs is available.
To solve these issues, update the PQR_ASSOC MSR in a synchronous way right after the new closid and rmid are ready during the task movement, only if the task is running. If a moved task is not running, nothing is done since the PQR_ASSOC MSR will be updated next time the task is scheduled in. This is the same way used to update the register when tasks are moved as part of resource group removal.

[1] https://lore.kernel.org/lkml/CALvZod7E9zzHwenzf7objzGKsdBmVwTgEJ0nPgs0LUFU3SN5Pw@mail.gmail.com/
[2] https://lore.kernel.org/lkml/[email protected]

[ bp: Massage commit message and drop the two update_task_closid_rmid() variants. ]

Fixes: e02737d ("x86/intel_rdt: Add tasks files")
Reported-by: Shakeel Butt <[email protected]>
Reported-by: Valentin Schneider <[email protected]>
Signed-off-by: Fenghua Yu <[email protected]>
Signed-off-by: Reinette Chatre <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Tony Luck <[email protected]>
Reviewed-by: James Morse <[email protected]>
Reviewed-by: Valentin Schneider <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/17aa2fb38fc12ce7bb710106b3e7c7b45acb9e94.1608243147.git.reinette.chatre@intel.com
1 parent cb7f4a8 commit ae28d1a

1 file changed: +43 −69 lines changed

arch/x86/kernel/cpu/resctrl/rdtgroup.c

@@ -525,89 +525,63 @@ static void rdtgroup_remove(struct rdtgroup *rdtgrp)
 	kfree(rdtgrp);
 }
 
-struct task_move_callback {
-	struct callback_head	work;
-	struct rdtgroup		*rdtgrp;
-};
-
-static void move_myself(struct callback_head *head)
+static void _update_task_closid_rmid(void *task)
 {
-	struct task_move_callback *callback;
-	struct rdtgroup *rdtgrp;
-
-	callback = container_of(head, struct task_move_callback, work);
-	rdtgrp = callback->rdtgrp;
-
 	/*
-	 * If resource group was deleted before this task work callback
-	 * was invoked, then assign the task to root group and free the
-	 * resource group.
+	 * If the task is still current on this CPU, update PQR_ASSOC MSR.
+	 * Otherwise, the MSR is updated when the task is scheduled in.
 	 */
-	if (atomic_dec_and_test(&rdtgrp->waitcount) &&
-	    (rdtgrp->flags & RDT_DELETED)) {
-		current->closid = 0;
-		current->rmid = 0;
-		rdtgroup_remove(rdtgrp);
-	}
-
-	if (unlikely(current->flags & PF_EXITING))
-		goto out;
-
-	preempt_disable();
-	/* update PQR_ASSOC MSR to make resource group go into effect */
-	resctrl_sched_in();
-	preempt_enable();
+	if (task == current)
+		resctrl_sched_in();
+}
 
-out:
-	kfree(callback);
+static void update_task_closid_rmid(struct task_struct *t)
+{
+	if (IS_ENABLED(CONFIG_SMP) && task_curr(t))
+		smp_call_function_single(task_cpu(t), _update_task_closid_rmid, t, 1);
+	else
+		_update_task_closid_rmid(t);
 }
 
 static int __rdtgroup_move_task(struct task_struct *tsk,
 				struct rdtgroup *rdtgrp)
 {
-	struct task_move_callback *callback;
-	int ret;
-
-	callback = kzalloc(sizeof(*callback), GFP_KERNEL);
-	if (!callback)
-		return -ENOMEM;
-	callback->work.func = move_myself;
-	callback->rdtgrp = rdtgrp;
-
 	/*
-	 * Take a refcount, so rdtgrp cannot be freed before the
-	 * callback has been invoked.
+	 * Set the task's closid/rmid before the PQR_ASSOC MSR can be
+	 * updated by them.
+	 *
+	 * For ctrl_mon groups, move both closid and rmid.
+	 * For monitor groups, can move the tasks only from
+	 * their parent CTRL group.
 	 */
-	atomic_inc(&rdtgrp->waitcount);
-	ret = task_work_add(tsk, &callback->work, TWA_RESUME);
-	if (ret) {
-		/*
-		 * Task is exiting. Drop the refcount and free the callback.
-		 * No need to check the refcount as the group cannot be
-		 * deleted before the write function unlocks rdtgroup_mutex.
-		 */
-		atomic_dec(&rdtgrp->waitcount);
-		kfree(callback);
-		rdt_last_cmd_puts("Task exited\n");
-	} else {
-		/*
-		 * For ctrl_mon groups move both closid and rmid.
-		 * For monitor groups, can move the tasks only from
-		 * their parent CTRL group.
-		 */
-		if (rdtgrp->type == RDTCTRL_GROUP) {
-			tsk->closid = rdtgrp->closid;
+
+	if (rdtgrp->type == RDTCTRL_GROUP) {
+		tsk->closid = rdtgrp->closid;
+		tsk->rmid = rdtgrp->mon.rmid;
+	} else if (rdtgrp->type == RDTMON_GROUP) {
+		if (rdtgrp->mon.parent->closid == tsk->closid) {
 			tsk->rmid = rdtgrp->mon.rmid;
-		} else if (rdtgrp->type == RDTMON_GROUP) {
-			if (rdtgrp->mon.parent->closid == tsk->closid) {
-				tsk->rmid = rdtgrp->mon.rmid;
-			} else {
-				rdt_last_cmd_puts("Can't move task to different control group\n");
-				ret = -EINVAL;
-			}
+		} else {
+			rdt_last_cmd_puts("Can't move task to different control group\n");
+			return -EINVAL;
 		}
 	}
-	return ret;
+
+	/*
+	 * Ensure the task's closid and rmid are written before determining if
+	 * the task is current that will decide if it will be interrupted.
+	 */
+	barrier();
+
+	/*
+	 * By now, the task's closid and rmid are set. If the task is current
+	 * on a CPU, the PQR_ASSOC MSR needs to be updated to make the resource
+	 * group go into effect. If the task is not current, the MSR will be
+	 * updated when the task is scheduled in.
+	 */
+	update_task_closid_rmid(tsk);
+
+	return 0;
 }
 
 static bool is_closid_match(struct task_struct *t, struct rdtgroup *r)
