
Commit 02b670c

torvalds authored and Ingo Molnar committed
x86/mm: Remove broken vsyscall emulation code from the page fault code
The syzbot-reported stack trace from hell in this discussion thread actually
has three nested page faults:

  https://lore.kernel.org/r/[email protected]

... and I think that's actually the important thing here:

 - the first page fault is from user space, and triggers the vsyscall
   emulation.

 - the second page fault is from __do_sys_gettimeofday(), and that should
   just have caused the exception that then sets the return value to
   -EFAULT

 - the third nested page fault is due to _raw_spin_unlock_irqrestore() ->
   preempt_schedule() -> trace_sched_switch(), which then causes a BPF
   trace program to run, which does that bpf_probe_read_compat(), which
   causes that page fault under pagefault_disable().

It's quite the nasty backtrace, and there's a lot going on.

The problem is literally the vsyscall emulation, which sets

        current->thread.sig_on_uaccess_err = 1;

and that causes the fixup_exception() code to send the signal *despite* the
exception being caught.

And I think that is in fact completely bogus. It's completely bogus exactly
because it sends that signal even when it *shouldn't* be sent - like for
the BPF user mode trace gathering.

In other words, I think the whole "sig_on_uaccess_err" thing is entirely
broken, because it makes any nested page-faults do all the wrong things.

Now, arguably, I don't think anybody should enable vsyscall emulation any
more, but this test case clearly does.

I think we should just make the "send SIGSEGV" be something that the
vsyscall emulation does on its own, not this broken per-thread state for
something that isn't actually per thread.

The x86 page fault code actually tried to deal with the "incorrect nesting"
by having that:

        if (in_interrupt())
                return;

which ignores the sig_on_uaccess_err case when it happens in interrupts,
but as shown by this example, these nested page faults do not need to be
about interrupts at all.

IOW, I think the only right thing is to remove that horrendously broken
code.

The attached patch looks like the ObviouslyCorrect(tm) thing to do.

NOTE! This broken code goes back to this commit in 2011:

  4fc3490 ("x86-64: Set siginfo and context on vsyscall emulation faults")

... and back then the reason was to get all the siginfo details right.
Honestly, I do not for a moment believe that it's worth getting the siginfo
details right here, but part of the commit says:

    This fixes issues with UML when vsyscall=emulate.

... and so my patch to remove this garbage will probably break UML in this
situation.

I do not believe that anybody should be running with vsyscall=emulate in
2024 in the first place, much less if you are doing things like UML. But
let's see if somebody screams.

Reported-and-tested-by: [email protected]
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
Tested-by: Jiri Olsa <[email protected]>
Acked-by: Andy Lutomirski <[email protected]>
Link: https://lore.kernel.org/r/CAHk-=wh9D6f7HUkDgZHKmDCHUQmp+Co89GP+b8+z+G56BKeyNg@mail.gmail.com
1 parent 720a22f commit 02b670c
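
For readers who have never touched the legacy ABI being emulated here, the
sketch below (illustration only, not part of the patch) shows what a caller
of the fixed-address vsyscall page looks like from userspace. It assumes an
x86-64 kernel with vsyscall emulation enabled (e.g. vsyscall=emulate or
vsyscall=xonly); the second call exercises the bad-pointer case whose
SIGSEGV behavior this commit preserves, now sent directly by
emulate_vsyscall() instead of via sig_on_uaccess_err. The macro name and
file name are made up for the demo.

/*
 * Illustration only: exercise the legacy fixed-address vsyscall.
 * Build: gcc -O2 vsys_demo.c -o vsys_demo
 */
#include <stdio.h>
#include <sys/time.h>

/* Fixed legacy vsyscall entry for gettimeofday() on x86-64. */
#define VSYSCALL_GTOD_ADDR 0xffffffffff600000UL

typedef long (*vgtod_fn)(struct timeval *tv, struct timezone *tz);

int main(void)
{
        vgtod_fn vgtod = (vgtod_fn)VSYSCALL_GTOD_ADDR;
        struct timeval tv;

        /* Valid pointer: the emulated call behaves like gettimeofday(). */
        if (vgtod(&tv, NULL) == 0)
                printf("vsyscall gettimeofday: %ld.%06ld\n",
                       (long)tv.tv_sec, (long)tv.tv_usec);

        /*
         * Unmapped destination: a real vsyscall would page fault, so the
         * emulation sends SIGSEGV rather than returning -EFAULT. After
         * this patch that SIGSEGV comes straight from emulate_vsyscall().
         */
        vgtod((struct timeval *)0x1, NULL);

        printf("not reached\n");
        return 0;
}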

3 files changed, +3 −59 lines changed

arch/x86/entry/vsyscall/vsyscall_64.c

Lines changed: 2 additions & 26 deletions
@@ -98,11 +98,6 @@ static int addr_to_vsyscall_nr(unsigned long addr)
 
 static bool write_ok_or_segv(unsigned long ptr, size_t size)
 {
-        /*
-         * XXX: if access_ok, get_user, and put_user handled
-         * sig_on_uaccess_err, this could go away.
-         */
-
         if (!access_ok((void __user *)ptr, size)) {
                 struct thread_struct *thread = &current->thread;
 
@@ -120,10 +115,8 @@ static bool write_ok_or_segv(unsigned long ptr, size_t size)
 bool emulate_vsyscall(unsigned long error_code,
                       struct pt_regs *regs, unsigned long address)
 {
-        struct task_struct *tsk;
         unsigned long caller;
         int vsyscall_nr, syscall_nr, tmp;
-        int prev_sig_on_uaccess_err;
         long ret;
         unsigned long orig_dx;
 
@@ -172,8 +165,6 @@ bool emulate_vsyscall(unsigned long error_code,
                 goto sigsegv;
         }
 
-        tsk = current;
-
         /*
          * Check for access_ok violations and find the syscall nr.
          *
@@ -234,12 +225,8 @@ bool emulate_vsyscall(unsigned long error_code,
                 goto do_ret; /* skip requested */
 
         /*
-         * With a real vsyscall, page faults cause SIGSEGV. We want to
-         * preserve that behavior to make writing exploits harder.
+         * With a real vsyscall, page faults cause SIGSEGV.
          */
-        prev_sig_on_uaccess_err = current->thread.sig_on_uaccess_err;
-        current->thread.sig_on_uaccess_err = 1;
-
         ret = -EFAULT;
         switch (vsyscall_nr) {
         case 0:
@@ -262,23 +249,12 @@ bool emulate_vsyscall(unsigned long error_code,
                 break;
         }
 
-        current->thread.sig_on_uaccess_err = prev_sig_on_uaccess_err;
-
 check_fault:
         if (ret == -EFAULT) {
                 /* Bad news -- userspace fed a bad pointer to a vsyscall. */
                 warn_bad_vsyscall(KERN_INFO, regs,
                                   "vsyscall fault (exploit attempt?)");
-
-                /*
-                 * If we failed to generate a signal for any reason,
-                 * generate one here. (This should be impossible.)
-                 */
-                if (WARN_ON_ONCE(!sigismember(&tsk->pending.signal, SIGBUS) &&
-                                 !sigismember(&tsk->pending.signal, SIGSEGV)))
-                        goto sigsegv;
-
-                return true; /* Don't emulate the ret. */
+                goto sigsegv;
         }
 
         regs->ax = ret;

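Taken together, the hunks above leave the emulation's fault handling looking
roughly like this (a reconstruction from the context and added lines above,
not a verbatim excerpt of the resulting file): a bad pointer now funnels
straight into the existing sigsegv path, with no per-thread
sig_on_uaccess_err state to save and restore.

check_fault:
        if (ret == -EFAULT) {
                /* Bad news -- userspace fed a bad pointer to a vsyscall. */
                warn_bad_vsyscall(KERN_INFO, regs,
                                  "vsyscall fault (exploit attempt?)");
                /* Send the SIGSEGV from the emulation itself. */
                goto sigsegv;
        }

        regs->ax = ret;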
arch/x86/include/asm/processor.h

Lines changed: 0 additions & 1 deletion
@@ -472,7 +472,6 @@ struct thread_struct {
         unsigned long iopl_emul;
 
         unsigned int iopl_warn:1;
-        unsigned int sig_on_uaccess_err:1;
 
         /*
          * Protection Keys Register for Userspace. Loaded immediately on

arch/x86/mm/fault.c

Lines changed: 1 addition & 32 deletions
@@ -723,39 +723,8 @@ kernelmode_fixup_or_oops(struct pt_regs *regs, unsigned long error_code,
         WARN_ON_ONCE(user_mode(regs));
 
         /* Are we prepared to handle this kernel fault? */
-        if (fixup_exception(regs, X86_TRAP_PF, error_code, address)) {
-                /*
-                 * Any interrupt that takes a fault gets the fixup. This makes
-                 * the below recursive fault logic only apply to a faults from
-                 * task context.
-                 */
-                if (in_interrupt())
-                        return;
-
-                /*
-                 * Per the above we're !in_interrupt(), aka. task context.
-                 *
-                 * In this case we need to make sure we're not recursively
-                 * faulting through the emulate_vsyscall() logic.
-                 */
-                if (current->thread.sig_on_uaccess_err && signal) {
-                        sanitize_error_code(address, &error_code);
-
-                        set_signal_archinfo(address, error_code);
-
-                        if (si_code == SEGV_PKUERR) {
-                                force_sig_pkuerr((void __user *)address, pkey);
-                        } else {
-                                /* XXX: hwpoison faults will set the wrong code. */
-                                force_sig_fault(signal, si_code, (void __user *)address);
-                        }
-                }
-
-                /*
-                 * Barring that, we can do the fixup and be happy.
-                 */
+        if (fixup_exception(regs, X86_TRAP_PF, error_code, address))
                 return;
-        }
 
         /*
          * AMD erratum #91 manifests as a spurious page fault on a PREFETCH

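With the vsyscall special case gone, the kernel-mode fixup path in
kernelmode_fixup_or_oops() reduces to a plain exception-table fixup (again
a reconstruction from the hunk above; the surrounding function is otherwise
unchanged):

        WARN_ON_ONCE(user_mode(regs));

        /* Are we prepared to handle this kernel fault? */
        if (fixup_exception(regs, X86_TRAP_PF, error_code, address))
                return;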