Commit 7a46ec0
Kees Cook authored and Ingo Molnar committed
locking/refcounts, x86/asm: Implement fast refcount overflow protection
This implements refcount_t overflow protection on x86 without a
noticeable performance impact, though without the fuller checking of
REFCOUNT_FULL. This is done by duplicating the existing atomic_t
refcount implementation but, in the common case, with only a single
added instruction to detect if the refcount has gone negative (e.g.
wrapped past INT_MAX or below zero). When detected, the handler
saturates the refcount_t to INT_MIN / 2.

With this overflow protection, the erroneous reference release that
would follow a wrap back to zero is blocked from happening, avoiding
the class of refcount-overflow use-after-free vulnerabilities entirely.

Only the overflow case of refcounting can be perfectly protected, since
it can be detected and stopped before the reference is freed and left
to be abused by an attacker. There isn't a way to block early
decrements, and while REFCOUNT_FULL stops increment-from-zero cases
(which would be the state _after_ an early decrement, and stops
potential double-free conditions), this fast implementation does not,
since it would require the more expensive cmpxchg loops. Since the
overflow case is much more common (e.g. missing a "put" during an
error path), this fast implementation still provides real-world
protection. For example, the two public refcount overflow
use-after-free exploits published in 2016 would have been rendered
unexploitable:

  http://perception-point.io/2016/01/14/analysis-and-exploitation-of-a-linux-kernel-vulnerability-cve-2016-0728/

  http://cyseclabs.com/page?n=02012016

This implementation does, however, notice an unchecked decrement to
zero (i.e. the caller used refcount_dec() instead of
refcount_dec_and_test() and it resulted in a zero). Decrements under
zero are noticed (since they will have resulted in a negative value),
though this only indicates that a use-after-free may have already
happened. Such notifications are likely avoidable by an attacker that
has already exploited a use-after-free vulnerability, but it's better
to have them reported than allow such conditions to remain universally
silent.

On first overflow detection, the refcount value is reset to INT_MIN / 2
(which serves as a saturation value) and a report and stack trace are
produced. When operations detect only negative value results (such as
changing an already saturated value), saturation still happens but no
notification is performed (since the value was already saturated).

On the matter of races, since the entire range beyond INT_MAX but
before 0 is negative, every operation at INT_MIN / 2 will trap, leaving
no overflow-only race condition.

As for performance, this implementation adds a single "js" instruction
to the regular execution flow of a copy of the standard atomic_t
refcount operations. (The non-"and_test" refcount_dec() function, which
is uncommon in regular refcount design patterns, has an additional "jz"
instruction to detect reaching exactly zero.) Since this is a forward
jump, it is by default the non-predicted path, which will be reinforced
by dynamic branch prediction. The result is this protection having
virtually no measurable change in performance over standard atomic_t
operations. The error path, located in .text.unlikely, saves the
refcount location and then uses UD0 to fire a refcount exception
handler, which resets the refcount, handles reporting, and returns to
regular execution. This keeps the changes to .text size minimal,
avoiding return jumps and open-coded calls to the error reporting
routine.
Example assembly comparison:

refcount_inc() before:

  .text:
  ffffffff81546149:       f0 ff 45 f4             lock incl -0xc(%rbp)

refcount_inc() after:

  .text:
  ffffffff81546149:       f0 ff 45 f4             lock incl -0xc(%rbp)
  ffffffff8154614d:       0f 88 80 d5 17 00       js     ffffffff816c36d3
  ...
  .text.unlikely:
  ffffffff816c36d3:       48 8d 4d f4             lea    -0xc(%rbp),%rcx
  ffffffff816c36d7:       0f ff                   (bad)

These are the cycle counts comparing a loop of refcount_inc() from 1 to
INT_MAX and back down to 0 (via refcount_dec_and_test()), between
unprotected refcount_t (atomic_t), fully protected REFCOUNT_FULL
(refcount_t-full), and this overflow-protected refcount
(refcount_t-fast):

2147483646 refcount_inc()s and 2147483647 refcount_dec_and_test()s:

                          cycles    protections
  atomic_t            82249267387   none
  refcount_t-fast     82211446892   overflow, untested dec-to-zero
  refcount_t-full    144814735193   overflow, untested dec-to-zero, inc-from-zero

This code is a modified version of the x86 PAX_REFCOUNT atomic_t
overflow defense from the last public patch of PaX/grsecurity, based on
my understanding of the code. Changes or omissions from the original
code are mine and don't reflect the original grsecurity/PaX code.
Thanks to PaX Team for various suggestions for improvement for
repurposing this code to be a refcount-only protection.

Signed-off-by: Kees Cook <[email protected]>
Reviewed-by: Josh Poimboeuf <[email protected]>
Cc: Alexey Dobriyan <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Elena Reshetova <[email protected]>
Cc: Eric Biggers <[email protected]>
Cc: Eric W. Biederman <[email protected]>
Cc: Greg KH <[email protected]>
Cc: Hans Liljestrand <[email protected]>
Cc: James Bottomley <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Manfred Spraul <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Serge E. Hallyn <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: linux-arch <[email protected]>
Link: http://lkml.kernel.org/r/20170815161924.GA133115@beast
Signed-off-by: Ingo Molnar <[email protected]>
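To make the saturation arithmetic concrete, here is a minimal userspace
sketch of the semantics described above (illustrative only; the plain
int and the function names stand in for the kernel's atomic machinery):
once the counter trips the negative check it is pinned to INT_MIN / 2,
so it can never wrap back around to zero.

/* Userspace sketch of the saturation semantics, not the kernel code. */
#include <limits.h>
#include <stdio.h>

#define REFCOUNT_SATURATED (INT_MIN / 2)

static int refs = INT_MAX;	/* pretend an attacker drove us to the edge */

static void sketch_refcount_inc(void)
{
	/* Wrapping add via unsigned math (the kernel builds with
	 * -fno-strict-overflow, so it can rely on wrap semantics). */
	refs = (int)((unsigned int)refs + 1u);
	if (refs < 0) {
		/* Stands in for the js -> UD0 -> ex_handler_refcount() path. */
		refs = REFCOUNT_SATURATED;
		fprintf(stderr, "refcount saturated\n");
	}
}

int main(void)
{
	sketch_refcount_inc();	/* INT_MAX + 1 wraps negative: saturate */
	sketch_refcount_inc();	/* already saturated: stays pinned      */
	printf("refs = %d (can never reach 0 by incrementing)\n", refs);
	return 0;
}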
1 parent 907dc16

File tree

  arch/Kconfig
  arch/x86/Kconfig
  arch/x86/include/asm/asm.h
  arch/x86/include/asm/refcount.h
  arch/x86/mm/extable.c
  include/linux/kernel.h
  include/linux/refcount.h
  kernel/panic.c

8 files changed (+193, -0)

arch/Kconfig

Lines changed: 12 additions & 0 deletions

@@ -931,6 +931,18 @@ config STRICT_MODULE_RWX
 config ARCH_WANT_RELAX_ORDER
 	bool
 
+config ARCH_HAS_REFCOUNT
+	bool
+	help
+	  An architecture selects this when it has implemented refcount_t
+	  using open coded assembly primitives that provide an optimized
+	  refcount_t implementation, possibly at the expense of some full
+	  refcount state checks of CONFIG_REFCOUNT_FULL=y.
+
+	  The refcount overflow check behavior, however, must be retained.
+	  Catching overflows is the primary security concern for protecting
+	  against bugs in reference counts.
+
 config REFCOUNT_FULL
 	bool "Perform full reference count validation at the expense of speed"
 	help

arch/x86/Kconfig

Lines changed: 1 addition & 0 deletions

@@ -55,6 +55,7 @@ config X86
 	select ARCH_HAS_KCOV			if X86_64
 	select ARCH_HAS_MMIO_FLUSH
 	select ARCH_HAS_PMEM_API		if X86_64
+	select ARCH_HAS_REFCOUNT
 	select ARCH_HAS_UACCESS_FLUSHCACHE	if X86_64
 	select ARCH_HAS_SET_MEMORY
 	select ARCH_HAS_SG_CHAIN

arch/x86/include/asm/asm.h

Lines changed: 6 additions & 0 deletions

@@ -74,6 +74,9 @@
 # define _ASM_EXTABLE_EX(from, to)			\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
 
+# define _ASM_EXTABLE_REFCOUNT(from, to)		\
+	_ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount)
+
 # define _ASM_NOKPROBE(entry)				\
 	.pushsection "_kprobe_blacklist","aw" ;		\
 	_ASM_ALIGN ;					\
@@ -123,6 +126,9 @@
 # define _ASM_EXTABLE_EX(from, to)			\
 	_ASM_EXTABLE_HANDLE(from, to, ex_handler_ext)
 
+# define _ASM_EXTABLE_REFCOUNT(from, to)		\
+	_ASM_EXTABLE_HANDLE(from, to, ex_handler_refcount)
+
 /* For C file, we already have NOKPROBE_SYMBOL macro */
 #endif
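For context (paraphrased from the asm.h of this era, not part of this
diff): _ASM_EXTABLE_HANDLE() emits one __ex_table entry as three
PC-relative 32-bit offsets -- the fault address, the fixup address, and
the handler -- which is how the UD0 site added below gets associated
with ex_handler_refcount(). Roughly:

/* Paraphrased sketch of the underlying macro, shown for orientation. */
# define _ASM_EXTABLE_HANDLE(from, to, handler)			\
	.pushsection "__ex_table","a" ;				\
	.balign 4 ;						\
	.long (from) - . ;					\
	.long (to) - . ;					\
	.long (handler) - . ;					\
	.popsection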

arch/x86/include/asm/refcount.h

Lines changed: 109 additions & 0 deletions (new file)

@@ -0,0 +1,109 @@
+#ifndef __ASM_X86_REFCOUNT_H
+#define __ASM_X86_REFCOUNT_H
+/*
+ * x86-specific implementation of refcount_t. Based on PAX_REFCOUNT from
+ * PaX/grsecurity.
+ */
+#include <linux/refcount.h>
+
+/*
+ * This is the first portion of the refcount error handling, which lives in
+ * .text.unlikely, and is jumped to from the CPU flag check (in the
+ * following macros). This saves the refcount value location into CX for
+ * the exception handler to use (in mm/extable.c), and then triggers the
+ * central refcount exception. The fixup address for the exception points
+ * back to the regular execution flow in .text.
+ */
+#define _REFCOUNT_EXCEPTION				\
+	".pushsection .text.unlikely\n"			\
+	"111:\tlea %[counter], %%" _ASM_CX "\n"		\
+	"112:\t" ASM_UD0 "\n"				\
+	ASM_UNREACHABLE					\
+	".popsection\n"					\
+	"113:\n"					\
+	_ASM_EXTABLE_REFCOUNT(112b, 113b)
+
+/* Trigger refcount exception if refcount result is negative. */
+#define REFCOUNT_CHECK_LT_ZERO				\
+	"js 111f\n\t"					\
+	_REFCOUNT_EXCEPTION
+
+/* Trigger refcount exception if refcount result is zero or negative. */
+#define REFCOUNT_CHECK_LE_ZERO				\
+	"jz 111f\n\t"					\
+	REFCOUNT_CHECK_LT_ZERO
+
+/* Trigger refcount exception unconditionally. */
+#define REFCOUNT_ERROR					\
+	"jmp 111f\n\t"					\
+	_REFCOUNT_EXCEPTION
+
+static __always_inline void refcount_add(unsigned int i, refcount_t *r)
+{
+	asm volatile(LOCK_PREFIX "addl %1,%0\n\t"
+		REFCOUNT_CHECK_LT_ZERO
+		: [counter] "+m" (r->refs.counter)
+		: "ir" (i)
+		: "cc", "cx");
+}
+
+static __always_inline void refcount_inc(refcount_t *r)
+{
+	asm volatile(LOCK_PREFIX "incl %0\n\t"
+		REFCOUNT_CHECK_LT_ZERO
+		: [counter] "+m" (r->refs.counter)
+		: : "cc", "cx");
+}
+
+static __always_inline void refcount_dec(refcount_t *r)
+{
+	asm volatile(LOCK_PREFIX "decl %0\n\t"
+		REFCOUNT_CHECK_LE_ZERO
+		: [counter] "+m" (r->refs.counter)
+		: : "cc", "cx");
+}
+
+static __always_inline __must_check
+bool refcount_sub_and_test(unsigned int i, refcount_t *r)
+{
+	GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl", REFCOUNT_CHECK_LT_ZERO,
+				  r->refs.counter, "er", i, "%0", e);
+}
+
+static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
+{
+	GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl", REFCOUNT_CHECK_LT_ZERO,
+				 r->refs.counter, "%0", e);
+}
+
+static __always_inline __must_check
+bool refcount_add_not_zero(unsigned int i, refcount_t *r)
+{
+	int c, result;
+
+	c = atomic_read(&(r->refs));
+	do {
+		if (unlikely(c == 0))
+			return false;
+
+		result = c + i;
+
+		/* Did we try to increment from/to an undesirable state? */
+		if (unlikely(c < 0 || c == INT_MAX || result < c)) {
+			asm volatile(REFCOUNT_ERROR
+				     : : [counter] "m" (r->refs.counter)
+				     : "cc", "cx");
+			break;
+		}
+
+	} while (!atomic_try_cmpxchg(&(r->refs), &c, result));
+
+	return c != 0;
+}
+
+static __always_inline __must_check bool refcount_inc_not_zero(refcount_t *r)
+{
+	return refcount_add_not_zero(1, r);
+}
+
+#endif
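As a usage illustration (hypothetical code, not from this commit --
struct foo and the get/put helpers are invented for the example), these
operations are drop-in replacements for the generic refcount_t API:

/* Hypothetical caller of the protected operations above. */
#include <linux/refcount.h>
#include <linux/slab.h>

struct foo {
	refcount_t refs;
	/* ... object payload ... */
};

static void foo_get(struct foo *f)
{
	/* Becomes "lock incl" plus a "js" to the .text.unlikely stub. */
	refcount_inc(&f->refs);
}

static void foo_put(struct foo *f)
{
	/* Frees only on the 1 -> 0 transition; an overflowed (saturated)
	 * counter can never fake this transition. */
	if (refcount_dec_and_test(&f->refs))
		kfree(f);
}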

arch/x86/mm/extable.c

Lines changed: 42 additions & 0 deletions

@@ -36,6 +36,48 @@ bool ex_handler_fault(const struct exception_table_entry *fixup,
 }
 EXPORT_SYMBOL_GPL(ex_handler_fault);
 
+/*
+ * Handler for UD0 exception following a failed test against the
+ * result of a refcount inc/dec/add/sub.
+ */
+bool ex_handler_refcount(const struct exception_table_entry *fixup,
+			 struct pt_regs *regs, int trapnr)
+{
+	/* First unconditionally saturate the refcount. */
+	*(int *)regs->cx = INT_MIN / 2;
+
+	/*
+	 * Strictly speaking, this reports the fixup destination, not
+	 * the fault location, and not the actually overflowing
+	 * instruction, which is the instruction before the "js", but
+	 * since that instruction could be a variety of lengths, just
+	 * report the location after the overflow, which should be close
+	 * enough for finding the overflow, as it's at least back in
+	 * the function, having returned from .text.unlikely.
+	 */
+	regs->ip = ex_fixup_addr(fixup);
+
+	/*
+	 * This function has been called because either a negative refcount
+	 * value was seen by any of the refcount functions, or a zero
+	 * refcount value was seen by refcount_dec().
+	 *
+	 * If we crossed from INT_MAX to INT_MIN, OF (Overflow Flag: result
+	 * wrapped around) will be set. Additionally, seeing the refcount
+	 * reach 0 will set ZF (Zero Flag: result was zero). In each of
+	 * these cases we want a report, since it's a boundary condition.
+	 */
+	if (regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_ZF)) {
+		bool zero = regs->flags & X86_EFLAGS_ZF;
+
+		refcount_error_report(regs, zero ? "hit zero" : "overflow");
+	}
+
+	return true;
+}
+EXPORT_SYMBOL_GPL(ex_handler_refcount);
+
 bool ex_handler_ext(const struct exception_table_entry *fixup,
 		    struct pt_regs *regs, int trapnr)
 {
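For orientation, here is a simplified sketch of how a UD0 trap reaches
the handler above. It paraphrases the fixup_exception() dispatch that
lives in this same file; the _sketch suffix marks it as illustrative,
not the actual kernel function:

/* Simplified sketch of exception-table dispatch (illustrative only). */
int fixup_exception_sketch(struct pt_regs *regs, int trapnr)
{
	const struct exception_table_entry *e;
	ex_handler_t handler;

	/* regs->ip points at the UD0 emitted by _REFCOUNT_EXCEPTION. */
	e = search_exception_tables(regs->ip);
	if (!e)
		return 0;	/* no fixup registered: this becomes an oops */

	/* For refcount sites this resolves to ex_handler_refcount(). */
	handler = ex_fixup_handler(e);
	return handler(e, regs, trapnr);
}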

include/linux/kernel.h

Lines changed: 7 additions & 0 deletions

@@ -277,6 +277,13 @@ extern int oops_may_print(void);
 void do_exit(long error_code) __noreturn;
 void complete_and_exit(struct completion *, long) __noreturn;
 
+#ifdef CONFIG_ARCH_HAS_REFCOUNT
+void refcount_error_report(struct pt_regs *regs, const char *err);
+#else
+static inline void refcount_error_report(struct pt_regs *regs, const char *err)
+{ }
+#endif
+
 /* Internal, do not use. */
 int __must_check _kstrtoul(const char *s, unsigned int base, unsigned long *res);
 int __must_check _kstrtol(const char *s, unsigned int base, long *res);

include/linux/refcount.h

Lines changed: 4 additions & 0 deletions

@@ -53,6 +53,9 @@ extern __must_check bool refcount_sub_and_test(unsigned int i, refcount_t *r);
 extern __must_check bool refcount_dec_and_test(refcount_t *r);
 extern void refcount_dec(refcount_t *r);
 #else
+# ifdef CONFIG_ARCH_HAS_REFCOUNT
+#  include <asm/refcount.h>
+# else
 static inline __must_check bool refcount_add_not_zero(unsigned int i, refcount_t *r)
 {
 	return atomic_add_unless(&r->refs, i, 0);
@@ -87,6 +90,7 @@ static inline void refcount_dec(refcount_t *r)
 {
 	atomic_dec(&r->refs);
 }
+# endif /* !CONFIG_ARCH_HAS_REFCOUNT */
 #endif /* CONFIG_REFCOUNT_FULL */
 
 extern __must_check bool refcount_dec_if_one(refcount_t *r);

kernel/panic.c

Lines changed: 12 additions & 0 deletions

@@ -26,6 +26,7 @@
 #include <linux/nmi.h>
 #include <linux/console.h>
 #include <linux/bug.h>
+#include <linux/ratelimit.h>
 
 #define PANIC_TIMER_STEP 100
 #define PANIC_BLINK_SPD 18
@@ -601,6 +602,17 @@ EXPORT_SYMBOL(__stack_chk_fail);
 
 #endif
 
+#ifdef CONFIG_ARCH_HAS_REFCOUNT
+void refcount_error_report(struct pt_regs *regs, const char *err)
+{
+	WARN_RATELIMIT(1, "refcount_t %s at %pB in %s[%d], uid/euid: %u/%u\n",
+		       err, (void *)instruction_pointer(regs),
+		       current->comm, task_pid_nr(current),
+		       from_kuid_munged(&init_user_ns, current_uid()),
+		       from_kuid_munged(&init_user_ns, current_euid()));
+}
+#endif
+
 core_param(panic, panic_timeout, int, 0644);
 core_param(pause_on_oops, pause_on_oops, int, 0644);
 core_param(panic_on_warn, panic_on_warn, int, 0644);
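A hypothetical way to exercise the whole path end to end (illustrative
test module, not part of this commit): start a refcount_t at INT_MAX
and increment it twice. The first increment sets OF, lands in
ex_handler_refcount(), and produces one ratelimited report via
refcount_error_report(); the second only re-saturates, silently:

/* Illustrative overflow demo module (hypothetical). */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/refcount.h>

static int __init refcount_overflow_demo_init(void)
{
	refcount_t r = { .refs = ATOMIC_INIT(INT_MAX) };

	refcount_inc(&r);	/* wraps negative: OF set, report + saturate */
	refcount_inc(&r);	/* already saturated: re-pins, no report     */

	pr_info("refcount_overflow_demo: refs = %d\n", atomic_read(&r.refs));
	return 0;
}
module_init(refcount_overflow_demo_init);

MODULE_LICENSE("GPL");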
