Soundness fix on 32-bit to cache.rs #811

Status: Closed · wants to merge 2 commits
98 changes: 61 additions & 37 deletions crates/std_detect/src/detect/cache.rs
@@ -29,9 +29,6 @@ const fn unset_bit(x: u64, bit: u32) -> u64 {
     x & !(1 << bit)
 }
 
-/// Maximum number of features that can be cached.
-const CACHE_CAPACITY: u32 = 63;
-
 /// This type is used to initialize the cache
 #[derive(Copy, Clone)]
 pub(crate) struct Initializer(u64);
@@ -90,30 +87,42 @@ static CACHE: Cache = Cache::uninitialized();
 #[cfg(target_pointer_width = "64")]
 struct Cache(AtomicU64);
 
+#[cfg(target_pointer_width = "64")]
+const UNINITIALIZED_VALUE: u64 = u64::max_value();
+
+#[cfg(target_pointer_width = "64")]
+const UNINITIALIZED_BIT: u32 = 63;
+
+/// Maximum number of features that can be cached.
+/// We reserve a single bit to check if the cache is initialized
+#[cfg(target_pointer_width = "64")]
+const CACHE_CAPACITY: u32 = 63;
+
 #[cfg(target_pointer_width = "64")]
 #[allow(clippy::use_self)]
 impl Cache {
     /// Creates an uninitialized cache.
     #[allow(clippy::declare_interior_mutable_const)]
     const fn uninitialized() -> Self {
-        Cache(AtomicU64::new(u64::max_value()))
-    }
-    /// Is the cache uninitialized?
-    #[inline]
-    pub(crate) fn is_uninitialized(&self) -> bool {
-        self.0.load(Ordering::Relaxed) == u64::max_value()
+        Cache(AtomicU64::new(UNINITIALIZED_VALUE))
     }
 
     /// Is the `bit` in the cache set?
     #[inline]
-    pub(crate) fn test(&self, bit: u32) -> bool {
-        test_bit(CACHE.0.load(Ordering::Relaxed), bit)
+    fn test(&self, bit: u32) -> bool {
+        let cache_value = CACHE.0.load(Ordering::Relaxed);
+        if !test_bit(cache_value, UNINITIALIZED_BIT) {
+            test_bit(cache_value, bit)
+        } else {
+            initialize(bit)
+        }
     }
 
     /// Initializes the cache.
     #[inline]
-    pub(crate) fn initialize(&self, value: Initializer) {
-        self.0.store(value.0, Ordering::Relaxed);
+    fn initialize(&self, value: Initializer) {
+        let value = unset_bit(value.0, UNINITIALIZED_BIT);
+        self.0.store(value, Ordering::Relaxed);
     }
 }

@@ -124,61 +133,82 @@ impl Cache
 #[cfg(target_pointer_width = "32")]
 struct Cache(AtomicU32, AtomicU32);
 
+#[cfg(target_pointer_width = "32")]
+const UNINITIALIZED_VALUE: u32 = u32::max_value();
+
+#[cfg(target_pointer_width = "32")]
+const UNINITIALIZED_BIT: u32 = 31;
+
+/// Maximum number of features that can be cached.
+/// We reserve a single bit in each of the two atomic values
+/// to check if the cache is initialized
+#[cfg(target_pointer_width = "32")]
+const CACHE_CAPACITY: u32 = 62;
+
 #[cfg(target_pointer_width = "32")]
 impl Cache {
     /// Creates an uninitialized cache.
     const fn uninitialized() -> Self {
         Cache(
-            AtomicU32::new(u32::max_value()),
-            AtomicU32::new(u32::max_value()),
+            AtomicU32::new(UNINITIALIZED_VALUE),
+            AtomicU32::new(UNINITIALIZED_VALUE),
         )
     }
-    /// Is the cache uninitialized?
-    #[inline]
-    pub(crate) fn is_uninitialized(&self) -> bool {
-        self.1.load(Ordering::Relaxed) == u32::max_value()
Contributor: So IIUC the problem was here, right? This checks whether the second atomic has been initialized, and if that is the case, it assumes both atomics are initialized. However, the function that initializes the two atomics below performs two relaxed writes, and while this test returns true if the second write has happened, there is no guarantee that the first write has happened, and therefore the UB. Is that correct?

Contributor: If so, wouldn't it suffice to have the first write happen before the second write?

Author: Yes. This could also be fixed by using Acquire here and Release when doing self.1.store further down. However, doing it this way also allows you to get away with just a single atomic load on the fast path.

Author: If you wish to do it with Release semantics, then the fast path would contain a single load with Acquire semantics and a single load with Relaxed semantics. This PR only uses a single Relaxed load on the fast path.

Contributor (@gnzlbg, Sep 18, 2019): We currently have this code:

    // thread 1
    self.0.store(rel);
    self.1.store(rel);

    // thread 2
    self.1.load(rel); // assumes self.0.store(rel) has happened

Is there a way to have self.0.store happen-before self.1.store such that a relaxed load in thread 2 of self.1 can only observe the initialized state if both stores have happened?

Author: I cannot get my example to work either. I think it was broken.

You are right that the value is never reset in the current code. I only reset it to run the experiment multiple times.

The point I am trying to make is that there is currently nothing that introduces a synchronization point across threads. So even though it works on x86, which has a very strong memory model, I think the code can give the wrong result when relying only on the LLVM memory model.

Contributor:

> The point I am trying to make is that there is currently nothing that introduces a synchronization point across threads

Yes, I am able to follow this much. I also understand that Release+Acquire would suffice here.

What I'm still not sure about is why we need the Acquire at all. IIUC, having a relaxed store followed by a release store, and then doing a relaxed load on the second store, should be enough to guarantee that both stores happen before the relaxed load that observes the initialized cache.

Commenter: As far as the C11 memory model (and thus presumably the primary target of the LLVM memory model) goes, a release without an acquire is meaningless. As a practical example, the problematic code is actually something like

    if y.load(relaxed) == 0xFFFFFFFF {
        x.store(relaxed) = foo;
        y.store(release) = bar; // ..never 0xFFFFFFFF..
    }
    let r = x.load(relaxed);

and the x.load(relaxed) could be moved before the if statement, so long as it's corrected for the write inside of the branch, i.e. it could be:

    let mut r = x.load(relaxed);
    if y.load(relaxed) == 0xFFFFFFFF {
        r = foo;
        x.store(relaxed) = r;
        y.store(release) = bar; // ..never 0xFFFFFFFF..
    }

Making it y.load(acquire) prevents this.

Member (@RalfJung, Oct 9, 2019): I'd personally always use release/acquire unless it is either blatantly obvious that relaxed is correct (that's clearly not the case here) or we have benchmarks showing that release/acquire is too expensive.

I don't have time at the moment for an in-depth review of weak-memory concurrency code (and a superficial review is no good here). But by default I'd expect the work required for that review to not be worth the effort. Most of the time, release/acquire and relaxed will even generate the same x86 assembly (LLVM is not terribly good at exploiting the optimization potential granted by relaxed accesses).

-    }
 
     /// Is the `bit` in the cache set?
     #[inline]
-    pub(crate) fn test(&self, bit: u32) -> bool {
-        if bit < 32 {
-            test_bit(CACHE.0.load(Ordering::Relaxed) as u64, bit)
+    fn test(&self, bit: u32) -> bool {
+        if bit < 31 {
+            let cache_value = CACHE.0.load(Ordering::Relaxed);
+            if !test_bit(cache_value, UNINITIALIZED_BIT) {
+                test_bit(cache_value, bit)
+            } else {
+                initialize(bit)
+            }
         } else {
-            test_bit(CACHE.1.load(Ordering::Relaxed) as u64, bit - 32)
+            let cache_value = CACHE.1.load(Ordering::Relaxed);
+            if !test_bit(cache_value, UNINITIALIZED_BIT) {
+                test_bit(cache_value, bit - 31)
+            } else {
+                initialize(bit)
+            }
         }
     }
 
     /// Initializes the cache.
     #[inline]
-    pub(crate) fn initialize(&self, value: Initializer) {
-        let lo: u32 = value.0 as u32;
-        let hi: u32 = (value.0 >> 32) as u32;
+    fn initialize(&self, value: Initializer) {
+        let lo: u32 = unset_bit(value.0, UNINITIALIZED_BIT) as u32;
+        let hi: u32 = unset_bit(value.0 >> 31, UNINITIALIZED_BIT) as u32;
         self.0.store(lo, Ordering::Relaxed);
         self.1.store(hi, Ordering::Relaxed);
     }
 }
 cfg_if! {
     if #[cfg(feature = "std_detect_env_override")] {
         #[inline(never)]
-        fn initialize(mut value: Initializer) {
+        fn initialize(bit: u32) -> bool {
+            let mut value = crate::detect::os::detect_features();
             if let Ok(disable) = crate::env::var("RUST_STD_DETECT_UNSTABLE") {
                 for v in disable.split(" ") {
                     let _ = super::Feature::from_str(v).map(|v| value.unset(v as u32));
                 }
             }
             CACHE.initialize(value);
+            test_bit(value.0, bit)
         }
     } else {
         #[inline]
-        fn initialize(value: Initializer) {
+        fn initialize(bit: u32) -> bool {
+            let value = crate::detect::os::detect_features();
             CACHE.initialize(value);
+            test_bit(value.0, bit)
         }
     }
 }

 /// Tests the `bit` of the storage. If the storage has not been initialized,
-/// initializes it with the result of `f()`.
+/// initializes it with the result of `os::detect_features()`.
 ///
 /// On its first invocation, it detects the CPU features and caches them in the
 /// `CACHE` global variable as an `AtomicU64`.
@@ -190,12 +220,6 @@ cfg_if! {
 /// variable `RUST_STD_DETECT_UNSTABLE` and uses its content to disable
 /// features that would have otherwise been detected.
 #[inline]
-pub(crate) fn test<F>(bit: u32, f: F) -> bool
-where
-    F: FnOnce() -> Initializer,
-{
-    if CACHE.is_uninitialized() {
-        initialize(f());
-    }
+pub(crate) fn test(bit: u32) -> bool {
     CACHE.test(bit)
 }
2 changes: 1 addition & 1 deletion crates/std_detect/src/detect/mod.rs
@@ -113,7 +113,7 @@ cfg_if! {
     /// Performs run-time feature detection.
     #[inline]
     fn check_for(x: Feature) -> bool {
-        cache::test(x as u32, self::os::detect_features)
+        cache::test(x as u32)
     }

/// Returns an `Iterator<Item=(&'static str, bool)>` where