Repo sync for protected branch #5100

Merged: 16 commits, Sep 13, 2024

26 changes: 16 additions & 10 deletions docs/standard-library/execution.md
@@ -1,7 +1,7 @@
---
description: "Learn more about: <execution>"
title: "<execution>"
ms.date: "08/17/2021"
ms.date: 09/11/2024
f1_keywords: ["<execution>", "execution/std::execution", "std::execution"]
helpviewer_keywords: ["execution header"]
---
@@ -22,24 +22,30 @@ namespace std::execution {
}
```

### Classes and Structs
### Classes and structs

|Name|Description|
|-|-|
|[`is_execution_policy` Struct](is-execution-policy-struct.md)|Detects execution policies to exclude certain function signatures from otherwise ambiguous overload resolution participation.|
|[`parallel_policy` Class](parallel-policy-class.md)|Used as a unique type to disambiguate parallel algorithm overloading. Indicates that a parallel algorithm's execution may be parallelized.|
|[`parallel_unsequenced_policy` Class](parallel-unsequenced-policy-class.md)|Used as a unique type to disambiguate parallel algorithm overloading. Indicates that a parallel algorithm's execution may be parallelized and vectorized.|
|[`sequenced_policy` Class](sequenced-policy-class.md)|Used as a unique type to disambiguate parallel algorithm overloading. Specifies that a parallel algorithm's execution may not be parallelized.|
|[`parallel_policy` class](parallel-policy-class.md)|Used to disambiguate parallel algorithm overloading. Indicates that a parallel algorithm's execution may be parallelized.|
|[`parallel_unsequenced_policy` class](parallel-unsequenced-policy-class.md)|Used as a unique type to disambiguate parallel algorithm overloading. Indicates that a parallel algorithm's execution may be parallelized and vectorized.|
|[`sequenced_policy` class](sequenced-policy-class.md)|Used as a unique type to disambiguate parallel algorithm overloading. Specifies that a parallel algorithm's execution may not be parallelized.|

### Microsoft Specific

When `parallel_policy` or `parallel_unsequenced_policy` cause the algorithm to be parallelized, the parallel execution uses Windows Thread Pool; see [Thread Pools](/windows/win32/procthread/thread-pools). The number of concurrent threads is limited to the thread pool default (currently 500). The number of threads concurrently executing on hardware is currently limited by the number of logical processors in the current process's processor group, so it is effectively limited to 64; see [Processor Groups](/windows/win32/procthread/processor-groups). The maximum number of chunks for data partitioning is also currently based on the number of logical processors in the current process's processor group.
### Microsoft specific

Parallel algorithms execute on an unspecified number of threads and divide the work into an unspecified number of data partitioning "chunks." The Windows thread pool manages the number of threads. The implementation tries to make use of the available logical processors, which corresponds to the number of hardware threads that can execute simultaneously.

Specifying `parallel_policy` or `parallel_unsequenced_policy` causes standard library algorithms to run in parallel using the Windows Thread Pool. The number of concurrent threads, and thus the number of "chunks" for data partitioning, is limited to 500 threads because that's the default number of thread pool threads. For more information, see [Thread Pools](/windows/win32/procthread/thread-pools).

Before Windows 11 and Windows Server 2022, applications were limited by default to a single processor group having at most 64 logical processors. This limited the number of concurrently executing threads to 64. For more information, see [Processor Groups](/windows/win32/procthread/processor-groups).

Starting with Windows 11 and Windows Server 2022, processes and their threads have processor affinities that by default span all processors in the system and across multiple groups on machines with more than 64 processors. The limit on the number of concurrent threads is now the total number of logical processors in the system.
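
The following is a minimal usage sketch, not taken from the library documentation: it exercises the three policies with `std::sort` and `std::reduce`. The container, its values, and the final comparison are placeholders chosen for the example.

```cpp
#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

int main()
{
    std::vector<double> v(1'000'000, 0.5);

    // May be parallelized across thread pool threads.
    std::sort(std::execution::par, v.begin(), v.end());

    // May be parallelized and vectorized.
    const double total = std::reduce(std::execution::par_unseq, v.begin(), v.end(), 0.0);

    // Never parallelized; useful as a sequential baseline.
    const double check = std::reduce(std::execution::seq, v.begin(), v.end(), 0.0);

    // Exact comparison is safe here: every partial sum of 0.5s is representable.
    return total == check ? 0 : 1;
}
```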

## Requirements

**Header:** \<execution>
**Header:** `<execution>`

**Namespace:** std
**Namespace:** `std`

## See also

26 changes: 15 additions & 11 deletions docs/standard-library/future-functions.md
@@ -1,7 +1,7 @@
---
description: "Learn more about: <future> functions"
title: "<future> functions"
ms.date: "08/17/2021"
ms.date: 09/11/2024
f1_keywords: ["future/std::async", "future/std::future_category", "future/std::make_error_code", "future/std::make_error_condition", "future/std::swap"]
helpviewer_keywords: ["std::async [C++]", "std::future_category [C++]", "std::make_error_code [C++]", "std::make_error_condition [C++]", "std::swap [C++]"]
---
@@ -29,7 +29,7 @@ future<typename result_of<Fn(ArgTypes...)>::type>

### Parameters

*policy*\
*`policy`*\
A [`launch`](../standard-library/future-enums.md#launch) value.

### Remarks
@@ -48,24 +48,28 @@ The second function returns a `future<Ty>` object whose *associated asynchronous

Unless `decay<Fn>::type` is a type other than launch, the second function doesn't participate in overload resolution.

The C++ standard states that if the policy is `launch::async`, the function behaves as if it invokes the callable object in a new thread. This means that while it typically results in creating a new thread, the implementation may use other mechanisms to achieve equivalent behavior. However, the Microsoft implementation currently does not conform strictly to this behavior. It obtains its threads from the Windows ThreadPool, which may provide a recycled thread rather than a new one. This means that the `launch::async` policy is effectively implemented as `launch::async|launch::deferred`. Another implication of the ThreadPool-based implementation is that there's no guarantee that thread-local variables will be destroyed when the thread completes. If the thread is recycled and provided to a new call to `async`, the old variables will still exist. We recommend that you avoid using thread-local variables with `async`.
The C++ standard states that if the policy is `launch::async`, the function behaves as if it invokes the callable object in a new thread. This means that while it typically results in creating a new thread, the implementation may use other mechanisms to achieve equivalent behavior. However, the Microsoft implementation currently doesn't conform strictly to this behavior. It obtains threads from the Windows ThreadPool, which may provide a recycled thread rather than a new one. This means that the `launch::async` policy is effectively implemented as `launch::async|launch::deferred`. Another implication of the ThreadPool-based implementation is that there's no guarantee that thread-local variables are destroyed when the thread completes. If the thread is recycled and provided to a new call to `async`, the old variables still exist. We recommend that you avoid using thread-local variables with `async`.
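
As an illustration of that caveat, the sketch below passes all per-task state through parameters instead of `thread_local` storage; the `accumulate_range` function and its arguments are invented for the example and aren't part of the documented API.

```cpp
#include <cstddef>
#include <future>
#include <numeric>
#include <vector>

// Hypothetical worker: every piece of state arrives through parameters, so it
// doesn't matter whether std::async receives a new or a recycled pool thread.
double accumulate_range(const std::vector<double>& data, std::size_t first, std::size_t last)
{
    return std::accumulate(data.begin() + static_cast<std::ptrdiff_t>(first),
                           data.begin() + static_cast<std::ptrdiff_t>(last), 0.0);
}

int main()
{
    const std::vector<double> data(100'000, 1.0);
    const std::size_t half = data.size() / 2;

    auto front = std::async(std::launch::async, accumulate_range, std::cref(data), std::size_t{0}, half);
    auto back = std::async(std::launch::async, accumulate_range, std::cref(data), half, data.size());

    const double total = front.get() + back.get();
    return total == static_cast<double>(data.size()) ? 0 : 1;
}
```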

If *policy* is `launch::deferred`, the function marks its associated asynchronous state as holding a *deferred function* and returns. The first call to any non-timed function that waits for the associated asynchronous state to be ready in effect calls the deferred function by evaluating `INVOKE(dfn, dargs..., Ty)`.
If *`policy`* is `launch::deferred`, the function marks its associated asynchronous state as holding a *deferred function* and returns. The first call to any nontimed function that waits for the associated asynchronous state to be ready in effect calls the deferred function by evaluating `INVOKE(dfn, dargs..., Ty)`.
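
A short sketch of the deferred path (example code, not from the article): the callable runs only when the first nontimed wait, `get` in this case, evaluates it on the waiting thread.

```cpp
#include <chrono>
#include <future>
#include <iostream>

int main()
{
    // Nothing runs yet; the shared state just records the deferred function.
    auto deferred = std::async(std::launch::deferred, []
    {
        std::cout << "evaluated on the thread that waits\n";
        return 42;
    });

    // A timed wait doesn't evaluate a deferred function; it reports deferred.
    if (deferred.wait_for(std::chrono::seconds(0)) == std::future_status::deferred)
    {
        std::cout << "still deferred\n";
    }

    // The first nontimed wait (get) evaluates INVOKE(dfn, dargs...) and makes the state ready.
    std::cout << deferred.get() << '\n';
    return 0;
}
```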

In all cases, the associated asynchronous state of the `future` object isn't set to *ready* until the evaluation of `INVOKE(dfn, dargs..., Ty)` completes, either by throwing an exception or by returning normally. The result of the associated asynchronous state is an exception if one was thrown, or any value that's returned by the evaluation.
In all cases, the associated asynchronous state of the `future` object isn't set to *ready* until the evaluation of `INVOKE(dfn, dargs..., Ty)` completes, either by throwing an exception or by returning normally. The result of the associated asynchronous state is an exception if one was thrown, or the value the evaluation returns.

> [!NOTE]
> For a `future`—or the last [`shared_future`](../standard-library/shared-future-class.md)—that's attached to a task started with `std::async`, the destructor blocks if the task has not completed; that is, it blocks if this thread did not yet call `.get()` or `.wait()` and the task is still running. If a `future` obtained from `std::async` is moved outside the local scope, other code that uses it must be aware that its destructor may block for the shared state to become ready.
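
For example, in the following sketch (illustrative only, with a placeholder `do_work` function), the unnamed temporary returned by the first `std::async` call is destroyed at the end of the statement, so that statement blocks until the task completes.

```cpp
#include <chrono>
#include <future>
#include <thread>

void do_work()   // placeholder task
{
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}

int main()
{
    // The returned future is a temporary, so its destructor runs at the end of
    // this statement and waits for do_work to finish. The cast to void only
    // silences the nodiscard warning; it doesn't keep the future alive.
    static_cast<void>(std::async(std::launch::async, do_work));

    // Keeping the future alive lets the task overlap other work; the wait (or
    // the destructor, if the task is still running at end of scope) blocks later.
    std::future<void> f = std::async(std::launch::async, do_work);
    // ... other work could go here ...
    f.wait();
    return 0;
}
```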

The pseudo-function `INVOKE` is defined in [`<functional>`](../standard-library/functional.md).

### Microsoft Specific
**Microsoft specific**

When the passed function is executed asynchronously, it's executed on Windows Thread Pool; see [Thread Pools](/windows/win32/procthread/thread-pools). The number of concurrent threads is limited to the thread pool default (currently 500). The number of threads concurrently executing on hardware is currently limited by the number of logical processor in the process's processor group, so it's effectively limited to 64; see [Processor Groups](/windows/win32/procthread/processor-groups).
When the passed function is executed asynchronously, it executes on the Windows Thread Pool. For more information, see [Thread Pools](/windows/win32/procthread/thread-pools). The number of concurrent threads is limited to the thread pool default, which is 500 threads.

Before Windows 11 and Windows Server 2022, applications were limited by default to a single processor group having at most 64 logical processors. This limited the number of concurrently executing threads to 64. For more information, see [Processor Groups](/windows/win32/procthread/processor-groups).

Starting with Windows 11 and Windows Server 2022, processes and their threads have processor affinities that by default span all processors in the system and across multiple groups on machines with more than 64 processors. The limit on the number of concurrent threads is now the total number of logical processors in the system.

## <a name="future_category"></a> `future_category`

Returns a reference to the [error_category](../standard-library/error-category-class.md) object that characterizes errors that are associated with `future` objects.
Returns a reference to the [`error_category`](../standard-library/error-category-class.md) object that characterizes errors that are associated with `future` objects.

```cpp
const error_category& future_category() noexcept;
@@ -82,15 +86,15 @@ inline error_code make_error_code(future_errc Errno) noexcept;
### Parameters

*`Errno`*\
A [future_errc](../standard-library/future-enums.md#future_errc) value that identifies the reported error.
A [`future_errc`](../standard-library/future-enums.md#future_errc) value that identifies the reported error.

### Return Value

`error_code(static_cast<int>(Errno), future_category());`
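
A brief sketch of calling it (illustrative, not from the article): the resulting `error_code` carries `future_category()` and an implementation-defined message.

```cpp
#include <future>
#include <iostream>
#include <system_error>

int main()
{
    const std::error_code ec = std::make_error_code(std::future_errc::broken_promise);
    std::cout << ec.value() << ": " << ec.message() << '\n';   // message text is implementation-defined
    return ec.category() == std::future_category() ? 0 : 1;
}
```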

## <a name="make_error_condition"></a> `make_error_condition`

Creates an [error_condition](../standard-library/error-condition-class.md) together with the [error_category](../standard-library/error-category-class.md) object that characterizes [future](../standard-library/future-class.md) errors.
Creates an [`error_condition`](../standard-library/error-condition-class.md) together with the [`error_category`](../standard-library/error-category-class.md) object that characterizes [`future`](../standard-library/future-class.md) errors.

```cpp
inline error_condition make_error_condition(future_errc Errno) noexcept;
@@ -99,7 +103,7 @@ inline error_condition make_error_condition(future_errc Errno) noexcept;
### Parameters

*`Errno`*\
A [future_errc](../standard-library/future-enums.md#future_errc) value that identifies the reported error.
A [`future_errc`](../standard-library/future-enums.md#future_errc) value that identifies the reported error.

### Return Value

38 changes: 20 additions & 18 deletions docs/standard-library/thread-class.md
@@ -1,7 +1,7 @@
---
description: "Learn more about: thread Class"
title: "thread Class"
ms.date: 06/20/2022
ms.date: 09/11/2024
f1_keywords: ["thread/std::thread", "thread/std::thread::id Class", "thread/std::thread::thread", "thread/std::thread::detach", "thread/std::thread::get_id", "thread/std::thread::hardware_concurrency", "thread/std::thread::join", "thread/std::thread::joinable", "thread/std::thread::native_handle", "thread/std::thread::swap"]
helpviewer_keywords: ["std::thread [C++]", "std::thread [C++], thread", "std::thread [C++], detach", "std::thread [C++], get_id", "std::thread [C++], hardware_concurrency", "std::thread [C++], join", "std::thread [C++], joinable", "std::thread [C++], native_handle", "std::thread [C++], swap"]
ms.custom: devdivchpfy22
@@ -73,9 +73,9 @@ void detach();

After a call to `detach`, subsequent calls to [`get_id`](#get_id) return [`id`](#id_class).

If the thread that's associated with the calling object isn't joinable, the function throws a [`system_error`](../standard-library/system-error-class.md) that has an error code of `invalid_argument`.
If the thread associated with the calling object isn't joinable, the function throws a [`system_error`](../standard-library/system-error-class.md) that has an error code of `invalid_argument`.

If the thread that's associated with the calling object is invalid, the function throws a `system_error` that has an error code of `no_such_process`.
If the thread associated with the calling object is invalid, the function throws a `system_error` that has an error code of `no_such_process`.
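
A small sketch of detaching a thread (illustrative only; the atomic flag stands in for whatever completion signal real code would use):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

int main()
{
    std::atomic<bool> done{false};

    std::thread background([&done] { done = true; });
    background.detach();   // the thread now runs independently of this object

    // After detach, the object no longer represents a thread of execution, so
    // joinable() is false and a second detach() would throw system_error.
    const bool owns_thread = background.joinable();   // false

    // A detached thread needs its own completion signal; don't let main return
    // while the thread might still touch 'done'.
    while (!done)
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    return owns_thread ? 1 : 0;
}
```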

## <a name="get_id"></a> `get_id`

@@ -85,9 +85,9 @@ Returns a unique identifier for the associated thread.
id get_id() const noexcept;
```

### Return Value
### Return value

A [`id`](#id_class) object that uniquely identifies the associated thread, or `id()` if no thread is associated with the object.
An [`id`](#id_class) object that uniquely identifies the associated thread, or `id()` if no thread is associated with the object.

## <a name="hardware_concurrency"></a> `hardware_concurrency`

@@ -97,15 +97,17 @@ Static method that returns an estimate of the number of hardware thread contexts
static unsigned int hardware_concurrency() noexcept;
```

### Return Value
### Return value

An estimate of the number of hardware thread contexts. If the value can't be computed or isn't well defined, this method returns 0.

### Microsoft Specific
**Microsoft specific**

`hardware_concurrency` is currently defined to return the number of logical processors, which corresponds to the number of hardware threads that can execute simultaneously. It takes into account the number of physical processors, the number of cores in each physical processor, and simultaneous multithreading on each single core.

However, on systems with more than 64 logical processors this number is capped by the number of logical processors in a single group; see [Processor Groups](/windows/win32/procthread/processor-groups).
`hardware_concurrency` returns the number of logical processors, which corresponds to the number of hardware threads that can execute simultaneously. It takes into account the number of physical processors, the number of cores in each physical processor, and simultaneous multithreading on each single core.

Before Windows 11 and Windows Server 2022, applications were limited by default to a single processor group, having at most 64 logical processors. This limited the number of concurrently executing threads to 64. For more information, see [Processor Groups](/windows/win32/procthread/processor-groups).

Starting with Windows 11 and Windows Server 2022, processes and their threads have processor affinities that by default span all processors in the system and across multiple groups on machines with more than 64 processors. The limit on the number of concurrent threads is now the total number of logical processors in the system.
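
The sketch below (example code, not from the article) uses the estimate to size a set of worker threads, falling back to one worker because the function may return 0:

```cpp
#include <algorithm>
#include <thread>
#include <vector>

int main()
{
    // hardware_concurrency may return 0 if the value isn't well defined.
    const unsigned int workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> pool;
    for (unsigned int i = 0; i < workers; ++i)
    {
        pool.emplace_back([i] { /* per-worker work would go here */ (void)i; });
    }
    for (auto& t : pool)
    {
        t.join();
    }
    return 0;
}
```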

## <a name="id_class"></a> `id` class

@@ -125,7 +127,7 @@ All default-constructed `thread::id` objects compare equal.

## <a name="join"></a> `join`

Blocks until the thread of execution that's associated with the calling object completes.
Blocks until the thread of execution associated with the calling object completes.

```cpp
void join();
@@ -143,7 +145,7 @@ Specifies whether the associated thread is joinable.
bool joinable() const noexcept;
```

### Return Value
### Return value

**`true`** if the associated thread is joinable; otherwise, **`false`**.
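
A minimal sketch of the usual pattern (illustrative only): join only when the object actually owns a thread, because joining a non-joinable `thread` throws `system_error`.

```cpp
#include <thread>

int main()
{
    std::thread worker([] { /* work */ });

    if (worker.joinable())   // true: the object owns a thread that hasn't been joined or detached
    {
        worker.join();
    }
    // After join, joinable() is false and calling join again would throw.
    return 0;
}
```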

@@ -159,9 +161,9 @@ Returns the implementation-specific type that represents the thread handle. The
native_handle_type native_handle();
```

### Return Value
### Return value

`native_handle_type` is defined as a Win32 `HANDLE` that's cast as `void *`.
`native_handle_type` is defined as a Win32 `HANDLE` cast as `void *`.
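
As an illustration only (the Win32 call isn't part of this article), the handle can be passed to ordinary Windows APIs such as `SetThreadPriority`:

```cpp
#define NOMINMAX          // keep windows.h from defining min/max macros
#include <windows.h>
#include <thread>

int main()
{
    std::thread worker([] { /* work */ });

    // native_handle() returns the underlying Win32 HANDLE stored as a void*.
    const HANDLE h = static_cast<HANDLE>(worker.native_handle());
    ::SetThreadPriority(h, THREAD_PRIORITY_ABOVE_NORMAL);

    worker.join();
    return 0;
}
```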

## <a name="op_eq"></a> `thread::operator=`

@@ -176,7 +178,7 @@ thread& operator=(thread&& Other) noexcept;
*`Other`*\
A `thread` object.

### Return Value
### Return value

`*this`

@@ -214,7 +216,7 @@ thread(thread&& Other) noexcept;
### Parameters

*`F`*\
An application-defined function to be executed by the thread.
An application-defined function to execute on the thread.

*`A`*\
A list of arguments to be passed to *`F`*.
@@ -224,9 +226,9 @@ An existing `thread` object.

### Remarks

The first constructor constructs an object that's not associated with a thread of execution. The value that's returned by a call to `get_id` for the constructed object is `thread::id()`.
The first constructor constructs an object that's not associated with a thread of execution. The value returned by `get_id` for the constructed object is `thread::id()`.

The second constructor constructs an object that's associated with a new thread of execution and executes the pseudo-function `INVOKE` that's defined in [`<functional>`](../standard-library/functional.md). If not enough resources are available to start a new thread, the function throws a [`system_error`](../standard-library/system-error-class.md) object that has an error code of `resource_unavailable_try_again`. If the call to *`F`* terminates with an uncaught exception, [`terminate`](../standard-library/exception-functions.md#terminate) is called.
The second constructor constructs an object that's associated with a new thread of execution. It executes the pseudo-function `INVOKE` defined in [`<functional>`](../standard-library/functional.md). If not enough resources are available to start a new thread, the function throws a [`system_error`](../standard-library/system-error-class.md) object that has an error code of `resource_unavailable_try_again`. If the call to *`F`* terminates with an uncaught exception, [`terminate`](../standard-library/exception-functions.md#terminate) is called.

The third constructor constructs an object that's associated with the thread that's associated with `Other`. `Other` is then set to a default-constructed state.
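
A short sketch exercising the three constructors (the `repeat_message` function and its arguments are invented for the example):

```cpp
#include <iostream>
#include <string>
#include <thread>

// Hypothetical callable used only to show argument passing.
void repeat_message(const std::string& message, int n)
{
    for (int i = 0; i < n; ++i)
    {
        std::cout << message << '\n';
    }
}

int main()
{
    std::thread empty;                                           // first constructor: no associated thread
    std::thread worker(repeat_message, std::string("tick"), 3);  // second constructor: runs INVOKE(F, A...)
    std::thread adopted(std::move(worker));                      // third constructor: adopts worker's thread

    // 'empty' reports thread::id(); the move leaves 'worker' default-constructed.
    const bool ok = (empty.get_id() == std::thread::id()) && !worker.joinable();

    adopted.join();
    return ok ? 0 : 1;
}
```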

Expand Down