@@ -50,6 +50,7 @@ The options listed in the following table:
|**`/I`***`pathname`*| Sets path for include file. A maximum of 10 **`/I`** options is allowed. |
|**`/nologo`**| Suppresses messages for successful assembly. |
|**`/omf`**| Generates object module file format (OMF) type of object module. **`/omf`** implies **`/c`**. ML.exe doesn't support linking OMF objects.<br /> Not available in ml64.exe. |
+|**`/quiet`**| Suppresses 'Assembling' message. Available in Visual Studio 17.6 and later. |
|**`/Sa`**| Turns on listing of all available information. |
|**`/safeseh`**| Marks the object file: either it contains no exception handlers, or it contains exception handlers that are all declared with [`.SAFESEH`](dot-safeseh.md).<br /> Not available in ml64.exe. |
|**`/Sf`**| Adds the first-pass listing to the listing file. |
|[`is_execution_policy` Struct](is-execution-policy-struct.md)|Detects execution policies to exclude certain function signatures from otherwise ambiguous overload resolution participation.|
-|[`parallel_policy` Class](parallel-policy-class.md)|Used as a unique type to disambiguate parallel algorithm overloading. Indicates that a parallel algorithm's execution may be parallelized.|
-|[`parallel_unsequenced_policy` Class](parallel-unsequenced-policy-class.md)|Used as a unique type to disambiguate parallel algorithm overloading. Indicates that a parallel algorithm's execution may be parallelized and vectorized.|
-|[`sequenced_policy` Class](sequenced-policy-class.md)|Used as a unique type to disambiguate parallel algorithm overloading. Specifies that a parallel algorithm's execution may not be parallelized.|
+|[`parallel_policy` class](parallel-policy-class.md)|Used to disambiguate parallel algorithm overloading. Indicates that a parallel algorithm's execution may be parallelized.|
+|[`parallel_unsequenced_policy` class](parallel-unsequenced-policy-class.md)|Used as a unique type to disambiguate parallel algorithm overloading. Indicates that a parallel algorithm's execution may be parallelized and vectorized.|
+|[`sequenced_policy` class](sequenced-policy-class.md)|Used as a unique type to disambiguate parallel algorithm overloading. Specifies that a parallel algorithm's execution may not be parallelized.|
-### Microsoft Specific
-
-When `parallel_policy` or `parallel_unsequenced_policy` cause the algorithm to be parallelized, the parallel execution uses Windows Thread Pool; see [Thread Pools](/windows/win32/procthread/thread-pools). The number of concurrent threads is limited to the thread pool default (currently 500). The number of threads concurrently executing on hardware is currently limited by the number of logical processors in the current process's processor group, so it is effectively limited to 64; see [Processor Groups](/windows/win32/procthread/processor-groups). The maximum number of chunks for data partitioning is also currently based on the number of logical processors in the current process's processor group.
+### Microsoft specific
+
+Parallel algorithms execute on an unspecified number of threads and divide the work into an unspecified number of data partitioning "chunks." The Windows thread pool manages the number of threads. The implementation tries to make use of the available logical processors, which corresponds to the number of hardware threads that can execute simultaneously.
+
+Specifying `parallel_policy` or `parallel_unsequenced_policy` causes standard library algorithms to run in parallel using the Windows Thread Pool. The number of concurrent threads, and thus the number of "chunks" for data partitioning, is limited to 500 threads because that's the default number of thread pool threads. For more information, see [Thread Pools](/windows/win32/procthread/thread-pools).
+
+Before Windows 11 and Windows Server 2022, applications were limited by default to a single processor group having at most 64 logical processors. This limited the number of concurrently executing threads to 64. For more information, see [Processor Groups](/windows/win32/procthread/processor-groups).
+
+Starting with Windows 11 and Windows Server 2022, processes and their threads have processor affinities that by default span all processors in the system and across multiple groups on machines with more than 64 processors. The limit on the number of concurrent threads is now the total number of logical processors in the system.
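As a point of reference for the behavior described above, here is a minimal sketch of passing `std::execution::par` (an instance of `parallel_policy`) to a standard algorithm; the data and workload are arbitrary placeholders, not part of the documented change:

```cpp
#include <algorithm>
#include <execution>
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    // Arbitrary sample data; any large range works the same way.
    std::vector<double> values(10'000'000);
    std::iota(values.begin(), values.end(), 0.0);

    // std::execution::par (a parallel_policy object) lets the implementation
    // split the sort across Windows thread pool threads, as described above.
    // std::execution::par_unseq would additionally permit vectorization, and
    // std::execution::seq keeps the execution sequential.
    std::sort(std::execution::par, values.begin(), values.end(), std::greater<>{});

    std::cout << values.front() << '\n'; // largest element after the descending sort
}
```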
A [`launch`](../standard-library/future-enums.md#launch) value.
### Remarks
@@ -48,24 +48,28 @@ The second function returns a `future<Ty>` object whose *associated asynchronous
Unless `decay<Fn>::type` is a type other than launch, the second function doesn't participate in overload resolution.
-The C++ standard states that if the policy is `launch::async`, the function behaves as if it invokes the callable object in a new thread. This means that while it typically results in creating a new thread, the implementation may use other mechanisms to achieve equivalent behavior. However, the Microsoft implementation currently does not conform strictly to this behavior. It obtains its threads from the Windows ThreadPool, which may provide a recycled thread rather than a new one. This means that the `launch::async` policy is effectively implemented as `launch::async|launch::deferred`. Another implication of the ThreadPool-based implementation is that there's no guarantee that thread-local variables will be destroyed when the thread completes. If the thread is recycled and provided to a new call to `async`, the old variables will still exist. We recommend that you avoid using thread-local variables with `async`.
+The C++ standard states that if the policy is `launch::async`, the function behaves as if it invokes the callable object in a new thread. This means that while it typically results in creating a new thread, the implementation may use other mechanisms to achieve equivalent behavior. However, the Microsoft implementation currently doesn't conform strictly to this behavior. It obtains threads from the Windows ThreadPool, which may provide a recycled thread rather than a new one. This means that the `launch::async` policy is effectively implemented as `launch::async|launch::deferred`. Another implication of the ThreadPool-based implementation is that there's no guarantee that thread-local variables are destroyed when the thread completes. If the thread is recycled and provided to a new call to `async`, the old variables still exist. We recommend that you avoid using thread-local variables with `async`.
-If *policy* is `launch::deferred`, the function marks its associated asynchronous state as holding a *deferred function* and returns. The first call to any non-timed function that waits for the associated asynchronous state to be ready in effect calls the deferred function by evaluating `INVOKE(dfn, dargs..., Ty)`.
+If *`policy`* is `launch::deferred`, the function marks its associated asynchronous state as holding a *deferred function* and returns. The first call to any nontimed function that waits for the associated asynchronous state to be ready in effect calls the deferred function by evaluating `INVOKE(dfn, dargs..., Ty)`.
-In all cases, the associated asynchronous state of the `future` object isn't set to *ready* until the evaluation of `INVOKE(dfn, dargs..., Ty)` completes, either by throwing an exception or by returning normally. The result of the associated asynchronous state is an exception if one was thrown, or any value that's returned by the evaluation.
+In all cases, the associated asynchronous state of the `future` object isn't set to *ready* until the evaluation of `INVOKE(dfn, dargs..., Ty)` completes, either by throwing an exception or by returning normally. The result of the associated asynchronous state is an exception if one was thrown, or the value the evaluation returns.
> [!NOTE]
> For a `future`—or the last [`shared_future`](../standard-library/shared-future-class.md)—that's attached to a task started with `std::async`, the destructor blocks if the task has not completed; that is, it blocks if this thread did not yet call `.get()` or `.wait()` and the task is still running. If a `future` obtained from `std::async` is moved outside the local scope, other code that uses it must be aware that its destructor may block for the shared state to become ready.
The pseudo-function `INVOKE` is defined in [`<functional>`](../standard-library/functional.md).
-### Microsoft Specific
+**Microsoft specific**
-When the passed function is executed asynchronously, it's executed on Windows Thread Pool; see [Thread Pools](/windows/win32/procthread/thread-pools). The number of concurrent threads is limited to the thread pool default (currently 500). The number of threads concurrently executing on hardware is currently limited by the number of logical processor in the process's processor group, so it's effectively limited to 64; see [Processor Groups](/windows/win32/procthread/processor-groups).
+When the passed function is executed asynchronously, it executes on the Windows Thread Pool. For more information, see [Thread Pools](/windows/win32/procthread/thread-pools). The number of concurrent threads is limited to the thread pool default, which is 500 threads.
+
+Before Windows 11 and Windows Server 2022, applications were limited by default to a single processor group having at most 64 logical processors. This limited the number of concurrently executing threads to 64. For more information, see [Processor Groups](/windows/win32/procthread/processor-groups).
+
+Starting with Windows 11 and Windows Server 2022, processes and their threads have processor affinities that by default span all processors in the system and across multiple groups on machines with more than 64 processors. The limit on the number of concurrent threads is now the total number of logical processors in the system.
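To make the `launch::async`/`launch::deferred` distinction above concrete, here is a minimal sketch using only standard `<future>` facilities; the worker function and its range are arbitrary placeholders, not part of the documented behavior:

```cpp
#include <future>
#include <iostream>

// Placeholder workload for illustration only.
long long sum_range(long long first, long long last)
{
    long long total = 0;
    for (long long i = first; i < last; ++i)
    {
        total += i;
    }
    return total;
}

int main()
{
    // launch::async requests asynchronous execution; in this implementation the
    // work runs on a Windows thread pool thread, which may be a recycled thread.
    std::future<long long> asynchronous = std::async(std::launch::async, sum_range, 0LL, 1'000'000LL);

    // launch::deferred stores a deferred function; it runs on the first
    // non-timed wait, here the call to get(), on the waiting thread.
    std::future<long long> deferred = std::async(std::launch::deferred, sum_range, 0LL, 1'000'000LL);

    std::cout << "async:    " << asynchronous.get() << '\n';
    std::cout << "deferred: " << deferred.get() << '\n';

    // If get() or wait() were never called, the destructor of the future
    // returned by the launch::async call would block until the task finished.
}
```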
-Returns a reference to the [error_category](../standard-library/error-category-class.md) object that characterizes errors that are associated with `future` objects.
+Returns a reference to the [`error_category`](../standard-library/error-category-class.md) object that characterizes errors that are associated with `future` objects.
-Creates an [error_condition](../standard-library/error-condition-class.md) together with the [error_category](../standard-library/error-category-class.md) object that characterizes [future](../standard-library/future-class.md) errors.
+Creates an [`error_condition`](../standard-library/error-condition-class.md) together with the [`error_category`](../standard-library/error-category-class.md) object that characterizes [`future`](../standard-library/future-class.md) errors.
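For context, a minimal sketch of where these pieces appear in practice: a `future_error` carries an `error_code` in the category returned by `future_category`, and an `error_condition` built from `future_errc` is one way to test it. The scenario (retrieving a future twice) is just an arbitrary way to trigger such an error, not part of the documented change:

```cpp
#include <future>
#include <iostream>

int main()
{
    std::promise<int> p;
    std::future<int> first = p.get_future();

    try
    {
        // Retrieving the future a second time throws future_error with
        // future_errc::future_already_retrieved.
        std::future<int> second = p.get_future();
    }
    catch (const std::future_error& e)
    {
        // The exception's error_code belongs to the category returned by future_category().
        std::cout << std::boolalpha
                  << "category name:      " << e.code().category().name() << '\n'
                  << "is future_category: " << (e.code().category() == std::future_category()) << '\n'
                  << "matches condition:  "
                  << (e.code() == std::make_error_condition(std::future_errc::future_already_retrieved))
                  << '\n';
    }
}
```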