
Spelling stdlib/public/concurrency #42443


Merged
merged 15 commits on Apr 21, 2022
10 changes: 5 additions & 5 deletions stdlib/public/Concurrency/Actor.cpp
@@ -235,7 +235,7 @@ void swift::runJobInEstablishedExecutorContext(Job *job) {
task->runInFullyEstablishedContext();

assert(ActiveTask::get() == nullptr &&
- "active task wasn't cleared before susspending?");
+ "active task wasn't cleared before suspending?");
} else {
// There's no extra bookkeeping to do for simple jobs besides swapping in
// the voucher.
@@ -492,7 +492,7 @@ class JobRef {
return { job, 0 };
}

- /// Return a reference to a job that hasn't been preprocesssed yet.
+ /// Return a reference to a job that hasn't been preprocessed yet.
static JobRef getUnpreprocessed(Job *job) {
assert(job && "passing a null job");
return { job, NeedsPreprocessing };
@@ -989,7 +989,7 @@ class DefaultActorImpl : public HeapObject {

/// Schedule an inline processing job. This can generally only be
/// done if we know nobody else is trying to do it at the same time,
- /// e.g. if this thread just sucessfully transitioned the actor from
+ /// e.g. if this thread just successfully transitioned the actor from
/// Idle to Scheduled.
void scheduleActorProcessJob(JobPriority priority,
bool hasActiveInlineJob);
@@ -1455,7 +1455,7 @@ Job * DefaultActorImpl::drainOne() {
// Dequeue the first job and set up a new head
newState = newState.withFirstJob(getNextJobInQueue(firstJob));
if (_status().compare_exchange_weak(oldState, newState,
- /* sucess */ std::memory_order_release,
+ /* success */ std::memory_order_release,
/* failure */ std::memory_order_acquire)) {
SWIFT_TASK_DEBUG_LOG("Drained first job %p from actor %p", firstJob, this);
traceActorStateTransition(this, oldState, newState);
@@ -1725,7 +1725,7 @@ static void runOnAssumedThread(AsyncTask *task, ExecutorRef executor,
// Note that this doesn't change the active task and so doesn't
// need to either update ActiveTask or flagAsRunning/flagAsSuspended.

- // If there's alreaady tracking info set up, just change the executor
+ // If there's already tracking info set up, just change the executor
// there and tail-call the task. We don't want these frames to
// potentially accumulate linearly.
if (oldTracking) {
2 changes: 1 addition & 1 deletion stdlib/public/Concurrency/AsyncFlatMapSequence.swift
@@ -38,7 +38,7 @@ extension AsyncSequence {
/// - Parameter transform: A mapping closure. `transform` accepts an element
/// of this sequence as its parameter and returns an `AsyncSequence`.
/// - Returns: A single, flattened asynchronous sequence that contains all
- /// elements in all the asychronous sequences produced by `transform`.
+ /// elements in all the asynchronous sequences produced by `transform`.
@preconcurrency
@inlinable
public __consuming func flatMap<SegmentOfResult: AsyncSequence>(
4 changes: 2 additions & 2 deletions stdlib/public/Concurrency/AsyncIteratorProtocol.swift
@@ -26,7 +26,7 @@ import Swift
/// conforms to `AsyncIteratorProtocol`. The following example shows a `Counter`
/// type that uses an inner iterator to monotonically generate `Int` values
/// until reaching a `howHigh` value. While this example isn't itself
- /// asychronous, it shows the shape of a custom sequence and iterator, and how
+ /// asynchronous, it shows the shape of a custom sequence and iterator, and how
/// to use it as if it were asynchronous:
///
/// struct Counter : AsyncSequence {
@@ -37,7 +37,7 @@ import Swift
/// let howHigh: Int
/// var current = 1
/// mutating func next() async -> Int? {
- /// // A genuinely asychronous implementation uses the `Task`
+ /// // A genuinely asynchronous implementation uses the `Task`
/// // API to check for cancellation here and return early.
/// guard current <= howHigh else {
/// return nil
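The doc comment in the hunks above quotes only fragments of the `Counter` example. For context (not part of this PR), a complete, runnable version of that shape might look like the following; the cancellation check is simplified to a comment, as in the docs:

```swift
struct Counter: AsyncSequence {
    typealias Element = Int
    let howHigh: Int

    struct AsyncIterator: AsyncIteratorProtocol {
        let howHigh: Int
        var current = 1

        mutating func next() async -> Int? {
            // A genuinely asynchronous implementation would use the Task API
            // to check for cancellation here and return early.
            guard current <= howHigh else { return nil }
            let result = current
            current += 1
            return result
        }
    }

    func makeAsyncIterator() -> AsyncIterator {
        AsyncIterator(howHigh: howHigh)
    }
}

// Iterate the sequence as if it were asynchronous (top-level await
// requires Swift 5.7 or later).
var total = 0
for await number in Counter(howHigh: 4) {
    total += number
}
print(total)  // 1 + 2 + 3 + 4 = 10
```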
4 changes: 2 additions & 2 deletions stdlib/public/Concurrency/AsyncStream.swift
@@ -179,7 +179,7 @@ public struct AsyncStream<Element> {
let storage: _Storage

/// Resume the task awaiting the next iteration point by having it return
- /// nomally from its suspension point with a given element.
+ /// normally from its suspension point with a given element.
///
/// - Parameter value: The value to yield from the continuation.
/// - Returns: A `YieldResult` that indicates the success or failure of the
@@ -305,7 +305,7 @@ public struct AsyncStream<Element> {
/// stream.
/// - onCancel: A closure to execute when canceling the stream's task.
///
- /// Use this convenience initializer when you have an asychronous function
+ /// Use this convenience initializer when you have an asynchronous function
/// that can produce elements for the stream, and don't want to invoke
/// a continuation manually. This initializer "unfolds" your closure into
/// an asynchronous stream. The created stream handles conformance
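For context (not part of this PR): the two hunks above touch the doc comments for `AsyncStream`'s continuation `yield` and its "unfolding" convenience initializer. A small, hypothetical sketch of both construction styles, with illustrative values:

```swift
// Manual-continuation style: yield three values, then finish. The default
// buffering policy is unbounded, so values yielded before iteration begins
// are buffered rather than dropped.
let counted = AsyncStream<Int> { continuation in
    for i in 1...3 {
        continuation.yield(i)
    }
    continuation.finish()
}

// The "unfolding" convenience initializer: the closure is called once per
// element, and returning nil ends the stream.
var next = 0
let unfolded = AsyncStream<Int>(unfolding: {
    next += 1
    return next <= 3 ? next : nil
})

// Top-level await requires Swift 5.7 or later.
var fromContinuation: [Int] = []
for await value in counted { fromContinuation.append(value) }

var fromUnfold: [Int] = []
for await value in unfolded { fromUnfold.append(value) }

print(fromContinuation, fromUnfold)  // [1, 2, 3] [1, 2, 3]
```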
@@ -50,8 +50,8 @@ extension AsyncSequence {
/// accepts an element of this sequence as its parameter and returns an
/// `AsyncSequence`. If `transform` throws an error, the sequence ends.
/// - Returns: A single, flattened asynchronous sequence that contains all
- /// elements in all the asychronous sequences produced by `transform`. The
- /// sequence ends either when the the last sequence created from the last
+ /// elements in all the asynchronous sequences produced by `transform`. The
+ /// sequence ends either when the last sequence created from the last
/// element from base sequence ends, or when `transform` throws an error.
@preconcurrency
@inlinable
14 changes: 7 additions & 7 deletions stdlib/public/Concurrency/AsyncThrowingStream.swift
@@ -51,7 +51,7 @@ import Swift
/// `Quake` instances every time it detects an earthquake. To receive callbacks,
/// callers set a custom closure as the value of the monitor's
/// `quakeHandler` property, which the monitor calls back as necessary. Callers
- /// can also set an `errorHandler` to receive asychronous error notifications,
+ /// can also set an `errorHandler` to receive asynchronous error notifications,
/// such as the monitor service suddenly becoming unavailable.
///
/// class QuakeMonitor {
@@ -201,7 +201,7 @@ public struct AsyncThrowingStream<Element, Failure: Error> {
let storage: _Storage

/// Resume the task awaiting the next iteration point by having it return
- /// nomally from its suspension point with a given element.
+ /// normally from its suspension point with a given element.
///
/// - Parameter value: The value to yield from the continuation.
/// - Returns: A `YieldResult` that indicates the success or failure of the
@@ -283,7 +283,7 @@ public struct AsyncThrowingStream<Element, Failure: Error> {
/// elements to the stream and terminate the stream when finished.
///
/// The `AsyncStream.Continuation` received by the `build` closure is
- /// appopriate for use in concurrent contexts. It is thread safe to send and
+ /// appropriate for use in concurrent contexts. It is thread safe to send and
/// finish; all calls are to the continuation are serialized. However, calling
/// this from multiple concurrent contexts could result in out-of-order
/// delivery.
@@ -292,7 +292,7 @@ public struct AsyncThrowingStream<Element, Failure: Error> {
/// initializer that produces 100 random numbers on a one-second interval,
/// calling `yield(_:)` to deliver each element to the awaiting call point.
/// When the `for` loop exits, the stream finishes by calling the
- /// continuation's `finish()` method. If the random number is divisble by 5
+ /// continuation's `finish()` method. If the random number is divisible by 5
/// with no remainder, the stream throws a `MyRandomNumberError`.
///
/// let stream = AsyncThrowingStream<Int, Error>(Int.self,
@@ -338,7 +338,7 @@ public struct AsyncThrowingStream<Element, Failure: Error> {
/// - produce: A closure that asynchronously produces elements for the
/// stream.
///
- /// Use this convenience initializer when you have an asychronous function
+ /// Use this convenience initializer when you have an asynchronous function
/// that can produce elements for the stream, and don't want to invoke
/// a continuation manually. This initializer "unfolds" your closure into
/// a full-blown asynchronous stream. The created stream handles adherence to
@@ -347,7 +347,7 @@ public struct AsyncThrowingStream<Element, Failure: Error> {
///
/// The following example shows an `AsyncThrowingStream` created with this
/// initializer that produces random numbers on a one-second interval. If the
- /// random number is divisble by 5 with no remainder, the stream throws a
+ /// random number is divisible by 5 with no remainder, the stream throws a
/// `MyRandomNumberError`.
///
/// let stream = AsyncThrowingStream<Int, Error> {
@@ -455,7 +455,7 @@ extension AsyncThrowingStream.Continuation {
}

/// Resume the task awaiting the next iteration point by having it return
- /// nomally from its suspension point.
+ /// normally from its suspension point.
///
/// - Returns: A `YieldResult` that indicates the success or failure of the
/// yield operation.
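For context (not part of this PR): the doc comments fixed above describe a random-number stream that throws when a value is divisible by 5. A compact, hypothetical version of that pattern, using fixed values instead of random numbers so the behavior is deterministic, and `MyNumberError` standing in for the docs' `MyRandomNumberError`:

```swift
struct MyNumberError: Error {}

let stream = AsyncThrowingStream<Int, Error> { continuation in
    for n in [3, 7, 10] {
        if n % 5 == 0 {
            // Divisible by 5 with no remainder: end the stream by throwing.
            continuation.finish(throwing: MyNumberError())
            return
        }
        continuation.yield(n)
    }
    continuation.finish()
}

// Top-level await requires Swift 5.7 or later.
var received: [Int] = []
var caughtError = false
do {
    for try await number in stream {
        received.append(number)
    }
} catch {
    caughtError = true
}
print(received, caughtError)  // [3, 7] true
```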
2 changes: 1 addition & 1 deletion stdlib/public/Concurrency/GlobalExecutor.cpp
@@ -25,7 +25,7 @@
/// threads from growing without limit.
///
/// Second, executors may own dedicated threads, or they may schedule
- /// work onto some some underlying executor. Dedicated threads can
+ /// work onto some underlying executor. Dedicated threads can
/// improve the responsiveness of a subsystem *locally*, but they impose
/// substantial costs which can drive down performance *globally*
/// if not used carefully. When an executor relies on running work
10 changes: 5 additions & 5 deletions stdlib/public/Concurrency/Task.cpp
@@ -106,15 +106,15 @@ FutureFragment::Status AsyncTask::waitFuture(AsyncTask *waitingTask,
auto fragment = futureFragment();

auto queueHead = fragment->waitQueue.load(std::memory_order_acquire);
- bool contextIntialized = false;
+ bool contextInitialized = false;
Reviewer comment:
This is a change to the code, not just to docs or comments, so someone other than me should review it.

while (true) {
switch (queueHead.getStatus()) {
case Status::Error:
case Status::Success:
SWIFT_TASK_DEBUG_LOG("task %p waiting on task %p, completed immediately",
waitingTask, this);
_swift_tsan_acquire(static_cast<Job *>(this));
- if (contextIntialized) waitingTask->flagAsRunning();
+ if (contextInitialized) waitingTask->flagAsRunning();
// The task is done; we don't need to wait.
return queueHead.getStatus();

@@ -128,8 +128,8 @@ FutureFragment::Status AsyncTask::waitFuture(AsyncTask *waitingTask,
break;
}

- if (!contextIntialized) {
- contextIntialized = true;
+ if (!contextInitialized) {
+ contextInitialized = true;
auto context =
reinterpret_cast<TaskFutureWaitAsyncContext *>(waitingTaskContext);
context->errorResult = nullptr;
@@ -1100,7 +1100,7 @@ static void swift_continuation_awaitImpl(ContinuationAsyncContext *context) {
return context->ResumeParent(context);
}

- // Load the current task (we alreaady did this in assertions builds).
+ // Load the current task (we already did this in assertions builds).
#ifdef NDEBUG
auto task = swift_task_getCurrent();
#endif
6 changes: 3 additions & 3 deletions stdlib/public/Concurrency/TaskGroup.swift
@@ -44,7 +44,7 @@ import Swift
/// =======================
///
/// You can cancel a task group and all of its child tasks
- /// by calling the `cancellAll()` method on the task group,
+ /// by calling the `cancelAll()` method on the task group,
/// or by canceling the task in which the group is running.
///
/// If you call `async(priority:operation:)` to create a new task in a canceled group,
@@ -118,7 +118,7 @@ public func withTaskGroup<ChildTaskResult, GroupResult>(
/// =======================
///
/// You can cancel a task group and all of its child tasks
- /// by calling the `cancellAll()` method on the task group,
+ /// by calling the `cancelAll()` method on the task group,
/// or by canceling the task in which the group is running.
///
/// If you call `async(priority:operation:)` to create a new task in a canceled group,
@@ -802,7 +802,7 @@ extension ThrowingTaskGroup: AsyncSequence {
/// it's valid to make a new iterator for the task group,
/// which you can use to iterate over the results of new tasks you add to the group.
/// You can also make a new iterator to resume iteration
- /// after a child task thows an error.
+ /// after a child task throws an error.
/// For example:
///
/// group.addTask { 1 }
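For context (not part of this PR): the corrected doc comments above refer to the group's `cancelAll()` method. A small sketch of the task-group APIs they describe, with illustrative values:

```swift
// Top-level await requires Swift 5.7 or later.
let sum = await withTaskGroup(of: Int.self) { group in
    for i in 1...3 {
        group.addTask { i }
    }

    // Collect every child result as it completes.
    var total = 0
    for await value in group {
        total += value
    }

    // Canceling here is a no-op for already-finished children, but any
    // task added to a canceled group is created in the canceled state.
    group.cancelAll()
    return total
}
print(sum)  // 6
```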
4 changes: 2 additions & 2 deletions stdlib/public/Concurrency/TaskPrivate.h
@@ -212,7 +212,7 @@ class TaskFutureWaitAsyncContext : public AsyncContext {
///
/// 32 bit systems with SWIFT_CONCURRENCY_ENABLE_PRIORITY_ESCALATION=1
///
- /// Flags Exeuction Lock Unused TaskStatusRecord *
+ /// Flags Execution Lock Unused TaskStatusRecord *
/// |----------------------|----------------------|----------------------|-------------------|
/// 32 bits 32 bits 32 bits 32 bits
///
@@ -732,7 +732,7 @@ retry:;
/// task. Otherwise, if we reset the voucher and priority escalation too early, the
/// thread may be preempted immediately before we can finish the enqueue of the
/// high priority task to the next location. We will then have a priority inversion
- /// of waiting for a low priority thread to enqueue a high priorty task.
+ /// of waiting for a low priority thread to enqueue a high priority task.
///
/// In order to do this correctly, we need enqueue-ing of a task to the next
/// executor, to have a "hand-over-hand locking" type of behaviour - until the
2 changes: 1 addition & 1 deletion stdlib/public/Distributed/DistributedActor.swift
@@ -135,7 +135,7 @@ extension DistributedActor {

/// Executes the passed 'body' only when the distributed actor is local instance.
///
- /// The `Self` passed to the the body closure is isolated, meaning that the
+ /// The `Self` passed to the body closure is isolated, meaning that the
/// closure can be used to call non-distributed functions, or even access actor
/// state.
///
2 changes: 1 addition & 1 deletion stdlib/public/Distributed/DistributedActorSystem.swift
@@ -95,7 +95,7 @@ public protocol DistributedActorSystem: Sendable {
Act.ID == ActorID

/// Called during when a distributed actor is deinitialized, or fails to initialize completely (e.g. by throwing
- /// out of an `init` that did not completely initialize all of the the actors stored properties yet).
+ /// out of an `init` that did not completely initialize all of the actors stored properties yet).
///
/// This method is guaranteed to be called at-most-once for a given id (assuming IDs are unique,
/// and not re-cycled by the system), i.e. if it is called during a failure to initialize completely,