Spelling benchmark #42457

Merged: 33 commits (merged Apr 25, 2022)

Commits
61cf97c  spelling: approximate (jsoref, Apr 17, 2022)
64e2f7b  spelling: available (jsoref, Apr 17, 2022)
59a1b26  spelling: benchmarks (jsoref, Apr 17, 2022)
a249504  spelling: between (jsoref, Apr 17, 2022)
a252802  spelling: calculation (jsoref, Apr 17, 2022)
89ace02  spelling: characterization (jsoref, Apr 17, 2022)
6d87366  spelling: coefficient (jsoref, Apr 17, 2022)
49284e4  spelling: computation (jsoref, Apr 17, 2022)
8a0bb3f  spelling: deterministic (jsoref, Apr 17, 2022)
f019ebc  spelling: divisor (jsoref, Apr 17, 2022)
afb246e  spelling: encounter (jsoref, Apr 17, 2022)
8d88062  spelling: expected (jsoref, Apr 17, 2022)
ea8b681  spelling: fibonacci (jsoref, Apr 17, 2022)
b7a5ef3  spelling: fulfill (jsoref, Apr 17, 2022)
c10c85a  spelling: implements (jsoref, Apr 19, 2022)
d8ba65d  spelling: into (jsoref, Apr 17, 2022)
922cb24  spelling: intrinsic (jsoref, Apr 17, 2022)
8de3525  spelling: markdown (jsoref, Apr 17, 2022)
ea55670  spelling: measure (jsoref, Apr 17, 2022)
7a25a0e  spelling: occurrences (jsoref, Apr 17, 2022)
ccd8151  spelling: omitted (jsoref, Apr 17, 2022)
24a42b9  spelling: partition (jsoref, Apr 17, 2022)
4958754  spelling: performance (jsoref, Apr 17, 2022)
ffe6a79  spelling: practice (jsoref, Apr 17, 2022)
2f8c3a3  spelling: preemptive (jsoref, Apr 17, 2022)
59c6a3e  spelling: repeated (jsoref, Apr 17, 2022)
6f79ab4  spelling: requirements (jsoref, Apr 17, 2022)
6b6e9ba  spelling: requires (jsoref, Apr 17, 2022)
fa1db32  spelling: response (jsoref, Apr 17, 2022)
95d9a25  spelling: supports (jsoref, Apr 17, 2022)
af6b0c6  spelling: unknown (jsoref, Apr 17, 2022)
614eb29  spelling: utilities (jsoref, Apr 17, 2022)
d5f86bb  spelling: verbose (jsoref, Apr 17, 2022)

2 changes: 1 addition & 1 deletion benchmark/CMakeLists.txt
@@ -241,7 +241,7 @@ set(SWIFT_BENCHMARK_EXTRA_FLAGS "" CACHE STRING
"Extra options to pass to swiftc when building the benchmarks")

set(SWIFT_BENCHMARK_UNOPTIMIZED_DRIVER NO CACHE BOOL
"Build the benchmark driver utilites without optimization (default: no)")
"Build the benchmark driver utilities without optimization (default: no)")

if (SWIFT_BENCHMARK_BUILT_STANDALONE)
# This option's value must match the value of the same option used when
6 changes: 3 additions & 3 deletions benchmark/scripts/Benchmark_Driver
@@ -20,7 +20,7 @@ Example:

Use `Benchmark_Driver -h` for help on available commands and options.

- class `BenchmarkDriver` runs performance tests and impements the `run` COMMAND.
+ class `BenchmarkDriver` runs performance tests and implements the `run` COMMAND.
class `BenchmarkDoctor` analyzes performance tests, implements `check` COMMAND.

"""
@@ -544,7 +544,7 @@ class BenchmarkDoctor(object):
caveat = "" if setup == 0 else " (excluding the setup overhead)"
log("'%s' execution took at least %d μs%s.", name, runtime, caveat)

- def factor(base): # suitable divisior that's integer power of base
+ def factor(base): # suitable divisor that's integer power of base
return int(
pow(base, math.ceil(math.log(runtime / float(threshold), base)))
)
@@ -718,7 +718,7 @@ class BenchmarkDoctor(object):
return measurements

def analyze(self, benchmark_measurements):
"""Analyze whether benchmark fullfills all requirtements."""
"""Analyze whether benchmark fulfills all requirements."""
self.log.debug("Analyzing %s", benchmark_measurements["name"])
for rule in self.requirements:
rule(benchmark_measurements)
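
Aside on the hunk above: `factor` picks the smallest integer power of `base` that, used as a divisor, brings `runtime` down to the `threshold`. A minimal standalone sketch of the same arithmetic (the `runtime` and `threshold` values here are made up; in the driver they come from the enclosing scope):

    import math

    runtime = 12000.0   # hypothetical measured runtime, in microseconds
    threshold = 1000.0  # hypothetical per-iteration budget, in microseconds

    def factor(base):
        # Smallest integer power of `base` that is >= runtime / threshold.
        return int(pow(base, math.ceil(math.log(runtime / threshold, base))))

    print(factor(2))   # 16, since 2**4 is the first power of 2 >= 12
    print(factor(10))  # 100, since 10**2 is the first power of 10 >= 12
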
16 changes: 8 additions & 8 deletions benchmark/scripts/compare_perf_tests.py
@@ -22,7 +22,7 @@ class `PerformanceTestSamples` is collection of `Sample`s and their statistics.
class `PerformanceTestResult` is a summary of performance test execution.
class `LogParser` converts log files into `PerformanceTestResult`s.
class `ResultComparison` compares new and old `PerformanceTestResult`s.
- class `TestComparator` analyzes changes betweeen the old and new test results.
+ class `TestComparator` analyzes changes between the old and new test results.
class `ReportFormatter` creates the test comparison report in specified format.

"""
@@ -111,7 +111,7 @@ def exclude_outliers(self, top_only=False):

Experimentally, this rule seems to perform well-enough on the
benchmark runtimes in the microbenchmark range to filter out
- the environment noise caused by preemtive multitasking.
+ the environment noise caused by preemptive multitasking.
"""
lo = (
0
@@ -205,7 +205,7 @@ def running_mean_variance(stats, x):

@property
def cv(self):
"""Coeficient of Variation (%)."""
"""Coefficient of Variation (%)."""
return (self.sd / self.mean) if self.mean else 0

@property
@@ -225,7 +225,7 @@ class PerformanceTestResult(object):
Reported by the test driver (Benchmark_O, Benchmark_Onone, Benchmark_Osize
or Benchmark_Driver).

- It suppors 2 log formats emitted by the test driver. Legacy format with
+ It supports 2 log formats emitted by the test driver. Legacy format with
statistics for normal distribution (MEAN, SD):
#,TEST,SAMPLES,MIN(μs),MAX(μs),MEAN(μs),SD(μs),MEDIAN(μs),MAX_RSS(B)
And new quantiles format with variable number of columns:
@@ -311,7 +311,7 @@ def merge(self, r):
"""Merge two results.

Recomputes min, max and mean statistics. If all `samples` are
- avaliable, it recomputes all the statistics.
+ available, it recomputes all the statistics.
The use case here is comparing test results parsed from concatenated
log files from multiple runs of benchmark driver.
"""
@@ -514,12 +514,12 @@ def results_from_file(log_file):


class TestComparator(object):
"""Analyzes changes betweeen the old and new test results.
"""Analyzes changes between the old and new test results.

It determines which tests were `added`, `removed` and which can be
compared. It then splits the `ResultComparison`s into 3 groups according to
the `delta_threshold` by the change in performance: `increased`,
- `descreased` and `unchanged`. Whole computaion is performed during
+ `descreased` and `unchanged`. Whole computation is performed during
initialization and results are provided as properties on this object.

The lists of `added`, `removed` and `unchanged` tests are sorted
@@ -576,7 +576,7 @@ def partition(items, p):


class ReportFormatter(object):
"""Creates the report from perfromance test comparison in specified format.
"""Creates the report from performance test comparison in specified format.

`ReportFormatter` formats the `PerformanceTestResult`s and
`ResultComparison`s provided by `TestComparator` into report table.
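
For context, the `cv` property fixed above is the textbook coefficient of variation: the standard deviation expressed relative to the mean. A self-contained sketch with Python's `statistics` module (sample values are invented):

    import statistics

    samples = [1025, 1050, 1075, 1100]  # hypothetical runtimes in microseconds

    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    cv = (sd / mean) if mean else 0  # same zero-mean guard as the property
    print("CV = {:.2%}".format(cv))  # about 3% relative variation
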
4 changes: 2 additions & 2 deletions benchmark/scripts/run_smoke_bench
@@ -15,7 +15,7 @@
#
# Performs a very fast check which benchmarks regressed and improved.
#
- # Initially runs the benchmars with a low sample count and just re-runs those
+ # Initially runs the benchmarks with a low sample count and just re-runs those
# benchmarks which differ.
# Also reports code size differences.
#
@@ -224,7 +224,7 @@ def measure(driver, tests, i):


def merge(results, other_results):
""""Merge the other PreformanceTestResults into the first dictionary."""
""""Merge the other PerformanceTestResults into the first dictionary."""
for test, result in other_results.items():
results[test].merge(result)
return results
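
The header comment corrected above summarizes run_smoke_bench's strategy: one cheap low-sample pass over everything, then re-runs restricted to the benchmarks that differ. A deliberately simplified sketch of that idea (the `measure_all` callable and the result shapes are assumptions, not the script's real interfaces):

    def smoke_bench(measure_all, tests, max_reruns=3):
        # Pass 1 measures every benchmark cheaply; each later pass
        # re-measures only the benchmarks whose results still differ.
        suspects = list(tests)
        for _ in range(max_reruns):
            if not suspects:
                break
            old = measure_all("old-build", suspects)
            new = measure_all("new-build", suspects)
            suspects = [t for t in suspects if old[t] != new[t]]
        return suspects  # benchmarks that consistently regressed or improved

    # Toy run with a fake measurer that reports identical results:
    print(smoke_bench(lambda build, ts: dict.fromkeys(ts, 1), ["t1", "t2"]))  # []
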
10 changes: 5 additions & 5 deletions benchmark/scripts/test_Benchmark_Driver.py
@@ -135,12 +135,12 @@ def test_output_dir(self):
self.assertIsNone(parse_args(["run"]).output_dir)
self.assertEqual(parse_args(["run", "--output-dir", "/log"]).output_dir, "/log")

- def test_check_supports_vebose_output(self):
+ def test_check_supports_verbose_output(self):
self.assertFalse(parse_args(["check"]).verbose)
self.assertTrue(parse_args(["check", "-v"]).verbose)
self.assertTrue(parse_args(["check", "--verbose"]).verbose)

- def test_check_supports_mardown_output(self):
+ def test_check_supports_markdown_output(self):
self.assertFalse(parse_args(["check"]).markdown)
self.assertTrue(parse_args(["check", "-md"]).markdown)
self.assertTrue(parse_args(["check", "--markdown"]).markdown)
@@ -486,7 +486,7 @@ def assert_log_written(out, log_file, content):

shutil.rmtree(temp_dir)

- def test_deterministing_hashing(self):
+ def test_deterministic_hashing(self):
cmd = ["printenv", "SWIFT_DETERMINISTIC_HASHING"]
driver = BenchmarkDriver(["no args"], tests=["ignored"])
self.assertEqual(driver._invoke(cmd).strip(), "1")
@@ -839,7 +839,7 @@ def test_benchmark_runtime_range(self):
quantum used by macos scheduler. Linux scheduler's quantum is 6ms.
Driver yielding the process before the 10ms quantum elapses helped
a lot, but benchmarks with runtimes under 1ms usually exhibit a strong
- mode which is best for accurate performance charaterization.
+ mode which is best for accurate performance characterization.
To minimize the number of involuntary context switches that corrupt our
measurements, we should strive to stay in the microbenchmark range.

@@ -999,7 +999,7 @@ def test_benchmark_has_constant_memory_use(self):
doctor.analyze(
{
# The threshold of 15 pages was estimated from previous
- # measurements. The normal range should be probably aproximated
+ # measurements. The normal range should be probably approximate
# by a function instead of a simple constant.
# TODO: re-evaluate normal range from whole SBS
"name": "ConstantMemory",
6 changes: 3 additions & 3 deletions benchmark/scripts/test_compare_perf_tests.py
@@ -247,7 +247,7 @@ def test_init_quantiles(self):
def test_init_delta_quantiles(self):
# #,TEST,SAMPLES,MIN(μs),𝚫MEDIAN,𝚫MAX
# 2-quantile from 2 samples in repeated min, when delta encoded,
- # the difference is 0, which is ommited -- only separator remains
+ # the difference is 0, which is omitted -- only separator remains
log = "202,DropWhileArray,2,265,,22"
r = PerformanceTestResult(log.split(","), quantiles=True, delta=True)
self.assertEqual((r.num_samples, r.min, r.median, r.max), (2, 265, 265, 287))
@@ -257,7 +257,7 @@ def test_init_delta_quantiles(self):
def test_init_oversampled_quantiles(self):
"""When num_samples is < quantile + 1, some of the measurements are
repeated in the report summary. Samples should contain only true
- values, discarding the repetated artifacts from quantile estimation.
+ values, discarding the repeated artifacts from quantile estimation.

The test string is slightly massaged output of the following R script:
subsample <- function(x, q) {
@@ -517,7 +517,7 @@ def assert_report_contains(self, texts, report):

class TestLogParser(unittest.TestCase):
def test_parse_results_csv(self):
"""Ignores uknown lines, extracts data from supported formats."""
"""Ignores unknown lines, extracts data from supported formats."""
log = """#,TEST,SAMPLES,MIN(us),MAX(us),MEAN(us),SD(us),MEDIAN(us)
7,Array.append.Array.Int?,20,10,10,10,0,10
21,Bridging.NSArray.as!.Array.NSString,20,11,11,11,0,11
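
The comment fixed in the first hunk above documents a subtlety of the delta-encoded quantile format: a delta of 0 is omitted entirely, leaving only the comma separator. A decoding sketch for the exact row used in `test_init_delta_quantiles` (standalone illustration, not the parser's code):

    # "202,DropWhileArray,2,265,,22": SAMPLES=2, MIN=265, then delta-encoded
    # MEDIAN and MAX; the empty field is an omitted zero delta.
    row = "202,DropWhileArray,2,265,,22".split(",")
    num_samples, minimum = int(row[2]), int(row[3])
    deltas = [int(d) if d else 0 for d in row[4:]]

    values = [minimum]
    for d in deltas:
        values.append(values[-1] + d)  # cumulative sum rebuilds the quantiles

    print(num_samples, values)  # 2 [265, 265, 287] -- matches (2, 265, 265, 287)
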
4 changes: 2 additions & 2 deletions benchmark/scripts/test_utils.py
@@ -69,7 +69,7 @@ def __init__(self, responses=None):
def expect(self, call_args, response):
"""Expect invocation of tested method with given arguments.

- Stores the canned reponse in the `respond` dictionary.
+ Stores the canned response in the `respond` dictionary.
"""
call_args = tuple(call_args)
self.expected.append(call_args)
@@ -83,7 +83,7 @@ def assert_called_with(self, expected_args):
)

def assert_called_all_expected(self):
"""Verify that all expeced invocations of tested method were called."""
"""Verify that all expected invocations of tested method were called."""
assert self.calls == self.expected, "\nExpected: {0}, \n Called: {1}".format(
self.expected, self.calls
)
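
The docstrings fixed above belong to a hand-rolled test double: `expect` records an anticipated invocation plus its canned response, and `assert_called_all_expected` checks the recorded calls afterwards. A condensed sketch of the pattern (shape reconstructed from the fragments shown; the real Mock in test_utils.py carries more machinery):

    class MiniMock:
        def __init__(self):
            self.expected, self.calls, self.respond = [], [], {}

        def expect(self, call_args, response):
            call_args = tuple(call_args)
            self.expected.append(call_args)     # invocation we anticipate
            self.respond[call_args] = response  # canned response to return

        def __call__(self, *args):
            self.calls.append(args)
            return self.respond[args]

        def assert_called_all_expected(self):
            assert self.calls == self.expected, (self.expected, self.calls)

    m = MiniMock()
    m.expect(["git", "status"], "clean")
    print(m("git", "status"))  # clean
    m.assert_called_all_expected()
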
2 changes: 1 addition & 1 deletion benchmark/single-source/Breadcrumbs.swift
@@ -323,7 +323,7 @@ class UTF16ToIdxRange: BenchmarkBase {
}

/// Split a string into `count` random slices and convert their index ranges
- /// into into UTF-16 offset pairs.
+ /// into UTF-16 offset pairs.
class IdxToUTF16Range: BenchmarkBase {
let count: Int
var inputIndices: [Range<String.Index>] = []
2 changes: 1 addition & 1 deletion benchmark/single-source/LuhnAlgoEager.swift
@@ -72,7 +72,7 @@ extension MapSomeSequenceView: Sequence {
}

// now extend a lazy collection to return that view
- // from a call to mapSome. In pracice, when doing this,
+ // from a call to mapSome. In practice, when doing this,
// you should do it for all the lazy wrappers
// (i.e. random-access, forward and sequence)
extension LazyCollectionProtocol {
4 changes: 2 additions & 2 deletions benchmark/single-source/LuhnAlgoLazy.swift
@@ -72,7 +72,7 @@ extension MapSomeSequenceView: Sequence {
}

// now extend a lazy collection to return that view
- // from a call to mapSome. In pracice, when doing this,
+ // from a call to mapSome. In practice, when doing this,
// you should do it for all the lazy wrappers
// (i.e. random-access, forward and sequence)
extension LazyCollectionProtocol {
@@ -219,7 +219,7 @@ let combineDoubleDigits = {
(10...18).contains($0) ? $0-9 : $0
}

- // first, the lazy version of checksum calcuation
+ // first, the lazy version of checksum calculation
let lazychecksum = { (ccnum: String) -> Bool in
ccnum.lazy
|> reverse
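
The `combineDoubleDigits` closure in the hunk above folds a doubled two-digit value back into one digit by subtracting 9, which is the heart of the Luhn checksum. A plain-Python rendering of the classic algorithm (an illustration of the technique, not the Swift benchmark's pipeline):

    def luhn_valid(ccnum: str) -> bool:
        digits = [int(c) for c in reversed(ccnum) if c.isdigit()]
        total = 0
        for i, d in enumerate(digits):
            if i % 2 == 1:  # double every second digit from the right
                d *= 2
                if d > 9:   # same fold as (10...18).contains($0) ? $0 - 9 : $0
                    d -= 9
            total += d
        return total % 10 == 0

    print(luhn_valid("4111 1111 1111 1111"))  # True -- a well-known test number
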
2 changes: 1 addition & 1 deletion benchmark/single-source/SIMDReduceInteger.swift
@@ -120,7 +120,7 @@ public func run_SIMDReduceInt32x4_init(_ n: Int) {
@inline(never)
public func run_SIMDReduceInt32x4_cast(_ n: Int) {
// Morally it seems like we "should" be able to use withMemoryRebound
- // to SIMD4<Int32>, but that function requries that the sizes match in
+ // to SIMD4<Int32>, but that function requires that the sizes match in
// debug builds, so this is pretty ugly. The following "works" for now,
// but is probably in violation of the formal model (the exact rules
// for "assumingMemoryBound" are not clearly documented). We need a
2 changes: 1 addition & 1 deletion benchmark/single-source/SortArrayInClass.swift
@@ -8,7 +8,7 @@
// See https://swift.org/LICENSE.txt for license information
// See https://swift.org/CONTRIBUTORS.txt for the list of Swift project authors
//
- // This benchmark is derived from user code that encoutered a major
+ // This benchmark is derived from user code that encountered a major
// performance problem in normal usage. Contributed by Saleem
// Abdulrasool (compnerd).
//
4 changes: 2 additions & 2 deletions benchmark/single-source/SortIntPyramids.swift
@@ -1,8 +1,8 @@
import TestsUtils

- // This benchmark aims to measuare heapSort path of stdlib sorting function.
+ // This benchmark aims to measure heapSort path of stdlib sorting function.
// Datasets in this benchmark are influenced by stdlib partition function,
- // therefore if stdlib partion implementation changes we should correct these
+ // therefore if stdlib partition implementation changes we should correct these
// datasets or disable/skip this benchmark
public let benchmarks = [
BenchmarkInfo(
2 changes: 1 addition & 1 deletion benchmark/single-source/WordCount.swift
@@ -245,7 +245,7 @@ public func run_WordCountUniqueUTF16(_ n: Int) {
}

/// Returns an array of all words in the supplied string, along with their
- /// number of occurances. The array is sorted by decreasing frequency.
+ /// number of occurrences. The array is sorted by decreasing frequency.
/// (Words are case-sensitive and only support a limited subset of Unicode.)
@inline(never)
func histogram<S: Sequence>(for words: S) -> [(String, Int)]
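
The doc comment fixed above describes WordCount's histogram helper: count each word's occurrences, then sort by decreasing frequency. In Python the same behavior is essentially `collections.Counter` (an equivalent illustration, not the Swift implementation):

    from collections import Counter

    def histogram(words):
        # most_common() yields (word, count) pairs sorted by decreasing count.
        return Counter(words).most_common()

    print(histogram("the quick dog and the lazy dog and the cat".split()))
    # [('the', 3), ('dog', 2), ('and', 2), ('quick', 1), ('lazy', 1), ('cat', 1)]
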
4 changes: 2 additions & 2 deletions benchmark/utils/TestsUtils.swift
@@ -31,7 +31,7 @@ public enum BenchmarkCategory : String {
case exclusivity, differentiation

// Algorithms are "micro" that test some well-known algorithm in isolation:
- // sorting, searching, hashing, fibonaci, crypto, etc.
+ // sorting, searching, hashing, fibonacci, crypto, etc.
case algorithm

// Miniapplications are contrived to mimic some subset of application behavior
@@ -55,7 +55,7 @@
// counterproductive.
case unstable

- // CPU benchmarks represent instrinsic Swift performance. They are useful for
+ // CPU benchmarks represent intrinsic Swift performance. They are useful for
// measuring a fully baked Swift implementation across different platforms and
// hardware. The benchmark should also be reasonably applicable to real Swift
// code--it should exercise a known performance critical area. Typically these