[benchmark] Reduce unreasonable setup times #20123


Merged · 2 commits merged into swiftlang:master on Oct 30, 2018

Conversation

@palimondo (Contributor) commented on Oct 29, 2018:

This PR reduces the setup time of 3 benchmarks that took unreasonably long to prepare their workloads. It brings them into compliance with the new BenchmarkDoctor rule: a setUpFunction should take a reasonable time (no more than 200 ms). More details are in the individual commit descriptions.
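
For context, here is a minimal sketch of how a benchmark in the Swift benchmark suite hands its workload preparation to a `setUpFunction` via `BenchmarkInfo`, which is the hook the BenchmarkDoctor rule measures. The benchmark name, tags, helper, and sizes below are illustrative assumptions rather than code from this PR, and I'm assuming the suite's `CheckResults` helper; only the 200 ms guideline comes from the rule itself.

```swift
import TestsUtils

// Hypothetical benchmark, for illustration only. All workload preparation
// happens in setUpFunction, which BenchmarkDoctor expects to finish in
// roughly 200 ms.
public let MySetUpExample = BenchmarkInfo(
  name: "MySetUpExample",
  runFunction: run_MySetUpExample,
  tags: [.validation, .api, .Dictionary],
  setUpFunction: { workload = makeWorkload() },
  tearDownFunction: { workload = nil })

var workload: [Int: Int]? = nil

func makeWorkload() -> [Int: Int] {
  // Keep the prepared data small enough to stay well under the 200 ms budget.
  var d = [Int: Int](minimumCapacity: 50_000)
  for i in 0 ..< 50_000 { d[i] = i }
  return d
}

@inline(never)
public func run_MySetUpExample(_ N: Int) {
  let d = workload!
  for _ in 1 ... N {
    // The measured loop only reads the prepared dictionary; it never rebuilds it.
    CheckResults(d[25_000] == 25_000)
  }
}
```

The point of the split is that the harness times only the run function; anything it needs should be built once in the setUpFunction, and that build itself should stay cheap.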

The DictionaryKeysContains benchmark used an unreasonably large dictionary to demonstrate the pathological O(n) (instead of O(1)) performance in the case of a Cocoa Dictionary. Setting up the 1M-element dictionary took 8 seconds on my old machine!

The old pathological behavior can be equally well demonstrated with a much smaller dictionary. (Validated by modifying `public func _customContainsEquatableElement` in `Dictionary.swift` to `return _variant.index(forKey: element) != nil`.)

The reported performance with the correct O(1) behavior is unchanged.
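
As a rough sketch of the shape of this change (the element count, key type, and function names below are assumptions for illustration, not the values from the actual commits): build a much smaller dictionary in the setUpFunction and leave the measured `keys.contains` call untouched.

```swift
// Illustrative only: a DictionaryKeysContains-style setup with a smaller
// workload. The size below is an assumption, not the value chosen in this PR.
let dictionarySize = 50_000   // down from 1_000_000

var smallDict: [String: String]? = nil

func setUpSmallDictionary() {
  smallDict = Dictionary(uniqueKeysWithValues:
    (0 ..< dictionarySize).map { (String($0), String($0)) })
}

@inline(never)
func runKeysContains(_ N: Int) {
  let d = smallDict!
  for _ in 1 ... N * 1_000 {
    // With correct O(1) behavior this lookup cost is independent of the
    // dictionary's size, so the smaller dictionary reports the same result;
    // the pathological O(n) path still shows up as a clear slowdown.
    precondition(d.keys.contains("42"))
  }
}
```

Because the correct implementation is O(1), shrinking the dictionary leaves the reported numbers intact while cutting the setup cost roughly in proportion to the size.
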
Reduced the time to run the setUpFunction from 2.2 s to 380 ms on my ancient computer… This should fit well under 200 ms on more modern machines.
@palimondo (Contributor, Author) commented:
@eeckstein Please review 🙏

@eeckstein (Contributor) commented:
@swift-ci benchmark

@swift-ci (Contributor) commented:
Build comment file:

Code size: -O

TEST                       OLD     NEW     DELTA    RATIO
Improvement
DictionaryKeysContains.o   15075   12323   -18.3%   1.22x

Performance: -Osize

TEST        OLD   NEW   DELTA   RATIO
Regression
DataCount   37    40    +8.1%   0.93x (?)

Code size: -Osize

TEST                       OLD     NEW     DELTA    RATIO
Improvement
DictionaryKeysContains.o   14155   11827   -16.4%   1.20x
How to read the data

The tables contain differences in performance which are larger than 8% and differences in code size which are larger than 1%.

If you see any unexpected regressions, you should consider fixing the regressions before you merge the PR.

Noise: Sometimes the performance results (not code size!) contain false alarms. Unexpected regressions which are marked with '(?)' are probably noise. If you see regressions which you cannot explain, you can try to run the benchmarks again. If regressions still show up, please consult with the performance team (@eeckstein).

Hardware Overview
  Model Name: Mac Pro
  Model Identifier: MacPro6,1
  Processor Name: 12-Core Intel Xeon E5
  Processor Speed: 2.7 GHz
  Number of Processors: 1
  Total Number of Cores: 12
  L2 Cache (per Core): 256 KB
  L3 Cache: 30 MB
  Memory: 64 GB

@eeckstein (Contributor) commented:
@swift-ci smoke test

@eeckstein merged commit bd59bf1 into swiftlang:master on Oct 30, 2018.
@palimondo deleted the just-eyes branch on October 30, 2018 at 20:57.