Commit c8acadf

Merge branch 'master' of github.com:JuliaSIMD/LoopVectorization.jl
2 parents f59fc5e + 19dc445

76 files changed: +723 −594 lines changed

Project.toml

Lines changed: 9 additions & 5 deletions
@@ -1,42 +1,46 @@
 name = "LoopVectorization"
 uuid = "bdcacae8-1622-11e9-2a5c-532679323890"
 authors = ["Chris Elrod <[email protected]>"]
-version = "0.12.90"
+version = "0.12.100"
 
 [deps]
 ArrayInterface = "4fba245c-0d91-5ea0-9b3e-6abc04ee57a9"
 CPUSummary = "2a0fbf3d-bb9c-48f3-b0a9-814d99fd7ab9"
+ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
 CloseOpenIntervals = "fb6a15b2-703c-40df-9091-08a04967cfa9"
 DocStringExtensions = "ffbed154-4ef7-542d-bbb7-c09d3a79fcae"
+ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
 HostCPUFeatures = "3e5b6fbb-0976-4d2c-9146-d79de83f2fb0"
 IfElse = "615f187c-cbe4-4ef1-ba3b-2fcf58d6d173"
 LayoutPointers = "10f19ff3-798f-405d-979b-55457f8fc047"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
 OffsetArrays = "6fe1bfb0-de20-5000-8ca7-80f57d26f881"
 PolyesterWeave = "1d0040c9-8b98-4ee7-8388-3f51789ca0ad"
-Requires = "ae029012-a4dd-5104-9daa-d747884805df"
 SIMDDualNumbers = "3cdde19b-5bb0-4aaf-8931-af3e248e098b"
 SLEEFPirates = "476501e8-09a2-5ece-8869-fb82de89a1fa"
+SpecialFunctions = "276daf66-3868-5448-9aa4-cd146d93841b"
 Static = "aedffcd0-7271-4cad-89d0-dc628f76c6d3"
 ThreadingUtilities = "8290d209-cae3-49c0-8002-c8c24d57dab5"
 UnPack = "3a884ed6-31ef-47d7-9d2a-63182c4928ed"
 VectorizationBase = "3d5dd08c-fd9d-11e8-17fa-ed2836048c2f"
 
 [compat]
-ArrayInterface = "3.1.32"
+ArrayInterface = "3.1.32, 3.2.1"
 CPUSummary = "0.1.3"
+ChainRulesCore = "1"
 CloseOpenIntervals = "0.1.2"
 DocStringExtensions = "0.8"
+ForwardDiff = "0.9, 0.10"
 HostCPUFeatures = "0.1.3"
 IfElse = "0.1"
 LayoutPointers = "0.1.2"
 OffsetArrays = "1.4.1"
 PolyesterWeave = "0.1"
-Requires = "1"
 SIMDDualNumbers = "0.1"
 SLEEFPirates = "0.6.23"
+SpecialFunctions = "1, 2"
 Static = "0.3.3, 0.4"
 ThreadingUtilities = "0.4.5"
 UnPack = "1"
-VectorizationBase = "0.21.4"
+VectorizationBase = "0.21.21"
 julia = "1.5"
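
Taken together, this diff tags a new release (v0.12.100) that makes ChainRulesCore, ForwardDiff, and SpecialFunctions unconditional dependencies, drops Requires.jl (the package used for conditional code loading), and raises the VectorizationBase lower bound to 0.21.21. Below is a minimal sketch of installing exactly this release and checking that it resolves under the updated `[compat]` bounds; the Pkg calls are standard API, but the session itself is hypothetical:

```julia
# Hypothetical session: install the release tagged by this commit and
# confirm the resolver accepts it under the new [compat] bounds.
using Pkg
Pkg.add(name = "LoopVectorization", version = "0.12.100")
Pkg.status("LoopVectorization")  # expect: LoopVectorization v0.12.100
```

Note that a compat entry like `SpecialFunctions = "1, 2"` is a union of caret specifiers, so it admits any 1.x or 2.x release of that package.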

README.md

Lines changed: 3 additions & 3 deletions
@@ -184,8 +184,8 @@ Test Passed
 julia> 2e-9M*K*N ./ (111.722e-6, 4.891e-3, 117.221e-6)
 (110.50516460500171, 2.524199141279902, 105.32121377568868)
 ```
-It can produce a good macro kernel. An implementation of matrix multiplication able to handle large matrices would need to be perform blocking and packing of arrays to prevent the operations from being memory bottle-necked.
-Some day, LoopVectorization may itself may try to model the costs of memory movement in the L1 and L2 cache, and use these to generate loops around the macro kernel following the work of [Low, et al. (2016)](http://www.cs.utexas.edu/users/flame/pubs/TOMS-BLIS-Analytical.pdf).
+It can produce a good macro kernel. An implementation of matrix multiplication able to handle large matrices would need to perform blocking and packing of arrays to prevent the operations from being memory bottle-necked.
+Some day, LoopVectorization may itself try to model the costs of memory movement in the L1 and L2 cache, and use these to generate loops around the macro kernel following the work of [Low, et al. (2016)](http://www.cs.utexas.edu/users/flame/pubs/TOMS-BLIS-Analytical.pdf).
 
 But for now, you should view it as a tool for generating efficient computational kernels, leaving tasks of parallelization and cache efficiency to you.
 
@@ -236,7 +236,7 @@ julia> AmulBtest2!(X2, B, C, D, view(A,1,:))
 julia> @test X1 ≈ X2
 Test Passed
 ```
-The lazy matrix multiplication operator `*ˡ` escapes broadcasts and fuses, making it easy to write code that avoids intermediates. However, I would recomend always checking if splitting the operation into pieces, or at least isolating the matrix multiplication, increases performance. That will often be the case, especially if the matrices are large, where a separate multiplication can leverage BLAS (and perhaps take advantage of threads).
+The lazy matrix multiplication operator `*ˡ` escapes broadcasts and fuses, making it easy to write code that avoids intermediates. However, I would recommend always checking if splitting the operation into pieces, or at least isolating the matrix multiplication, increases performance. That will often be the case, especially if the matrices are large, where a separate multiplication can leverage BLAS (and perhaps take advantage of threads).
 This may improve as the optimizations within LoopVectorization improve.
 
 Note that loops will be faster than broadcasting in general. This is because the behavior of broadcasts is determined by runtime information (i.e., dimensions other than the leading dimension of size `1` will be broadcasted; it is not known which these will be at compile time).
