Commit edce0e8

Merge pull request #24 from mcabbott/doc
Tweaks to readme
2 parents ed542eb + 06af486 commit edce0e8

2 files changed: +40 additions, −21 deletions
Project.toml

Lines changed: 3 additions & 1 deletion
````diff
@@ -1,13 +1,15 @@
 name = "VML"
 uuid = "c8ce9da6-5d36-5c03-b118-5a70151be7bc"
+version = "0.2.0"
 
 [deps]
 CpuId = "adafc99b-e345-5852-983c-f28acb93d879"
 Libdl = "8f399da3-3557-5675-b5ff-fb832c97cbdb"
 SpecialFunctions = "276daf66-3868-5448-9aa4-cd146d93841b"
 
 [compat]
-julia = "≥ 0.7 1.0"
+julia = "0.7 1.0"
+CpuId = "0.2"
 
 [extras]
 Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
````

README.md

Lines changed: 37 additions & 20 deletions
````diff
@@ -9,50 +9,67 @@ arithmetic and transcendental functions. Especially for large vectors it is ofte
 
 To use VML.jl, you must have the shared libraries of the Intel Vector Math Library available on your system.
 The easiest option is to use [MKL.jl](https://github.com/JuliaComputing/MKL.jl) via
-```
+```julia
 julia> ] add https://github.com/JuliaComputing/MKL.jl.git
 ```
 Alternatively you can install MKL directly [from Intel](https://software.intel.com/en-us/mkl/choose-download).
 
 Note that Intel MKL has a separate license, which you may want to check for commercial projects (see [FAQ](https://software.intel.com/en-us/mkl/license-faq)).
 
 To install VML.jl run
-```
-julia> ] add https://github.com/Crown421/VML.jl
+```julia
+julia> ] add https://github.com/JuliaMath/VML.jl
 ```
 
 ## Using VML
 After loading `VML`, you have the supported functions listed below available to call, e.g. `VML.sin(rand(100))`. This should provide a significant speed-up over broadcasting the Base functions.
-```
-julia> using VML
-julia> a = rand(10000);
-julia>@time sin.(a);
-0.159878 seconds (583.25 k allocations: 30.720 MiB, 2.78% gc time)
-julia> @time VML.sin(a);
-0.000465 seconds (6 allocations: 781.484 KiB)
-```
+```julia
+julia> using VML, BenchmarkTools
+
+julia> a = randn(10^4);
+
+julia> @btime sin.($a);  # apply Base.sin to each element
+  102.128 μs (2 allocations: 78.20 KiB)
+
+julia> @btime VML.sin($a);  # apply VML.sin to the whole array
+  20.900 μs (2 allocations: 78.20 KiB)
 
-Most function do currently (julia 1.x) not have a vectorized form, meaning that i.e. `sin(rand(10))` will not work. If you would like to extend the Base function with this functionality you can overload them with the `@overload` macro:
+julia> b = similar(a);
+
+julia> @btime VML.sin!(b, a);  # in-place version
+  20.008 μs (0 allocations: 0 bytes)
 ```
-julia> @overload sin
-julia> @time sin(a);
-0.000485 seconds (6 allocations: 781.484 KiB)
+
+Most Julia functions do not automatically apply to all elements of an array, thus `sin(a)` gives a MethodError. If you would like to extend the Base functions with this functionality, you can add methods to them with the `@overload` macro:
+```julia
+julia> @overload sin cos tan;
+
+julia> @btime sin($a);
+  20.944 μs (2 allocations: 78.20 KiB)
+
+julia> ans ≈ sin.(a)
+true
 ```
-Note the lack of the broadcasting dot`.` Now calling i.e. `sin` with an array as input will call the VML functions.
+Calling `sin` on an array now calls a VML function, while its action on scalars is unchanged.
 
 #### Note:
-Some functions like `exp` and `log` do operate on matrices from Base and refer to the [matrix exponential](https://en.wikipedia.org/wiki/Matrix_exponential) and logarithm. Using `@overload exp` will overwrite this behaviour with element-wise exponentiation/ logarithm.
-```
-julia> exp([1 1; 1 1.0])
+
+Some Julia functions like `exp` and `log` do operate on matrices, and refer to the [matrix exponential](https://en.wikipedia.org/wiki/Matrix_exponential) and logarithm. Using `@overload exp` will overwrite this behaviour with element-wise exponentiation/logarithm.
+```julia
+julia> exp(ones(2,2))
 2×2 Array{Float64,2}:
  4.19453  3.19453
  3.19453  4.19453
 
-julia> VML.exp([1 1; 1 1.0])
+julia> VML.exp(ones(2,2))
 2×2 Array{Float64,2}:
  2.71828  2.71828
  2.71828  2.71828
+
+julia> ans == exp.(ones(2,2))
+true
 ```
+If your code, or any code you call, uses matrix exponentiation, then `@overload exp` may silently lead to incorrect results. This caution applies to all trigonometric functions too, since they have matrix forms defined via the matrix exponential.
 
 ### Accuracy
 
````
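The matrix-versus-elementwise `exp` caution in the README change above can be made concrete without VML at all. The following is a minimal sketch (not part of this commit) using only Julia's standard library, showing why redefining `exp` on arrays is risky: for `A = ones(2,2)`, the matrix exponential is `I + (e^2 − 1)/2 * A`, which differs from `e` in every entry.

```julia
# Sketch: contrast the matrix exponential with element-wise exponentiation.
using LinearAlgebra

A = ones(2, 2)

E_elem = exp.(A)  # broadcast: e^1 in every entry
E_mat  = exp(A)   # matrix exponential: I + A + A^2/2! + ...

# For A = ones(2,2), A^2 = 2A, so the series collapses to I + (e^2 - 1)/2 * A.
@show E_elem ≈ fill(ℯ, 2, 2)           # true
@show E_mat ≈ I + (exp(2) - 1)/2 * A   # true
@show E_elem == E_mat                  # false
```

Code that relies on the matrix meaning of `exp` would silently get `E_elem` instead of `E_mat` after `@overload exp`, which is exactly the failure mode the README warns about.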