- We are going to use a top-to-bottom approach, but first I need to know more about you
- Who are you? What is your background?
- What are you building?
- Have you used Erlang? When? For how long? What did you build?
- Federico Carrone
- LambdaClass: software company specialized in data science and distributed systems
- BuzzConf, Zruput, Papers We Love Buenos Aires,
- notamonadtutorial.com
- twitter: @unbalancedparen
- github: unbalancedparentheses
- History: build software without downtime
- The mess we are in when managing state: a C program with six 32-bit integers can have more states than there are atoms on the planet
- Soft real-time systems: throughput-latency, expected value-standard deviation tradeoff
- Do one thing and only one thing well
- Practicality over theoretical elegance or coherence (ETS)
- Building blocks: redundancy -> distribution -> networking -> concurrency -> isolation -> no shared mutable state + preemption -> message passing
- Redundancy is good, over optimization is bad
- Let it crash: Isolation + automatic restart to a correct state: writing code that only deals with the problem
-
- Binaries: it is much easier to deal with binaries in Erlang than in C
- Socket handling, especially when things go wrong
- Dialyzer: sweet practical optional typing
- Deep inspection
The unit is the process. Processes are isolated and they can run on different nodes. Processes communicate using message passing. Message passing is asynchronous and the message is copied. Each process has its own mailbox where it receives messages.
Processes are isolated, but messaging between processes creates a dependency between them. Erlang gives us two mechanisms to deal with this: monitors and links.
To invoke the Erlang shell use the erl command.
user@mypc ~/erlang-workshop> erl
Erlang/OTP 22 [erts-10.4] [source] [64-bit] [smp:4:4] [ds:4:4:10] [async-threads:1] [hipe]
Eshell V10.4 (abort with ^G)
1>
Once in the shell you can use help(). to list all shell commands.
There is an additional command menu invoked with ^G (Ctrl+G); after invoking it, pressing h will show its help text.
1>
User switch command
--> h
c [nn] - connect to job
i [nn] - interrupt job
k [nn] - kill job
j - list all jobs
s [shell] - start local shell
r [node [shell]] - start remote shell
q - quit erlang
? | h - this message
-->
- term: any data type
- integer
- float
- atom: a literal, a constant with a name (e.g. foo)
- string
- bit strings: <<"foo bar">>
- pid
- list
- tuple
- record
- map
- references
In Erlang you don’t have the usual assignment operation for variables; instead, you bind them using pattern matching.
%% We pattern match right-hand term to left-hand pattern
Variable = foo
%% Variable is bound to the value foo
[FirstElement | _] = [1,2,3]
%% FirstElement is bound to the value 1
Foo = 0,
Foo = 1. % ** exception error: no match of right hand side value 1
The unit of code in Erlang is the module. Functions need to be written inside a module.
-module(geometry).
-export([area/1]).
area({rectangle, Width, Height}) -> Width * Height;
area({circle, R}) -> 3.14 * R * R.
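For example, assuming geometry.erl is in the current directory, you can compile and call it from the shell (a minimal sketch):
1> c(geometry).
{ok,geometry}
2> geometry:area({rectangle, 3, 4}).
12
3> geometry:area({circle, 1}).
3.14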
A function declaration consists of one or more function clauses; each clause is made up of a head (signature) and a body.
foo(Arg1, [] = ListArg) ->
magic;
foo(Arg1, [ListArg1 | _] = ListArg) ->
magic2.
Also, function clauses can have a guard to validate that the clause is the correct match:
greater_than_2(Num) when Num > 2 ->
    true;
greater_than_2(_) ->
    false.
In an imperative language you might write something like this pseudocode:
function greet(Gender,Name)
if Gender == male then
print("Hello, Mr. %s!", Name)
else if Gender == female then
print("Hello, Mrs. %s!", Name)
else
print("Hello, %s!", Name)
end
In Erlang, the same logic is expressed with pattern matching in the function clause heads:
greet(male, Name) ->
io:format("Hello, Mr. ~s!", [Name]);
greet(female, Name) ->
io:format("Hello, Mrs. ~s!", [Name]);
greet(_, Name) ->
io:format("Hello, ~s!", [Name]).
Write a function to calculate the factorial of N
Solution:
factorial(0) ->
    1;
factorial(Num) when Num > 0 ->
    Num * factorial(Num - 1).
How to create (spawn) a process:
spawn(Module, Fun, Args) -> pid()
spawn creates a new process and returns its PID. The new process will start executing Module:Fun(Arg0, ..., ArgN)
lists:map(fun(_)->
spawn(fun() ->
timer:sleep(100000)
end)
end, lists:seq(1, 100000)).
observer:start().
Instead of addressing a process by its PID we can register it with a name and refer to it using that. The name must be an atom and will be automatically unregistered when the process terminates.
%% Associates name Name with process Pid
register(Name, Pid) -> true
%% Returns a list of names that have been registered
registered()
%% Returns the Pid registered under Name, or undefined if the name is not registered
whereis(Name) -> Pid | undefined
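A quick shell sketch (the registered name printer is illustrative; pids and output values will differ):
1> Pid = spawn(fun() -> receive Msg -> io:format("got ~p~n", [Msg]) end end).
<0.85.0>
2> register(printer, Pid).
true
3> whereis(printer).
<0.85.0>
4> printer ! hello.
got hello
hello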
Processes can communicate by sending messages between them. Each process has a mailbox (queue) from which it takes new messages that match a receive pattern, or waits for a timeout to happen.
To send a message we use the send operator !:
%% Send by PID
Pid ! Msg.
self() ! [blob].
pid(0, 118, 0) ! bar.
%% Send by registered name
Name ! {Stuff1, Stuff2}.
proc1 ! foo.
To receive a message (pop it from the mailbox) we call receive; optionally we can also use after for a timeout:
receive
Pattern1 [when Guard1] ->
Body1;
.
.
.
PatternN [when GuardN] ->
BodyN
after
60000 ->
BodyAfter
end
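As a small sketch (module and function names are illustrative), here is a loop built with receive and after that echoes messages back to the sender and gives up after a minute of silence:
-module(echo).
-export([loop/0]).

%% Echo any {From, Msg} message back to From, then keep looping.
loop() ->
    receive
        {From, Msg} when is_pid(From) ->
            From ! {self(), Msg},
            loop()
    after
        60000 ->
            timeout
    end.

%% In the shell, after compiling the module:
%% 1> Pid = spawn(fun echo:loop/0).
%% 2> Pid ! {self(), hello}, flush().
%% flush() then shows the {Pid, hello} reply.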
$ erl -sname a
Erlang/OTP 22 [erts-10.5.2] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe] [dtrace]
Eshell V10.5.2 (abort with ^G)
(a@maki)1> node().
a@maki
(a@maki)2> net_kernel:connect_node('b@maki').
true
(a@maki)3> nodes().
[b@maki]
(a@maki)4> register(shell_a, self()).
true
(a@maki)5> flush().
Shell got hello_world
ok
$ erl -sname b
Erlang/OTP 22 [erts-10.5.2] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe] [dtrace]
Eshell V10.5.2 (abort with ^G)
(b@maki)1> node().
b@maki
(b@maki)2> nodes().
[a@maki]
(b@maki)3> {shell_a, 'a@maki'} ! hello_world.
hello_world
A link is a specific kind of relationship that can be created between two processes. When that relationship is set up and one of the processes dies from an unexpected throw, error or exit (see Errors and Exceptions), the other linked process also dies.
Erlang/OTP 22 [erts-10.5.2] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [hipe] [dtrace]
Eshell V10.5.2 (abort with ^G)
1> self().
<0.78.0>
2> spawn(fun() -> throw(problem) end).
<0.81.0>
=ERROR REPORT==== 11-Dec-2019::00:45:36.890690 ===
Error in process <0.81.0> with exit value:
{{nocatch,problem},[{shell,apply_fun,3,[{file,"shell.erl"},{line,904}]}]}
3> self().
<0.78.0>
4> spawn_link(fun() -> throw(problem) end).
=ERROR REPORT==== 11-Dec-2019::00:45:42.976240 ===
Error in process <0.84.0> with exit value:
{{nocatch,problem},[{shell,apply_fun,3,[{file,"shell.erl"},{line,904}]}]}
** exception exit: {nocatch,problem}
5> self().
<0.85.0>
Monitors are what you want when a process wants to know what’s going on with a second process, but neither of them is really vital to the other. Monitors are unidirectional and they can be stacked.
1> erlang:monitor(process, spawn(fun() -> timer:sleep(500) end)).
#Ref<0.0.0.77>
2> flush().
Shell got {'DOWN',#Ref<0.0.0.77>,process,<0.63.0>,normal}
ok
Each process has its own dictionary, which you can access using the following BIFs:
%% Returns the entire process dictionary.
get() -> [{Key1, Val1}, ...]
%% Returns the item associated with Key or undefined
get(Key) -> Item | undefined
%% Returns a list of all keys whose associated value is Value.
get_keys(Value) -> [...]
%% Associates Value with Key. Returns the old value associated with Key or undefined if no such association exists.
put(Key, Value) -> OldValue | undefined
%% Erases the entire process dictionary. Returns the entire process dictionary before it was erased.
erase() -> [{Key1, Val1}, ...]
%% Erases the value associated with Key. Returns the old value associated with Key or undefined if no such association exists.
erase(Key) -> OldValue | undefined
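A quick shell illustration (a minimal sketch; the key and value are arbitrary):
1> put(name, joe).
undefined
2> get(name).
joe
3> get_keys(joe).
[name]
4> erase(name).
joe
5> get(name).
undefined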
Erlang comes with OTP (Open Telecom Platform), a framework that groups repeating and essential tasks into libraries.
These libraries work by using an abstraction provided by Erlang/OTP called behaviors: the behavior holds the generic code and your module supplies the callbacks it needs.
The main behaviors you will most likely use are:
- gen_*
- gen_server
- gen_event
- gen_statem
- supervisor
- application
The gen_server behavior provides what you need for a generic server in a process.
To implement it in your module you need the following callbacks:
- init/1: initializes the server process. Returns one of:
  - {ok, State}
  - {ok, State, Timeout}
  - {ok, State, hibernate}
  - {stop, Reason}
  - ignore
- handle_call/3: used to handle synchronous messages sent through the gen_server interface. Its 3 parameters are Request, From and State. Returns one of:
  - {reply, Reply, NewState}
  - {reply, Reply, NewState, Timeout}
  - {reply, Reply, NewState, hibernate}
  - {noreply, NewState}
  - {noreply, NewState, Timeout}
  - {noreply, NewState, hibernate}
  - {stop, Reason, Reply, NewState}
  - {stop, Reason, NewState}
- handle_cast/2: used to handle asynchronous messages sent through the gen_server interface. Its 2 parameters are Message and State. Returns one of:
  - {noreply, NewState}
  - {noreply, NewState, Timeout}
  - {noreply, NewState, hibernate}
  - {stop, Reason, NewState}
- handle_info/2: similar to handle_cast/2, but for messages sent without using gen_server’s interface (!, exit signals, etc.). Returns one of:
  - {noreply, NewState}
  - {noreply, NewState, Timeout}
  - {noreply, NewState, hibernate}
  - {stop, Reason, NewState}
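To make this concrete, here is a minimal sketch of a counter server using these callbacks (the module name counter and its API functions are illustrative):
-module(counter).
-behaviour(gen_server).

%% API
-export([start_link/0, increment/0, value/0]).
%% gen_server callbacks
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

increment() ->
    gen_server:cast(?MODULE, increment).

value() ->
    gen_server:call(?MODULE, value).

%% The state is just the current count.
init([]) ->
    {ok, 0}.

handle_call(value, _From, Count) ->
    {reply, Count, Count}.

handle_cast(increment, Count) ->
    {noreply, Count + 1}.

handle_info(_Msg, Count) ->
    {noreply, Count}.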
Any OTP application will have the following directories:
- src: Erlang source files for your application
- include: Erlang header files
- priv: Miscellaneous files needed by your application
- ebin: Compiled files
- test: Test files
The next thing is to set up the application resource file, which tells the Erlang VM all the information it needs to run our application.
The structure of the file is as follows:
%% {application, ApplicationName, Properties}
%% Properties is a list of {Key, Value} tuples used by OTP
%% to figure out your application
{application, hello,
[{description, "An OTP application"},
{vsn, "0.1.0"},
{registered, []},
{mod, {hello_app, []}},
{applications,
[kernel,
stdlib,
cowboy
]},
{env,[]},
{modules, []},
{licenses, ["Apache 2.0"]},
{links, []}
]}.
The final thing you’ll need is to define a module that implements the application behavior, which needs two callbacks:
- start/2: initialises everything for your app and only needs to return the PID of the application’s top-level supervisor, in one of the two following forms: {ok, Pid} or {ok, Pid, SomeState}.
- stop/1: takes the state returned by start/2 as an argument. It runs after the application is done running and only does the necessary cleanup.
-module(hello_app).
-behaviour(application).
-export([start/2, stop/1]).
start(_StartType, _StartArgs) ->
hello_sup:start_link().
stop(_State) ->
ok.
A link is a relationship between two processes in which whenever either dies in an unexpected way the other one dies also.
You can prevent a linked process from dying when the other dies unexpectedly by trapping exit signals (process_flag(trap_exit, true)).
This will make the exit signals received by the trapping process become messages instead ({'EXIT', FromPid, Reason}).
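A quick shell sketch of trapping exits (pids shown are illustrative):
1> process_flag(trap_exit, true).
false
2> spawn_link(fun() -> exit(oops) end).
<0.85.0>
3> flush().
Shell got {'EXIT',<0.85.0>,oops}
ok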
Erlang/OTP applications use a supervision tree to supervise all the processes (well, the important ones), so that when any of them fails it gets restarted.
Basically, a root supervisor (process) spawns either worker or supervisor processes, and those supervisor processes can in turn spawn other workers or supervisors.
This is done using the supervisor behavior, which just needs one single callback, init/1, that returns {ok, {{RestartStrategy, MaxRestart, MaxTime}, [ChildSpecs]}}.
Let’s explain those return values:
- RestartStrategy: one_for_one, one_for_all, rest_for_one, simple_one_for_one
- MaxRestart and MaxTime: if more than MaxRestart restarts happen in MaxTime, the supervisor gives up and kills itself.
- ChildSpec: {ChildId, StartFunc, Restart, Shutdown, Type, Modules}
  - ChildId: internal name used by the supervisor
  - StartFunc: {M, F, A} to start the child with
  - Restart: how to react when the child dies: permanent, temporary, or transient
  - Shutdown: timeout for child shutdown
  - Type: worker or supervisor
  - Modules: a list of one element, the name of the callback module used by the child behavior, or dynamic if not known.
init(_) ->
{ok, {{one_for_all, 5, 60},
[{fake_id,
{fake_mod, start_link, [SomeArg]},
permanent,
5000,
worker,
[fake_mod]},
{other_id,
{event_manager_mod, start_link, []},
transient,
infinity,
worker,
dynamic}]}}.
The go-to build tool for Erlang projects right now is rebar3:
rebar3 new <template> <project-name>
rebar3 compile
rebar3 shell
rebar3 as <profile> tar
rebar3 eunit
rebar3 ct
{deps, [
{cowboy, "2.1.0"},
{syn, "1.6.1"},
{redbug, {git, "https://github.com/massemanet/redbug.git", {tag, "1.2.1"}}}
]}.
{relx, [{release, {exampleapp, "1"}, [exampleapp]},
{dev_mode, true},
{include_erts, false},
{extended_start_script, true},
{overlay_vars, "conf/local_vars.config"},
{overlay, [{template, "conf/sys.config", "releases/{{default_release_version}}/sys.config"},
{template, "conf/vm.args", "releases/{{default_release_version}}/vm.args"}]}
]}.
{profiles, [{test, [{erl_opts, [nowarn_export_all]},
{relx, [{overlay_vars, "conf/test_vars.config"}]}]},
{prod, [{relx, [{dev_mode, false},
{overlay_vars, "conf/prod_server_vars.config"},
{include_src, false},
{vm_args, "./conf/vm.args"},
{extended_start_script, true}]}]}]}.
- The network is reliable
- There is no latency
- Bandwidth is infinite
- The network is secure
- Topology doesn’t change
- There is only one administrator
- Transport cost is zero
- The network is homogeneous
Choose 2:
- Consistency
- Availability
- Partition tolerance
Erlang is designed with distribution in mind. A distributed Erlang system (cluster) consists of a number of Erlang runtime systems (nodes) communicating with each other.
All the features we learned for a local system using a PID also work on a distributed system, except registering a name for a PID, which is local to each node.
A node is started by giving the Erlang runtime a name, either a short name (-sname) or a long name (-name). Keep in mind that a node with a short name cannot connect to one with a long name, and vice versa.
%% erl -name [email protected]
(dilbert@127.0.0.1)1> node().
'[email protected]'
%% erl -sname dilbert
(dilbert@domain)1> node().
dilbert@domain
Nodes in a cluster are loosely connected. The first time an interaction with another node is invoked (e.g. spawn(Node, M, F, A)), a connection attempt is made.
Connections are by default transitive. If node A connects to node B and then node B connects to node C, a connection between nodes A and C is established.
If a node goes down all connections to that node are removed.
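Once connected, you can run code on another node; a minimal shell sketch (it assumes the b@maki node from the earlier example is up, and the output values are illustrative):
(a@maki)1> rpc:call('b@maki', erlang, node, []).
b@maki
(a@maki)2> spawn('b@maki', fun() -> io:format("running on ~p~n", [node()]) end).
running on b@maki
<9102.90.0>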
Ranch is a socket acceptor pool for TCP protocols.
Ranch aims to provide everything you need to accept TCP connections with a small code base and low latency.
Ranch provides a modular design, letting you choose which transport and protocol are going to be used for a particular listener.
Listeners accept and manage connections on one port, and include facilities to limit the number of concurrent connections.
Connections are sorted into pools, each pool having a different configurable limit.
Ranch also allows you to upgrade the acceptor pool without having to close any of the currently opened sockets.
Small, fast, modular HTTP server.
Cowboy aims to provide a complete HTTP stack in a small code base. It is optimized for low latency and low memory usage, in part because it uses binary strings.
Cowboy provides routing capabilities, selectively dispatching requests to handlers written in Erlang.
Because it uses Ranch for managing connections, Cowboy can easily be embedded in any other application.
Let’s use Cowboy to implement an echo server with the following endpoints:
- /echo/:word: returns :word, will use a basic Cowboy handler
- /echo_rest/:word: returns :word, will use Cowboy’s REST handler
- /echo_ws/:word: returns :word, will use Cowboy’s websocket handler
First create the project using rebar3 new app hello, add cowboy ({cowboy, "2.7.0"}) as a dependency, and include it in your application.
Next we create the routes and initialize cowboy in our app:
Dispatch = cowboy_router:compile([{'_', [{"/echo/:word", echo_handler, []},
{"/echo_rest/:word", echo_rest_handler, []},
{"/echo_ws/", echo_ws_handler, []}]}]),
{ok, _} = cowboy:start_clear(http, [{port, 8080}], #{env => #{dispatch => Dispatch}}),
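One place to put this (a sketch, not the only option) is in the application callback's start/2, before starting the top-level supervisor:
start(_StartType, _StartArgs) ->
    %% Compile the routes and start the HTTP listener, then start the supervisor.
    Dispatch = cowboy_router:compile([{'_', [{"/echo/:word", echo_handler, []},
                                             {"/echo_rest/:word", echo_rest_handler, []},
                                             {"/echo_ws/", echo_ws_handler, []}]}]),
    {ok, _} = cowboy:start_clear(http, [{port, 8080}], #{env => #{dispatch => Dispatch}}),
    hello_sup:start_link().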
Then we just need to create each of the module handlers we specified and the needed callbacks.
For the simple handler we just need to implement the init/2 callback.
-module(echo_handler).
-export([init/2]).
init(Req0, State) ->
Word = cowboy_req:binding(word, Req0),
Req = cowboy_req:reply(200,
#{<<"content-type">> => <<"text/plain">>},
Word,
Req0),
{ok, Req, State}.
The REST handler is more complicated, but usually its defaults are pretty good, so you only need to implement the callbacks you need.
-module(echo_rest_handler).
-export([init/2,
allowed_methods/2,
content_types_provided/2,
to_plain/2]).
init(Req, State) ->
{cowboy_rest, Req, State}.
allowed_methods(Req, State) ->
{[<<"GET">>], Req, State}.
content_types_provided(Req, State) ->
{[{<<"text/plain">>, to_plain}], Req, State}.
to_plain(Req, State) ->
Word = cowboy_req:binding(word, Req),
{Word, Req, State}.
The websocket handler looks like this:
-module(echo_ws_handler).
-export([init/2,
websocket_init/1,
websocket_handle/2]).
init(Req, Opts) ->
{cowboy_websocket, Req, Opts}.
websocket_init(State) ->
{ok, State}.
websocket_handle(Frame = {text, _}, State) ->
{reply, Frame, State};
websocket_handle(_Frame, State) ->
{ok, State}.
This module implements process groups. Each message can be sent to one, some, or all group members.
There are no special functions for sending a message to the group. Instead, client functions are to be written using get_members/1 and get_local_members/1 to get the processes and send messages to them.
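For instance, a broadcast helper could be sketched like this (broadcast/2 is an illustrative name, not part of pg2's API):
%% Assumes the group exists; pg2:get_members/1 returns {error, _} otherwise.
broadcast(Group, Msg) ->
    [Pid ! Msg || Pid <- pg2:get_members(Group)],
    ok.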
1> pg2:create(echos).
ok
2> pg2:join(echos, self()).
ok
3> pg2:get_members(echos).
[<0.78.0>]
4> pg2:leave(echos, self()).
ok
5> pg2:get_members(echos).
[]
Syn is a global Process Registry and Process Group manager for Erlang and Elixir.
Syn automatically manages addition/removal of nodes from the cluster, and is also able to recover from net splits.
%% Process Registry
1> syn:register(hello_proc, self()).
ok
2> syn:whereis(hello_proc).
<0.155.0>
%% Process Group
3> syn:join(echos, self()).
ok
4> syn:get_members(echos).
[<0.155.0>]
6> syn:publish(echos, something).
{ok,1}
7> flush().
Shell got something
ok
Based on Erlings: Shortly
Create an OTP application using rebar3 and cowboy that is capable of receiving long links and returning short ones:
- Receive an HTTP POST at http://localhost:8080/<LONG_URL> returning a shortened link.
- Receive an HTTP GET at http://localhost:8080/<SHORT_URL> returning the original long link.
- Accept websocket connections at http://localhost:8080/news and notify every time a new link is shortened.
BONUS: Create similar endpoints (GET and POST), but using the cowboy_rest handler.
EUnit is a unit testing framework for Erlang. It relies on many preprocessor macros that have been designed to be as nonintrusive as possible (avoid collisions with your code) and make testing easier.
To write tests, first create a module in the test folder that includes EUnit’s header: -include_lib("eunit/include/eunit.hrl").
Then we can write tests by defining functions whose names end with _test. These will be recognized by EUnit and automatically called without parameters.
A test is marked as failed if it throws an exception; anything else is a success.
Finally we can run them with rebar3 eunit.
-module(hello_test).
-include_lib("eunit/include/eunit.hrl").
hello_world_test() ->
<<"hello world">> = hello:hello().
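EUnit’s assertion macros usually give better failure messages than a bare pattern match; a small sketch, assuming hello:hello() returns <<"hello world">> as above:
hello_macros_test() ->
    ?assertEqual(11, byte_size(hello:hello())),
    ?assertMatch(<<"hello", _/binary>>, hello:hello()).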
Common Test (CT) is a more robust testing framework in Erlang/OTP that allows more complex test cases than Eunit.
In CT you have test suites (modules) that define test cases (functions) to be executed.
As in Eunit a failed test is caused by a runtime error, usually in the way of a badmatch
in a pattern match.
Anything else is a success. However, a few return values have special meaning:
- {skip, Reason}: indicates that the test case is skipped.
- {comment, Comment}: prints a comment in the log for the test case.
- {save_config, Config}: makes the Common Test test server pass Config to the next test case.
Also, CT provides the following optional callbacks for setup/teardown (see the sketch after this list):
- init_per_suite(Config) and end_per_suite(Config)
- init_per_group(GroupName, Config) and end_per_group(GroupName, Config)
- init_per_testcase(TestCase, Config) and end_per_testcase(TestCase, Config)
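Here is a sketch of suite-level setup and teardown passing data through Config (it assumes the hello application from earlier; remember to export these callbacks from the suite):
init_per_suite(Config) ->
    %% Start the application under test and remember what was started.
    {ok, Apps} = application:ensure_all_started(hello),
    [{started_apps, Apps} | Config].

end_per_suite(Config) ->
    %% Stop the started applications in reverse order.
    [application:stop(App) || App <- lists:reverse(?config(started_apps, Config))],
    ok.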
-module(hello_SUITE).
-include_lib("common_test/include/ct.hrl").
-export([all/0]).
-export([hello_world/1, echo/1]).
all() ->
    [hello_world, echo].
hello_world(_Config) ->
<<"hello world">> = hello:hello().
echo(_Config) ->
<<"hello world">> = hello:hello().
Erlang offers a powerful way of debugging called tracing.
The Erlang module dbg offers the functions needed to trace anything, but it’s a bit overcomplicated to use.
In general you’ll want to use the Redbug library; it’s really easy to use and very powerful.
1> redbug:start("erlang:demonitor").
{30,2}
15:39:00 <{erlang,apply,2}> {erlang,demonitor,[#Ref<0.0.0.21493>]}
15:39:00 <{erlang,apply,2}> {erlang,demonitor,[#Ref<0.0.0.21499>]}
15:39:00 <{erlang,apply,2}> {erlang,demonitor,[#Ref<0.0.0.21500>]}
redbug done, timeout - 3
%% Trace on messages that the shell process receives.
2> redbug:start('receive',[{procs,[self()]}]).
{1,0}
15:15:47 <{erlang,apply,2}> <<< {running,1,0}
15:17:49 <{erlang,apply,2}> <<< timeout
redbug done, timeout - 2
A simple Call Count Profiling Tool using breakpoints for minimal runtime performance impact.
The cprof module is used to profile a program to find out how many times different functions are called.
Using it consists of:
- cprof:start/0..3
- Mod:fun(…)
- cprof:pause/0..3
- cprof:analyse/0..2
- cprof:restart/0..3
- cprof:stop/0..3
1> cprof:start(), cprof:pause(). % Stop counters just after start
3476
2> cprof:analyse().
{30,
[{erl_eval,11,
[{{erl_eval,expr,3},3},
{{erl_eval,'-merge_bindings/2-fun-0-',2},2},
{{erl_eval,expand_module_name,2},1},
{{erl_eval,merge_bindings,2},1},
{{erl_eval,binding,2},1},
{{erl_eval,expr_list,5},1},
{{erl_eval,expr_list,3},1},
{{erl_eval,exprs,4},1}]},
{orddict,8,
[{{orddict,find,2},6},
{{orddict,dict_to_list,1},1},
{{orddict,to_list,1},1}]},
{packages,7,[{{packages,is_segmented_1,1},6},
{{packages,is_segmented,1},1}]},
{lists,4,[{{lists,foldl,3},3},{{lists,reverse,1},1}]}]}
3> cprof:analyse(cprof).
{cprof,3,[{{cprof,tr,2},2},{{cprof,pause,0},1}]}
4> cprof:stop().
3476
The module eprof provides a set of functions for time profiling of Erlang programs to find out how the execution time is used.
1> eprof:start().
{ok,<0.80.0>}
2> eprof:start_profiling([self()]).
profiling
3> spawn(lists, reverse, [[1,2,3]]).
<0.83.0>
4> eprof:stop_profiling().
profiling_stopped
5> eprof:analyze(total).
FUNCTION CALLS % TIME [uS / CALLS]
-------- ----- ------- ---- [----------]
lists:map/2 2 0.00 0 [ 0.00]
lists:rumergel/3 2 0.00 0 [ 0.00]
io_lib_pretty:write_atom/2 1 0.00 0 [ 0.00]
erl_lint:check_module_name/3 1 0.00 0 [ 0.00]
gb_sets:is_member/2 2 0.00 0 [ 0.00]
erlang:min/2 2 0.00 0 [ 0.00]
io:getopts/1 2 0.05 1 [ 0.50]
io:default_input/0 2 0.05 1 [ 0.50]
io:io_requests/2 2 0.05 1 [ 0.50]
erl_scan:reserved_word/1 1 0.05 1 [ 1.00]
gen:do_for_proc/2 1 0.05 1 [ 1.00]
...
...
lists:usort/1 12 1.63 35 [ 2.92]
dict:on_bucket/3 2 1.72 37 [ 18.50]
erl_anno:anno_info/2 17 1.72 37 [ 2.18]
lists:keyfind/3 35 1.76 38 [ 1.09]
erl_lint:bool_option/4 30 2.14 46 [ 1.53]
erlang:tuple_to_list/1 34 2.32 50 [ 1.47]
erlang:monitor/2 7 2.83 61 [ 8.71]
erl_parse:modify_anno1/3 30 3.07 66 [ 2.20]
shell:used_records/4 120 3.30 71 [ 0.59]
shell:used_records/1 120 3.34 72 [ 0.60]
shell:prep_check/1 141 3.76 81 [ 0.57]
lists:foldl/3 127 3.95 85 [ 0.67]
erl_anno:is_settable/2 17 3.99 86 [ 5.06]
------------------------------------------ ----- ------- ---- [----------]
Total: 1556 100.00% 2153 [ 1.38]
ok
This module is used to profile a program to find out how the execution time is used. Trace to file is used to minimize runtime performance impact.
Profiling is essentially done in 3 steps:
- Tracing; to file, as mentioned in the previous paragraph.
- Profiling; the trace file is read and raw profile data is collected into an internal RAM storage on the node. During this step the trace data may be dumped in text format to file or console.
- Analysing; the raw profile data is sorted and dumped in text format either to file or console.
Profiling can be done in 3 ways:
- From source code, by adding fprof:trace(start) and fprof:trace(stop) before and after the code to profile
- From a function, by using fprof:apply(Module, Function, Args), or fprof:apply(Module, Function, Args, [continue | OtherOpts]) if tracing should continue after the function returns
- Immediately, by doing:
1> {ok, Tracer} = fprof:profile(start).
{ok,<0.81.0>}
2> fprof:trace([start, {tracer, Tracer}]).
Reading trace data...
ok
%% Code to profile
3> lists:reverse([1,2,3,4]).
[4,3,2,1]
4> fprof:trace(stop).
.
End of trace!
ok