Commit 4ef9c5f

fix: CI not running (#9)

1 parent 137e507 commit 4ef9c5f

2 files changed: +115 −71 lines


.github/workflow/elixir.yaml (2 additions, 67 deletions)

```diff
@@ -1,9 +1,10 @@
 name: Elixir CI
 
 on:
+  push:
+    branches: ["main"]
   pull_request:
     branches: ["main"]
-
 env:
   MIX_ENV: test
 
@@ -32,84 +33,18 @@ jobs:
         otp: ["25.0.4"] # Define the OTP version [required]
         elixir: ["1.14.1"] # Define the elixir version [required]
     steps:
-      # Step: Setup Elixir + Erlang image as the base.
       - name: Set up Elixir
         uses: erlef/setup-beam@v1
         with:
           otp-version: ${{matrix.otp}}
           elixir-version: ${{matrix.elixir}}
-      # Cache key based on Erlang/Elixir version and the mix.lock hash
-      - name: Restore PLT cache
-        id: plt_cache
-        uses: actions/cache/restore@v3
-        with:
-          key: |
-            plt-${{ runner.os }}-${{ steps.beam.outputs.otp-version }}-${{ steps.beam.outputs.elixir-version }}-${{ hashFiles('**/mix.lock') }}
-          restore-keys: |
-            plt-${{ runner.os }}-${{ steps.beam.outputs.otp-version }}-${{ steps.beam.outputs.elixir-version }}-
-          path: |
-            priv/plts
-
-      # By default, the GitHub Cache action will only save the cache if all steps in the job succeed,
-      # so we separate the cache restore and save steps in case running dialyzer fails.
-      - name: Save PLT cache
-        id: plt_cache_save
-        uses: actions/cache/save@v3
-        if: steps.plt_cache.outputs.cache-hit != 'true'
-        with:
-          key: |
-            plt-${{ runner.os }}-${{ steps.beam.outputs.otp-version }}-${{ steps.beam.outputs.elixir-version }}-${{ hashFiles('**/mix.lock') }}
-          path: |
-            priv/plts
-      # Step: Check out the code.
       - name: Checkout code
         uses: actions/checkout@v3
-
-      # Step: Define how to cache deps. Restores existing cache if present.
-      - name: Cache deps
-        id: cache-deps
-        uses: actions/cache@v3
-        env:
-          cache-name: cache-elixir-deps
-        with:
-          path: deps
-          key: ${{ runner.os }}-mix-${{ env.cache-name }}-${{ hashFiles('**/mix.lock') }}
-          restore-keys: |
-            ${{ runner.os }}-mix-${{ env.cache-name }}-
-
-      # Step: Define how to cache the `_build` directory. After the first run,
-      # this speeds up tests runs a lot. This includes not re-compiling our
-      # project's downloaded deps every run.
-      - name: Cache compiled build
-        id: cache-build
-        uses: actions/cache@v3
-        env:
-          cache-name: cache-compiled-build
-        with:
-          path: _build
-          key: ${{ runner.os }}-mix-${{ env.cache-name }}-${{ hashFiles('**/mix.lock') }}
-          restore-keys: |
-            ${{ runner.os }}-mix-${{ env.cache-name }}-
-            ${{ runner.os }}-mix-
-
-      # Step: Download project dependencies. If unchanged, uses
-      # the cached version.
       - name: Install dependencies
         run: mix deps.get
-
-      # Step: Compile the project treating any warnings as errors.
       - name: Compiles without warnings
         run: mix compile --warnings-as-errors
-
-      # Create PLTs if no cache was found
-      - name: Create PLTs
-        if: steps.plt_cache.outputs.cache-hit != 'true'
-        run: mix dialyzer --plt
-
-      # Step: Check that the checked in code has already been formatted.
       - name: Check Formatting
         run: mix format --check-formatted
-
-      # Step: Execute the tests.
      - name: Run tests
         run: mix test
```
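This commit adds a `push` trigger for `main` (previously the workflow only ran on pull requests) and strips the Dialyzer/PLT and dependency-cache steps. After the change, the trigger section of the workflow reads:

```yaml
name: Elixir CI

on:
  push:
    branches: ["main"]
  pull_request:
    branches: ["main"]
```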

README.md (113 additions, 4 deletions)

Removed the "## How it works" section:

> We connect to a Postgres instance using Postgrex. With the [Postgrex.Notifications](https://hexdocs.pm/postgrex/Postgrex.Notifications.html) module we will track for `LISTEN` events on the configured channel. We'll also use `NOTIFY` queries to send the node's information.

Added, between the "## How to use it" example and the Acknowledgements section, the following:
### Why do we need a distributed Erlang Cluster?

At Supabase, we use clustering in all of our Elixir projects, including [Logflare](https://github.com/Logflare/logflare), [Supavisor](https://github.com/supabase/supavisor) and [Realtime](https://github.com/supabase/realtime). With multiple connected servers we can shed load, build globally distributed services, and serve customers from servers that are geographically closer to them and to their instances, reducing overall latency.

*Example of the Realtime architecture, where a customer from CA connects to the server closest to them and to their Supabase instance.*

To achieve a connected cluster, we wanted to be as cloud-agnostic as possible, which also keeps our self-hosting options accessible. We didn't want to introduce extra services to solve this single issue: Postgres is the logical way to achieve it.

The other piece of the puzzle had already been built by the Erlang community: [libcluster](https://github.com/bitwalker/libcluster), the de facto library for creating clusters of connected Elixir servers.
### What is libcluster?

[libcluster](https://github.com/bitwalker/libcluster) is the go-to package for connecting multiple BEAM instances and setting up healing strategies. It ships with several strategies out of the box, and it lets users define their own by implementing a simple behaviour that handles cluster formation and healing on top of whatever supporting service you want to use.
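For context on how such a strategy is wired in, a libcluster topology is configured as a keyword list handed to `Cluster.Supervisor` in the application's supervision tree. The sketch below is an assumption for illustration: the module name `Cluster.Strategy.Postgres` is inferred from the `lib/cluster/strategy/postgres.ex` path in this repository, and the options mirror the Postgrex config plus the `channel_name` key the strategy code reads; check the repository README for the exact names.

```elixir
# Hypothetical wiring sketch; names are assumptions, not the repo's documented API.
topologies = [
  postgres: [
    # Assumed module name, inferred from lib/cluster/strategy/postgres.ex
    strategy: Cluster.Strategy.Postgres,
    config: [
      hostname: "localhost",
      username: "postgres",
      password: "postgres",
      database: "postgres",
      port: 5432,
      channel_name: "cluster"
    ]
  ]
]

# libcluster's Cluster.Supervisor takes the topologies plus supervisor options.
children = [
  {Cluster.Supervisor, [topologies, [name: MyApp.ClusterSupervisor]]}
]
```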
### How did we use Postgres?

Postgres provides an event system built on two commands, [NOTIFY](https://www.postgresql.org/docs/current/sql-notify.html) and [LISTEN](https://www.postgresql.org/docs/current/sql-listen.html), which we can use to propagate events within our Postgres instance.

To try these features, use psql itself or any other Postgres client. Start by listening on a channel, then notify it and you will receive the payload:

```
postgres=# LISTEN channel;
LISTEN
postgres=# NOTIFY channel, 'payload';
NOTIFY
Asynchronous notification "channel" with payload "payload" received from server process with PID 326.
```
Now we can replicate the same behavior in Elixir with [Postgrex](https://hex.pm/packages/postgrex), inside IEx (Elixir's interactive shell):

```elixir
Mix.install([{:postgrex, "~> 0.17.3"}])

config = [
  hostname: "localhost",
  username: "postgres",
  password: "postgres",
  database: "postgres",
  port: 5432
]

# One connection listens on the channel...
{:ok, db_notification_pid} = Postgrex.Notifications.start_link(config)
Postgrex.Notifications.listen!(db_notification_pid, "channel")

# ...and another sends the notification.
{:ok, db_conn_pid} = Postgrex.start_link(config)
Postgrex.query!(db_conn_pid, "NOTIFY channel, 'payload'", [])

receive do msg -> IO.inspect(msg) end
# The mailbox will contain a message like:
# {:notification, #PID<0.223.0>, #Reference<0.57446457.3896770561.212335>, "channel", "payload"}
```
### Building the strategy

Using the libcluster `Strategy` behaviour, inspired by [this GitHub repository](https://github.com/kevbuchanan/libcluster_postgres), and knowing how `NOTIFY`/`LISTEN` works, implementing a strategy becomes straightforward:

1. Send a `NOTIFY` with our `node()` address to a configured channel:
```elixir
# lib/cluster/strategy/postgres.ex:52
def handle_continue(:connect, state) do
  with {:ok, conn} <- Postgrex.start_link(state.meta.opts.()),
       {:ok, conn_notif} <- Postgrex.Notifications.start_link(state.meta.opts.()),
       {_, _} <- Postgrex.Notifications.listen(conn_notif, state.config[:channel_name]) do
    Logger.info(state.topology, "Connected to Postgres database")

    meta = %{
      state.meta
      | conn: conn,
        conn_notif: conn_notif,
        heartbeat_ref: heartbeat(0)
    }

    {:noreply, put_in(state.meta, meta)}
  else
    reason ->
      Logger.error(state.topology, "Failed to connect to Postgres: #{inspect(reason)}")
      {:noreply, state}
  end
end
```
2. Actively listen for `{:notification, pid, reference, channel, payload}` messages and connect to the node received in the payload:
```elixir
# lib/cluster/strategy/postgres.ex:80
def handle_info({:notification, _, _, _, node}, state) do
  node = String.to_atom(node)

  if node != node() do
    topology = state.topology
    Logger.debug(topology, "Trying to connect to node: #{node}")

    case Strategy.connect_nodes(topology, state.connect, state.list_nodes, [node]) do
      :ok -> Logger.debug(topology, "Connected to node: #{node}")
      {:error, _} -> Logger.error(topology, "Failed to connect to node: #{node}")
    end
  end

  {:noreply, state}
end
```
3. Finally, schedule a heartbeat that repeats the notification sent during cluster formation, so libcluster can heal the cluster if needed:
```elixir
# lib/cluster/strategy/postgres.ex:73
def handle_info(:heartbeat, state) do
  Process.cancel_timer(state.meta.heartbeat_ref)
  Postgrex.query(state.meta.conn, "NOTIFY #{state.config[:channel_name]}, '#{node()}'", [])
  ref = heartbeat(state.config[:heartbeat_interval])
  {:noreply, put_in(state.meta.heartbeat_ref, ref)}
end
```
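Note that the `heartbeat/1` helper called above is not shown in this diff. A minimal sketch, assuming it only schedules the next `:heartbeat` message to the strategy process and returns the timer reference that `handle_info(:heartbeat, ...)` later cancels, could be:

```elixir
# Hypothetical helper (not part of the diff), defined inside the strategy
# module: schedule the next :heartbeat message and return the timer reference.
defp heartbeat(interval) when is_integer(interval) and interval >= 0 do
  Process.send_after(self(), :heartbeat, interval)
end
```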
These three steps let us connect as many nodes as needed, regardless of the cloud provider, by using something most projects already have: a Postgres connection.

## Acknowledgements
