feat: add support for multiple validators. #1080
Conversation
    )
  end)

  Supervisor.init(children, strategy: :one_for_one)
One validator dying should not affect others, so I applied :one_for_one.
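A minimal sketch of what this setup could look like, assuming the Validator child takes a {pubkey, privkey} tuple (module layout and child args are illustrative, not the PR's exact code):

```elixir
defmodule LambdaEthereumConsensus.ValidatorManager do
  use Supervisor

  def start_link(validator_keys) do
    Supervisor.start_link(__MODULE__, validator_keys, name: __MODULE__)
  end

  @impl true
  def init(validator_keys) do
    children =
      Enum.map(validator_keys, fn {pubkey, privkey} ->
        # One child per validator; :one_for_one restarts only the crashed one.
        Supervisor.child_spec({LambdaEthereumConsensus.Validator, {pubkey, privkey}}, id: pubkey)
      end)

    Supervisor.init(children, strategy: :one_for_one)
  end
end
```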
  slot: Types.slot(),
  root: Types.root(),
  duties: %{
    attester: list(:not_computed),
This is list(:not_computed | %{...}), not just list(:not_computed).
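A hedged sketch of the corrected spec; the concrete fields of a computed attester duty are assumptions here:

```elixir
# Sketch only: the shape of a computed attester duty is assumed for illustration.
@type attester_duty :: %{slot: Types.slot(), committee_index: non_neg_integer()}

@type duties :: %{
        attester: list(:not_computed | attester_duty()),
        proposer: :not_computed | list(Types.slot())
      }
```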
@@ -230,7 +231,7 @@ defmodule LambdaEthereumConsensus.Validator do
       old -> old |> Enum.reverse() |> Enum.take(1)
     end

-    %{duties | attester: attester_duties, proposer: old_duty ++ proposer_duties}
+    %{duties | attester: attester_duties, proposer: Enum.dedup(old_duty ++ proposer_duties)}
This reintroduces a bug related to last-slot-in-epoch proposals. We should instead discard "future" duties before computing old_duty. Something like:

slot = Misc.compute_start_slot_for_epoch(epoch)

case duties.proposer do
  :not_computed -> []
  old -> old |> Enum.filter(&(&1 < slot)) |> Enum.reverse() |> Enum.take(1)
end
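Putting that suggestion in context, a hedged sketch of how the merge could look (merge_proposer_duties/3 is a made-up helper name, not the PR's actual function):

```elixir
# Hedged sketch; only the filter/reverse/take logic comes from the suggestion above.
defp merge_proposer_duties(duties, proposer_duties, epoch) do
  first_slot = Misc.compute_start_slot_for_epoch(epoch)

  # Keep at most the last duty from a past slot, so a proposal in the last slot of
  # the previous epoch is preserved without duplicating "future" duties.
  old_duty =
    case duties.proposer do
      :not_computed -> []
      old -> old |> Enum.filter(&(&1 < first_slot)) |> Enum.reverse() |> Enum.take(1)
    end

  %{duties | proposer: old_duty ++ proposer_duties}
end
```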
      GenServer.cast(subscriber, {:on_tick, logical_time})
    end)

    ValidatorManager.notify_tick(logical_time)
Breaking the pattern here... ValidatorManager is not a GenServer, so it can't receive cast messages.
Optional and low priority, but if you want consistency, you can add a notify_tick function in the Gossip.BeaconBlock module to wrap the cast, which is a very common practice (it's uncommon to call cast directly from the outside). You can then Enum.each over the modules and call notify_tick for each.
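A small sketch of that suggestion; the Ticker module and subscriber list are illustrative assumptions, not the actual code:

```elixir
defmodule Gossip.BeaconBlock do
  # Sketch only: expose notify_tick/1 so callers never use GenServer.cast directly.
  def notify_tick(logical_time) do
    GenServer.cast(__MODULE__, {:on_tick, logical_time})
  end
end

defmodule Ticker do
  # Hypothetical call site; the real subscriber list lives elsewhere in the codebase.
  @subscribers [Gossip.BeaconBlock, ValidatorManager]

  def broadcast_tick(logical_time) do
    Enum.each(@subscribers, & &1.notify_tick(logical_time))
  end
end
```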
    finalized_hash =
      if finalized_root == <<0::256>> do
        <<0::256>>
Dealing with the case where there is no finalized block yet: https://github.com/ethereum/execution-apis/blob/main/src/engine/paris.md#forkchoicestatev1
Note: safeBlockHash and finalizedBlockHash fields are allowed to have 0x0000000000000000000000000000000000000000000000000000000000000000 value unless transition block is finalized.
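For reference, a hedged sketch of that fallback; the else branch and get_execution_block_hash/1 are assumptions about the surrounding code, not the PR's exact implementation:

```elixir
# Sketch only; get_execution_block_hash/1 is a hypothetical lookup helper.
defp forkchoice_finalized_hash(finalized_root) do
  if finalized_root == <<0::256>> do
    # The Engine API allows an all-zero finalizedBlockHash until the transition
    # block is finalized, so the zero root maps to the zero hash.
    <<0::256>>
  else
    get_execution_block_hash(finalized_root)
  end
end
```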
LGTM, added some comments, but they are either optional or to be tackled in separate PRs.
Motivation
We want to be able to support multiple validators for a single node
Description
The way this works is to have a keystore_dir and a keystore_password_file, each of which has one file per validator. According to kurtosis, the files look like this:

validator-keys/teku-keys/0xb05cafec5912f22dbd6f15677f25f13d93ecd5ec6f957fddd7cf27d73521b34aaaf6a219f77b21128d18321c2c8d679b.json
validator-keys/teku-secrets/0xb05cafec5912f22dbd6f15677f25f13d93ecd5ec6f957fddd7cf27d73521b34aaaf6a219f77b21128d18321c2c8d679b.txt
Also fixed some validator bugs.
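A hedged sketch of how the keystore/password pairing described above could be loaded (module, function, and argument names are illustrative, not the PR's actual implementation):

```elixir
defmodule KeystoreLoader do
  # Sketch only: pairs each keystore JSON with its password file by pubkey file name.
  def load_validators(keystore_dir, password_dir) do
    keystore_dir
    |> File.ls!()
    |> Enum.filter(&String.ends_with?(&1, ".json"))
    |> Enum.map(fn keystore_file ->
      # Both directories use the validator pubkey as the file base name.
      pubkey = Path.basename(keystore_file, ".json")
      keystore_json = File.read!(Path.join(keystore_dir, keystore_file))
      password = password_dir |> Path.join(pubkey <> ".txt") |> File.read!() |> String.trim()
      {pubkey, keystore_json, password}
    end)
  end
end
```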