OpenAI API and UI using streamlit #875

Closed

wants to merge 3 commits into from

Conversation

@vmpuri vmpuri commented Jul 1, 2024

Description
Encapsulate the functions in generate.py within a class, allowing us to create child classes that override functions where needed.
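A minimal sketch of the refactor described above (hypothetical names, not the PR's actual code): free functions from generate.py become methods on a base class, and a child class overrides only the hooks it needs:

```python
# Hypothetical sketch: generation steps become overridable methods on a
# base class instead of free functions in generate.py.
class Generator:
    """Generates text samples; subclasses override hooks where needed."""

    def decode_one_token(self, token: int) -> str:
        # Default behavior; a stand-in for real token decoding.
        return f"<{token}>"

    def generate(self, num_tokens: int) -> str:
        # Shared orchestration loop that calls the overridable hook.
        return "".join(self.decode_one_token(t) for t in range(num_tokens))


class StreamingGenerator(Generator):
    def decode_one_token(self, token: int) -> str:
        # An API-serving subclass could buffer or stream chunks instead.
        return f"[{token}]"
```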

Test plan:
Run generate

python torchchat.py generate stories15M                                                   
Using device=mps 
Loading model...
Time to load model: 0.30 seconds
-----------------------------------------------------------
Hello, my name is Daisy. She has a toy car that can go very fast and makes loud noises. She likes to race it around the floor and make it go vroom vroom.
One day, Daisy has a new toy. It is a big red truck that can go very fast. It has a loud horn that goes "beep beep". Daisy wants to play with the truck, but it is too high for her to reach. She tries to jump, but she is too short.
She sees her dad coming into the room. He is holding a fluffy pillow. He says, "Hi, Daisy. Do you want me to help you?" Daisy nods. She loves her dad. He hears her and smiles. He says, "Wow, what a nice toy. Can I go for a ride?"
Daisy agrees. She says
[Max Sequence Length Reached. Ending Conversation.]
---------------------------------------------------
Time for inference 1: 4.98 sec total, time to first token 1.44 sec with parallel prefill, 200 tokens, 40.18 tokens/sec, 24.89 ms/token
Bandwidth achieved: 1.96 GB/s
*** This first iteration will include cold start effects for dynamic import, hardware caches. ***

========================================

Average tokens/sec: 40.18
Memory used: 0.00 GB

Run eval:

python3 torchchat.py eval stories15M --tasks wikitext --limit 10
Note: NumExpr detected 10 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
NumExpr defaulting to 8 threads.
PyTorch version 2.5.0.dev20240624 available.
Using device=mps
Loading model...
Time to load model: 0.24 seconds
-----------------------------------------------------------
Using device 'mps'
/Users/puri/torchchat/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 665/665 [00:00<00:00, 1.14MB/s]
model.safetensors: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 548M/548M [00:15<00:00, 34.6MB/s]
generation_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 124/124 [00:00<00:00, 574kB/s]
tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 26.0/26.0 [00:00<00:00, 78.3kB/s]
vocab.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.04M/1.04M [00:00<00:00, 5.50MB/s]
merges.txt: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 456k/456k [00:00<00:00, 2.41MB/s]
tokenizer.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.36M/1.36M [00:00<00:00, 7.10MB/s]
[Task: wikitext] metric word_perplexity is defined, but aggregation is not. using default aggregation=weighted_perplexity
[Task: wikitext] metric word_perplexity is defined, but higher_is_better is not. using default higher_is_better=False
[Task: wikitext] metric byte_perplexity is defined, but aggregation is not. using default aggregation=weighted_perplexity
[Task: wikitext] metric byte_perplexity is defined, but higher_is_better is not. using default higher_is_better=False
[Task: wikitext] metric bits_per_byte is defined, but aggregation is not. using default aggregation=bits_per_byte
[Task: wikitext] metric bits_per_byte is defined, but higher_is_better is not. using default higher_is_better=False
Downloading builder script: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.7k/10.7k [00:00<00:00, 26.4MB/s]
Downloading readme: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.78k/7.78k [00:00<00:00, 13.6MB/s]
Repo card metadata block was not found. Setting CardData to empty.
Repo card metadata block was not found. Setting CardData to empty.
Downloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.72M/4.72M [00:00<00:00, 20.3MB/s]
Generating test split: 62 examples [00:00, 1388.98 examples/s]
Generating train split: 629 examples [00:00, 6545.47 examples/s]
Generating validation split: 60 examples [00:00, 2990.31 examples/s]
Building contexts for wikitext on rank 0...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 652.91it/s]
Running loglikelihood_rolling requests
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:05<00:00,  1.85it/s]
Time to run eval: 46.99s.
Time in model.forward: 1.63s, over 33 model evaluations
forward run time stats - Median: 0.02s Min: 0.01s Max: 0.69s
For model /Users/puri/.torchchat/model-cache/stories15M/stories15M.pt
wikitext:
 word_perplexity,none: 47350.8811
 byte_perplexity,none: 7.7811
 bits_per_byte,none: 2.9600
 alias: wikitext

Run generate --compile

python3 torchchat.py generate llama2 --quantize '{"precision": {"dtype":"float16"}, "executor":{"accelerator":"cpu"}}' --prompt "Once upon a time," --max-new-tokens 256 --compile
...
Using device=cpu Apple M1 Pro
Loading model...
Time to load model: 0.11 seconds
Quantizing the model with: {'precision': {'dtype': 'float16'}, 'executor': {'accelerator': 'cpu'}}
Time to quantize model: 0.00 seconds
-----------------------------------------------------------
Once upon a time, there was a rich and powerful king who was known for his wisdom and fairness. His kingdom was prosperous and peaceful, and the people loved him dearly. But as time passed, the king grew old and weak, and he knew that his time on this earth was coming to an end.

One day, he called for his trusted advisor, a wise old man who had served him for many years. "My dear advisor," the king said, "I am tired and weak, and I know that my time is almost gone. I have one final request of you."

"Of course, Your Majesty," the advisor replied. "What is it that you wish?"

"I wish to be buried in a simple grave," the king said, "and I wish for no grand monuments or statues to be built in my honor. I want to be remembered as a humble king who ruled his kingdom with wisdom and fairness, and who loved his people dearly."

The advisor was taken aback by the king's request, but he promised to fulfill his wishes to the best of his ability. And so, the king passed away, surrounded by his loyal subjects.

As
[Max Sequence Length Reached. Ending Conversation.]
---------------------------------------------------
just-in-time compilation time (incl run time): 2.3e+02 seconds
Time for inference 1: 232.53 sec total, time to first token 6.30 sec with parallel prefill, 256 tokens, 1.10 tokens/sec, 908.34 ms/token
Bandwidth achieved: 14.84 GB/s
*** This first iteration will include cold start effects for dynamic import, hardware caches, JIT compilation. ***

========================================

Average tokens/sec: 1.10
Memory used: 0.00 GB


pytorch-bot bot commented Jul 1, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchchat/875

Note: Links to docs will display an error until the docs builds have been completed.

❌ 20 New Failures, 3 Cancelled Jobs, 4 Unrelated Failures

As of commit ed1c939 with merge base e1fb003 (image):

NEW FAILURES - The following jobs have failed:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Jul 1, 2024
@vmpuri vmpuri requested a review from Jack-Khuu July 1, 2024 21:27
@Jack-Khuu Jack-Khuu self-requested a review July 1, 2024 22:03
@Jack-Khuu Jack-Khuu self-requested a review July 3, 2024 03:33

vmpuri commented Jul 3, 2024

For now, this is pretty hacky. To run the Streamlit app:

pip install streamlit
streamlit run torchchat.py -- server stories15M --max-new-tokens 256 --compile

Current Issues:

  • The generate class still needs to be decomposed, specifically the chat() and generate() functions.
  • Stats are broken.
  • torchchat.py chat is broken by these changes; refactor everything else to use the chunked generation method rather than the callback directly printing to stdout.

Many other code quality fixes will need to be made before this is landable.
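The last bullet above contrasts two designs; here is a hedged sketch (illustrative only, not torchchat's actual API) of a print-driving callback versus chunked generation that the caller routes:

```python
# Illustrative contrast: a callback that the generator drives (output side
# effects fixed inside generate) vs. a generator that yields chunks.
def generate_with_callback(tokens, callback):
    for t in tokens:
        callback(t)  # e.g. print(t, end="") hard-wires output to stdout

def generate_chunked(tokens):
    for t in tokens:
        yield t  # caller decides: print for chat, HTTP stream for the server

chunks = list(generate_chunked(["Hello", ", ", "world"]))
```

With the chunked form, `chat` and the API server can share one generation path and differ only in how they consume the chunks.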

@byjlw byjlw requested review from malfet, dbort, byjlw and kartikayk July 8, 2024 18:02
@byjlw byjlw changed the title Create Generator class OpenAI API and UI using streamlit Jul 9, 2024
@dbort dbort left a comment

Before getting into this review, please split this into smaller PRs. There are a lot of different things going on here, and if we needed to revert this it would have a large blast radius. Independent topics I see:

  • Setting up ISSUE_TEMPLATE
  • Fixing formatting and capitalization (whitespace, ExecuTorch, etc)
  • Adding the API layer
  • Adding the browser layer on top of the API layer
  • The changes under distributed/
  • Maybe others

I usually use ghstack to make a stack of PRs; basically, separate/squash things into one git commit per PR, then run ghstack to push/update the whole stack.

Though some of these are totally independent and don't need to be stacked: they'd be better off as non-stacked PRs, because then we could land each on its own schedule.


byjlw commented Jul 9, 2024

Generate is broken.
python3 torchchat.py generate llama3 --prompt "write me a story about a boy and his bear"

@vmpuri vmpuri force-pushed the generator_class branch from 34aa53e to ce31856 Compare July 10, 2024 00:26
README.md Outdated
@@ -118,6 +118,34 @@ python3 torchchat.py generate llama3 --prompt "write me a story about a boy and

For more information run `python3 torchchat.py generate --help`

The `Generator` class can also be imported into a Python program to generate responses.
This is a bit overkill for the README. We want to keep the README as clean and short as possible. I'd provide the necessary info for generate via `--help`.

Shortened this.


Everything else in this readme is about the commandline. Torchchat isn't meant to be a library that other people can import and use, so we shouldn't talk about the internals of files in this README.


Please remove this section from the readme

@vmpuri vmpuri force-pushed the generator_class branch 4 times, most recently from 5d3d896 to c6bd796 Compare July 10, 2024 22:34
utils/utils.py Outdated
from subprocess import check_output

import torch
def get_device_info(name: str) -> str:
Please add a pydoc string. What sort of names does it expect as input? What kinds of strings can it return?

generate.py Outdated
profile: Optional[Path],
quantize,
draft_quantize,
):
Please add a doc comment describing all of the arguments.

utils/utils.py Outdated
@@ -0,0 +1,24 @@
import platform
This file needs a copyright header

@vmpuri vmpuri force-pushed the generator_class branch 2 times, most recently from ba0a1a3 to 5518efc Compare July 12, 2024 17:25
@dbort dbort left a comment

Please take a look at my other pending comments, too

eval.py Outdated
return encoded

def tok_decode(self, tokens):
decoded = self._tokenizer.decode(tokens)
return decoded

def _model_call(self, inps):
# TODO: make batches work
# TODO: make bat˚≈˚ches work
Stray extra characters

eval.py Outdated
@@ -85,11 +84,17 @@ def __init__(
self,
model: Transformer,
tokenizer,
model_forward: callable,
callable isn't a type; it's a built-in function that returns True if something is callable. These should be annotated using `from typing import Callable` and `model_forward: Callable` instead. Same for other uses of lowercase `callable`.
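For illustration, a minimal, self-contained sketch of the fix the reviewer is asking for (names are hypothetical):

```python
# typing.Callable is the annotation; the built-in callable() is just a
# predicate that reports whether an object can be called.
from typing import Callable

def run_forward(model_forward: Callable[[int], int], x: int) -> int:
    # model_forward can be any function taking an int and returning an int.
    return model_forward(x)

assert callable(run_forward)  # the built-in predicate, not a type
result = run_forward(lambda n: n * 2, 21)
```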

generate.py Outdated
Comment on lines 123 to 125
"""
Generates text samples based on a pre-trained Transformer model and tokenizer.
"""

doc strings need to be inside the thing that they document

class Generator:
    """Generates text samples based on a pre-trained Transformer model and tokenizer."""


api/api.py Outdated


@dataclass
class AbstractMessageType(ABC):
Please add docstrings on these classes. Especially this one, which represents some kind of base class.

For each class, if there's a URL that describes the type and the fields in it, please point to that URL. As-is, it's not clear what most of the fields in this file mean, what their valid values are, etc.

api/api.py Outdated


@dataclass
class AbstractMessageType(ABC):
Do users need to see this type? Or could it be _AbstractMessageType to hide it?

api/api.py Outdated


@dataclass
class ToolMessage(AbstractMessageType):
This one is ToolMessage not ToolMessageType, which is inconsistent with the other subclasses of AbstractMessageType.

tbh I prefer the version without Type, since these are all types; could the other subclasses drop the Type too, or is this part of the API? Either way, it'd be better to make the names consistent if possible.

@vmpuri vmpuri force-pushed the generator_class branch from 5518efc to 5f7dfd8 Compare July 12, 2024 19:34
@dbort dbort left a comment

The generate.py commit is looking really good! Thanks for all of the changes so far.

The final blocking issue is the "sys.path.append" thing.

eval.py Outdated
@@ -85,11 +84,17 @@ def __init__(
self,
model: Transformer,
tokenizer,
model_forward: Callable,
Since the code checks to see if this is None, this should be

model_forward: Optional[Callable] = None,

Could leave out the = None part, but it makes things easier for callers if they don't want to provide model_forward. And it preserves BC for this API.

generate.py Outdated
Comment on lines 128 to 134
builder_args (BuilderArgs): A BuilderArgs object defining the model configuration
speculative_builder_args (BuilderArgs): A BuilderArgs object defining the speculative model configuration for speculative decode
tokenizer_args (TokenizerArgs): A TokenizerArgs object defining the tokenizer configuration for both the model and speculative model
generator_args (GeneratorArgs): A GeneratorArgs object controlling the generation parameters
profile (Path): A Path object pointing to a directory where the profiling results will be stored, if enabled.
quantize (bool): Whether to quantize the model.
draft_quantize (bool): Whether to quantize the draft model.
doc comments shouldn't mention the types; the params themselves are (or should be) type-annotated instead, and the reader can look at those. This reduces the possibility of the doc getting out of sync with the code.

generate.py Outdated
quantize,
draft_quantize,
Based on the docstring, these should both be annotated as bool

generate.py Outdated
Comment on lines 142 to 153
# support running without installing as a package
wd = Path(__file__).parent.parent.resolve()
sys.path.append(str(wd))
Reminder about this. This should not happen inside the class.

generate.py Outdated
This model is not known to support the chat function
and may produce nonsensical or false output.
*******************************************************
"""
nit: inconsistent indentation

            print(textwrap.dedent(
                """
                *******************************************************
                This model is not known to support the chat function
                and may produce nonsensical or false output.
                *******************************************************
                """
            ))

@dbort dbort left a comment

Thanks for the comments and docstrings on api.py, they really help a lot. Please take a look at the other unresolved comments I left on this file earlier.

api/api.py Outdated
Comment on lines 16 to 20
"""
Message classes and associated objects - see the types of Messages under "Create Chat Completion >>> Request body >>> messages"
"""
Since this doesn't annotate anything specific, it should be a # comment block. The first docstring on line 11 will be associated with the file/module, but this second string will be ignored by pydoc.
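A small sketch of the point being made (the module member is hypothetical): only the first string literal in a module is its docstring; a later bare string is evaluated and discarded, so section-level notes belong in `#` comments:

```python
"""Module docstring: pydoc associates this first string with the module."""

# Message classes and associated objects. A second bare string literal here
# would be silently ignored by pydoc, so this note is a comment block.

GREETING = "hello"  # hypothetical module member for the sketch
```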

api/api.py Outdated

@dataclass
class AbstractMessage(ABC):
"Base class with common parameters for Message types. All Messages require a role and can have a content string"
Always use """ for docstrings.

Also, docstrings should always start with a short (<80col) summary on the first line with the quotes, ending with a period. Any additional details should be after a blank line, and use full sentences ending with periods.

See https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings for full details.

"""Base class with common parameters for Message types.

All Messages require a role and can have a content string.
"""

api/api.py Outdated

@dataclass
class AbstractMessage(ABC):
"Base class with common parameters for Message types. All Messages require a role and can have a content string"
What are valid/expected role and content strings? How would an author choose what to put in those fields?
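One way to address this (a sketch, not the PR's code; the role values follow the OpenAI chat-completions convention and are an assumption about this API):

```python
# Sketch of a docstring that states the valid values of string fields.
from dataclasses import dataclass

@dataclass
class Message:
    """A single chat message.

    Attributes:
        role: One of {"system", "user", "assistant", "tool"}, following the
            OpenAI chat completion request schema (assumed here).
        content: Free-form message text; interpretation depends on the role.
    """
    role: str
    content: str

msg = Message(role="user", content="Hello")
```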

api/api.py Outdated
tool_calls: Optional[List[ToolCall]] = None


"""
Should also be a # comment. (but thank you for adding these block comments)

@dbort dbort left a comment

The browser commit looks good.

@vmpuri vmpuri force-pushed the generator_class branch 4 times, most recently from c830c61 to ea88641 Compare July 12, 2024 23:30
import torch


def get_device_info(name: str) -> str:
Please add a docstring. What format of string does this expect? What format of string does it return?

@vmpuri vmpuri force-pushed the generator_class branch 4 times, most recently from 18f43bf to 40c4676 Compare July 15, 2024 22:34
@dbort dbort left a comment

Just a couple minor things, but looks great!

Next step is to split this into three PRs using ghstack

generate.py Outdated
Comment on lines 127 to 133
builder_args: A BuilderArgs object defining the model configuration
speculative_builder_args: A BuilderArgs object defining the speculative model configuration for speculative decode
tokenizer_args: A TokenizerArgs object defining the tokenizer configuration for both the model and speculative model
generator_args: A GeneratorArgs object controlling the generation parameters
profile: A Path object pointing to a directory where the profiling results will be stored, if enabled.
quantize: Whether to quantize the model.
draft_quantize: Whether to quantize the draft model.
This is ok, but in the future don't mention the types of things ("A XYZ object") since it's already provided by the type annotation on the function definition.

generate.py Outdated
tokenizer_args: A TokenizerArgs object defining the tokenizer configuration for both the model and speculative model
generator_args: A GeneratorArgs object controlling the generation parameters
profile: A Path object pointing to a directory where the profiling results will be stored, if enabled.
quantize: Whether to quantize the model.
This is ok, but when documenting booleans it's better to say like "If true, quantizes the model." The "whether" phrasing can be ambiguous, especially when the arg names are less clear than these.

"""Returns a human-readable description of the hardware based on a torch.device.type

Args:
device: A torch.device.type string, such as CPU
CPU isn't a valid input; it only accepts "cpu". I'd say something like

device: A torch.device.type string: one of {"cpu", "cuda"}.

or just

device: A torch.device.type string.

api/api.py Outdated
class _AbstractMessage(ABC):
"""Base class with common parameters for Message types.

All Messages require a role and can have a content string. Each Message type originates from
Still doesn't describe the expected format of these strings, especially role.

In general, saying that something is a string isn't enough for someone to know how to use it. If the format of the string matters, then the user needs to know what the valid or expected values are. If the string can be anything, then they should know that, but also know how the string is being used.

else self.model.config.max_seq_length
)

def completion(self, completion_request: CompletionRequest):
This method seems to be the primary way of interacting with this class, so it should have a docstring.

And the docstring should have a "Yields:" section (instead of a "Returns:" section) describing the return value.

else self.model.config.max_seq_length
)

def completion(self, completion_request: CompletionRequest):
Please add a return/yield type if possible. https://stackoverflow.com/a/38423388 talks about annotating generator functions.
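A hedged sketch of the suggested annotation (stub logic, not the PR's real completion method): a function that yields strings can be annotated `Generator[str, None, None]`, or the simpler `Iterator[str]` when send/return types are unused:

```python
from typing import Generator

def completion(prompt: str) -> Generator[str, None, None]:
    """Streams a completion for the request (illustrative stub).

    Yields:
        Decoded text chunks of the response, one chunk per token.
    """
    # Stand-in for real decoding: echo the prompt word by word.
    for word in prompt.split():
        yield word

chunks = list(completion("once upon a time"))
```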

Varun Puri added 2 commits July 17, 2024 13:35
ghstack-source-id: acb9464
Pull Request resolved: #906
ghstack-source-id: cc8f13a
Pull Request resolved: #907
@malfet malfet force-pushed the generator_class branch from ae169b5 to e2100bb Compare July 17, 2024 20:36
@byjlw byjlw closed this Jul 18, 2024

dbort commented Jul 18, 2024

This PR was split and merged as #918, #907, and #908
