fix: refactoring of fetch_node #511

Merged 3 commits on Aug 7, 2024
6 changes: 6 additions & 0 deletions examples/local_models/package-lock.json

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions examples/local_models/package.json
@@ -0,0 +1 @@
{}
36 changes: 36 additions & 0 deletions requirements-dev.lock
@@ -6,6 +6,8 @@
# features: []
# all-features: false
# with-sources: false
# generate-hashes: false
# universal: false

-e file:.
aiofiles==24.1.0
@@ -110,6 +112,7 @@ filelock==3.15.4
# via huggingface-hub
# via torch
# via transformers
# via triton
fireworks-ai==0.14.0
# via langchain-fireworks
fonttools==4.53.1
@@ -185,6 +188,7 @@ graphviz==0.20.3
# via scrapegraphai
greenlet==3.0.3
# via playwright
# via sqlalchemy
groq==0.9.0
# via langchain-groq
grpc-google-iam-v1==0.13.1
@@ -353,6 +357,34 @@ numpy==1.26.4
# via shapely
# via streamlit
# via transformers
nvidia-cublas-cu12==12.1.3.1
# via nvidia-cudnn-cu12
# via nvidia-cusolver-cu12
# via torch
nvidia-cuda-cupti-cu12==12.1.105
# via torch
nvidia-cuda-nvrtc-cu12==12.1.105
# via torch
nvidia-cuda-runtime-cu12==12.1.105
# via torch
nvidia-cudnn-cu12==8.9.2.26
# via torch
nvidia-cufft-cu12==11.0.2.54
# via torch
nvidia-curand-cu12==10.3.2.106
# via torch
nvidia-cusolver-cu12==11.4.5.107
# via torch
nvidia-cusparse-cu12==12.1.0.106
# via nvidia-cusolver-cu12
# via torch
nvidia-nccl-cu12==2.19.3
# via torch
nvidia-nvjitlink-cu12==12.6.20
# via nvidia-cusolver-cu12
# via nvidia-cusparse-cu12
nvidia-nvtx-cu12==12.1.105
# via torch
openai==1.37.0
# via burr
# via langchain-fireworks
@@ -593,6 +625,8 @@ tqdm==4.66.4
transformers==4.43.3
# via langchain-huggingface
# via sentence-transformers
triton==2.2.0
# via torch
typer==0.12.3
# via fastapi-cli
typing-extensions==4.12.2
@@ -635,6 +669,8 @@ uvicorn==0.30.3
# via fastapi
uvloop==0.19.0
# via uvicorn
watchdog==4.0.1
# via streamlit
watchfiles==0.22.0
# via uvicorn
websockets==12.0
34 changes: 34 additions & 0 deletions requirements.lock
@@ -6,6 +6,8 @@
# features: []
# all-features: false
# with-sources: false
# generate-hashes: false
# universal: false

-e file:.
aiohttp==3.9.5
@@ -67,6 +69,7 @@ filelock==3.15.4
# via huggingface-hub
# via torch
# via transformers
# via triton
fireworks-ai==0.14.0
# via langchain-fireworks
free-proxy==1.1.1
@@ -133,6 +136,7 @@ graphviz==0.20.3
# via scrapegraphai
greenlet==3.0.3
# via playwright
# via sqlalchemy
groq==0.9.0
# via langchain-groq
grpc-google-iam-v1==0.13.1
@@ -258,6 +262,34 @@ numpy==1.26.4
# via sentence-transformers
# via shapely
# via transformers
nvidia-cublas-cu12==12.1.3.1
# via nvidia-cudnn-cu12
# via nvidia-cusolver-cu12
# via torch
nvidia-cuda-cupti-cu12==12.1.105
# via torch
nvidia-cuda-nvrtc-cu12==12.1.105
# via torch
nvidia-cuda-runtime-cu12==12.1.105
# via torch
nvidia-cudnn-cu12==8.9.2.26
# via torch
nvidia-cufft-cu12==11.0.2.54
# via torch
nvidia-curand-cu12==10.3.2.106
# via torch
nvidia-cusolver-cu12==11.4.5.107
# via torch
nvidia-cusparse-cu12==12.1.0.106
# via nvidia-cusolver-cu12
# via torch
nvidia-nccl-cu12==2.19.3
# via torch
nvidia-nvjitlink-cu12==12.6.20
# via nvidia-cusolver-cu12
# via nvidia-cusparse-cu12
nvidia-nvtx-cu12==12.1.105
# via torch
openai==1.37.0
# via langchain-fireworks
# via langchain-openai
@@ -408,6 +440,8 @@ tqdm==4.66.4
transformers==4.43.3
# via langchain-huggingface
# via sentence-transformers
triton==2.2.0
# via torch
typing-extensions==4.12.2
# via anthropic
# via anyio
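The `# via` annotations in the lock files above record which package pins each transitive dependency — here, the `nvidia-*` CUDA wheels and `triton` all arrive via `torch`, `watchdog` via `streamlit`, and `greenlet` via `sqlalchemy`. As a side note, the same "why is this installed?" question can be answered at runtime with the standard library; a minimal sketch (the `declared_requirements` helper is illustrative, and inspecting `torch` assumes it is installed in the current environment):

```python
from importlib.metadata import requires, PackageNotFoundError

def declared_requirements(package: str) -> list[str]:
    """Return the dependency specifiers a package declares, or [] if unknown."""
    try:
        # requires() may return None when a package declares no dependencies.
        return requires(package) or []
    except PackageNotFoundError:
        # Package is not installed in this environment.
        return []

# Example: list the requirements that pull in the CUDA wheels.
for req in declared_requirements("torch"):
    print(req)
```

This reads only installed package metadata; it does not resolve or download anything, so it complements rather than replaces the lock file as the source of truth for pinned versions.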