
Getting scores with LlamaIndex using a custom model such as Bedrock or Gemini #1941

Open
@mdciri

Description


Describe the Feature
I would like to run evaluations using Bedrock or Gemini with LlamaIndex, and not just LangChain (https://docs.ragas.io/en/stable/howtos/customizations/customize_models/).
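For reference, the currently documented path goes through the LangChain wrappers; a rough sketch of that approach, assuming the langchain_aws integrations from the linked page (constructor arguments omitted):

from langchain_aws import ChatBedrockConverse, BedrockEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

# LangChain Bedrock models wrapped for ragas (constructor arguments omitted)
evaluator_llm = LangchainLLMWrapper(ChatBedrockConverse(...))
evaluator_embeddings = LangchainEmbeddingsWrapper(BedrockEmbeddings(...))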

I have already tried to use LlamaIndexLLMWrapper and LlamaIndexEmbeddingsWrapper, but it does not work: I either get an error or it ends up in a long loop with:

from llama_index.llms.bedrock_converse import BedrockConverse
from llama_index.embeddings.bedrock import BedrockEmbedding
from ragas.llms import LlamaIndexLLMWrapper
from ragas.embeddings import LlamaIndexEmbeddingsWrapper
from ragas.metrics import ResponseRelevancy

# LlamaIndex Bedrock LLM and embedding models (constructor arguments elided)
bedrock_model = BedrockConverse(...)
bedrock_embeddings = BedrockEmbedding(...)

# wrap the LlamaIndex models for ragas and build the metric
response_relevancy_scorer = ResponseRelevancy(
    llm=LlamaIndexLLMWrapper(bedrock_model),
    embeddings=LlamaIndexEmbeddingsWrapper(bedrock_embeddings),
)

# score a single-turn sample (this is where the error / loop occurs)
out = await response_relevancy_scorer.single_turn_ascore(sample)
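
Here sample is the ragas SingleTurnSample being scored; a minimal sketch, assuming the standard ragas dataset schema and purely illustrative field values:

from ragas.dataset_schema import SingleTurnSample

# illustrative single-turn sample; ResponseRelevancy compares the response
# against the user input using the wrapped LLM and embeddings
sample = SingleTurnSample(
    user_input="What is the capital of France?",
    response="The capital of France is Paris.",
)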

Could you please consider adding support for these features?

Why is the feature important for you?
I normally use LlamaIndex rather than LangChain.

Metadata

Labels

enhancement (New feature or request)
