
Internal Server Error #8

Closed
LuisYordano opened this issue Apr 11, 2024 · 3 comments
Labels
bug Something isn't working

Comments


LuisYordano commented Apr 11, 2024

I am currently testing Hayhooks, and I get an Internal Server Error with the two pipelines below.

----- example1.yml -----

components:
  converter:
    init_parameters:
      extractor_type: DefaultExtractor
    type: haystack.components.converters.html.HTMLToDocument
  fetcher:
    init_parameters:
      raise_on_failure: true
      retry_attempts: 2
      timeout: 3
      user_agents:
      - haystack/LinkContentFetcher/2.0.1
    type: haystack.components.fetchers.link_content.LinkContentFetcher
  llm:
    init_parameters:
      generation_kwargs: {}
      model: orca-mini
      raw: false
      streaming_callback: null
      system_prompt: null
      template: null
      timeout: 1200
      url: http://localhost:11434/api/generate
    type: haystack_integrations.components.generators.ollama.generator.OllamaGenerator
  prompt:
    init_parameters:
      template: |
        "According to the contents of this website:
        {% for document in documents %}
          {{document.content}}
        {% endfor %}
        Answer the given question: {{query}}
        Answer:
        "
    type: haystack.components.builders.prompt_builder.PromptBuilder
connections:
- receiver: converter.sources
  sender: fetcher.streams
- receiver: prompt.documents
  sender: converter.documents
- receiver: llm.prompt
  sender: prompt.prompt

metadata: {}

Request body

{
  "converter": {
    "meta": {}
  },
  "fetcher": {
    "urls": [
      "https://haystack.deepset.ai/overview/quick-start"
    ]
  },
  "llm": {
    "generation_kwargs": {}
  },
  "prompt": {
    "query": "Which components do I need for a RAG pipeline?"
  }
}

Could you indicate the correct curl command?
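For reference, something along these lines should be the shape of the call once the pipeline is deployed. This is a sketch, not a verified command: it assumes the pipeline was deployed under the name `example1` and that Hayhooks is listening on its default address `localhost:1416` — check your Hayhooks version's docs (or `hayhooks status`) for the exact endpoint path. The JSON payload is the request body from above.

```shell
# Assumption: deployed pipeline name is "example1", server at the
# default localhost:1416. Adjust name/host/path to your deployment.
curl -X POST http://localhost:1416/example1 \
  -H "Content-Type: application/json" \
  -d '{
    "converter": {"meta": {}},
    "fetcher": {"urls": ["https://haystack.deepset.ai/overview/quick-start"]},
    "llm": {"generation_kwargs": {}},
    "prompt": {"query": "Which components do I need for a RAG pipeline?"}
  }'
```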


----- example2.yml -----

components:
  llm:
    init_parameters:
      generation_kwargs: {}
      model: orca-mini
      raw: false
      streaming_callback: null
      system_prompt: null
      template: null
      timeout: 1200
      url: http://localhost:11434/api/generate
    type: haystack_integrations.components.generators.ollama.generator.OllamaGenerator
  prompt_builder:
    init_parameters:
      template: |
          "Given these documents, answer the question.
          Documents:
          {% for doc in documents %}
              {{ doc.content }}
          {% endfor %}
          Question: {{query}}
          Answer:"
    type: haystack.components.builders.prompt_builder.PromptBuilder
  retriever:
    init_parameters:
      document_store:
        init_parameters:
          collection_name: documents
          embedding_function: default
          persist_path: .
        type: haystack_integrations.document_stores.chroma.document_store.ChromaDocumentStore
      filters: null
      top_k: 10
    type: haystack_integrations.components.retrievers.chroma.retriever.ChromaEmbeddingRetriever
  text_embedder:
    init_parameters:
      generation_kwargs: {}
      model: orca-mini
      timeout: 1200
      url: http://localhost:11434/api/embeddings
    type: haystack_integrations.components.embedders.ollama.text_embedder.OllamaTextEmbedder
connections:
- receiver: retriever.query_embedding
  sender: text_embedder.embedding
- receiver: prompt_builder.documents
  sender: retriever.documents
- receiver: llm.prompt
  sender: prompt_builder.prompt
max_loops_allowed: 100
metadata: {}

Request body

{
  "llm": {
    "generation_kwargs": {}
  },
  "prompt_builder": {
    "query": "How old was he when he died?"
  },
  "retriever": {
    "filters": {},
    "top_k": 3
  },
  "text_embedder": {
    "text": "How old was he when he died?",
    "generation_kwargs": {}
  }
}


Could you indicate the correct curl command?
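Same sketch for the second pipeline, under the same assumptions (pipeline deployed as `example2`, Hayhooks on its default `localhost:1416`; verify the endpoint path against your Hayhooks version). The payload is the request body from above.

```shell
# Assumption: deployed pipeline name is "example2", server at the
# default localhost:1416. Adjust name/host/path to your deployment.
curl -X POST http://localhost:1416/example2 \
  -H "Content-Type: application/json" \
  -d '{
    "llm": {"generation_kwargs": {}},
    "prompt_builder": {"query": "How old was he when he died?"},
    "retriever": {"filters": {}, "top_k": 3},
    "text_embedder": {
      "text": "How old was he when he died?",
      "generation_kwargs": {}
    }
  }'
```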

masci (Member) commented Apr 12, 2024

Hey @LuisYordano thanks for trying out Hayhooks and thanks for reporting the issue!

I'll look into this and will let you know 👍

masci added the bug label on Apr 12, 2024
jacksteussie commented
I don't know if it's the same issue, but when I get the Internal Server Error on my deployment, the FastAPI logs show: TypeError: Object of type timedelta is not JSON serializable.
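That error can be reproduced in isolation: Python's `json` module has no default encoding for `datetime.timedelta`, so any response payload containing one fails to serialize. The sketch below shows the error and one common workaround (a custom encoder); this is an illustration of the failure mode, not the actual Hayhooks fix.

```python
import json
from datetime import timedelta

# Reproduce the error from the logs: json.dumps has no handler for timedelta.
try:
    json.dumps({"elapsed": timedelta(seconds=3)})
except TypeError as exc:
    print(exc)  # Object of type timedelta is not JSON serializable

# A common workaround: a custom encoder that converts timedelta to seconds.
class TimedeltaEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, timedelta):
            return obj.total_seconds()
        return super().default(obj)

print(json.dumps({"elapsed": timedelta(seconds=3)}, cls=TimedeltaEncoder))
# → {"elapsed": 3.0}
```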

masci (Member) commented May 9, 2024

Your examples work with the latest version, 0.0.13.

masci closed this as completed on May 9, 2024