
---
title: ❓ FAQs
description: 'Collection of frequently asked questions'
---
#### How do I use GPT-4 as the LLM model?
<CodeGroup>

```python main.py
import os

from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load the llm configuration from the gpt4.yaml file
app = App.from_config(yaml_path="gpt4.yaml")
```

```yaml gpt4.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```

</CodeGroup>
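If a config file has a typo (for example, a mis-indented key), `from_config` will fail with an unhelpful error. A quick way to sanity-check the YAML structure before passing it to the app is to parse it yourself. The sketch below is a hypothetical helper, not part of embedchain; it uses PyYAML (`pip install pyyaml`) and checks the same keys shown in `gpt4.yaml` above.

```python
# Hypothetical config sanity check (not part of embedchain):
# parse the YAML and confirm the llm section has the expected shape.
import yaml

CONFIG = """
llm:
  provider: openai
  config:
    model: 'gpt-4'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
"""

def validate_llm_config(text: str) -> dict:
    """Parse a config string and assert the llm section is well-formed."""
    cfg = yaml.safe_load(text)
    llm = cfg.get("llm")
    assert isinstance(llm, dict), "top-level 'llm' key missing"
    assert "provider" in llm, "'llm.provider' is required"
    assert isinstance(llm.get("config"), dict), "'llm.config' must be a mapping"
    return cfg

cfg = validate_llm_config(CONFIG)
print(cfg["llm"]["config"]["model"])  # gpt-4
```

If the indentation is wrong, `safe_load` returns a flat mapping and the assertions point at the missing key immediately, which is faster to debug than a traceback from inside the library.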
#### I don't have OpenAI credits. How can I use an open-source model?
<CodeGroup>

```python main.py
from embedchain import Pipeline as App

# load the llm configuration from the opensource.yaml file
# (no OpenAI API key is needed, since both the llm and the
# embedder run locally via gpt4all)
app = App.from_config(yaml_path="opensource.yaml")
```

```yaml opensource.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b.ggmlv3.q4_0.bin'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: gpt4all
  config:
    model: 'all-MiniLM-L6-v2'
```

</CodeGroup>
#### How do I contact support?
If the docs aren't sufficient, please feel free to reach out to us using one of the following methods:
<Snippet file="get-help.mdx" />