---
title: ❓ FAQs
description: 'Collection of frequently asked questions'
---

#### Does Embedchain support OpenAI's Assistant APIs?

Yes, it does. Please refer to the [OpenAI Assistant docs page](/get-started/openai-assistant).
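
For a quick orientation, here is a minimal sketch of that pattern. The import path, class name, constructor arguments, and method names below are assumptions made for illustration; check the linked page for the exact API before relying on it.

```python
from embedchain.store.assistants import OpenAIAssistant  # import path is an assumption; verify against the linked docs

# Create an assistant backed by OpenAI's Assistants API
assistant = OpenAIAssistant(
    name="FAQ Assistant",  # hypothetical name
    instructions="Answer questions using the sources added below.",
)

# Ingest a source and chat with it (method names are assumptions)
assistant.add("https://www.forbes.com/profile/elon-musk")
print(assistant.chat("What is the net worth of Elon Musk?"))
```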

#### How to use the `gpt-4-turbo` model released on OpenAI DevDay?

<CodeGroup>

```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4_turbo.yaml file
app = App.from_config(yaml_path="gpt4_turbo.yaml")
```

```yaml gpt4_turbo.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>
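
Once the app is configured, adding a source and querying it looks the same regardless of which model the YAML selects. A minimal sketch (the URL and question are placeholders):

```python
# ingest a data source, then ask a question grounded in it
app.add("https://www.forbes.com/profile/elon-musk")  # placeholder source
answer = app.query("What is the net worth of Elon Musk?")  # placeholder question
print(answer)
```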

#### How to use GPT-4 as the LLM?

<CodeGroup>

```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4.yaml file
app = App.from_config(yaml_path="gpt4.yaml")
```

```yaml gpt4.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>

#### I don't have OpenAI credits. How can I use an open-source model?

<CodeGroup>

```python main.py
from embedchain import Pipeline as App

# no OpenAI API key is needed: both the LLM and the embedder run locally via gpt4all
# load llm configuration from opensource.yaml file
app = App.from_config(yaml_path="opensource.yaml")
```

```yaml opensource.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
embedder:
  provider: gpt4all
  config:
    model: 'all-MiniLM-L6-v2'
```
</CodeGroup>

#### How to contact support?

If the docs aren't sufficient, please feel free to reach out to us through one of the following channels:

<Snippet file="get-help.mdx" />