---
title: ❓ FAQs
description: 'Collection of frequently asked questions'
---
<AccordionGroup>
<Accordion title="Does Embedchain support OpenAI's Assistant APIs?">
Yes, it does. Please refer to the [OpenAI Assistant docs page](/examples/openai-assistant).
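For a quick orientation, here is a minimal sketch of what that integration can look like. The `OpenAIAssistant` class, its import path, and its parameters are assumptions based on the linked docs page; check that page for the current API before using it.

```python
import os


def ask_assistant(question: str) -> str:
    """Query an OpenAI Assistant built with Embedchain (sketch, not a definitive API)."""
    os.environ["OPENAI_API_KEY"] = "sk-xxx"  # replace with your real key

    # Import path and class name are assumptions taken from the linked docs page;
    # verify them against your installed embedchain version.
    from embedchain.store.assistants import OpenAIAssistant

    assistant = OpenAIAssistant(
        name="Docs Assistant",
        instructions="Answer questions about the added data sources.",
    )
    assistant.add("https://www.forbes.com/profile/elon-musk")  # add a data source
    return assistant.chat(question)


# Usage (requires a valid OpenAI API key and network access):
# answer = ask_assistant("What is the net worth of Elon Musk?")
```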
</Accordion>
<Accordion title="How do I use the Mistral AI language model?">
Use the `mistralai/Mistral-7B-v0.1` model hosted on Hugging Face:
<CodeGroup>
```python main.py
import os
from embedchain import Pipeline as App

os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_your_token"

# load llm and embedder configuration from huggingface.yaml file
app = App.from_config("huggingface.yaml")
```
```yaml huggingface.yaml
llm:
  provider: huggingface
  config:
    model: 'mistralai/Mistral-7B-v0.1'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
    stream: false

embedder:
  provider: huggingface
  config:
    model: 'sentence-transformers/all-mpnet-base-v2'
```
</CodeGroup>
</Accordion>
<Accordion title="How do I use the GPT-4 Turbo model released on OpenAI DevDay?">
Use the `gpt-4-turbo` model provided by OpenAI:
<CodeGroup>
```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4_turbo.yaml file
app = App.from_config(config_path="gpt4_turbo.yaml")
```
```yaml gpt4_turbo.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>
</Accordion>
<Accordion title="How do I use GPT-4 as the LLM?">
<CodeGroup>
```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4.yaml file
app = App.from_config(config_path="gpt4.yaml")
```
```yaml gpt4.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>
</Accordion>
<Accordion title="I don't have OpenAI credits. How can I use an open-source model?">
<CodeGroup>
```python main.py
from embedchain import Pipeline as App

# load llm configuration from opensource.yaml file
# (no OpenAI API key is needed for the gpt4all provider)
app = App.from_config(config_path="opensource.yaml")
```
```yaml opensource.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: gpt4all
  config:
    model: 'all-MiniLM-L6-v2'
```
</CodeGroup>
</Accordion>
<Accordion title="How do I stream the response while using an OpenAI model in Embedchain?">
Set `stream` to `true` in the config file:
<CodeGroup>
```yaml openai.yaml
llm:
  provider: openai
  config:
    model: 'gpt-3.5-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: true
```
```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'sk-xxx'

app = App.from_config(config_path="openai.yaml")
app.add("https://www.forbes.com/profile/elon-musk")
response = app.query("What is the net worth of Elon Musk?")
# the response is streamed to stdout as it is generated
```
</CodeGroup>
</Accordion>
</AccordionGroup>

#### Still have questions?

If the docs don't answer your question, feel free to reach out to us through any of the following channels:

<Snippet file="get-help.mdx" />