---
title: ❓ FAQs
description: 'A collection of frequently asked questions'
---
<AccordionGroup>
<Accordion title="Does Embedchain support OpenAI's Assistant APIs?">
Yes, it does. Please refer to the [OpenAI Assistant docs page](/examples/openai-assistant).
</Accordion>
<Accordion title="How do I use a Mistral AI language model?">
Use the `mistralai/Mistral-7B-v0.1` model provided on Hugging Face.
<CodeGroup>
```python main.py
import os
from embedchain import App

os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_your_token"

app = App.from_config("huggingface.yaml")
```
```yaml huggingface.yaml
llm:
  provider: huggingface
  config:
    model: 'mistralai/Mistral-7B-v0.1'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
    stream: false

embedder:
  provider: huggingface
  config:
    model: 'sentence-transformers/all-mpnet-base-v2'
```
</CodeGroup>
</Accordion>
<Accordion title="How do I use the GPT-4 Turbo model released on OpenAI DevDay?">
Use the `gpt-4-turbo` model provided by OpenAI.
<CodeGroup>
```python main.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4_turbo.yaml file
app = App.from_config(config_path="gpt4_turbo.yaml")
```
```yaml gpt4_turbo.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>
</Accordion>
<Accordion title="How do I use GPT-4 as the LLM model?">
<CodeGroup>
```python main.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4.yaml file
app = App.from_config(config_path="gpt4.yaml")
```
```yaml gpt4.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>
</Accordion>
<Accordion title="I don't have OpenAI credits. How can I use an open-source model?">
<CodeGroup>
```python main.py
from embedchain import App

# load llm configuration from opensource.yaml file
app = App.from_config(config_path="opensource.yaml")
```
```yaml opensource.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false

embedder:
  provider: gpt4all
  config:
    model: 'all-MiniLM-L6-v2'
```
</CodeGroup>
</Accordion>
<Accordion title="How do I stream responses while using an OpenAI model in Embedchain?">
You can achieve this by setting `stream` to `true` in the config file.
<CodeGroup>
```yaml openai.yaml
llm:
  provider: openai
  config:
    model: 'gpt-3.5-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: true
```
```python main.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'sk-xxx'

app = App.from_config(config_path="openai.yaml")
app.add("https://www.forbes.com/profile/elon-musk")
response = app.query("What is the net worth of Elon Musk?")
# response will be streamed to stdout as it is generated
```
</CodeGroup>
</Accordion>
<Accordion title="How do I persist data across multiple app sessions?">
Set an `id` in the app config. Any data added under that `id` is stored and reused in later sessions. You can set the `id` in the yaml config or pass it directly in the `config` dict.
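For example, the same `id` can also be declared in a yaml config file instead of the dict (a minimal sketch; `persist.yaml` and `your-app-id` are placeholder names):

```yaml persist.yaml
app:
  config:
    id: your-app-id
```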
```python app1.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'sk-xxx'

app1 = App.from_config(config={
    "app": {
        "config": {
            "id": "your-app-id",
        }
    }
})
app1.add("https://www.forbes.com/profile/elon-musk")
response = app1.query("What is the net worth of Elon Musk?")
```
```python app2.py
import os
from embedchain import App

os.environ['OPENAI_API_KEY'] = 'sk-xxx'

app2 = App.from_config(config={
    "app": {
        "config": {
            # this will persist and load data from the app1 session
            "id": "your-app-id",
        }
    }
})
response = app2.query("What is the net worth of Elon Musk?")
```
</Accordion>
</AccordionGroup>

#### Still have questions?

If the docs don't answer your question, feel free to reach out to us through one of the following channels:
<Snippet file="get-help.mdx" />