---
title: ❓ FAQs
description: 'Collection of frequently asked questions'
---
<AccordionGroup>
<Accordion title="Does Embedchain support OpenAI's Assistant APIs?">
Yes, it does. Please refer to the [OpenAI Assistant docs page](/get-started/openai-assistant).
</Accordion>
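As a quick orientation, a minimal sketch of the wrapper described on that docs page is shown below. The class path, constructor parameters, and method names are assumptions that may differ across Embedchain versions, and running it requires a valid `OPENAI_API_KEY`; treat the linked page as authoritative.

```python
import os
from embedchain.store.assistants import OpenAIAssistant  # assumed import path

os.environ["OPENAI_API_KEY"] = "sk-xxx"

# Create an assistant backed by OpenAI's Assistant API (names are illustrative)
assistant = OpenAIAssistant(
    name="FAQ Assistant",
    instructions="Answer questions using only the provided documents.",
)

# Add a data source, then chat with the assistant
assistant.add("path/to/document.pdf")
print(assistant.chat("What does the document say about pricing?"))
```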
<Accordion title="How to use the Mistral AI language model?">
Use the model hosted on Hugging Face: `mistralai/Mistral-7B-v0.1`
<CodeGroup>
```python main.py
import os
from embedchain import Pipeline as App

os.environ["OPENAI_API_KEY"] = "sk-xxx"
os.environ["HUGGINGFACE_ACCESS_TOKEN"] = "hf_your_token"

app = App.from_config("huggingface.yaml")
```
```yaml huggingface.yaml
llm:
  provider: huggingface
  config:
    model: 'mistralai/Mistral-7B-v0.1'
    temperature: 0.5
    max_tokens: 1000
    top_p: 0.5
    stream: false
```
</CodeGroup>
</Accordion>
<Accordion title="How to use the GPT-4 Turbo model released on OpenAI DevDay?">
Use the model `gpt-4-turbo` provided by OpenAI.
<CodeGroup>
```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4_turbo.yaml file
app = App.from_config(config_path="gpt4_turbo.yaml")
```
```yaml gpt4_turbo.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4-turbo'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>
</Accordion>
<Accordion title="How to use GPT-4 as the LLM model?">
<CodeGroup>
```python main.py
import os
from embedchain import Pipeline as App

os.environ['OPENAI_API_KEY'] = 'xxx'

# load llm configuration from gpt4.yaml file
app = App.from_config(config_path="gpt4.yaml")
```
```yaml gpt4.yaml
llm:
  provider: openai
  config:
    model: 'gpt-4'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
```
</CodeGroup>
</Accordion>
<Accordion title="I don't have OpenAI credits. How can I use an open-source model?">
<CodeGroup>
```python main.py
from embedchain import Pipeline as App

# gpt4all runs locally, so no OpenAI API key is needed
# load llm configuration from opensource.yaml file
app = App.from_config(config_path="opensource.yaml")
```
```yaml opensource.yaml
llm:
  provider: gpt4all
  config:
    model: 'orca-mini-3b-gguf2-q4_0.gguf'
    temperature: 0.5
    max_tokens: 1000
    top_p: 1
    stream: false
embedder:
  provider: gpt4all
  config:
    model: 'all-MiniLM-L6-v2'
```
</CodeGroup>
</Accordion>
</AccordionGroup>
#### Need more help?

If the docs aren't sufficient, please feel free to reach out to us using one of the following methods:

<Snippet file="get-help.mdx" />