
---
title: '📱 App types'
---
## App Types
Embedchain supports a variety of LLMs, embedding functions/models and vector databases.
Our app gives you full control over which components you want to use; you can mix and match them to your heart's content.
<Tip>
Out of the box, if you just use `app = App()`, Embedchain uses what we believe to be the best configuration available. This might include paid/proprietary components. Currently, this is:
* LLM: OpenAI (gpt-3.5-turbo)
* Embedder: OpenAI (text-embedding-ada-002)
* Database: ChromaDB
</Tip>
### LLM
#### Choosing an LLM
The following LLM providers are supported by Embedchain:
- OPENAI
- ANTHROPIC
- VERTEX_AI
- GPT4ALL
- AZURE_OPENAI
- LLAMA2
You can choose one by importing it from `embedchain.llm`. E.g.:
```python
from embedchain import App
from embedchain.llm.llama2 import Llama2Llm

app = App(llm=Llama2Llm())
```
#### Configuration
The LLMs can be configured by passing an `LlmConfig` object.
The config options can be found [here](/advanced/query_configuration#llmconfig).
```python
from embedchain import App
from embedchain.llm.llama2 import Llama2Llm
from embedchain.config import LlmConfig

app = App(llm=Llama2Llm(), llm_config=LlmConfig(number_documents=3, temperature=0))
```
### Embedder
#### Choosing an Embedder
The following providers for embedding functions are supported by Embedchain:
- OPENAI
- HUGGING_FACE
- VERTEX_AI
- GPT4ALL
- AZURE_OPENAI
You can choose one by importing it from `embedchain.embedder`. E.g.:
```python
from embedchain import App
from embedchain.embedder.vertexai import VertexAiEmbedder

app = App(embedder=VertexAiEmbedder())
```
#### Configuration
The embedders can be configured by passing an `EmbedderConfig` object.
```python
from embedchain import App
from embedchain.embedder.openai import OpenAiEmbedder
from embedchain.config import EmbedderConfig

app = App(embedder=OpenAiEmbedder(), embedder_config=EmbedderConfig(model="text-embedding-ada-002"))
```
<Tip>
You can also pass an `LlmConfig` instance directly to the `query` or `chat` method.
This creates a temporary config for that request alone, so you could, for example, use a different model (from the same provider) or get more context documents for a specific query.
</Tip>
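A per-request override might look like the following sketch. The `config` keyword name and the example question are assumptions here; check the `query` signature of your installed version.

```python
from embedchain import App
from embedchain.config import LlmConfig

app = App()
# Override the LLM settings for this one query; the app's default
# configuration stays untouched for subsequent calls.
answer = app.query(
    "Summarize the added sources.",  # hypothetical question
    config=LlmConfig(number_documents=5, temperature=0.2),
)
```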
### Vector Database
#### Choosing a Vector Database
The following vector databases are supported by Embedchain:
- ChromaDB
- Elasticsearch
You can choose one by importing it from `embedchain.vectordb`. E.g.:
```python
from embedchain import App
from embedchain.vectordb.elasticsearch import ElasticsearchDB

app = App(db=ElasticsearchDB())
```
#### Configuration
The vector databases can be configured by passing a specific config object.
These vary greatly between the different vector databases.
```python
from embedchain import App
from embedchain.vectordb.elasticsearch import ElasticsearchDB
from embedchain.config import ElasticsearchDBConfig

app = App(db=ElasticsearchDB(), db_config=ElasticsearchDBConfig())
```
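In practice you will usually point the config at your own cluster. A minimal sketch, assuming `ElasticsearchDBConfig` accepts an `es_url` connection parameter (verify the exact field names for your version):

```python
from embedchain import App
from embedchain.vectordb.elasticsearch import ElasticsearchDB
from embedchain.config import ElasticsearchDBConfig

# es_url is an assumed parameter name; it should point at a
# running Elasticsearch instance.
app = App(
    db=ElasticsearchDB(),
    db_config=ElasticsearchDBConfig(es_url="http://localhost:9200"),
)
```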
### PersonApp
```python
from embedchain import PersonApp

naval_chat_bot = PersonApp("name_of_person_or_character")  # e.g. "Yoda"
```
- `PersonApp` uses OpenAI's models, which are paid. 💸 You will be charged for both embedding model usage and LLM usage.
- `PersonApp` uses OpenAI's embedding model to create embeddings for chunks and the ChatGPT API as the LLM to generate an answer given the relevant docs. Make sure that you have an OpenAI account and an API key. If you don't have an API key, you can create one by visiting [this link](https://platform.openai.com/account/api-keys).
- Once you have the API key, set it in an environment variable called `OPENAI_API_KEY`:
```python
import os

os.environ["OPENAI_API_KEY"] = "sk-xxxx"
```
### Full Configuration Examples
Embedchain previously offered fully configured classes, namely `App`, `OpenSourceApp`, `CustomApp` and `Llama2App`.
We have deprecated these apps. The reason for this decision was that it was hard to switch to a different LLM, embedder or vector database if you one day decided that that's what you wanted to do.
The new app allows drop-in replacements, such as changing `App(llm=OpenAiLlm())` to `App(llm=Llama2Llm())`.
To make the switch to our new, fully configurable app easier, we provide you with full examples of what the old classes would look like implemented as a new app.
You can swap these in, and if you decide you want to try a different model one day, you don't have to rewrite your whole bot.
#### App
`App` without any configuration still uses the best options available, so you can keep using:
```python
from embedchain import App

app = App()
```
#### OpenSourceApp
Use this snippet to run an open source app.
```python
from embedchain import App
from embedchain.llm.gpt4all import GPT4ALLLlm
from embedchain.embedder.gpt4all import GPT4AllEmbedder
from embedchain.vectordb.chroma import ChromaDB

app = App(llm=GPT4ALLLlm(), embedder=GPT4AllEmbedder(), db=ChromaDB())
```
#### Llama2App
Use this snippet to run a Llama2 app.
```python
from embedchain import App
from embedchain.llm.llama2 import Llama2Llm

app = App(llm=Llama2Llm())
```
#### CustomApp
Every app is a custom app now.
If you were previously using a `CustomApp`, you can now just change it to `App`.
Here's an example of what you could do if we combined everything shown on this page.
```python
from embedchain import App
from embedchain.config import ElasticsearchDBConfig, EmbedderConfig, LlmConfig
from embedchain.embedder.openai import OpenAiEmbedder
from embedchain.llm.llama2 import Llama2Llm
from embedchain.vectordb.elasticsearch import ElasticsearchDB

app = App(
    llm=Llama2Llm(),
    llm_config=LlmConfig(number_documents=3, temperature=0),
    embedder=OpenAiEmbedder(),
    embedder_config=EmbedderConfig(model="text-embedding-ada-002"),
    db=ElasticsearchDB(),
    db_config=ElasticsearchDBConfig(),
)
```
### Compatibility with other apps
- If there is another app instance in your script or app, you can change the import as follows:
```python
from embedchain import App as EmbedChainApp
from embedchain import PersonApp as EmbedChainPersonApp

# or

from embedchain import App as ECApp
from embedchain import PersonApp as ECPApp
```
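The aliased classes behave exactly like the originals, so everything shown on this page still applies. For example:

```python
from embedchain import App as ECApp
from embedchain.llm.llama2 import Llama2Llm

# ECApp is just App under a different name, so the same
# llm/embedder/db keyword arguments work unchanged.
app = ECApp(llm=Llama2Llm())
```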