gcgj-dify-1.7.0/api/core/model_runtime/model_providers

Latest commit: 1e829ceaf3 by ice yao, "chore: format get_customizable_model_schema return value (#9335)", 1 year ago
| Name | Last commit | Age |
| --- | --- | --- |
| __base | feat/enhance the multi-modal support (#8818) | 1 year ago |
| anthropic | chore: avoid implicit optional in type annotations of method (#8727) | 1 year ago |
| azure_ai_studio | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| azure_openai | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| baichuan | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| bedrock | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| chatglm | chore: refurbish Python code by applying refurb linter rules (#8296) | 2 years ago |
| cohere | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| deepseek | fix: response_format label (#8326) | 2 years ago |
| fireworks | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| fishaudio | Corrected type annotation to "Any" from "any" all files in "model_providers" folder (#9135) | 1 year ago |
| google | fix: remove the stream option of zhipu and gemini (#9319) | 1 year ago |
| groq | Added Llama 3.2 Vision Models Speech2Text Models for Groq (#9479) | 1 year ago |
| huggingface_hub | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| huggingface_tei | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| hunyuan | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| jina | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| leptonai | chore(api/core): apply ruff reformatting (#7624) | 2 years ago |
| localai | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| minimax | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| mistralai | add MixtralAI Model (#8517) | 2 years ago |
| mixedbread | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| moonshot | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| nomic | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| novita | chore(api/core): apply ruff reformatting (#7624) | 2 years ago |
| nvidia | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| nvidia_nim | chore(api/core): apply ruff reformatting (#7624) | 2 years ago |
| oci | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| ollama | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| openai | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| openai_api_compatible | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| openllm | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| openrouter | feat: add parameter top-k for the llm model provided by openrouter and siliconflow (#9455) | 1 year ago |
| perfxcloud | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| replicate | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| sagemaker | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| siliconflow | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| spark | fix:Spark's large language model token calculation error #7911 (#8755) | 2 years ago |
| stepfun | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| tencent | chore: refurbish Python code by applying refurb linter rules (#8296) | 2 years ago |
| togetherai | chore(api/core): apply ruff reformatting (#7624) | 2 years ago |
| tongyi | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| triton_inference_server | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| upstage | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| vertex_ai | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| volcengine_maas | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| voyage | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| wenxin | refactor wenxin rerank (#9486) | 1 year ago |
| xinference | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| yi | feat: add yi custom llm intergration (#9482) | 1 year ago |
| zhinao | chore(api/core): apply ruff reformatting (#7624) | 2 years ago |
| zhipuai | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| __init__.py | Model Runtime (#1858) | 2 years ago |
| _position.yaml | feat: add voyage ai as a new model provider (#8747) | 1 year ago |
| model_provider_factory.py | feat: support pinning, including, and excluding for model providers and tools (#7419) | 2 years ago |
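The layout above is a plugin-style registry: one directory per provider, a `_position.yaml` that fixes display order, and `model_provider_factory.py` that discovers and filters them. A minimal sketch of how such a factory might order discovered provider names against a position list follows; the function names (`load_positions`, `order_providers`) and the YAML-parsing shortcut are illustrative assumptions, not Dify's actual API.

```python
# Hypothetical sketch of ordering provider directories by a
# _position.yaml-style priority list. Not Dify's real implementation.

def load_positions(text: str) -> list[str]:
    # Parse a minimal YAML sequence ("- openai", "- anthropic", ...)
    # into an ordered list of names; a real loader would use PyYAML.
    return [
        line.strip()[2:].strip()
        for line in text.splitlines()
        if line.strip().startswith("- ")
    ]

def order_providers(names: list[str], positions: list[str]) -> list[str]:
    # Providers listed in positions come first, in that order;
    # anything unlisted is appended alphabetically.
    rank = {name: i for i, name in enumerate(positions)}
    return sorted(names, key=lambda n: (rank.get(n, len(rank)), n))
```

With `positions = ["openai", "anthropic", "google"]`, discovered directories such as `zhipuai` or `ollama` would sort after the pinned three, alphabetically.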