gcgj-dify-1.7.0/api/core/model_runtime/model_providers

Latest commit `a4fc057a1c` by SiliconFlow, Inc: ISSUE=11042: add tts model in siliconflow (#11043), 1 year ago
| Entry | Last commit | Age |
|---|---|---|
| `__base` | feat: Allow using file variables directly in the LLM node and support more file types. (#10679) | 1 year ago |
| `anthropic` | feat: Allow using file variables directly in the LLM node and support more file types. (#10679) | 1 year ago |
| `azure_ai_studio` | chore(lint): cleanup repeated cause exception in logging.exception replaced by helpful message (#10425) | 1 year ago |
| `azure_openai` | fix: Azure OpenAI o1 max_completion_token error (#10593) | 1 year ago |
| `baichuan` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `bedrock` | Fixing #11005: Incorrect max_tokens in yaml file for AWS Bedrock US Cross Region Inference version of 3.5 Sonnet v2 and 3.5 Haiku (#11013) | 1 year ago |
| `chatglm` | chore: refurbish Python code by applying refurb linter rules (#8296) | 2 years ago |
| `cohere` | fix: refactor all 'or []' and 'or {}' logic to make code more clear (#10883) | 1 year ago |
| `deepseek` | Fix Deepseek Function/Tool Calling (#11023) | 1 year ago |
| `fireworks` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `fishaudio` | fix: fish audio wrong validate credentials interface (#11019) | 1 year ago |
| `gitee_ai` | Gitee AI Qwen2.5-72B model (#10595) | 1 year ago |
| `google` | feat: support LLM process document file (#10966) | 1 year ago |
| `gpustack` | feat: add gpustack model provider (#10158) | 1 year ago |
| `groq` | Added Llama 3.2 Vision Models Speech2Text Models for Groq (#9479) | 1 year ago |
| `huggingface_hub` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `huggingface_tei` | feat: Add support for TEI API key authentication (#11006) | 1 year ago |
| `hunyuan` | fix: resolve the incorrect model name of hunyuan-standard-256k (#10052) | 1 year ago |
| `jina` | add jina rerank http timeout parameter (#10476) | 1 year ago |
| `leptonai` | chore(api/core): apply ruff reformatting (#7624) | 2 years ago |
| `localai` | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| `minimax` | add abab7-chat-preview model (#10654) | 1 year ago |
| `mistralai` | add MixtralAI Model (#8517) | 2 years ago |
| `mixedbread` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `moonshot` | fix: moonshot response_format raise error (#9847) | 1 year ago |
| `nomic` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `novita` | chore(api/core): apply ruff reformatting (#7624) | 2 years ago |
| `nvidia` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `nvidia_nim` | chore(api/core): apply ruff reformatting (#7624) | 2 years ago |
| `oci` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `ollama` | feat: support function call for ollama block chat api (#10784) | 1 year ago |
| `openai` | feat: Allow using file variables directly in the LLM node and support more file types. (#10679) | 1 year ago |
| `openai_api_compatible` | Resolve 8475 support rerank model from infinity (#10939) | 1 year ago |
| `openllm` | fix: refactor all 'or []' and 'or {}' logic to make code more clear (#10883) | 1 year ago |
| `openrouter` | Support streaming output for OpenAI o1-preview and o1-mini (#10890) | 1 year ago |
| `perfxcloud` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `replicate` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `sagemaker` | chore(lint): cleanup repeated cause exception in logging.exception replaced by helpful message (#10425) | 1 year ago |
| `siliconflow` | ISSUE=11042: add tts model in siliconflow (#11043) | 1 year ago |
| `spark` | fix: Spark's large language model token calculation error #7911 (#8755) | 2 years ago |
| `stepfun` | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| `tencent` | chore: refurbish Python code by applying refurb linter rules (#8296) | 2 years ago |
| `togetherai` | chore(api/core): apply ruff reformatting (#7624) | 2 years ago |
| `tongyi` | feat: support LLM process document file (#10966) | 1 year ago |
| `triton_inference_server` | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| `upstage` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `vertex_ai` | Fix: Correct the max tokens of Claude-3.5-Sonnet-20241022 for Bedrock and VertexAI (#10508) | 1 year ago |
| `vessl_ai` | fix: [VESSL-AI] edit some words in vessl_ai.yaml (#10417) | 1 year ago |
| `volcengine_maas` | fix: default max_chunks set to 1 as other providers (#10937) | 1 year ago |
| `voyage` | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| `wenxin` | add llm: ernie-4.0-turbo-128k of wenxin (#10135) | 1 year ago |
| `x` | feat: add xAI model provider (#10272) | 1 year ago |
| `xinference` | fix error with xinference tool calling with qwen2-instruct and add timeout retry settings for xinference (#11012) | 1 year ago |
| `yi` | feat: add yi custom llm integration (#9482) | 1 year ago |
| `zhinao` | chore(api/core): apply ruff reformatting (#7624) | 2 years ago |
| `zhipuai` | feat: support LLM process document file (#10966) | 1 year ago |
| `__init__.py` | Model Runtime (#1858) | 2 years ago |
| `_position.yaml` | feat: add voyage ai as a new model provider (#8747) | 1 year ago |
| `model_provider_factory.py` | feat: support pinning, including, and excluding for model providers and tools (#7419) | 2 years ago |
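The last entry, `_position.yaml`, controls the display order of the provider directories listed above. Its actual contents are not shown in this listing; as a minimal sketch, assuming it is a plain YAML list of provider directory names (the names below are taken from the listing, not from the real file):

```yaml
# _position.yaml (hypothetical sketch): providers appear in this order;
# each item matches one of the provider directory names above.
- openai
- anthropic
- azure_openai
- google
```

Providers not listed in the file would typically fall back to a default ordering handled by `model_provider_factory.py`.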