gcgj-dify-1.7.0/api/core/model_runtime/model_providers

Latest commit: 096c0ad564 by kenwoodjw, 1 year ago
feat: Add support for TEI API key authentication (#11006)
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
Co-authored-by: crazywoola <427733928@qq.com>
| Name | Last commit | Age |
| --- | --- | --- |
| __base | feat: Allow using file variables directly in the LLM node and support more file types. (#10679) | 1 year ago |
| anthropic | feat: Allow using file variables directly in the LLM node and support more file types. (#10679) | 1 year ago |
| azure_ai_studio | chore(lint): cleanup repeated cause exception in logging.exception replaced by helpful message (#10425) | 2 years ago |
| azure_openai | fix: Azure OpenAI o1 max_completion_token error (#10593) | 2 years ago |
| baichuan | | |
| bedrock | Fixing #11005: Incorrect max_tokens in yaml file for AWS Bedrock US Cross Region Inference version of 3.5 Sonnet v2 and 3.5 Haiku (#11013) | 1 year ago |
| chatglm | | |
| cohere | fix: refactor all 'or []' and 'or {}' logic to make code more clear (#10883) | 1 year ago |
| deepseek | | |
| fireworks | | |
| fishaudio | fix: fish audio wrong validate credentials interface (#11019) | 1 year ago |
| gitee_ai | Gitee AI Qwen2.5-72B model (#10595) | 2 years ago |
| google | feat: support LLM process document file (#10966) | 1 year ago |
| gpustack | feat: add gpustack model provider (#10158) | 2 years ago |
| groq | | |
| huggingface_hub | | |
| huggingface_tei | feat: Add support for TEI API key authentication (#11006) | 1 year ago |
| hunyuan | | |
| jina | add jina rerank http timout parameter (#10476) | 2 years ago |
| leptonai | | |
| localai | | |
| minimax | add abab7-chat-preview model (#10654) | 2 years ago |
| mistralai | | |
| mixedbread | | |
| moonshot | | |
| nomic | | |
| novita | | |
| nvidia | | |
| nvidia_nim | | |
| oci | | |
| ollama | feat: support function call for ollama block chat api (#10784) | 1 year ago |
| openai | feat: Allow using file variables directly in the LLM node and support more file types. (#10679) | 1 year ago |
| openai_api_compatible | Resolve 8475 support rerank model from infinity (#10939) | 1 year ago |
| openllm | fix: refactor all 'or []' and 'or {}' logic to make code more clear (#10883) | 1 year ago |
| openrouter | Support streaming output for OpenAI o1-preview and o1-mini (#10890) | 1 year ago |
| perfxcloud | | |
| replicate | | |
| sagemaker | chore(lint): cleanup repeated cause exception in logging.exception replaced by helpful message (#10425) | 2 years ago |
| siliconflow | feat: add vlm models from siliconflow (#10704) | 2 years ago |
| spark | | |
| stepfun | | |
| tencent | | |
| togetherai | | |
| tongyi | feat: support LLM process document file (#10966) | 1 year ago |
| triton_inference_server | | |
| upstage | | |
| vertex_ai | Fix: Correct the max tokens of Claude-3.5-Sonnet-20241022 for Bedrock and VertexAI (#10508) | 2 years ago |
| vessl_ai | fix: [VESSL-AI] edit some words in vessl_ai.yaml (#10417) | 2 years ago |
| volcengine_maas | fix: default max_chunks set to 1 as other providers (#10937) | 1 year ago |
| voyage | | |
| wenxin | | |
| x | feat: add xAI model provider (#10272) | 2 years ago |
| xinference | | |
| yi | | |
| zhinao | | |
| zhipuai | feat: support LLM process document file (#10966) | 1 year ago |
| __init__.py | | |
| _position.yaml | | |
| model_provider_factory.py | | |