gcgj-dify-1.7.0/api/core/model_providers/providers
Latest commit: 9ae91a2ec3 by takatost, "feat: optimize xinference request max token key and stop reason (#998)", 3 years ago
File                          Last commit message                                                         Age
__init__.py                   feat: server multi models support (#799)                                    3 years ago
anthropic_provider.py         feat: claude paid optimize (#890)                                           3 years ago
azure_openai_provider.py      feat: optimize error raise (#820)                                           3 years ago
base.py                       feat: server multi models support (#799)                                    3 years ago
chatglm_provider.py           feat: server multi models support (#799)                                    3 years ago
hosted.py                     feat: claude paid optimize (#890)                                           3 years ago
huggingface_hub_provider.py   feat: adjust hf max tokens (#979)                                           3 years ago
minimax_provider.py           feat: server multi models support (#799)                                    3 years ago
openai_provider.py            feat: server multi models support (#799)                                    3 years ago
openllm_provider.py           fix: remove openllm pypi package because of this package too large (#931)   3 years ago
replicate_provider.py         fix: replicate text generation model validate (#923)                        3 years ago
spark_provider.py             feat: add spark v2 support (#885)                                           3 years ago
tongyi_provider.py            feat: server multi models support (#799)                                    3 years ago
wenxin_provider.py            feat: server multi models support (#799)                                    3 years ago
xinference_provider.py        feat: optimize xinference request max token key and stop reason (#998)      3 years ago