Merge branch 'fix/agent-external-knowledge-retrieval' into fix/external-knowledge-retrieval-issues

fix/external-knowledge-retrieval-issues
jyong 2 years ago
commit ed4e029609

@ -39,7 +39,7 @@ jobs:
api/pyproject.toml
api/poetry.lock
- name: Poetry check
- name: Check Poetry lockfile
run: |
poetry check -C api --lock
poetry show -C api
@ -47,6 +47,9 @@ jobs:
- name: Install dependencies
run: poetry install -C api --with dev
- name: Check dependencies in pyproject.toml
run: poetry run -C api bash dev/pytest/pytest_artifacts.sh
- name: Run Unit tests
run: poetry run -C api bash dev/pytest/pytest_unit_tests.sh

@ -17,7 +17,7 @@
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on Twitter"></a>
alt="follow on X(Twitter)"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -196,10 +196,14 @@ If you'd like to configure a highly-available setup, there are community-contrib
#### Using Terraform for Deployment
Deploy Dify to Cloud Platform with a single click using [terraform](https://www.terraform.io/)
##### Azure Global
Deploy Dify to Azure with a single click using [terraform](https://www.terraform.io/).
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
@ -219,7 +223,7 @@ At the same time, please consider supporting Dify by sharing it on social media
* [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
* [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
## Star history

@ -17,7 +17,7 @@
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on Twitter"></a>
alt="follow on X(Twitter)"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -179,10 +179,13 @@ docker compose up -d
#### استخدام Terraform للتوزيع
انشر Dify إلى منصة السحابة بنقرة واحدة باستخدام [terraform](https://www.terraform.io/)
##### Azure Global
استخدم [terraform](https://www.terraform.io/) لنشر Dify على Azure بنقرة واحدة.
- [Azure Terraform بواسطة @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform بواسطة @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## المساهمة

@ -17,7 +17,7 @@
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on Twitter"></a>
alt="follow on X(Twitter)"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -202,10 +202,14 @@ docker compose up -d
#### 使用 Terraform 部署
使用 [terraform](https://www.terraform.io/) 一键将 Dify 部署到云平台
##### Azure Global
使用 [terraform](https://www.terraform.io/) 一键部署 Dify 到 Azure。
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
@ -232,7 +236,7 @@ docker compose up -d
- [GitHub Issues](https://github.com/langgenius/dify/issues)。👉:使用 Dify.AI 时遇到的错误和问题,请参阅[贡献指南](CONTRIBUTING.md)。
- [电子邮件支持](mailto:hello@dify.ai?subject=[GitHub]Questions%20About%20Dify)。👉:关于使用 Dify.AI 的问题。
- [Discord](https://discord.gg/FngNHpbcY7)。👉:分享您的应用程序并与社区交流。
- [Twitter](https://twitter.com/dify_ai)。👉:分享您的应用程序并与社区交流。
- [X(Twitter)](https://twitter.com/dify_ai)。👉:分享您的应用程序并与社区交流。
- [商业许可](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry)。👉:有关商业用途许可 Dify.AI 的商业咨询。
- [微信]() 👉:扫描下方二维码,添加微信好友,备注 Dify,我们将邀请您加入 Dify 社区。
<img src="./images/wechat.png" alt="wechat" width="100"/>

@ -17,7 +17,7 @@
alt="chat en Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="seguir en Twitter"></a>
alt="seguir en X(Twitter)"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Descargas de Docker" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -204,10 +204,13 @@ Si desea configurar una configuración de alta disponibilidad, la comunidad prop
#### Uso de Terraform para el despliegue
Despliega Dify en una plataforma en la nube con un solo clic utilizando [terraform](https://www.terraform.io/)
##### Azure Global
Utiliza [terraform](https://www.terraform.io/) para desplegar Dify en Azure con un solo clic.
- [Azure Terraform por @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform por @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contribuir
@ -228,7 +231,7 @@ Al mismo tiempo, considera apoyar a Dify compartiéndolo en redes sociales y en
* [Discusión en GitHub](https://github.com/langgenius/dify/discussions). Lo mejor para: compartir comentarios y hacer preguntas.
* [Reporte de problemas en GitHub](https://github.com/langgenius/dify/issues). Lo mejor para: errores que encuentres usando Dify.AI y propuestas de características. Consulta nuestra [Guía de contribución](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Lo mejor para: compartir tus aplicaciones y pasar el rato con la comunidad.
* [Twitter](https://twitter.com/dify_ai). Lo mejor para: compartir tus aplicaciones y pasar el rato con la comunidad.
* [X(Twitter)](https://twitter.com/dify_ai). Lo mejor para: compartir tus aplicaciones y pasar el rato con la comunidad.
## Historial de Estrellas

@ -17,7 +17,7 @@
alt="chat sur Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="suivre sur Twitter"></a>
alt="suivre sur X(Twitter)"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Tirages Docker" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -202,10 +202,13 @@ Si vous souhaitez configurer une configuration haute disponibilité, la communau
#### Utilisation de Terraform pour le déploiement
Déployez Dify sur une plateforme cloud en un clic en utilisant [terraform](https://www.terraform.io/)
##### Azure Global
Utilisez [terraform](https://www.terraform.io/) pour déployer Dify sur Azure en un clic.
- [Azure Terraform par @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform par @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contribuer
@ -226,7 +229,7 @@ Dans le même temps, veuillez envisager de soutenir Dify en le partageant sur le
* [Discussion GitHub](https://github.com/langgenius/dify/discussions). Meilleur pour: partager des commentaires et poser des questions.
* [Problèmes GitHub](https://github.com/langgenius/dify/issues). Meilleur pour: les bogues que vous rencontrez en utilisant Dify.AI et les propositions de fonctionnalités. Consultez notre [Guide de contribution](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Meilleur pour: partager vos applications et passer du temps avec la communauté.
* [Twitter](https://twitter.com/dify_ai). Meilleur pour: partager vos applications et passer du temps avec la communauté.
* [X(Twitter)](https://twitter.com/dify_ai). Meilleur pour: partager vos applications et passer du temps avec la communauté.
## Historique des étoiles

@ -17,7 +17,7 @@
alt="Discordでチャット"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="Twitterでフォロー"></a>
alt="X(Twitter)でフォロー"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -201,10 +201,13 @@ docker compose up -d
#### Terraformを使用したデプロイ
[terraform](https://www.terraform.io/) を使用して、ワンクリックでDifyをクラウドプラットフォームにデプロイします
##### Azure Global
[terraform](https://www.terraform.io/) を使用して、AzureにDifyをワンクリックでデプロイします。
- [nikawangのAzure Terraform](https://github.com/nikawang/dify-azure-terraform)
- [@nikawangによるAzure Terraform](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [@sotazumによるGoogle Cloud Terraform](https://github.com/DeNA/dify-google-cloud-terraform)
## 貢献
@ -225,7 +228,7 @@ docker compose up -d
* [Github Discussion](https://github.com/langgenius/dify/discussions). 主に: フィードバックの共有や質問。
* [GitHub Issues](https://github.com/langgenius/dify/issues). 主に: Dify.AIを使用する際に発生するエラーや問題については、[貢献ガイド](CONTRIBUTING_JA.md)を参照してください
* [Discord](https://discord.gg/FngNHpbcY7). 主に: アプリケーションの共有やコミュニティとの交流。
* [Twitter](https://twitter.com/dify_ai). 主に: アプリケーションの共有やコミュニティとの交流。
* [X(Twitter)](https://twitter.com/dify_ai). 主に: アプリケーションの共有やコミュニティとの交流。

@ -17,7 +17,7 @@
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on Twitter"></a>
alt="follow on X(Twitter)"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -202,10 +202,13 @@ If you'd like to configure a highly-available setup, there are community-contrib
#### Terraform atorlugu pilersitsineq
wa'logh nIqHom neH ghun deployment toy'wI' [terraform](https://www.terraform.io/) lo'laH.
##### Azure Global
Atoruk [terraform](https://www.terraform.io/) Dify-mik Azure-mut ataatsikkut ikkussuilluarlugu.
- [Azure Terraform atorlugu @nikawang](https://github.com/nikawang/dify-azure-terraform)
- [Azure Terraform mung @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform qachlot @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Contributing
@ -228,7 +231,7 @@ At the same time, please consider supporting Dify by sharing it on social media
* [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
* [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
## Star History

@ -17,7 +17,7 @@
alt="chat on Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on Twitter"></a>
alt="follow on X(Twitter)"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -39,7 +39,6 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
</p>
@ -195,10 +194,14 @@ Dify를 Kubernetes에 배포하고 프리미엄 스케일링 설정을 구성했
#### Terraform을 사용한 배포
[terraform](https://www.terraform.io/)을 사용하여 단 한 번의 클릭으로 Dify를 클라우드 플랫폼에 배포하십시오
##### Azure Global
[terraform](https://www.terraform.io/)을 사용하여 Azure에 Dify를 원클릭으로 배포하세요.
- [nikawang의 Azure Terraform](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [sotazum의 Google Cloud Terraform](https://github.com/DeNA/dify-google-cloud-terraform)
## 기여
코드에 기여하고 싶은 분들은 [기여 가이드](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)를 참조하세요.

@ -17,7 +17,7 @@
alt="Discord'da sohbet et"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="Twitter'da takip et"></a>
alt="X(Twitter)'da takip et"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Çekmeleri" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -200,9 +200,13 @@ Yüksek kullanılabilirliğe sahip bir kurulum yapılandırmak isterseniz, Dify'
#### Dağıtım için Terraform Kullanımı
Dify'ı bulut platformuna tek tıklamayla dağıtın [terraform](https://www.terraform.io/) kullanarak
##### Azure Global
[Terraform](https://www.terraform.io/) kullanarak Dify'ı Azure'a tek tıklamayla dağıtın.
- [@nikawang tarafından Azure Terraform](https://github.com/nikawang/dify-azure-terraform)
- [Azure Terraform tarafından @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform tarafından @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Katkıda Bulunma
@ -222,7 +226,7 @@ Aynı zamanda, lütfen Dify'ı sosyal medyada, etkinliklerde ve konferanslarda p
* [Github Tartışmaları](https://github.com/langgenius/dify/discussions). En uygun: geri bildirim paylaşmak ve soru sormak için.
* [GitHub Sorunları](https://github.com/langgenius/dify/issues). En uygun: Dify.AI kullanırken karşılaştığınız hatalar ve özellik önerileri için. [Katkı Kılavuzumuza](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) bakın.
* [Discord](https://discord.gg/FngNHpbcY7). En uygun: uygulamalarınızı paylaşmak ve toplulukla vakit geçirmek için.
* [Twitter](https://twitter.com/dify_ai). En uygun: uygulamalarınızı paylaşmak ve toplulukla vakit geçirmek için.
* [X(Twitter)](https://twitter.com/dify_ai). En uygun: uygulamalarınızı paylaşmak ve toplulukla vakit geçirmek için.
## Star history

@ -17,7 +17,7 @@
alt="chat trên Discord"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="theo dõi trên Twitter"></a>
alt="theo dõi trên X(Twitter)"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
@ -196,10 +196,14 @@ Nếu bạn muốn cấu hình một cài đặt có độ sẵn sàng cao, có
#### Sử dụng Terraform để Triển khai
Triển khai Dify lên nền tảng đám mây với một cú nhấp chuột bằng cách sử dụng [terraform](https://www.terraform.io/)
##### Azure Global
Triển khai Dify lên Azure chỉ với một cú nhấp chuột bằng cách sử dụng [terraform](https://www.terraform.io/).
- [Azure Terraform bởi @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform bởi @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
## Đóng góp
Đối với những người muốn đóng góp mã, xem [Hướng dẫn Đóng góp](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) của chúng tôi.
@ -219,7 +223,7 @@ Triển khai Dify lên Azure chỉ với một cú nhấp chuột bằng cách s
* [Thảo luận GitHub](https://github.com/langgenius/dify/discussions). Tốt nhất cho: chia sẻ phản hồi và đặt câu hỏi.
* [Vấn đề GitHub](https://github.com/langgenius/dify/issues). Tốt nhất cho: lỗi bạn gặp phải khi sử dụng Dify.AI và đề xuất tính năng. Xem [Hướng dẫn Đóng góp](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) của chúng tôi.
* [Discord](https://discord.gg/FngNHpbcY7). Tốt nhất cho: chia sẻ ứng dụng của bạn và giao lưu với cộng đồng.
* [Twitter](https://twitter.com/dify_ai). Tốt nhất cho: chia sẻ ứng dụng của bạn và giao lưu với cộng đồng.
* [X(Twitter)](https://twitter.com/dify_ai). Tốt nhất cho: chia sẻ ứng dụng của bạn và giao lưu với cộng đồng.
## Lịch sử Yêu thích

@ -271,6 +271,9 @@ HTTP_REQUEST_MAX_WRITE_TIMEOUT=600
HTTP_REQUEST_NODE_MAX_BINARY_SIZE=10485760
HTTP_REQUEST_NODE_MAX_TEXT_SIZE=1048576
# Respect X-* headers to redirect clients
RESPECT_XFORWARD_HEADERS_ENABLED=false
# Log file path
LOG_FILE=

@ -26,7 +26,7 @@ from commands import register_commands
from configs import dify_config
# DO NOT REMOVE BELOW
from events import event_handlers
from events import event_handlers # noqa: F401
from extensions import (
ext_celery,
ext_code_based_extension,
@ -36,6 +36,7 @@ from extensions import (
ext_login,
ext_mail,
ext_migrate,
ext_proxy_fix,
ext_redis,
ext_sentry,
ext_storage,
@ -45,7 +46,7 @@ from extensions.ext_login import login_manager
from libs.passport import PassportService
# TODO: Find a way to avoid importing models here
from models import account, dataset, model, source, task, tool, tools, web
from models import account, dataset, model, source, task, tool, tools, web # noqa: F401
from services.account_service import AccountService
# DO NOT REMOVE ABOVE
@ -156,6 +157,7 @@ def initialize_extensions(app):
ext_mail.init_app(app)
ext_hosting_provider.init_app(app)
ext_sentry.init_app(app)
ext_proxy_fix.init_app(app)
# Flask-Login configuration
@ -181,10 +183,10 @@ def load_user_from_request(request_from_flask_login):
decoded = PassportService().verify(auth_token)
user_id = decoded.get("user_id")
account = AccountService.load_logged_in_account(account_id=user_id, token=auth_token)
if account:
contexts.tenant_id.set(account.current_tenant_id)
return account
logged_in_account = AccountService.load_logged_in_account(account_id=user_id, token=auth_token)
if logged_in_account:
contexts.tenant_id.set(logged_in_account.current_tenant_id)
return logged_in_account
@login_manager.unauthorized_handler

@ -247,6 +247,12 @@ class HttpConfig(BaseSettings):
default=None,
)
RESPECT_XFORWARD_HEADERS_ENABLED: bool = Field(
description="Enable or disable the X-Forwarded-For Proxy Fix middleware from Werkzeug"
" to respect X-* headers to redirect clients",
default=False,
)
class InnerAPIConfig(BaseSettings):
"""

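The flag above is consumed by the new ext_proxy_fix extension registered in initialize_extensions earlier in this diff. Below is a minimal sketch of what such an extension plausibly looks like, assuming Werkzeug's ProxyFix middleware and a generic Flask config lookup; it is not the verbatim upstream file.

```python
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix


def init_app(app: Flask):
    # Trust X-Forwarded-* headers only when explicitly enabled; behind an
    # untrusted proxy chain these headers are client-controlled.
    if app.config.get("RESPECT_XFORWARD_HEADERS_ENABLED"):
        app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)
```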
@ -191,6 +191,22 @@ class CeleryConfig(DatabaseConfig):
return self.CELERY_BROKER_URL.startswith("rediss://") if self.CELERY_BROKER_URL else False
class InternalTestConfig(BaseSettings):
"""
Configuration settings for Internal Test
"""
AWS_SECRET_ACCESS_KEY: Optional[str] = Field(
description="Internal test AWS secret access key",
default=None,
)
AWS_ACCESS_KEY_ID: Optional[str] = Field(
description="Internal test AWS access key ID",
default=None,
)
class MiddlewareConfig(
# place the configs in alphabet order
CeleryConfig,
@ -224,5 +240,6 @@ class MiddlewareConfig(
TiDBVectorConfig,
WeaviateConfig,
ElasticsearchConfig,
InternalTestConfig,
):
pass

@ -188,6 +188,7 @@ class ChatConversationApi(Resource):
subquery.c.from_end_user_session_id.ilike(keyword_filter),
),
)
.group_by(Conversation.id)
)
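# The added group_by deduplicates conversations when the keyword filter matches
# several messages in the same conversation.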
account = current_user

@ -13,6 +13,7 @@ from libs.login import login_required
from services.dataset_service import DatasetService
from services.external_knowledge_service import ExternalDatasetService
from services.hit_testing_service import HitTestingService
from services.knowledge_service import ExternalDatasetTestService
def _validate_name(name):
@ -232,8 +233,31 @@ class ExternalKnowledgeHitTestingApi(Resource):
raise InternalServerError(str(e))
class BedrockRetrievalApi(Resource):
# this api is only for internal testing
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("retrieval_setting", nullable=False, required=True, type=dict, location="json")
parser.add_argument(
"query",
nullable=False,
required=True,
type=str,
)
parser.add_argument("knowledge_id", nullable=False, required=True, type=str)
args = parser.parse_args()
# Call the knowledge retrieval service
result = ExternalDatasetTestService.knowledge_retrieval(
args["retrieval_setting"], args["query"], args["knowledge_id"]
)
return result, 200
api.add_resource(ExternalKnowledgeHitTestingApi, "/datasets/<uuid:dataset_id>/external-hit-testing")
api.add_resource(ExternalDatasetCreateApi, "/datasets/external")
api.add_resource(ExternalApiTemplateListApi, "/datasets/external-knowledge-api")
api.add_resource(ExternalApiTemplateApi, "/datasets/external-knowledge-api/<uuid:external_knowledge_api_id>")
api.add_resource(ExternalApiUseCheckApi, "/datasets/external-knowledge-api/<uuid:external_knowledge_api_id>/use-check")
# this api is only for internal testing
api.add_resource(BedrockRetrievalApi, "/test/retrieval")
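A hypothetical call against the test endpoint registered above; the host, port, and /console/api prefix are assumptions, and the real route also sits behind console authentication. The payload shape is inferred from the reqparse arguments.

```python
import requests

resp = requests.post(
    "http://localhost:5001/console/api/test/retrieval",  # prefix/port assumed
    json={
        "retrieval_setting": {"top_k": 2, "score_threshold": 0.5},
        "query": "What is Dify?",
        "knowledge_id": "example-knowledge-id",
    },
)
print(resp.status_code, resp.json())
```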

@ -126,13 +126,12 @@ class ModelProviderIconApi(Resource):
Get model provider icon
"""
@setup_required
@login_required
@account_initialization_required
def get(self, provider: str, icon_type: str, lang: str):
model_provider_service = ModelProviderService()
icon, mimetype = model_provider_service.get_model_provider_icon(
provider=provider, icon_type=icon_type, lang=lang
provider=provider,
icon_type=icon_type,
lang=lang,
)
return send_file(io.BytesIO(icon), mimetype=mimetype)

@ -369,7 +369,7 @@ class CotAgentRunner(BaseAgentRunner, ABC):
return message
def _organize_historic_prompt_messages(
self, current_session_messages: list[PromptMessage] = None
self, current_session_messages: Optional[list[PromptMessage]] = None
) -> list[PromptMessage]:
"""
organize historic prompt messages

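This is one of several hunks in the diff making the same typing fix. A standalone sketch (not repo code) of the implicit-Optional problem being corrected:

```python
from typing import Optional


def organize_bad(messages: list[str] = None) -> list[str]:
    # Annotated list[str] but defaulted to None: runs at runtime, yet strict
    # type checkers reject this implicit Optional.
    return messages or []


def organize_good(messages: Optional[list[str]] = None) -> list[str]:
    # The explicit Optional form the diff adopts throughout.
    return messages or []
```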
@ -27,7 +27,7 @@ class CotChatAgentRunner(CotAgentRunner):
return SystemPromptMessage(content=system_prompt)
def _organize_user_query(self, query, prompt_messages: list[PromptMessage] = None) -> list[PromptMessage]:
def _organize_user_query(self, query, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""

@ -1,4 +1,5 @@
import json
from typing import Optional
from core.agent.cot_agent_runner import CotAgentRunner
from core.model_runtime.entities.message_entities import AssistantPromptMessage, PromptMessage, UserPromptMessage
@ -21,7 +22,7 @@ class CotCompletionAgentRunner(CotAgentRunner):
return system_prompt
def _organize_historic_prompt(self, current_session_messages: list[PromptMessage] = None) -> str:
def _organize_historic_prompt(self, current_session_messages: Optional[list[PromptMessage]] = None) -> str:
"""
Organize historic prompt
"""

@ -2,7 +2,7 @@ import json
import logging
from collections.abc import Generator
from copy import deepcopy
from typing import Any, Union
from typing import Any, Optional, Union
from core.agent.base_agent_runner import BaseAgentRunner
from core.app.apps.base_app_queue_manager import PublishFrom
@ -370,7 +370,7 @@ class FunctionCallAgentRunner(BaseAgentRunner):
return tool_calls
def _init_system_message(
self, prompt_template: str, prompt_messages: list[PromptMessage] = None
self, prompt_template: str, prompt_messages: Optional[list[PromptMessage]] = None
) -> list[PromptMessage]:
"""
Initialize system message
@ -385,7 +385,7 @@ class FunctionCallAgentRunner(BaseAgentRunner):
return prompt_messages
def _organize_user_query(self, query, prompt_messages: list[PromptMessage] = None) -> list[PromptMessage]:
def _organize_user_query(self, query, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""

@ -113,6 +113,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
# always enable retriever resource in debugger mode
app_config.additional_features.show_retrieve_source = True
workflow_run_id = str(uuid.uuid4())
# init application generate entity
application_generate_entity = AdvancedChatAppGenerateEntity(
task_id=str(uuid.uuid4()),
@ -127,6 +128,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
invoke_from=invoke_from,
extras=extras,
trace_manager=trace_manager,
workflow_run_id=workflow_run_id,
)
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)

@ -149,6 +149,9 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner):
SystemVariableKey.CONVERSATION_ID: self.conversation.id,
SystemVariableKey.USER_ID: user_id,
SystemVariableKey.DIALOGUE_COUNT: conversation_dialogue_count,
SystemVariableKey.APP_ID: app_config.app_id,
SystemVariableKey.WORKFLOW_ID: app_config.workflow_id,
SystemVariableKey.WORKFLOW_RUN_ID: self.application_generate_entity.workflow_run_id,
}
# init variable pool

@ -45,6 +45,7 @@ from core.app.entities.task_entities import (
from core.app.task_pipeline.based_generate_task_pipeline import BasedGenerateTaskPipeline
from core.app.task_pipeline.message_cycle_manage import MessageCycleManage
from core.app.task_pipeline.workflow_cycle_manage import WorkflowCycleManage
from core.model_runtime.entities.llm_entities import LLMUsage
from core.model_runtime.utils.encoders import jsonable_encoder
from core.ops.ops_trace_manager import TraceQueueManager
from core.workflow.enums import SystemVariableKey
@ -107,6 +108,10 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
SystemVariableKey.FILES: application_generate_entity.files,
SystemVariableKey.CONVERSATION_ID: conversation.id,
SystemVariableKey.USER_ID: user_id,
SystemVariableKey.DIALOGUE_COUNT: conversation.dialogue_count,
SystemVariableKey.APP_ID: application_generate_entity.app_config.app_id,
SystemVariableKey.WORKFLOW_ID: workflow.id,
SystemVariableKey.WORKFLOW_RUN_ID: application_generate_entity.workflow_run_id,
}
self._task_state = WorkflowTaskState()
@ -505,6 +510,10 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
self._message.total_price = usage.total_price
self._message.currency = usage.currency
self._task_state.metadata["usage"] = jsonable_encoder(usage)
else:
self._task_state.metadata["usage"] = jsonable_encoder(LLMUsage.empty_usage())
db.session.commit()
message_was_created.send(

@ -99,6 +99,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
user_id = user.id if isinstance(user, Account) else user.session_id
trace_manager = TraceQueueManager(app_model.id, user_id)
workflow_run_id = str(uuid.uuid4())
# init application generate entity
application_generate_entity = WorkflowAppGenerateEntity(
task_id=str(uuid.uuid4()),
@ -110,6 +111,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
invoke_from=invoke_from,
call_depth=call_depth,
trace_manager=trace_manager,
workflow_run_id=workflow_run_id,
)
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)

@ -90,6 +90,9 @@ class WorkflowAppRunner(WorkflowBasedAppRunner):
system_inputs = {
SystemVariableKey.FILES: files,
SystemVariableKey.USER_ID: user_id,
SystemVariableKey.APP_ID: app_config.app_id,
SystemVariableKey.WORKFLOW_ID: app_config.workflow_id,
SystemVariableKey.WORKFLOW_RUN_ID: self.application_generate_entity.workflow_run_id,
}
variable_pool = VariablePool(

@ -97,6 +97,9 @@ class WorkflowAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCycleMa
self._workflow_system_variables = {
SystemVariableKey.FILES: application_generate_entity.files,
SystemVariableKey.USER_ID: user_id,
SystemVariableKey.APP_ID: application_generate_entity.app_config.app_id,
SystemVariableKey.WORKFLOW_ID: workflow.id,
SystemVariableKey.WORKFLOW_RUN_ID: application_generate_entity.workflow_run_id,
}
self._task_state = WorkflowTaskState()

@ -152,6 +152,7 @@ class AdvancedChatAppGenerateEntity(AppGenerateEntity):
conversation_id: Optional[str] = None
parent_message_id: Optional[str] = None
workflow_run_id: Optional[str] = None
query: str
class SingleIterationRunEntity(BaseModel):
@ -172,6 +173,7 @@ class WorkflowAppGenerateEntity(AppGenerateEntity):
# app config
app_config: WorkflowUIBasedAppConfig
workflow_run_id: Optional[str] = None
class SingleIterationRunEntity(BaseModel):
"""

@ -82,8 +82,8 @@ class MessageCycleManage:
try:
name = LLMGenerator.generate_conversation_name(app_model.tenant_id, query)
conversation.name = name
except:
pass
except Exception as e:
logging.exception(f"generate conversation name failed: {e}")
db.session.merge(conversation)
db.session.commit()

@ -85,6 +85,9 @@ class WorkflowCycleManage:
# init workflow run
workflow_run = WorkflowRun()
workflow_run_id = self._workflow_system_variables[SystemVariableKey.WORKFLOW_RUN_ID]
if workflow_run_id:
workflow_run.id = workflow_run_id
workflow_run.tenant_id = self._workflow.tenant_id
workflow_run.app_id = self._workflow.app_id
workflow_run.sequence_number = new_sequence_number
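A standalone sketch of the pattern above, using a stub class and hypothetical names: the run id is generated before execution, exposed through the system variables, and only later assigned to the persisted WorkflowRun row, so both sides agree on the id.

```python
import uuid
from dataclasses import dataclass


@dataclass
class WorkflowRunStub:  # stand-in for models.workflow.WorkflowRun
    id: str = ""


workflow_run_id = str(uuid.uuid4())
system_variables = {"workflow_run_id": workflow_run_id}  # visible to nodes early

run = WorkflowRunStub()
if system_variables["workflow_run_id"]:
    run.id = system_variables["workflow_run_id"]
assert run.id == workflow_run_id
```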

@ -1,9 +1,9 @@
import os
from collections.abc import Mapping, Sequence
from typing import Any, Optional, TextIO, Union
from pydantic import BaseModel
from configs import dify_config
from core.ops.entities.trace_entity import TraceTaskName
from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.tools.entities.tool_entities import ToolInvokeMessage
@ -50,7 +50,8 @@ class DifyAgentCallbackHandler(BaseModel):
tool_inputs: Mapping[str, Any],
) -> None:
"""Do nothing."""
print_text("\n[on_tool_start] ToolCall:" + tool_name + "\n" + str(tool_inputs) + "\n", color=self.color)
if dify_config.DEBUG:
print_text("\n[on_tool_start] ToolCall:" + tool_name + "\n" + str(tool_inputs) + "\n", color=self.color)
def on_tool_end(
self,
@ -62,11 +63,12 @@ class DifyAgentCallbackHandler(BaseModel):
trace_manager: Optional[TraceQueueManager] = None,
) -> None:
"""If not the final action, print out observation."""
print_text("\n[on_tool_end]\n", color=self.color)
print_text("Tool: " + tool_name + "\n", color=self.color)
print_text("Inputs: " + str(tool_inputs) + "\n", color=self.color)
print_text("Outputs: " + str(tool_outputs)[:1000] + "\n", color=self.color)
print_text("\n")
if dify_config.DEBUG:
print_text("\n[on_tool_end]\n", color=self.color)
print_text("Tool: " + tool_name + "\n", color=self.color)
print_text("Inputs: " + str(tool_inputs) + "\n", color=self.color)
print_text("Outputs: " + str(tool_outputs)[:1000] + "\n", color=self.color)
print_text("\n")
if trace_manager:
trace_manager.add_trace_task(
@ -82,30 +84,33 @@ class DifyAgentCallbackHandler(BaseModel):
def on_tool_error(self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any) -> None:
"""Do nothing."""
print_text("\n[on_tool_error] Error: " + str(error) + "\n", color="red")
if dify_config.DEBUG:
print_text("\n[on_tool_error] Error: " + str(error) + "\n", color="red")
def on_agent_start(self, thought: str) -> None:
"""Run on agent start."""
if thought:
print_text(
"\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\nThought: " + thought + "\n",
color=self.color,
)
else:
print_text("\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\n", color=self.color)
if dify_config.DEBUG:
if thought:
print_text(
"\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\nThought: " + thought + "\n",
color=self.color,
)
else:
print_text("\n[on_agent_start] \nCurrent Loop: " + str(self.current_loop) + "\n", color=self.color)
def on_agent_finish(self, color: Optional[str] = None, **kwargs: Any) -> None:
"""Run on agent end."""
print_text("\n[on_agent_finish]\n Loop: " + str(self.current_loop) + "\n", color=self.color)
if dify_config.DEBUG:
print_text("\n[on_agent_finish]\n Loop: " + str(self.current_loop) + "\n", color=self.color)
self.current_loop += 1
@property
def ignore_agent(self) -> bool:
"""Whether to ignore agent callbacks."""
return not os.environ.get("DEBUG") or os.environ.get("DEBUG").lower() != "true"
return not dify_config.DEBUG
@property
def ignore_chat_model(self) -> bool:
"""Whether to ignore chat model callbacks."""
return not os.environ.get("DEBUG") or os.environ.get("DEBUG").lower() != "true"
return not dify_config.DEBUG
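# Note: switching these guards to dify_config.DEBUG reads the parsed setting once
# instead of re-checking os.environ on every callback, and drops the manual
# "true" string comparison.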

@ -198,16 +198,34 @@ class MessageFileParser:
if "amazonaws.com" not in parsed_url.netloc:
return False
query_params = parse_qs(parsed_url.query)
required_params = ["Signature", "Expires"]
for param in required_params:
if param not in query_params:
def check_presign_v2(query_params):
required_params = ["Signature", "Expires"]
for param in required_params:
if param not in query_params:
return False
if not query_params["Expires"][0].isdigit():
return False
if not query_params["Expires"][0].isdigit():
return False
signature = query_params["Signature"][0]
if not re.match(r"^[A-Za-z0-9+/]+={0,2}$", signature):
return False
return True
signature = query_params["Signature"][0]
if not re.match(r"^[A-Za-z0-9+/]+={0,2}$", signature):
return False
return True
def check_presign_v4(query_params):
required_params = ["X-Amz-Signature", "X-Amz-Expires"]
for param in required_params:
if param not in query_params:
return False
if not query_params["X-Amz-Expires"][0].isdigit():
return False
signature = query_params["X-Amz-Signature"][0]
if not re.match(r"^[A-Za-z0-9+/]+={0,2}$", signature):
return False
return True
return check_presign_v4(query_params) or check_presign_v2(query_params)
except Exception:
return False
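Hypothetical URLs illustrating the two presigned styles the checks target; the URLs are made up and only the query parameter names matter here.

```python
from urllib.parse import parse_qs, urlparse

v2 = "https://bucket.s3.amazonaws.com/k?AWSAccessKeyId=AKIAEXAMPLE&Signature=dGVzdA%3D%3D&Expires=1700000000"
v4 = (
    "https://bucket.s3.amazonaws.com/k?X-Amz-Algorithm=AWS4-HMAC-SHA256"
    "&X-Amz-Signature=9f2aexample&X-Amz-Expires=3600"
)

for url in (v2, v4):
    print(sorted(parse_qs(urlparse(url).query)))
# v2 -> ['AWSAccessKeyId', 'Expires', 'Signature']
# v4 -> ['X-Amz-Algorithm', 'X-Amz-Expires', 'X-Amz-Signature']
```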

@ -211,9 +211,9 @@ class IndexingRunner:
tenant_id: str,
extract_settings: list[ExtractSetting],
tmp_processing_rule: dict,
doc_form: str = None,
doc_form: Optional[str] = None,
doc_language: str = "English",
dataset_id: str = None,
dataset_id: Optional[str] = None,
indexing_technique: str = "economy",
) -> dict:
"""

@ -58,7 +58,11 @@ class TokenBufferMemory:
# instead of all messages from the conversation, we only need to extract messages
# that belong to the thread of last message
thread_messages = extract_thread_messages(messages)
thread_messages.pop(0)
# for a newly created message, the answer is temporarily empty; no need to add it to memory
if thread_messages and not thread_messages[-1].answer:
thread_messages.pop()
messages = list(reversed(thread_messages))
message_file_parser = MessageFileParser(tenant_id=app_record.tenant_id, app_id=app_record.id)

@ -94,7 +94,7 @@ class LargeLanguageModel(AIModel):
)
try:
if "response_format" in model_parameters:
if "response_format" in model_parameters and model_parameters["response_format"] in {"JSON", "XML"}:
result = self._code_block_mode_wrapper(
model=model,
credentials=credentials,

@ -1,7 +1,7 @@
import logging
import re
from abc import abstractmethod
from typing import Optional
from typing import Any, Optional
from pydantic import ConfigDict
@ -88,7 +88,7 @@ class TTSModel(AIModel):
else:
return [{"name": d["name"], "value": d["mode"]} for d in voices]
def _get_model_default_voice(self, model: str, credentials: dict) -> any:
def _get_model_default_voice(self, model: str, credentials: dict) -> Any:
"""
Get voice for given tts model

@ -169,7 +169,7 @@ class AnthropicLargeLanguageModel(LargeLanguageModel):
stop: Optional[list[str]] = None,
stream: bool = True,
user: Optional[str] = None,
callbacks: list[Callback] = None,
callbacks: Optional[list[Callback]] = None,
) -> Union[LLMResult, Generator]:
"""
Code block mode wrapper for invoking large language model

@ -1081,8 +1081,81 @@ LLM_BASE_MODELS = [
),
),
),
AzureBaseModel(
base_model_name="o1-preview",
entity=AIModelEntity(
model="fake-deployment-name",
label=I18nObject(
en_US="fake-deployment-name-label",
),
model_type=ModelType.LLM,
features=[
ModelFeature.AGENT_THOUGHT,
],
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_properties={
ModelPropertyKey.MODE: LLMMode.CHAT.value,
ModelPropertyKey.CONTEXT_SIZE: 128000,
},
parameter_rules=[
ParameterRule(
name="response_format",
label=I18nObject(zh_Hans="回复格式", en_US="response_format"),
type="string",
help=I18nObject(
zh_Hans="指定模型必须输出的格式", en_US="specifying the format that the model must output"
),
required=False,
options=["text", "json_object"],
),
_get_max_tokens(default=512, min_val=1, max_val=32768),
],
pricing=PriceConfig(
input=15.00,
output=60.00,
unit=0.000001,
currency="USD",
),
),
),
AzureBaseModel(
base_model_name="o1-mini",
entity=AIModelEntity(
model="fake-deployment-name",
label=I18nObject(
en_US="fake-deployment-name-label",
),
model_type=ModelType.LLM,
features=[
ModelFeature.AGENT_THOUGHT,
],
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_properties={
ModelPropertyKey.MODE: LLMMode.CHAT.value,
ModelPropertyKey.CONTEXT_SIZE: 128000,
},
parameter_rules=[
ParameterRule(
name="response_format",
label=I18nObject(zh_Hans="回复格式", en_US="response_format"),
type="string",
help=I18nObject(
zh_Hans="指定模型必须输出的格式", en_US="specifying the format that the model must output"
),
required=False,
options=["text", "json_object"],
),
_get_max_tokens(default=512, min_val=1, max_val=65536),
],
pricing=PriceConfig(
input=3.00,
output=12.00,
unit=0.000001,
currency="USD",
),
),
),
]
EMBEDDING_BASE_MODELS = [
AzureBaseModel(
base_model_name="text-embedding-ada-002",

@ -53,6 +53,9 @@ model_credential_schema:
type: select
required: true
options:
- label:
en_US: 2024-09-01-preview
value: 2024-09-01-preview
- label:
en_US: 2024-08-01-preview
value: 2024-08-01-preview
@ -120,6 +123,18 @@ model_credential_schema:
show_on:
- variable: __model_type
value: llm
- label:
en_US: o1-mini
value: o1-mini
show_on:
- variable: __model_type
value: llm
- label:
en_US: o1-preview
value: o1-preview
show_on:
- variable: __model_type
value: llm
- label:
en_US: gpt-4o-mini
value: gpt-4o-mini

@ -312,10 +312,24 @@ class AzureOpenAILargeLanguageModel(_CommonAzureOpenAI, LargeLanguageModel):
if user:
extra_model_kwargs["user"] = user
# clear illegal prompt messages
prompt_messages = self._clear_illegal_prompt_messages(model, prompt_messages)
block_as_stream = False
if model.startswith("o1"):
if stream:
block_as_stream = True
stream = False
if "stream_options" in extra_model_kwargs:
del extra_model_kwargs["stream_options"]
if "stop" in extra_model_kwargs:
del extra_model_kwargs["stop"]
# chat model
messages = [self._convert_prompt_message_to_dict(m) for m in prompt_messages]
response = client.chat.completions.create(
messages=messages,
messages=[self._convert_prompt_message_to_dict(m) for m in prompt_messages],
model=model,
stream=stream,
**model_parameters,
@ -325,7 +339,91 @@ class AzureOpenAILargeLanguageModel(_CommonAzureOpenAI, LargeLanguageModel):
if stream:
return self._handle_chat_generate_stream_response(model, credentials, response, prompt_messages, tools)
return self._handle_chat_generate_response(model, credentials, response, prompt_messages, tools)
block_result = self._handle_chat_generate_response(model, credentials, response, prompt_messages, tools)
if block_as_stream:
return self._handle_chat_block_as_stream_response(block_result, prompt_messages, stop)
return block_result
def _handle_chat_block_as_stream_response(
self,
block_result: LLMResult,
prompt_messages: list[PromptMessage],
stop: Optional[list[str]] = None,
) -> Generator[LLMResultChunk, None, None]:
"""
Handle llm chat response
:param model: model name
:param credentials: credentials
:param response: response
:param prompt_messages: prompt messages
:param tools: tools for tool calling
:param stop: stop words
:return: llm response chunk generator
"""
text = block_result.message.content
text = cast(str, text)
if stop:
text = self.enforce_stop_tokens(text, stop)
yield LLMResultChunk(
model=block_result.model,
prompt_messages=prompt_messages,
system_fingerprint=block_result.system_fingerprint,
delta=LLMResultChunkDelta(
index=0,
message=AssistantPromptMessage(content=text),
finish_reason="stop",
usage=block_result.usage,
),
)
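# Design note: the o1 deployments reject stream=True, so the wrapper above preserves
# the caller-facing streaming contract by buffering the blocking result and emitting
# it as a single terminal chunk with finish_reason="stop".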
def _clear_illegal_prompt_messages(self, model: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Clear illegal prompt messages for OpenAI API
:param model: model name
:param prompt_messages: prompt messages
:return: cleaned prompt messages
"""
checklist = ["gpt-4-turbo", "gpt-4-turbo-2024-04-09"]
if model in checklist:
# count how many user messages are there
user_message_count = len([m for m in prompt_messages if isinstance(m, UserPromptMessage)])
if user_message_count > 1:
for prompt_message in prompt_messages:
if isinstance(prompt_message, UserPromptMessage):
if isinstance(prompt_message.content, list):
prompt_message.content = "\n".join(
[
item.data
if item.type == PromptMessageContentType.TEXT
else "[IMAGE]"
if item.type == PromptMessageContentType.IMAGE
else ""
for item in prompt_message.content
]
)
if model.startswith("o1"):
system_message_count = len([m for m in prompt_messages if isinstance(m, SystemPromptMessage)])
if system_message_count > 0:
new_prompt_messages = []
for prompt_message in prompt_messages:
if isinstance(prompt_message, SystemPromptMessage):
prompt_message = UserPromptMessage(
content=prompt_message.content,
name=prompt_message.name,
)
new_prompt_messages.append(prompt_message)
prompt_messages = new_prompt_messages
return prompt_messages
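# Rationale (inferred from the branch above): the o1 models did not accept the
# "system" role at the time of this change, so system prompts are downgraded to
# user messages rather than dropped.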
def _handle_chat_generate_response(
self,
@ -560,7 +658,7 @@ class AzureOpenAILargeLanguageModel(_CommonAzureOpenAI, LargeLanguageModel):
tokens_per_message = 4
# if there's a name, the role is omitted
tokens_per_name = -1
elif model.startswith("gpt-35-turbo") or model.startswith("gpt-4"):
elif model.startswith("gpt-35-turbo") or model.startswith("gpt-4") or model.startswith("o1"):
tokens_per_message = 3
tokens_per_name = 1
else:

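A worked example of the per-message overhead the branch above applies; content tokens are counted separately by the surrounding function.

```python
tokens_per_message, tokens_per_name = 3, 1  # gpt-4 / gpt-35-turbo / o1 branch

messages = [
    {"role": "system", "content": "...", "name": None},
    {"role": "user", "content": "...", "name": "bob"},
]
overhead = sum(
    tokens_per_message + (tokens_per_name if m["name"] else 0) for m in messages
)
print(overhead)  # 3 + (3 + 1) = 7 tokens of structural overhead
```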
@ -1,6 +1,6 @@
import concurrent.futures
import copy
from typing import Optional
from typing import Any, Optional
from openai import AzureOpenAI
@ -19,7 +19,7 @@ class AzureOpenAIText2SpeechModel(_CommonAzureOpenAI, TTSModel):
def _invoke(
self, model: str, tenant_id: str, credentials: dict, content_text: str, voice: str, user: Optional[str] = None
) -> any:
) -> Any:
"""
_invoke text2speech model
@ -56,7 +56,7 @@ class AzureOpenAIText2SpeechModel(_CommonAzureOpenAI, TTSModel):
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> any:
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> Any:
"""
_tts_invoke_streaming text2speech model
:param model: model name

@ -92,7 +92,7 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
stop: Optional[list[str]] = None,
stream: bool = True,
user: Optional[str] = None,
callbacks: list[Callback] = None,
callbacks: Optional[list[Callback]] = None,
) -> Union[LLMResult, Generator]:
"""
Code block mode wrapper for invoking large language model

@ -511,7 +511,7 @@ class FireworksLargeLanguageModel(_CommonFireworks, LargeLanguageModel):
model: str,
messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None,
credentials: dict = None,
credentials: Optional[dict] = None,
) -> int:
"""
Approximate num tokens with GPT2 tokenizer.

@ -1,4 +1,4 @@
from typing import Optional
from typing import Any, Optional
import httpx
@ -46,7 +46,7 @@ class FishAudioText2SpeechModel(TTSModel):
content_text: str,
voice: str,
user: Optional[str] = None,
) -> any:
) -> Any:
"""
Invoke text2speech model
@ -87,7 +87,7 @@ class FishAudioText2SpeechModel(TTSModel):
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> any:
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> Any:
"""
Invoke streaming text2speech model
:param model: model name
@ -112,7 +112,7 @@ class FishAudioText2SpeechModel(TTSModel):
except Exception as ex:
raise InvokeBadRequestError(str(ex))
def _tts_invoke_streaming_sentence(self, credentials: dict, content_text: str, voice: Optional[str] = None) -> any:
def _tts_invoke_streaming_sentence(self, credentials: dict, content_text: str, voice: Optional[str] = None) -> Any:
"""
Invoke streaming text2speech model

@ -5,3 +5,4 @@
- llama3-8b-8192
- mixtral-8x7b-32768
- llama2-70b-4096
- llama-guard-3-8b

@ -0,0 +1,25 @@
model: llama-guard-3-8b
label:
zh_Hans: Llama-Guard-3-8B
en_US: Llama-Guard-3-8B
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 8192
parameter_rules:
- name: temperature
use_template: temperature
- name: top_p
use_template: top_p
- name: max_tokens
use_template: max_tokens
default: 512
min: 1
max: 8192
pricing:
input: '0.20'
output: '0.20'
unit: '0.000001'
currency: USD

@ -61,11 +61,19 @@ class JinaRerankModel(RerankModel):
rerank_documents = []
for result in results["results"]:
index = result["index"]
if "document" in result:
text = result["document"]["text"]
else:
# llama.cpp rerank may not return original documents
text = docs[index]
rerank_document = RerankDocument(
index=result["index"],
text=result["document"]["text"],
index=index,
text=text,
score=result["relevance_score"],
)
if score_threshold is None or result["relevance_score"] >= score_threshold:
rerank_documents.append(rerank_document)
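Hypothetical response entries showing why the fallback is needed; the same fix is applied to the LocalAI rerank model in the next hunk.

```python
docs = ["first passage", "second passage"]

with_document = {"index": 0, "relevance_score": 0.92, "document": {"text": "first passage"}}
without_document = {"index": 1, "relevance_score": 0.87}  # llama.cpp-style: no echoed text

for result in (with_document, without_document):
    text = result["document"]["text"] if "document" in result else docs[result["index"]]
    print(result["index"], text)
```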

@ -70,11 +70,19 @@ class LocalaiRerankModel(RerankModel):
rerank_documents = []
for result in results["results"]:
index = result["index"]
if "document" in result:
text = result["document"]["text"]
else:
# llama.cpp rerank may not return original documents
text = docs[index]
rerank_document = RerankDocument(
index=result["index"],
text=result["document"]["text"],
index=index,
text=text,
score=result["relevance_score"],
)
if score_threshold is None or result["relevance_score"] >= score_threshold:
rerank_documents.append(rerank_document)

@ -111,7 +111,7 @@ class OpenAILargeLanguageModel(_CommonOpenAI, LargeLanguageModel):
stop: Optional[list[str]] = None,
stream: bool = True,
user: Optional[str] = None,
callbacks: list[Callback] = None,
callbacks: Optional[list[Callback]] = None,
) -> Union[LLMResult, Generator]:
"""
Code block mode wrapper for invoking large language model

@ -1,5 +1,5 @@
import concurrent.futures
from typing import Optional
from typing import Any, Optional
from openai import OpenAI
@ -16,7 +16,7 @@ class OpenAIText2SpeechModel(_CommonOpenAI, TTSModel):
def _invoke(
self, model: str, tenant_id: str, credentials: dict, content_text: str, voice: str, user: Optional[str] = None
) -> any:
) -> Any:
"""
_invoke text2speech model
@ -55,7 +55,7 @@ class OpenAIText2SpeechModel(_CommonOpenAI, TTSModel):
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> any:
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> Any:
"""
_tts_invoke_streaming text2speech model

@ -688,7 +688,7 @@ class OAIAPICompatLargeLanguageModel(_CommonOaiApiCompat, LargeLanguageModel):
model: str,
messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None,
credentials: dict = None,
credentials: Optional[dict] = None,
) -> int:
"""
Approximate num tokens with GPT2 tokenizer.

@ -27,9 +27,9 @@ parameter_rules:
- name: max_tokens
use_template: max_tokens
required: true
default: 4096
default: 8192
min: 1
max: 4096
max: 8192
- name: response_format
use_template: response_format
pricing:

@ -77,7 +77,7 @@ class SageMakerText2SpeechModel(TTSModel):
"""
pass
def _detect_lang_code(self, content: str, map_dict: dict = None):
def _detect_lang_code(self, content: str, map_dict: Optional[dict] = None):
map_dict = {"zh": "<|zh|>", "en": "<|en|>", "ja": "<|jp|>", "zh-TW": "<|yue|>", "ko": "<|ko|>"}
response = self.comprehend_client.detect_dominant_language(Text=content)
@ -192,7 +192,7 @@ class SageMakerText2SpeechModel(TTSModel):
InvokeBadRequestError: [InvokeBadRequestError, KeyError, ValueError],
}
def _get_model_default_voice(self, model: str, credentials: dict) -> any:
def _get_model_default_voice(self, model: str, credentials: dict) -> Any:
return ""
def _get_model_word_limit(self, model: str, credentials: dict) -> int:
@ -225,7 +225,7 @@ class SageMakerText2SpeechModel(TTSModel):
json_obj = json.loads(json_str)
return json_obj
def _tts_invoke_streaming(self, model_type: str, payload: dict, sagemaker_endpoint: str) -> any:
def _tts_invoke_streaming(self, model_type: str, payload: dict, sagemaker_endpoint: str) -> Any:
"""
_tts_invoke_streaming text2speech model

@ -1,8 +1,18 @@
from collections.abc import Generator
from typing import Optional, Union
from core.model_runtime.entities.llm_entities import LLMResult
from core.model_runtime.entities.common_entities import I18nObject
from core.model_runtime.entities.llm_entities import LLMMode, LLMResult
from core.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool
from core.model_runtime.entities.model_entities import (
AIModelEntity,
FetchFrom,
ModelFeature,
ModelPropertyKey,
ModelType,
ParameterRule,
ParameterType,
)
from core.model_runtime.model_providers.openai_api_compatible.llm.llm import OAIAPICompatLargeLanguageModel
@ -29,3 +39,53 @@ class SiliconflowLargeLanguageModel(OAIAPICompatLargeLanguageModel):
def _add_custom_parameters(cls, credentials: dict) -> None:
credentials["mode"] = "chat"
credentials["endpoint_url"] = "https://api.siliconflow.cn/v1"
def get_customizable_model_schema(self, model: str, credentials: dict) -> AIModelEntity | None:
return AIModelEntity(
model=model,
label=I18nObject(en_US=model, zh_Hans=model),
model_type=ModelType.LLM,
features=[ModelFeature.TOOL_CALL, ModelFeature.MULTI_TOOL_CALL, ModelFeature.STREAM_TOOL_CALL]
if credentials.get("function_calling_type") == "tool_call"
else [],
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_properties={
ModelPropertyKey.CONTEXT_SIZE: int(credentials.get("context_size", 8000)),
ModelPropertyKey.MODE: LLMMode.CHAT.value,
},
parameter_rules=[
ParameterRule(
name="temperature",
use_template="temperature",
label=I18nObject(en_US="Temperature", zh_Hans="温度"),
type=ParameterType.FLOAT,
),
ParameterRule(
name="max_tokens",
use_template="max_tokens",
default=512,
min=1,
max=int(credentials.get("max_tokens", 1024)),
label=I18nObject(en_US="Max Tokens", zh_Hans="最大标记"),
type=ParameterType.INT,
),
ParameterRule(
name="top_p",
use_template="top_p",
label=I18nObject(en_US="Top P", zh_Hans="Top P"),
type=ParameterType.FLOAT,
),
ParameterRule(
name="top_k",
use_template="top_k",
label=I18nObject(en_US="Top K", zh_Hans="Top K"),
type=ParameterType.FLOAT,
),
ParameterRule(
name="frequency_penalty",
use_template="frequency_penalty",
label=I18nObject(en_US="Frequency Penalty", zh_Hans="重复惩罚"),
type=ParameterType.FLOAT,
),
],
)

@ -20,6 +20,7 @@ supported_model_types:
- speech2text
configurate_methods:
- predefined-model
- customizable-model
provider_credential_schema:
credential_form_schemas:
- variable: api_key
@ -30,3 +31,57 @@ provider_credential_schema:
placeholder:
zh_Hans: 在此输入您的 API Key
en_US: Enter your API Key
model_credential_schema:
model:
label:
en_US: Model Name
zh_Hans: 模型名称
placeholder:
en_US: Enter your model name
zh_Hans: 输入模型名称
credential_form_schemas:
- variable: api_key
label:
en_US: API Key
type: secret-input
required: true
placeholder:
zh_Hans: 在此输入您的 API Key
en_US: Enter your API Key
- variable: context_size
label:
zh_Hans: 模型上下文长度
en_US: Model context size
required: true
type: text-input
default: '4096'
placeholder:
zh_Hans: 在此输入您的模型上下文长度
en_US: Enter your Model context size
- variable: max_tokens
label:
zh_Hans: 最大 token 上限
en_US: Upper bound for max tokens
default: '4096'
type: text-input
show_on:
- variable: __model_type
value: llm
- variable: function_calling_type
label:
en_US: Function calling
type: select
required: false
default: no_call
options:
- value: no_call
label:
en_US: Not Support
zh_Hans: 不支持
- value: function_call
label:
en_US: Support
zh_Hans: 支持
show_on:
- variable: __model_type
value: llm

@ -0,0 +1,4 @@
model: gte-rerank
model_type: rerank
model_properties:
context_size: 4000

@ -0,0 +1,136 @@
from typing import Optional
import dashscope
from dashscope.common.error import (
AuthenticationError,
InvalidParameter,
RequestFailure,
ServiceUnavailableError,
UnsupportedHTTPMethod,
UnsupportedModel,
)
from core.model_runtime.entities.rerank_entities import RerankDocument, RerankResult
from core.model_runtime.errors.invoke import (
InvokeAuthorizationError,
InvokeBadRequestError,
InvokeConnectionError,
InvokeError,
InvokeRateLimitError,
InvokeServerUnavailableError,
)
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.model_providers.__base.rerank_model import RerankModel
class GTERerankModel(RerankModel):
"""
Model class for GTE rerank model.
"""
def _invoke(
self,
model: str,
credentials: dict,
query: str,
docs: list[str],
score_threshold: Optional[float] = None,
top_n: Optional[int] = None,
user: Optional[str] = None,
) -> RerankResult:
"""
Invoke rerank model
:param model: model name
:param credentials: model credentials
:param query: search query
:param docs: docs for reranking
:param score_threshold: score threshold
:param top_n: top n
:param user: unique user id
:return: rerank result
"""
if len(docs) == 0:
return RerankResult(model=model, docs=docs)
# initialize client
dashscope.api_key = credentials["dashscope_api_key"]
response = dashscope.TextReRank.call(
query=query,
documents=docs,
model=model,
top_n=top_n,
return_documents=True,
)
rerank_documents = []
for _, result in enumerate(response.output.results):
# format document
rerank_document = RerankDocument(
index=result.index,
score=result.relevance_score,
text=result["document"]["text"],
)
# score threshold check
if score_threshold is not None:
if result.relevance_score >= score_threshold:
rerank_documents.append(rerank_document)
else:
rerank_documents.append(rerank_document)
return RerankResult(model=model, docs=rerank_documents)
def validate_credentials(self, model: str, credentials: dict) -> None:
"""
Validate model credentials
:param model: model name
:param credentials: model credentials
:return:
"""
try:
self.invoke(
model=model,
credentials=credentials,
query="What is the capital of the United States?",
docs=[
"Carson City is the capital city of the American state of Nevada. At the 2010 United States "
"Census, Carson City had a population of 55,274.",
"The Commonwealth of the Northern Mariana Islands is a group of islands in the Pacific Ocean that "
"are a political division controlled by the United States. Its capital is Saipan.",
],
score_threshold=0.8,
)
except Exception as ex:
print(ex)
raise CredentialsValidateFailedError(str(ex))
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
"""
Map model invoke error to unified error
The key is the error type thrown to the caller
The value is the error type thrown by the model,
which needs to be converted into a unified error type for the caller.
:return: Invoke error mapping
"""
return {
InvokeConnectionError: [
RequestFailure,
],
InvokeServerUnavailableError: [
ServiceUnavailableError,
],
InvokeRateLimitError: [],
InvokeAuthorizationError: [
AuthenticationError,
],
InvokeBadRequestError: [
InvalidParameter,
UnsupportedModel,
UnsupportedHTTPMethod,
],
}

@ -18,6 +18,7 @@ supported_model_types:
- llm
- tts
- text-embedding
- rerank
configurate_methods:
- predefined-model
- customizable-model

@ -1,6 +1,6 @@
import threading
from queue import Queue
from typing import Optional
from typing import Any, Optional
import dashscope
from dashscope import SpeechSynthesizer
@ -20,7 +20,7 @@ class TongyiText2SpeechModel(_CommonTongyi, TTSModel):
def _invoke(
self, model: str, tenant_id: str, credentials: dict, content_text: str, voice: str, user: Optional[str] = None
) -> any:
) -> Any:
"""
_invoke text2speech model
@ -58,7 +58,7 @@ class TongyiText2SpeechModel(_CommonTongyi, TTSModel):
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> any:
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> Any:
"""
_tts_invoke_streaming text2speech model

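The any -> Any fix above is worth a note: lowercase any is the Python builtin that tests an iterable for truth, not a type, so "-> any" annotates the return value with that function object. typing.Any is the intended "accept anything" marker. A minimal sketch of the distinction:

from typing import Any

def stream_audio() -> Any:  # correct: typing.Any opts the return type out of checking
    return b"..."

print(any([False, True]))   # the builtin `any` is a function, not a type: prints True
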
@ -7,6 +7,7 @@ from collections.abc import Generator
from typing import Optional, Union, cast
import google.auth.transport.requests
import requests
import vertexai.generative_models as glm
from anthropic import AnthropicVertex, Stream
from anthropic.types import (
@ -653,9 +654,15 @@ class VertexAiLargeLanguageModel(LargeLanguageModel):
if c.type == PromptMessageContentType.TEXT:
parts.append(glm.Part.from_text(c.data))
else:
metadata, data = c.data.split(",", 1)
mime_type = metadata.split(";", 1)[0].split(":")[1]
parts.append(glm.Part.from_data(mime_type=mime_type, data=data))
message_content = cast(ImagePromptMessageContent, c)
if not message_content.data.startswith("data:"):
url_arr = message_content.data.split(".")
mime_type = f"image/{url_arr[-1]}"
parts.append(glm.Part.from_uri(mime_type=mime_type, uri=message_content.data))
else:
metadata, data = c.data.split(",", 1)
mime_type = metadata.split(";", 1)[0].split(":")[1]
parts.append(glm.Part.from_data(mime_type=mime_type, data=data))
glm_content = glm.Content(role="user", parts=parts)
return glm_content
elif isinstance(message, AssistantPromptMessage):

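A standalone sketch of the branching added above, with hypothetical inputs: remote image URLs are passed through by URI with a mime type guessed from the file extension, while base64 data URIs are split into their mime type and payload:

url_image = "https://example.com/photo.png"          # hypothetical URL
mime_type = f"image/{url_image.split('.')[-1]}"      # -> "image/png"

data_image = "data:image/jpeg;base64,/9j/4AAQ..."    # placeholder payload
metadata, data = data_image.split(",", 1)            # "data:image/jpeg;base64" / payload
mime_type = metadata.split(";", 1)[0].split(":")[1]  # -> "image/jpeg"
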
@ -64,7 +64,7 @@ class ErnieBotLargeLanguageModel(LargeLanguageModel):
stop: Optional[list[str]] = None,
stream: bool = True,
user: Optional[str] = None,
callbacks: list[Callback] = None,
callbacks: Optional[list[Callback]] = None,
) -> Union[LLMResult, Generator]:
"""
Code block mode wrapper for invoking large language model

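This hunk is one instance of a pattern repeated throughout the diff (the zhipuai SDK, vector stores, extractors, and tool classes below): PEP 484 deprecates implicit Optional, so a parameter that defaults to None must be annotated Optional[...] explicitly, or checkers such as ruff (RUF013) and mypy (no_implicit_optional, enabled by default since mypy 0.990) flag it. A minimal before/after sketch:

from typing import Optional

def before(callbacks: list = None): ...            # implicit Optional; flagged by strict checkers
def after(callbacks: Optional[list] = None): ...   # explicit, per PEP 484
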
@ -1,5 +1,5 @@
import concurrent.futures
from typing import Optional
from typing import Any, Optional
from xinference_client.client.restful.restful_client import RESTfulAudioModelHandle
@ -166,7 +166,7 @@ class XinferenceText2SpeechModel(TTSModel):
return self.model_voices["__default"]["all"]
def _get_model_default_voice(self, model: str, credentials: dict) -> any:
def _get_model_default_voice(self, model: str, credentials: dict) -> Any:
return ""
def _get_model_word_limit(self, model: str, credentials: dict) -> int:
@ -178,7 +178,7 @@ class XinferenceText2SpeechModel(TTSModel):
def _get_model_workers_limit(self, model: str, credentials: dict) -> int:
return 5
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> any:
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> Any:
"""
_tts_invoke_streaming text2speech model

@ -223,6 +223,16 @@ class ZhipuAILargeLanguageModel(_CommonZhipuaiAI, LargeLanguageModel):
else:
new_prompt_messages.append(copy_prompt_message)
# zhipuai moved web_search param to tools
if "web_search" in model_parameters:
enable_web_search = model_parameters.get("web_search")
model_parameters.pop("web_search")
web_search_params = {"type": "web_search", "web_search": {"enable": enable_web_search}}
if "tools" in model_parameters:
model_parameters["tools"].append(web_search_params)
else:
model_parameters["tools"] = [web_search_params]
if model in {"glm-4v", "glm-4v-plus"}:
params = self._construct_glm_4v_parameter(model, new_prompt_messages, model_parameters)
else:

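A self-contained sketch of the parameter migration above, with a hypothetical parameter dict: the web_search flag is popped from model_parameters and re-attached as an entry in the tools list, where zhipuai now expects it. setdefault collapses the if/else from the hunk into one line with the same effect:

model_parameters = {"temperature": 0.7, "web_search": True}
if "web_search" in model_parameters:
    enable_web_search = model_parameters.pop("web_search")
    web_search_params = {"type": "web_search", "web_search": {"enable": enable_web_search}}
    model_parameters.setdefault("tools", []).append(web_search_params)

print(model_parameters)
# {'temperature': 0.7, 'tools': [{'type': 'web_search', 'web_search': {'enable': True}}]}
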
@ -41,8 +41,8 @@ class Assistant(BaseAPI):
conversation_id: Optional[str] = None,
attachments: Optional[list[assistant_create_params.AssistantAttachments]] = None,
metadata: dict | None = None,
request_id: str = None,
user_id: str = None,
request_id: Optional[str] = None,
user_id: Optional[str] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
@ -72,9 +72,9 @@ class Assistant(BaseAPI):
def query_support(
self,
*,
assistant_id_list: list[str] = None,
request_id: str = None,
user_id: str = None,
assistant_id_list: Optional[list[str]] = None,
request_id: Optional[str] = None,
user_id: Optional[str] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
@ -99,8 +99,8 @@ class Assistant(BaseAPI):
page: int = 1,
page_size: int = 10,
*,
request_id: str = None,
user_id: str = None,
request_id: Optional[str] = None,
user_id: Optional[str] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,

@ -1,7 +1,7 @@
from __future__ import annotations
from collections.abc import Mapping
from typing import TYPE_CHECKING, Literal, cast
from typing import TYPE_CHECKING, Literal, Optional, cast
import httpx
@ -34,11 +34,11 @@ class Files(BaseAPI):
def create(
self,
*,
file: FileTypes = None,
upload_detail: list[UploadDetail] = None,
file: Optional[FileTypes] = None,
upload_detail: Optional[list[UploadDetail]] = None,
purpose: Literal["fine-tune", "retrieval", "batch"],
knowledge_id: str = None,
sentence_size: int = None,
knowledge_id: Optional[str] = None,
sentence_size: Optional[int] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,

@ -34,12 +34,12 @@ class Document(BaseAPI):
def create(
self,
*,
file: FileTypes = None,
file: Optional[FileTypes] = None,
custom_separator: Optional[list[str]] = None,
upload_detail: list[UploadDetail] = None,
upload_detail: Optional[list[UploadDetail]] = None,
purpose: Literal["retrieval"],
knowledge_id: str = None,
sentence_size: int = None,
knowledge_id: Optional[str] = None,
sentence_size: Optional[int] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,

@ -31,11 +31,11 @@ class Videos(BaseAPI):
self,
model: str,
*,
prompt: str = None,
image_url: str = None,
prompt: Optional[str] = None,
image_url: Optional[str] = None,
sensitive_word_check: Optional[SensitiveWordCheckRequest] | NotGiven = NOT_GIVEN,
request_id: str = None,
user_id: str = None,
request_id: Optional[str] = None,
user_id: Optional[str] = None,
extra_headers: Headers | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,

@ -1,5 +1,6 @@
import json
import logging
import math
from typing import Any, Optional
from urllib.parse import urlparse
@ -112,7 +113,8 @@ class ElasticSearchVector(BaseVector):
def search_by_vector(self, query_vector: list[float], **kwargs: Any) -> list[Document]:
top_k = kwargs.get("top_k", 10)
knn = {"field": Field.VECTOR.value, "query_vector": query_vector, "k": top_k}
num_candidates = math.ceil(top_k * 1.5)
knn = {"field": Field.VECTOR.value, "query_vector": query_vector, "k": top_k, "num_candidates": num_candidates}
results = self._client.search(index=self._collection_name, knn=knn, size=top_k)

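The change above matters because Elasticsearch's kNN search examines num_candidates vectors per shard, a value that must be at least k, so setting it explicitly trades a little latency for recall. A minimal sketch of the resulting query body (field name and a 768-dimension embedding are placeholders):

import math

top_k = 10
knn = {
    "field": "vector",                         # hypothetical field name
    "query_vector": [0.1] * 768,               # placeholder embedding
    "k": top_k,
    "num_candidates": math.ceil(top_k * 1.5),  # candidates per shard, must be >= k
}
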
@ -162,7 +162,7 @@ class RelytVector(BaseVector):
else:
return None
def delete_by_uuids(self, ids: list[str] = None):
def delete_by_uuids(self, ids: Optional[list[str]] = None):
"""Delete by vector IDs.
Args:

@ -1,5 +1,5 @@
from abc import ABC, abstractmethod
from typing import Any
from typing import Any, Optional
from configs import dify_config
from core.embedding.cached_embedding import CacheEmbedding
@ -25,7 +25,7 @@ class AbstractVectorFactory(ABC):
class Vector:
def __init__(self, dataset: Dataset, attributes: list = None):
def __init__(self, dataset: Dataset, attributes: Optional[list] = None):
if attributes is None:
attributes = ["doc_id", "dataset_id", "document_id", "doc_hash"]
self._dataset = dataset
@ -106,7 +106,7 @@ class Vector:
case _:
raise ValueError(f"Vector store {vector_type} is not supported.")
def create(self, texts: list = None, **kwargs):
def create(self, texts: Optional[list] = None, **kwargs):
if texts:
embeddings = self._embeddings.embed_documents([document.page_content for document in texts])
self._vector_processor.create(texts=texts, embeddings=embeddings, **kwargs)

@ -1,3 +1,5 @@
from typing import Optional
from pydantic import BaseModel
@ -7,4 +9,4 @@ class DocumentContext(BaseModel):
"""
content: str
score: float
score: Optional[float] = None

@ -1,7 +1,7 @@
import re
import tempfile
from pathlib import Path
from typing import Union
from typing import Optional, Union
from urllib.parse import unquote
from configs import dify_config
@ -84,7 +84,7 @@ class ExtractProcessor:
@classmethod
def extract(
cls, extract_setting: ExtractSetting, is_automatic: bool = False, file_path: str = None
cls, extract_setting: ExtractSetting, is_automatic: bool = False, file_path: Optional[str] = None
) -> list[Document]:
if extract_setting.datasource_type == DatasourceType.FILE.value:
with tempfile.TemporaryDirectory() as temp_dir:

@ -1,4 +1,5 @@
import logging
from typing import Optional
from core.rag.extractor.extractor_base import BaseExtractor
from core.rag.models.document import Document
@ -17,7 +18,7 @@ class UnstructuredEpubExtractor(BaseExtractor):
def __init__(
self,
file_path: str,
api_url: str = None,
api_url: Optional[str] = None,
):
"""Initialize with file path."""
self._file_path = file_path

@ -231,6 +231,9 @@ class DatasetRetrieval:
source["content"] = segment.content
retrieval_resource_list.append(source)
if hit_callback and retrieval_resource_list:
retrieval_resource_list = sorted(retrieval_resource_list, key=lambda x: x.get("score"), reverse=True)
for position, item in enumerate(retrieval_resource_list, start=1):
item["position"] = position
hit_callback.return_retriever_resource_info(retrieval_resource_list)
if document_context_list:
document_context_list = sorted(document_context_list, key=lambda x: x.score, reverse=True)
@ -536,7 +539,7 @@ class DatasetRetrieval:
continue
# pass if dataset is not available
if dataset and dataset.available_document_count == 0:
if dataset and dataset.provider != "external" and dataset.available_document_count == 0:
continue
available_datasets.append(dataset)

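A tiny standalone sketch of the sorting step added above, with placeholder scores: citations are ordered by descending score and then given 1-based positions:

retrieval_resource_list = [{"content": "a", "score": 0.2}, {"content": "b", "score": 0.9}]
retrieval_resource_list.sort(key=lambda x: x.get("score"), reverse=True)
for position, item in enumerate(retrieval_resource_list, start=1):
    item["position"] = position

print([item["position"] for item in retrieval_resource_list])  # [1, 2], highest score first
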
@ -341,7 +341,7 @@ class ToolRuntimeVariablePool(BaseModel):
self.pool.append(variable)
def set_file(self, tool_name: str, value: str, name: str = None) -> None:
def set_file(self, tool_name: str, value: str, name: Optional[str] = None) -> None:
"""
set an image variable

@ -5,31 +5,41 @@
- searchapi
- serper
- searxng
- websearch
- tavily
- stackexchange
- pubmed
- arxiv
- aws
- nominatim
- devdocs
- spider
- firecrawl
- brave
- crossref
- jina
- webscraper
- dalle
- azuredalle
- stability
- wikipedia
- nominatim
- yahoo
- alphavantage
- arxiv
- pubmed
- stablediffusion
- webscraper
- jina
- cogview
- comfyui
- getimgai
- siliconflow
- spark
- stepfun
- xinference
- alphavantage
- yahoo
- openweather
- gaode
- aippt
- youtube
- code
- wolframalpha
- maths
- github
- chart
- time
- vectorizer
- gaode
- wecom
- qrcode
- youtube
- did
- dingtalk
- discord
- feishu
- feishu_base
- feishu_document
@ -39,4 +49,24 @@
- feishu_calendar
- feishu_spreadsheet
- slack
- twilio
- wecom
- wikipedia
- code
- wolframalpha
- maths
- github
- gitlab
- time
- vectorizer
- qrcode
- tianditu
- google_translate
- hap
- json_process
- judge0ce
- novitaai
- onebot
- regex
- trello
- vanna

@ -1,6 +1,6 @@
import json
from enum import Enum
from typing import Any, Union
from typing import Any, Optional, Union
import boto3
@ -21,7 +21,7 @@ class SageMakerTTSTool(BuiltinTool):
s3_client: Any = None
comprehend_client: Any = None
def _detect_lang_code(self, content: str, map_dict: dict = None):
def _detect_lang_code(self, content: str, map_dict: Optional[dict] = None):
map_dict = {"zh": "<|zh|>", "en": "<|en|>", "ja": "<|jp|>", "zh-TW": "<|yue|>", "ko": "<|ko|>"}
response = self.comprehend_client.detect_dominant_language(Text=content)

@ -6,9 +6,9 @@ identity:
zh_Hans: Jina AI
pt_BR: Jina AI
description:
en_US: Convert any URL to an LLM-friendly input or perform searches on the web for grounding information. Experience improved output for your agent and RAG systems at no cost.
zh_Hans: 将任何URL转换为LLM易读的输入,或在网页上搜索信息。
pt_BR: Converta qualquer URL em uma entrada legível para LLM ou realize pesquisas na web para obter informação de grounding. Tenha uma experiência melhor para seu agente e sistemas RAG sem custo.
en_US: Your Search Foundation, Supercharged!
zh_Hans: 您的搜索底座,从此不同!
pt_BR: Your Search Foundation, Supercharged!
icon: icon.svg
tags:
- search

File diff suppressed because it is too large

@ -0,0 +1,44 @@
from datetime import datetime
from typing import Any, Union
import pytz
from core.tools.entities.tool_entities import ToolInvokeMessage
from core.tools.errors import ToolInvokeError
from core.tools.tool.builtin_tool import BuiltinTool
class LocaltimeToTimestampTool(BuiltinTool):
def _invoke(
self,
user_id: str,
tool_parameters: dict[str, Any],
) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
"""
Convert localtime to timestamp
"""
localtime = tool_parameters.get("localtime")
timezone = tool_parameters.get("timezone", "Asia/Shanghai")
if not timezone:
timezone = None
time_format = "%Y-%m-%d %H:%M:%S"
timestamp = self.localtime_to_timestamp(localtime, time_format, timezone)
if not timestamp:
return self.create_text_message(f"Invalid localtime: {localtime}")
return self.create_text_message(f"{timestamp}")
@staticmethod
def localtime_to_timestamp(localtime: str, time_format: str, local_tz=None) -> int | None:
try:
if local_tz is None:
local_tz = datetime.now().astimezone().tzinfo
if isinstance(local_tz, str):
local_tz = pytz.timezone(local_tz)
local_time = datetime.strptime(localtime, time_format)
localtime = local_tz.localize(local_time)
timestamp = int(localtime.timestamp())
return timestamp
except Exception as e:
raise ToolInvokeError(str(e))

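A hedged usage sketch for the static helper above (assuming the module's imports resolve in a Dify checkout): converting a wall-clock string in a named timezone to a Unix timestamp:

ts = LocaltimeToTimestampTool.localtime_to_timestamp(
    "2024-01-01 08:00:00", "%Y-%m-%d %H:%M:%S", "Asia/Shanghai"
)
print(ts)  # 1704067200, i.e. 2024-01-01 00:00:00 UTC
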
@ -0,0 +1,33 @@
identity:
name: localtime_to_timestamp
author: zhuhao
label:
en_US: Localtime to timestamp
zh_Hans: 获取时间戳
description:
human:
en_US: A tool that converts local time to a Unix timestamp
zh_Hans: 获取时间戳
llm: A tool that converts local time to a Unix timestamp
parameters:
- name: localtime
type: string
required: true
form: llm
label:
en_US: Localtime
zh_Hans: 本地时间
human_description:
en_US: Local time, such as 2024-1-1 0:0:0
zh_Hans: 本地时间, 比如2024-1-1 0:0:0
- name: timezone
type: string
required: false
form: llm
label:
en_US: Timezone
zh_Hans: 时区
human_description:
en_US: Timezone, such as Asia/Shanghai
zh_Hans: 时区, 比如Asia/Shanghai
default: Asia/Shanghai

@ -0,0 +1,44 @@
from datetime import datetime
from typing import Any, Union
import pytz
from core.tools.entities.tool_entities import ToolInvokeMessage
from core.tools.errors import ToolInvokeError
from core.tools.tool.builtin_tool import BuiltinTool
class TimestampToLocaltimeTool(BuiltinTool):
def _invoke(
self,
user_id: str,
tool_parameters: dict[str, Any],
) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:
"""
Convert timestamp to localtime
"""
timestamp = tool_parameters.get("timestamp")
timezone = tool_parameters.get("timezone", "Asia/Shanghai")
if not timezone:
timezone = None
time_format = "%Y-%m-%d %H:%M:%S"
localtime = self.timestamp_to_localtime(timestamp, timezone)
if not localtime:
return self.create_text_message(f"Invalid timestamp: {timestamp}")
localtime_format = localtime.strftime(time_format)
return self.create_text_message(f"{localtime_format}")
@staticmethod
def timestamp_to_localtime(timestamp: int, local_tz=None) -> datetime | None:
try:
if local_tz is None:
local_tz = datetime.now().astimezone().tzinfo
if isinstance(local_tz, str):
local_tz = pytz.timezone(local_tz)
local_time = datetime.fromtimestamp(timestamp, local_tz)
return local_time
except Exception as e:
raise ToolInvokeError(str(e))

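And the inverse direction, a usage sketch for the helper above (same assumptions as the localtime_to_timestamp example):

dt = TimestampToLocaltimeTool.timestamp_to_localtime(1704067200, "Asia/Shanghai")
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2024-01-01 08:00:00 in UTC+8
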
@ -0,0 +1,33 @@
identity:
name: timestamp_to_localtime
author: zhuhao
label:
en_US: Timestamp to localtime
zh_Hans: 时间戳转换
description:
human:
en_US: A tool that converts a Unix timestamp to local time
zh_Hans: 时间戳转换
llm: A tool that converts a Unix timestamp to local time
parameters:
- name: timestamp
type: number
required: true
form: llm
label:
en_US: Timestamp
zh_Hans: 时间戳
human_description:
en_US: Timestamp
zh_Hans: 时间戳
- name: timezone
type: string
required: false
form: llm
label:
en_US: Timezone
zh_Hans: 时区
human_description:
en_US: Timezone, such as Asia/Shanghai
zh_Hans: 时区, 比如Asia/Shanghai
default: Asia/Shanghai

@ -111,9 +111,10 @@ class VannaTool(BuiltinTool):
# with "visualize" set to True (default behavior) leads to remote code execution.
# Affected versions: <= 0.5.5
#########################################################################################
generate_chart = False
# generate_chart = tool_parameters.get("generate_chart", True)
res = vn.ask(prompt, False, True, generate_chart)
allow_llm_to_see_data = tool_parameters.get("allow_llm_to_see_data", False)
res = vn.ask(
prompt, print_results=False, auto_train=True, visualize=False, allow_llm_to_see_data=allow_llm_to_see_data
)
result = []

@ -200,14 +200,14 @@ parameters:
en_US: If enabled, it will attempt to train on the metadata of that database
zh_Hans: 是否自动从数据库获取元数据来训练
form: form
- name: generate_chart
- name: allow_llm_to_see_data
type: boolean
required: false
default: True
default: false
label:
en_US: Generate Charts
zh_Hans: 生成图表
en_US: Whether to allow the LLM to see the data
zh_Hans: 是否允许LLM查看数据
human_description:
en_US: Generate Charts
zh_Hans: 是否生成图表
en_US: Whether to allow the LLM to see the data
zh_Hans: 是否允许LLM查看数据
form: form

@ -8,6 +8,9 @@ identity:
en_US: The fastest way to get actionable insights from your database just by asking questions.
zh_Hans: 一个基于大模型和RAG的Text2SQL工具。
icon: icon.png
tags:
- utilities
- productivity
credentials_for_provider:
api_key:
type: secret-input

@ -104,14 +104,15 @@ class StableDiffusionTool(BuiltinTool):
model = self.runtime.credentials.get("model", None)
if not model:
return self.create_text_message("Please input model")
api_key = self.runtime.credentials.get("api_key") or "abc"
headers = {"Authorization": f"Bearer {api_key}"}
# set model
try:
url = str(URL(base_url) / "sdapi" / "v1" / "options")
response = post(
url,
json={"sd_model_checkpoint": model},
headers={"Authorization": f"Bearer {self.runtime.credentials['api_key']}"},
headers=headers,
)
if response.status_code != 200:
raise ToolProviderCredentialValidationError("Failed to set model, please tell user to set model")
@ -257,14 +258,15 @@ class StableDiffusionTool(BuiltinTool):
draw_options["prompt"] = f"{lora},{prompt}"
else:
draw_options["prompt"] = prompt
api_key = self.runtime.credentials.get("api_key") or "abc"
headers = {"Authorization": f"Bearer {api_key}"}
try:
url = str(URL(base_url) / "sdapi" / "v1" / "img2img")
response = post(
url,
json=draw_options,
timeout=120,
headers={"Authorization": f"Bearer {self.runtime.credentials['api_key']}"},
headers=headers,
)
if response.status_code != 200:
return self.create_text_message("Failed to generate image")
@ -298,14 +300,15 @@ class StableDiffusionTool(BuiltinTool):
else:
draw_options["prompt"] = prompt
draw_options["override_settings"]["sd_model_checkpoint"] = model
api_key = self.runtime.credentials.get("api_key") or "abc"
headers = {"Authorization": f"Bearer {api_key}"}
try:
url = str(URL(base_url) / "sdapi" / "v1" / "txt2img")
response = post(
url,
json=draw_options,
timeout=120,
headers={"Authorization": f"Bearer {self.runtime.credentials['api_key']}"},
headers=headers,
)
if response.status_code != 200:
return self.create_text_message("Failed to generate image")

@ -6,12 +6,18 @@ from core.tools.provider.builtin_tool_provider import BuiltinToolProviderControl
class XinferenceProvider(BuiltinToolProviderController):
def _validate_credentials(self, credentials: dict) -> None:
base_url = credentials.get("base_url")
api_key = credentials.get("api_key")
model = credentials.get("model")
base_url = credentials.get("base_url", "").removesuffix("/")
api_key = credentials.get("api_key", "")
if not api_key:
api_key = "abc"
credentials["api_key"] = api_key
model = credentials.get("model", "")
if not base_url or not model:
raise ToolProviderCredentialValidationError("Xinference base_url and model are required")
headers = {"Authorization": f"Bearer {api_key}"}
res = requests.post(
f"{base_url}/sdapi/v1/options",
headers={"Authorization": f"Bearer {api_key}"},
headers=headers,
json={"sd_model_checkpoint": model},
)
if res.status_code != 200:

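A standalone sketch of the validation flow above, mirroring the api_key fallback added to stable_diffusion earlier in this diff (the endpoint URL and model name are placeholders, and a running server is assumed; "abc" just keeps the Authorization header well-formed when no key is configured):

import requests

credentials = {"base_url": "http://127.0.0.1:9997/", "model": "my-sd-model"}
base_url = credentials.get("base_url", "").removesuffix("/")
api_key = credentials.get("api_key", "") or "abc"
res = requests.post(
    f"{base_url}/sdapi/v1/options",
    headers={"Authorization": f"Bearer {api_key}"},
    json={"sd_model_checkpoint": credentials["model"]},
)
print(res.status_code)  # 200 when the endpoint accepts the model
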
@ -31,7 +31,7 @@ credentials_for_provider:
zh_Hans: 请输入你的模型名称
api_key:
type: secret-input
required: true
required: false
label:
en_US: API Key
zh_Hans: Xinference 服务器的 API Key

@ -1,3 +1,5 @@
from typing import Optional
from core.model_runtime.entities.llm_entities import LLMResult
from core.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage
from core.tools.entities.tool_entities import ToolProviderType
@ -124,7 +126,7 @@ class BuiltinTool(Tool):
return result
def get_url(self, url: str, user_agent: str = None) -> str:
def get_url(self, url: str, user_agent: Optional[str] = None) -> str:
"""
get url
"""

@ -1,10 +1,12 @@
from pydantic import BaseModel, Field
from core.rag.datasource.retrieval_service import RetrievalService
from core.rag.models.document import Document as RetrievalDocument
from core.rag.retrieval.retrieval_methods import RetrievalMethod
from core.tools.tool.dataset_retriever.dataset_retriever_base_tool import DatasetRetrieverBaseTool
from extensions.ext_database import db
from models.dataset import Dataset, Document, DocumentSegment
from services.external_knowledge_service import ExternalDatasetService
default_retrieval_model = {
"search_method": RetrievalMethod.SEMANTIC_SEARCH.value,
@ -53,97 +55,137 @@ class DatasetRetrieverTool(DatasetRetrieverBaseTool):
for hit_callback in self.hit_callbacks:
hit_callback.on_query(query, dataset.id)
# get the retrieval model; if it is not set, use the default
retrieval_model = dataset.retrieval_model or default_retrieval_model
if dataset.indexing_technique == "economy":
# use keyword table query
documents = RetrievalService.retrieve(
retrieval_method="keyword_search", dataset_id=dataset.id, query=query, top_k=self.top_k
if dataset.provider == "external":
results = []
external_documents = ExternalDatasetService.fetch_external_knowledge_retrieval(
tenant_id=dataset.tenant_id,
dataset_id=dataset.id,
query=query,
external_retrieval_parameters=dataset.retrieval_model,
)
return str("\n".join([document.page_content for document in documents]))
for external_document in external_documents:
document = RetrievalDocument(
page_content=external_document.get("content"),
metadata=external_document.get("metadata"),
provider="external",
)
document.metadata["score"] = external_document.get("score")
document.metadata["title"] = external_document.get("title")
document.metadata["dataset_id"] = dataset.id
document.metadata["dataset_name"] = dataset.name
results.append(document)
# deal with external documents
context_list = []
for position, item in enumerate(results, start=1):
source = {
"position": position,
"dataset_id": item.metadata.get("dataset_id"),
"dataset_name": item.metadata.get("dataset_name"),
"document_name": item.metadata.get("title"),
"data_source_type": "external",
"retriever_from": self.retriever_from,
"score": item.metadata.get("score"),
"title": item.metadata.get("title"),
"content": item.page_content,
}
context_list.append(source)
for hit_callback in self.hit_callbacks:
hit_callback.return_retriever_resource_info(context_list)
return str("\n".join([item.page_content for item in results]))
else:
if self.top_k > 0:
# retrieval source
# get the retrieval model; if it is not set, use the default
retrieval_model = dataset.retrieval_model or default_retrieval_model
if dataset.indexing_technique == "economy":
# use keyword table query
documents = RetrievalService.retrieve(
retrieval_method=retrieval_model.get("search_method", "semantic_search"),
dataset_id=dataset.id,
query=query,
top_k=self.top_k,
score_threshold=retrieval_model.get("score_threshold", 0.0)
if retrieval_model["score_threshold_enabled"]
else 0.0,
reranking_model=retrieval_model.get("reranking_model", None)
if retrieval_model["reranking_enable"]
else None,
reranking_mode=retrieval_model.get("reranking_mode") or "reranking_model",
weights=retrieval_model.get("weights", None),
retrieval_method="keyword_search", dataset_id=dataset.id, query=query, top_k=self.top_k
)
return str("\n".join([document.page_content for document in documents]))
else:
documents = []
for hit_callback in self.hit_callbacks:
hit_callback.on_tool_end(documents)
document_score_list = {}
if dataset.indexing_technique != "economy":
for item in documents:
if item.metadata.get("score"):
document_score_list[item.metadata["doc_id"]] = item.metadata["score"]
document_context_list = []
index_node_ids = [document.metadata["doc_id"] for document in documents]
segments = DocumentSegment.query.filter(
DocumentSegment.dataset_id == self.dataset_id,
DocumentSegment.completed_at.isnot(None),
DocumentSegment.status == "completed",
DocumentSegment.enabled == True,
DocumentSegment.index_node_id.in_(index_node_ids),
).all()
if segments:
index_node_id_to_position = {id: position for position, id in enumerate(index_node_ids)}
sorted_segments = sorted(
segments, key=lambda segment: index_node_id_to_position.get(segment.index_node_id, float("inf"))
)
for segment in sorted_segments:
if segment.answer:
document_context_list.append(f"question:{segment.get_sign_content()} answer:{segment.answer}")
else:
document_context_list.append(segment.get_sign_content())
if self.return_resource:
context_list = []
resource_number = 1
if self.top_k > 0:
# retrieval source
documents = RetrievalService.retrieve(
retrieval_method=retrieval_model.get("search_method", "semantic_search"),
dataset_id=dataset.id,
query=query,
top_k=self.top_k,
score_threshold=retrieval_model.get("score_threshold", 0.0)
if retrieval_model["score_threshold_enabled"]
else 0.0,
reranking_model=retrieval_model.get("reranking_model", None)
if retrieval_model["reranking_enable"]
else None,
reranking_mode=retrieval_model.get("reranking_mode") or "reranking_model",
weights=retrieval_model.get("weights", None),
)
else:
documents = []
for hit_callback in self.hit_callbacks:
hit_callback.on_tool_end(documents)
document_score_list = {}
if dataset.indexing_technique != "economy":
for item in documents:
if item.metadata.get("score"):
document_score_list[item.metadata["doc_id"]] = item.metadata["score"]
document_context_list = []
index_node_ids = [document.metadata["doc_id"] for document in documents]
segments = DocumentSegment.query.filter(
DocumentSegment.dataset_id == self.dataset_id,
DocumentSegment.completed_at.isnot(None),
DocumentSegment.status == "completed",
DocumentSegment.enabled == True,
DocumentSegment.index_node_id.in_(index_node_ids),
).all()
if segments:
index_node_id_to_position = {id: position for position, id in enumerate(index_node_ids)}
sorted_segments = sorted(
segments, key=lambda segment: index_node_id_to_position.get(segment.index_node_id, float("inf"))
)
for segment in sorted_segments:
context = {}
document = Document.query.filter(
Document.id == segment.document_id,
Document.enabled == True,
Document.archived == False,
).first()
if dataset and document:
source = {
"position": resource_number,
"dataset_id": dataset.id,
"dataset_name": dataset.name,
"document_id": document.id,
"document_name": document.name,
"data_source_type": document.data_source_type,
"segment_id": segment.id,
"retriever_from": self.retriever_from,
"score": document_score_list.get(segment.index_node_id, None),
}
if self.retriever_from == "dev":
source["hit_count"] = segment.hit_count
source["word_count"] = segment.word_count
source["segment_position"] = segment.position
source["index_node_hash"] = segment.index_node_hash
if segment.answer:
source["content"] = f"question:{segment.content} \nanswer:{segment.answer}"
else:
source["content"] = segment.content
context_list.append(source)
resource_number += 1
for hit_callback in self.hit_callbacks:
hit_callback.return_retriever_resource_info(context_list)
return str("\n".join(document_context_list))
if segment.answer:
document_context_list.append(
f"question:{segment.get_sign_content()} answer:{segment.answer}"
)
else:
document_context_list.append(segment.get_sign_content())
if self.return_resource:
context_list = []
resource_number = 1
for segment in sorted_segments:
context = {}
document = Document.query.filter(
Document.id == segment.document_id,
Document.enabled == True,
Document.archived == False,
).first()
if dataset and document:
source = {
"position": resource_number,
"dataset_id": dataset.id,
"dataset_name": dataset.name,
"document_id": document.id,
"document_name": document.name,
"data_source_type": document.data_source_type,
"segment_id": segment.id,
"retriever_from": self.retriever_from,
"score": document_score_list.get(segment.index_node_id, None),
}
if self.retriever_from == "dev":
source["hit_count"] = segment.hit_count
source["word_count"] = segment.word_count
source["segment_position"] = segment.position
source["index_node_hash"] = segment.index_node_hash
if segment.answer:
source["content"] = f"question:{segment.content} \nanswer:{segment.answer}"
else:
source["content"] = segment.content
context_list.append(source)
resource_number += 1
for hit_callback in self.hit_callbacks:
hit_callback.return_retriever_resource_info(context_list)
return str("\n".join(document_context_list))

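A minimal sketch of the new external branch, with a hypothetical payload: hits from the external knowledge API arrive as dicts, are wrapped as RetrievalDocument objects carrying score/title metadata, and the joined page contents are what the agent tool returns:

external_documents = [
    {"content": "Dify supports external knowledge bases.", "score": 0.91,
     "title": "intro.md", "metadata": {}},
]
results = []
for external_document in external_documents:
    document = RetrievalDocument(
        page_content=external_document.get("content"),
        metadata=external_document.get("metadata"),
        provider="external",
    )
    document.metadata["score"] = external_document.get("score")
    document.metadata["title"] = external_document.get("title")
    results.append(document)

print("\n".join(item.page_content for item in results))
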
@ -318,7 +318,7 @@ class Tool(BaseModel, ABC):
"""
return ToolInvokeMessage(type=ToolInvokeMessage.MessageType.TEXT, message=text, save_as=save_as)
def create_blob_message(self, blob: bytes, meta: dict = None, save_as: str = "") -> ToolInvokeMessage:
def create_blob_message(self, blob: bytes, meta: Optional[dict] = None, save_as: str = "") -> ToolInvokeMessage:
"""
create a blob message

@ -4,7 +4,7 @@ import mimetypes
from collections.abc import Generator
from os import listdir, path
from threading import Lock
from typing import Any, Union
from typing import Any, Optional, Union
from configs import dify_config
from core.agent.entities import AgentToolEntity
@ -72,7 +72,7 @@ class ToolManager:
@classmethod
def get_tool(
cls, provider_type: str, provider_id: str, tool_name: str, tenant_id: str = None
cls, provider_type: str, provider_id: str, tool_name: str, tenant_id: Optional[str] = None
) -> Union[BuiltinTool, ApiTool]:
"""
get the tool

@ -1,3 +1,5 @@
from typing import Optional
import httpx
from core.tools.errors import ToolProviderCredentialValidationError
@ -32,7 +34,12 @@ class FeishuRequest:
return res.get("tenant_access_token")
def _send_request(
self, url: str, method: str = "post", require_token: bool = True, payload: dict = None, params: dict = None
self,
url: str,
method: str = "post",
require_token: bool = True,
payload: Optional[dict] = None,
params: Optional[dict] = None,
):
headers = {
"Content-Type": "application/json",

@ -3,6 +3,7 @@ import uuid
from json import dumps as json_dumps
from json import loads as json_loads
from json.decoder import JSONDecodeError
from typing import Optional
from requests import get
from yaml import YAMLError, safe_load
@ -16,7 +17,7 @@ from core.tools.errors import ToolApiSchemaError, ToolNotSupportedError, ToolPro
class ApiBasedToolSchemaParser:
@staticmethod
def parse_openapi_to_tool_bundle(
openapi: dict, extra_info: dict = None, warning: dict = None
openapi: dict, extra_info: Optional[dict], warning: Optional[dict]
) -> list[ApiToolBundle]:
warning = warning if warning is not None else {}
extra_info = extra_info if extra_info is not None else {}
@ -174,7 +175,7 @@ class ApiBasedToolSchemaParser:
@staticmethod
def parse_openapi_yaml_to_tool_bundle(
yaml: str, extra_info: dict = None, warning: dict = None
yaml: str, extra_info: Optional[dict], warning: Optional[dict]
) -> list[ApiToolBundle]:
"""
parse openapi yaml to tool bundle
@ -191,7 +192,7 @@ class ApiBasedToolSchemaParser:
return ApiBasedToolSchemaParser.parse_openapi_to_tool_bundle(openapi, extra_info=extra_info, warning=warning)
@staticmethod
def parse_swagger_to_openapi(swagger: dict, extra_info: dict = None, warning: dict = None) -> dict:
def parse_swagger_to_openapi(swagger: dict, extra_info: Optional[dict], warning: Optional[dict]) -> dict:
"""
parse swagger to openapi
@ -253,7 +254,7 @@ class ApiBasedToolSchemaParser:
@staticmethod
def parse_openai_plugin_json_to_tool_bundle(
json: str, extra_info: dict = None, warning: dict = None
json: str, extra_info: Optional[dict], warning: Optional[dict]
) -> list[ApiToolBundle]:
"""
parse openapi plugin yaml to tool bundle
@ -287,7 +288,7 @@ class ApiBasedToolSchemaParser:
@staticmethod
def auto_parse_to_tool_bundle(
content: str, extra_info: dict = None, warning: dict = None
content: str, extra_info: Optional[dict], warning: Optional[dict]
) -> tuple[list[ApiToolBundle], str]:
"""
auto parse to tool bundle

Some files were not shown because too many files have changed in this diff
