Commit Graph

51 Commits (5f7f851b177f4e0b3e1a29debd04bcbb596b97cd)

Author SHA1 Message Date
-LAN- 559ab46ee1
fix: Removes redundant token calculations and updates dependencies
Eliminates unnecessary pre-calculation of token limits and recalculation of max tokens
across multiple app runners, simplifying the logic for prompt handling.

Updates tiktoken library from version 0.8.0 to 0.9.0 for improved tokenization performance.

Increases default token limit in TokenBufferMemory to accommodate larger prompt messages.

These changes streamline the token management process and leverage the latest
improvements in the tiktoken library.

Fixes potential token overflow issues and prepares the system for handling larger
inputs more efficiently.

Relates to internal optimization tasks.

Signed-off-by: -LAN- <laipz8200@outlook.com>
11 months ago
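The token-buffer change described in the commit above can be sketched as follows. This is a minimal illustration of trimming conversation history to a token budget, using a whitespace token count as a stand-in for tiktoken's BPE encoder; the function names and the budget are illustrative assumptions, not Dify's actual TokenBufferMemory implementation.

```python
# Sketch of a token-budget buffer: keep the most recent messages whose
# combined token count fits under max_tokens. Whitespace splitting stands
# in for tiktoken's encoder; the real memory class differs.

def count_tokens(text: str) -> int:
    """Crude token count; swap in tiktoken's encoder in practice."""
    return len(text.split())

def trim_to_budget(messages: list[str], max_tokens: int) -> list[str]:
    """Drop the oldest messages until the remainder fits the budget."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):  # walk newest-first
        tokens = count_tokens(msg)
        if total + tokens > max_tokens:
            break
        kept.append(msg)
        total += tokens
    return list(reversed(kept))  # restore chronological order

history = ["hello there", "how are you today", "fine thanks", "tell me a joke"]
print(trim_to_budget(history, 7))  # keeps only the newest messages that fit
```

Raising the default limit, as the commit does, simply widens `max_tokens`, so fewer old messages are dropped before the prompt is assembled.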
-LAN- 413dfd5628
feat: add completion mode and context size options for LLM configuration (#13325)
Signed-off-by: -LAN- <laipz8200@outlook.com>
1 year ago
-LAN- f9515901cc
fix: Azure AI Foundry model cannot be used in the workflow (#13323)
Signed-off-by: -LAN- <laipz8200@outlook.com>
1 year ago
-LAN- 04d13a8116
feat(credits): Allow to configure model-credit mapping (#13274)
Signed-off-by: -LAN- <laipz8200@outlook.com>
1 year ago
-LAN- b47669b80b
fix: deduct LLM quota after processing invoke result (#13075)
Signed-off-by: -LAN- <laipz8200@outlook.com>
1 year ago
yihong 56e15d09a9
feat: mypy for all type checks (#10921) 1 year ago
JasonVV 4b1e13e982
Fix 11979 (#11984) 1 year ago
-LAN- 996a9135f6
feat(llm_node): support order in text and files (#11837)
Signed-off-by: -LAN- <laipz8200@outlook.com>
1 year ago
Novice 79a710ce98
Feat: continue on error (#11458)
Co-authored-by: Novice Lee <novicelee@NovicedeMacBook-Pro.local>
Co-authored-by: Novice Lee <novicelee@NoviPro.local>
1 year ago
yihong 716576043d
fix: issue 11247 that Completion mode content may be list or str (#11504)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
1 year ago
-LAN- 464e6354c5
feat: correct the prompt grammar. (#11328)
Signed-off-by: -LAN- <laipz8200@outlook.com>
1 year ago
-LAN- 223a30401c
fix: LLM invoke error should not be raised (#11141)
Signed-off-by: -LAN- <laipz8200@outlook.com>
1 year ago
-LAN- 044e7b63c2
fix(llm_node): Ignore file if not supported. (#11114) 1 year ago
-LAN- 5b7b328193
feat: Allow files in the system prompt even if the model does not support them. (#11111) 1 year ago
-LAN- cbb4e95928
fix(llm_node): Ignore user query when memory is disabled. (#11106) 1 year ago
-LAN- 20c091a5e7
fix: user query be ignored if query_prompt_template is an empty string (#11103) 1 year ago
-LAN- 60b5dac3ab
fix: query will be None if the query_prompt_template not exists (#11031)
Signed-off-by: -LAN- <laipz8200@outlook.com>
1 year ago
非法操作 08ac36812b
feat: support LLM process document file (#10966)
Co-authored-by: -LAN- <laipz8200@outlook.com>
1 year ago
-LAN- c5f7d650b5
feat: Allow using file variables directly in the LLM node and support more file types. (#10679)
Co-authored-by: Joel <iamjoel007@gmail.com>
1 year ago
非法操作 033ab5490b
feat: support LLM understand video (#9828) 1 year ago
-LAN- 38bca6731c
refactor(workflow): introduce specific error handling for LLM nodes (#10221) 1 year ago
-LAN- 8b5ea39916
chore(llm_node): remove unnecessary type ignore for context assignment (#10216) 1 year ago
-LAN- 3b53e06e0d
fix(workflow): refine variable type checks in LLMNode (#10051) 1 year ago
-LAN- eb87e690ed
fix(llm-node): handle NoneSegment variables properly (#9978) 1 year ago
-LAN- d018b32d0b
fix(workflow): enhance prompt handling with vision support (#9790) 1 year ago
-LAN- 8f670f31b8
refactor(variables): replace deprecated 'get_any' with 'get' method (#9584) 1 year ago
-LAN- 5838345f48
fix(entities): add validator for `VisionConfig` to handle None values (#9598) 1 year ago
-LAN- 2e657b7b12
fix(workflow): handle NoneSegments in variable extraction (#9585) 1 year ago
-LAN- c063617553
fix(workflow): improve database session handling and variable management (#9581) 1 year ago
-LAN- e61752bd3a
feat/enhance the multi-modal support (#8818) 1 year ago
Bowen Liang 40fb4d16ef
chore: refurbish Python code by applying refurb linter rules (#8296) 2 years ago
Jyong bb3002b173
revert page column (#8217) 2 years ago
Bowen Liang 2cf1187b32
chore(api/core): apply ruff reformatting (#7624) 2 years ago
takatost dabfd74622
feat: Parallel Execution of Nodes in Workflows (#8192)
Co-authored-by: StyleZhang <jasonapring2015@outlook.com>
Co-authored-by: Yi <yxiaoisme@gmail.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
2 years ago
Byeongjin Kang d489b8b3e0
feat: return page number of pdf documents upon retrieval (#7749) 2 years ago
Joe fee4d3f6ca
feat: ops trace add llm model (#7306) 2 years ago
orangeclk f53454f81d
add finish_reason to the LLM node output (#7498) 2 years ago
-LAN- 4f5f27cf2b
refactor(api/core/workflow/enums.py): Rename SystemVariable to SystemVariableKey. (#7445) 2 years ago
-LAN- 8f16165f92
chore(api/core): Improve FileVar's type hint and imports. (#7290) 2 years ago
-LAN- 32dc963556
feat(api/workflow): Add `Conversation.dialogue_count` (#7275) 2 years ago
-LAN- ad7552ea8d
fix(api/core/workflow/nodes/llm/llm_node.py): Fix LLM Node error. (#6576) 2 years ago
takatost 6b5fac3004
fix: fetch context error in llm node (#6562) 2 years ago
-LAN- 5e6fc58db3
Feat/environment variables in workflow (#6515)
Co-authored-by: JzoNg <jzongcode@gmail.com>
2 years ago
rerorero 3a423e8ce7
fix: vision model always with low quality (#5253) 2 years ago
Yeuoly 8578ee0864
feat: support LLM jinja2 template prompt (#3968)
Co-authored-by: Joel <iamjoel007@gmail.com>
2 years ago
takatost 12435774ca
feat: query prompt template support in chatflow (#3791)
Co-authored-by: Joel <iamjoel007@gmail.com>
2 years ago
takatost 3da179f77b
feat: add conversation_id and user_id in chatflow/workflow system vars (#3771)
Co-authored-by: Joel <iamjoel007@gmail.com>
2 years ago
takatost b890c11c14
feat: filter empty content messages in llm node (#3547) 2 years ago
takatost 1219e41d29
fix: array[string] context in llm node invalid (#3518) 2 years ago
takatost cfb5ccc7d3
fix: image was sent to an unsupported LLM when sending second message (#3268) 2 years ago