Vũ Hoàng Anh authored
- Add create_embeddings_async() to support the OpenAI batch embedding API
- Refactor data_retrieval_tool to batch embed all queries in ONE request
- Replace print() with logger.info() in product_search_helpers
- Remove visual_search checks (only text search is supported)

Performance: 5-10x faster for multi-search queries (300 ms vs 1.5 s for 5 queries)
Rate limit: saves RPM by batching multiple embeddings into a single API call
5748e55c
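The batched helper itself is not shown on this page; the following is a minimal sketch of what a `create_embeddings_async()` that embeds all queries in one request could look like, using the OpenAI Python client. The model name, signature, and example queries are assumptions for illustration, not taken from the codebase.

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

async def create_embeddings_async(
    texts: list[str],
    model: str = "text-embedding-3-small",  # assumed model, not from the repo
) -> list[list[float]]:
    """Embed all texts in a single batched API call instead of one call per query."""
    response = await client.embeddings.create(model=model, input=texts)
    # The API returns one embedding per input, in the same order as `texts`.
    return [item.embedding for item in response.data]

# Example: five search queries embedded with one request instead of five,
# which is where the RPM savings and latency improvement come from.
queries = ["red running shoes", "waterproof jacket", "wool socks",
           "trail backpack", "hiking poles"]
embeddings = asyncio.run(create_embeddings_async(queries))
```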
| Name | Last commit | Last update |
|---|---|---|
| __init__.py | | |
| brand_knowledge_tool.py | | |
| customer_info_tool.py | | |
| data_retrieval_tool.py | | |
| get_tools.py | | |
| product_search_helpers.py | | |
| save.py | | |