Commit a6ec88fc authored by Vũ Hoàng Anh

feat: Modernize Stock Cache Dashboard with premium product grid and enriched API

parent b9d8c77c
---
description: Backend Junk Cleanup loop - automatically scans for log files, scratch scripts, and temp output and groups them into the proper standard folders.
---
# Backend Junk Cleanup Workflow (Dọn Rác)
**CONCEPT:** This workflow tells the AI (like me) to automatically tidy up working directories, especially the `backend` folder, so that temp files, logs, and scratch scripts don't get scattered everywhere.
## STEPS
### 🧹 Step 1: Scan and locate the junk
Use the `list_dir` tool to inspect the root of `backend` (or the specified directory). Look for the following file types:
1. **Scratch Scripts:** Python files starting with `scratch_`, `test_`, `eval_`, `dump_`... (one-off throwaway test scripts).
2. **Output/Logs:** `*.json`, `*.txt` files emitted by the AI/Agent during testing (e.g. `response.json`, `audit_out.txt`, `log.txt`).
3. **Runners:** Driver scripts such as `run.txt`, `*.sh`, `*.ps1`.
### 📂 Step 2: Create the standard scripts folder structure
If it does not exist yet, create the following structure inside `backend/`:
- `scripts/`
- `scripts/scratch/` (holds test/scratch code)
- `scripts/outputs/` (holds junk log and JSON output files)
### 🚚 Step 3: Move the files
Use the `run_command` tool (with PowerShell) to clean up; a minimal Python sketch of the same logic appears at the end of this workflow:
1. **Move** every file in the **Scratch Scripts** group into `scripts/scratch/`.
2. **Move** every file in the **Output/Logs** group into `scripts/outputs/`.
3. **Move** all **Runners** into `scripts/` (unless one has to stay at the top level for Docker, in which case keep a copy there).
> [!WARNING] Don't mistake the good stuff for junk
> Never touch critical configuration files such as `.env`, `requirements.txt`, `server.py`, `worker.py`, or `config.py`.
### ✅ Step 4: Verify
Run `ls` or `list_dir` again in the root directory.
If only the server's core files remain, report: "🎉 Congrats bro, the house is spotless!"
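For reference, a minimal Python sketch of the move step (Step 3). This is a hedged example only: the workflow itself runs `run_command` with PowerShell, and the file patterns are taken from Step 1.
```python
# Hedged sketch: assumes the Step 1 patterns and the Step 2 folder layout.
from pathlib import Path
import shutil

BACKEND = Path("backend")
SCRATCH_PREFIXES = ("scratch_", "test_", "eval_", "dump_")
OUTPUT_SUFFIXES = {".json", ".txt"}
PROTECTED = {".env", "requirements.txt", "server.py", "worker.py", "config.py"}

def clean_backend() -> None:
    scratch_dir = BACKEND / "scripts" / "scratch"
    outputs_dir = BACKEND / "scripts" / "outputs"
    scratch_dir.mkdir(parents=True, exist_ok=True)
    outputs_dir.mkdir(parents=True, exist_ok=True)
    for f in BACKEND.iterdir():
        if not f.is_file() or f.name in PROTECTED:
            continue  # never touch protected config / entrypoint files
        if f.suffix == ".py" and f.name.startswith(SCRATCH_PREFIXES):
            shutil.move(str(f), str(scratch_dir / f.name))  # scratch / one-off test scripts
        elif f.suffix in OUTPUT_SUFFIXES:
            shutil.move(str(f), str(outputs_dir / f.name))  # junk logs and JSON output

if __name__ == "__main__":
    clean_backend()
```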
DONE
\ No newline at end of file
......@@ -48,26 +48,17 @@ class ClassifierLeadSearchArgs(BaseModel):
product_line_vn: list[str] = Field(
default=[],
description=(
"OPTIONAL - Khong con dung lam filter SQL. De [] la OK. "
"He thong dung keywords LIKE description_text thay the."
"BẮT BUỘC ĐIỀN nêú xác định được loại SP. Đây là bộ lọc SQL QUAN TRỌNG NHẤT. "
"Sử dụng các từ khóa chuẩn: 'Áo phông', 'Áo Polo', 'Áo sơ mi', 'Quần jean', 'Váy liền', 'Chân váy', 'Áo khoác', 'Áo len', 'Áo nỉ', 'Áo giữ nhiệt', 'Bộ mặc nhà', 'Quần Khaki', 'Quần dài', 'Bộ thể thao', 'Áo kiểu'..."
),
)
keywords: list[str] = Field(
default=[],
description=(
"Tu khoa TRA CUU trong description_text cua san pham. MAX 3 cum tu. "
"NGUYEN TAC 1 — GIAT NGUYEN VAN: Lay CHINH XAC tu khach noi. "
" VD: khach noi 'qua chuoi' -> keywords=['qua chuoi']. "
" VD: khach noi 'hinh in stitch' -> keywords=['stitch']. "
" VD: khach noi 'cotton ong rong' -> keywords=['cotton', 'ong rong']. "
"NGUYEN TAC 2 — TUYET DOI CAM paraphrase: KHONG duoc 'dich' hay 'dien giai' tu cua khach. "
" SAI: 'qua chuoi' -> ['in hinh', 'hinh noi bat', 'trang tri']. "
" DUNG: 'qua chuoi' -> ['qua chuoi']. "
"NGUYEN TAC 3 — TUYET DOI CAM mau sac: mau co truong master_color rieng! "
" SAI: 'ao do' -> keywords=['do']. DUNG: master_color='do', keywords=[]. "
"Chi dua keywords khi khach co: chat lieu (interlock, cotton, denim, len), "
"phong cach (oversize, ong rong, bo sat, suong), dip (dam cuoi, di bien, di hoc), "
"hinh in / hoa tiet cu the (qua chuoi, stitch, caro, ke soc, hoa, xuat xu...)."
"Các từ khóa bổ trợ tìm kiếm trong mô tả sản phẩm (chất liệu, dịp, tính năng). MAX 3 cụm. "
"KHÔNG ĐƯA loại sản phẩm (áo, quần) vào đây nếu đã điền product_line_vn. "
"KHÔNG ĐƯA màu sắc vào đây. "
"VD: 'áo phông cotton mát' -> product_line_vn=['Áo phông'], keywords=['cotton', 'thoáng mát']."
),
)
gender_by_product: str | None = Field(
......@@ -93,7 +84,7 @@ class ClassifierOutput(BaseModel):
reasoning: str = Field(description="Ly luan goi tool hay tra loi truc tiep")
# ── TH 1: GOI TOOL ──
tool_name: str | None = Field(description="Ten tool (lead_search_tool). Bat buoc de null neu khach chi dang tam su/chao hoi/ko can DB.")
tool_name: str | None = Field(default=None, description="Ten tool (lead_search_tool). Bat buoc de null neu khach chi dang tam su/chao hoi/ko can DB.")
lead_search_args: ClassifierLeadSearchArgs | None = Field(
default=None,
description=(
......@@ -104,7 +95,7 @@ class ClassifierOutput(BaseModel):
tool_args: dict | None = Field(default=None, description="Args cho cac tool khac (knowledge_search, check_is_stock, v.v.). Null neu dung lead_search_tool.")
# ── TH 2: EARLY EXIT ──
ai_response: str | None = Field(description="Cau tra loi cho khach. CHI nha ra khi tool_name=null. Neu goi tool, bat buoc de null.")
ai_response: str | None = Field(default=None, description="Cau tra loi cho khach. CHI nha ra khi tool_name=null. Neu goi tool, bat buoc de null.")
product_ids: list[str] = Field(default_factory=list, description="Ma SKU (neu khach hoi trung ma).")
......
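# Illustration (not part of the diff): the two mutually exclusive ClassifierOutput
# shapes that the new default=None values allow. The import path is an assumption,
# and the sketch assumes the remaining filter fields default to None/[].
from agent.lead_stage_agent.classifier_schema import ClassifierOutput, ClassifierLeadSearchArgs

# Case 1 - customer asks for a product: call the tool, leave ai_response null.
tool_call = ClassifierOutput(
    reasoning="Customer wants a breathable cotton t-shirt -> needs a DB search.",
    tool_name="lead_search_tool",
    lead_search_args=ClassifierLeadSearchArgs(
        product_line_vn=["Áo phông"],
        keywords=["cotton", "thoáng mát"],
    ),
)

# Case 2 - small talk only: answer directly, leave tool_name null.
early_exit = ClassifierOutput(
    reasoning="Greeting only, no DB lookup needed.",
    ai_response="Chào bạn! Mình có thể giúp gì cho bạn hôm nay?",
)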
......@@ -107,8 +107,9 @@ class LeadSearchInput(BaseModel):
product_line_vn: list[str] = Field(
default=[],
description=(
"KO CON DUNG LAM FILTER SQL nua. De trong [] la OK. "
"He thong dung keywords LIKE description_text thay the — description da chua ten loai SP roi."
"Danh muc san pham chuẩn (VD: 'Áo phông', 'Áo Polo', 'Váy liền'...). "
"LUON UU TIEN dien neu xac dinh duoc loai san pham khach muon. "
"Dung de filter chinh xac theo cot product_line_vn trong DB."
),
)
gender_by_product: str | None = Field(
......@@ -169,28 +170,46 @@ class LeadSearchInput(BaseModel):
# SQL Builder - Hard Filters
# ======================================================
def _build_fixed_clauses(req: LeadSearchInput, params: list) -> list[str]:
"""Build cac filter CO DINH (luon giu trong moi tang)."""
"""Build các filter CỐ ĐỊNH (luôn giữ trong mọi tầng)."""
clauses = []
# product_line_vn removed — description_text already contains product type info
# Keywords LIKE on description_text handles category matching naturally
# Product Line VN
if req.product_line_vn:
lines = []
for line in req.product_line_vn:
if not line: continue
# Resolve synonyms (e.g. "áo polo" -> "Áo Polo")
from .product_mapping import resolve_product_line
resolved = resolve_product_line(line)
for r in resolved:
# Expand related lines (e.g. "Áo lót" -> ["Áo lót", "Áo bra active"])
expanded = get_related_lines(r)
lines.extend(expanded)
if lines:
placeholders = ", ".join(["%s"] * len(lines))
params.extend(lines)
clauses.append(f"product_line_vn IN ({placeholders})")
# Gender
if req.gender_by_product:
gender_lower = req.gender_by_product.lower().strip()
if gender_lower in ("women", "nu", "female"):
gender_db = "female"
genders_to_search = []
if gender_lower in ("women", "nu", "female", "nữ"):
genders_to_search = ["female", "women", "nu", "nữ", "unisex"]
elif gender_lower in ("men", "nam", "male"):
gender_db = "male"
else:
gender_db = gender_lower
if gender_db in ("female", "male"):
params.append(gender_db)
params.append("unisex")
clauses.append("gender_by_product IN (%s, %s)")
genders_to_search = ["male", "men", "nam", "unisex"]
elif gender_lower in ("boy", "bé trai", "be trai"):
genders_to_search = ["boy", "bé trai", "be trai", "unisex"]
elif gender_lower in ("girl", "bé gái", "be gai"):
genders_to_search = ["girl", "bé gái", "be gai", "unisex"]
else:
params.append(gender_db)
clauses.append("gender_by_product = %s")
genders_to_search = [gender_lower, "unisex"]
placeholders = ", ".join(["%s"] * len(genders_to_search))
params.extend(genders_to_search)
clauses.append(f"gender_by_product IN ({placeholders})")
# Age
if req.age_by_product:
......@@ -294,7 +313,7 @@ def _build_fixed_clauses(req: LeadSearchInput, params: list) -> list[str]:
# Tier 1: Keywords LIKE search on description_text
# ======================================================
def _build_keyword_clause(keywords: list[str], params: list) -> str | None:
"""Build menh de LIKE cho tung keyword, OR voi nhau."""
"""Build menh de LIKE cho tung keyword, AND voi nhau de tang do chinh xac."""
if not keywords:
return None
parts = []
......@@ -308,14 +327,40 @@ def _build_keyword_clause(keywords: list[str], params: list) -> str | None:
parts.append("(LOWER(description_text) LIKE LOWER(%s) OR LOWER(description_text_full) LIKE LOWER(%s))")
if not parts:
return None
# Dùng OR thay vì AND để tăng độ phủ (Recall) cho tìm kiếm từ khóa
return "(" + " OR ".join(parts) + ")"
def _build_full_query(fixed_clauses: list[str], keyword_clause: str | None) -> str:
"""Lap rap cau SELECT hoan chinh tu fixed + keyword clauses."""
def _build_exclusion_clauses(keywords: list[str], params: list) -> list[str]:
"""Build các filter LOẠI TRỪ (Negative filters) dựa trên từ khóa."""
clauses = []
kws_str = " ".join(keywords).lower()
# 1. Mùa đông/Lạnh -> Loại bỏ đồ hè hở hang
if any(k in kws_str for k in ["đông", "lạnh", "winter", "tuyết", "giữ ấm"]):
# Loại trừ Quần soóc, Áo ba lỗ
clauses.append("product_line_vn NOT IN (%s, %s)")
params.extend(["Quần soóc", "Áo ba lỗ"])
# 2. Đi làm/Công sở -> Loại bỏ đồ hoạt hình/manga nếu có thể
if any(k in kws_str for k in ["đi làm", "công sở", "văn phòng", "office"]):
# Loại trừ các mô tả chứa cartoon/manga/anime
# Dùng LIKE NOT để lọc bớt
forbidden = ["cartoon", "hoạt hình", "manga", "anime", "demon slayer", "naruto", "disney", "marvel"]
for f in forbidden:
clauses.append("LOWER(description_text) NOT LIKE %s")
params.append(f"%{f}%")
return clauses
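# Illustration (not part of the diff): for keywords like ["đi làm", "lịch sự"], the
# office branch above fires and emits 8 "LOWER(description_text) NOT LIKE %s" clauses
# with params ["%cartoon%", "%hoạt hình%", "%manga%", "%anime%", "%demon slayer%",
# "%naruto%", "%disney%", "%marvel%"]. Callers append these after the fixed and
# keyword clauses (and their params) so every %s placeholder lines up.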
def _build_full_query(fixed_clauses: list[str], keyword_clause: str | None, exclusion_clauses: list[str] = None) -> str:
"""Lắp ráp câu SELECT hoàn chỉnh từ fixed + keyword + exclusion clauses."""
all_clauses = fixed_clauses[:]
if keyword_clause:
all_clauses.append(keyword_clause)
if exclusion_clauses:
all_clauses.extend(exclusion_clauses)
where_sql = ""
if all_clauses:
......@@ -347,10 +392,10 @@ def _build_full_query(fixed_clauses: list[str], keyword_clause: str | None) -> s
suggest_items,
similar_items
FROM {TABLE_NAME}
{{where_sql}}
{where_sql}
ORDER BY quantity_sold DESC NULLS LAST
LIMIT 20
""".format(where_sql=where_sql)
"""
# ======================================================
......@@ -362,17 +407,22 @@ async def _embedding_search_with_vec(
fixed_clauses: list[str],
fixed_params: list,
db,
exclusion_clauses: list[str] = None,
) -> list:
"""
Nhan vector da compute -> approx_cosine_similarity tren StarRocks.
Nhận vector đã compute -> approx_cosine_similarity trên StarRocks.
CTE: vector_matches(top200) -> filtered(hard) -> GROUP BY dedup -> top 20.
"""
t0 = time.time()
vec_literal = "[" + ",".join(str(v) for v in query_vec) + "]"
post_filter_clauses = fixed_clauses[:]
if exclusion_clauses:
post_filter_clauses.extend(exclusion_clauses)
post_filter_where = ""
if fixed_clauses:
post_filter_where = " WHERE " + " AND ".join(fixed_clauses)
if post_filter_clauses:
post_filter_where = " WHERE " + " AND ".join(post_filter_clauses)
sql = f"""
WITH vector_matches AS (
......@@ -509,6 +559,10 @@ async def _cascading_search(
except Exception:
_lf = None
# ---- Prep Negative Filters (Exclusions) ----
ex_params = []
exclusions = _build_exclusion_clauses(req.keywords, ex_params)
# ---- Tang 1: CO DINH + KEYWORDS LIKE ----
if req.keywords:
t1 = time.time()
......@@ -516,8 +570,10 @@ async def _cascading_search(
fixed = _build_fixed_clauses(req, params)
search = _build_keyword_clause(req.keywords, params)
if search:
sql = _build_full_query(fixed, search)
products = await db.execute_query_async(sql, params=tuple(params))
# Combine params: fixed + search + exclusions
all_params = params + ex_params
sql = _build_full_query(fixed, search, exclusions)
products = await db.execute_query_async(sql, params=tuple(all_params))
kw_ms = round((time.time() - t1) * 1000, 1)
search_details["keyword_query"] = req.keywords
search_details["keyword_time_ms"] = kw_ms
......@@ -557,7 +613,9 @@ async def _cascading_search(
t2b = time.time()
params = []
fixed = _build_fixed_clauses(req, params)
products = await _embedding_search_with_vec(query_vec, embed_query, fixed, params, db)
# Combine params for embedding search
all_params = params + ex_params
products = await _embedding_search_with_vec(query_vec, embed_query, fixed, all_params, db, exclusions)
sr_ms = round((time.time() - t2b) * 1000, 1)
embed_total = round((time.time() - t2) * 1000, 1)
......@@ -592,8 +650,18 @@ async def _cascading_search(
# ---- Tang 3: CHI CO DINH ----
params = []
fixed = _build_fixed_clauses(req, params)
sql = _build_full_query(fixed, None)
products = await db.execute_query_async(sql, params=tuple(params))
# Neu khong co filter cung (line, gender, color, price) thi Tier 3 KHONG tra gi ca
# de tranh tra ve best-seller toan cong ty khong lien quan.
has_hard_filter = bool(req.product_line_vn or req.gender_by_product or req.master_color or req.price_max)
if not has_hard_filter:
logger.info("[SEARCH][TIER-3] SKIPPED - no hard filters")
products = []
else:
all_params = params + ex_params
sql = _build_full_query(fixed, None, exclusions)
products = await db.execute_query_async(sql, params=tuple(all_params))
logger.info(
"[SEARCH][TIER-3 HARD-FILTERS] filters={line=%s gender=%s age=%s color=%s price=%s~%s disc=%s~%s size=%s mode=%s} -> %d results",
req.product_line_vn, req.gender_by_product, req.age_by_product,
......
......@@ -42,7 +42,6 @@ async def fetch_stock_from_canifa(skus: list[str], timeout: float = 5.0) -> dict
if not skus:
return {}
# API chấp nhận nhiều SKU cách nhau bằng dấu phẩy
sku_string = ",".join(skus)
url = f"{CANIFA_STOCK_API}?skus={sku_string}"
......@@ -53,6 +52,21 @@ async def fetch_stock_from_canifa(skus: list[str], timeout: float = 5.0) -> dict
data = resp.json()
logger.info(f"📦 Stock API response: {len(data.get('result', []))} variants")
# Save fetched data to redis right away for background cache consistency
try:
from common.cache import redis_cache
r_client = redis_cache.get_client()
if r_client and "result" in data and isinstance(data["result"], list):
pipe = r_client.pipeline()
for item in data["result"]:
sku = item.get("sku") or item.get("product_color_code", "")
if sku:
pipe.setex(f"stock:{sku}", 300, json.dumps(item, ensure_ascii=False))
await pipe.execute()
except Exception as e:
logger.error(f"Failed to background push missed cache to redis: {e}")
return data
except httpx.TimeoutException:
......@@ -78,9 +92,35 @@ async def check_is_stock(skus: str) -> str:
if not sku_list:
return json.dumps({"error": "Không có mã sản phẩm hợp lệ"})
# Fetch stock from Canifa API - return raw response
stock_data = await fetch_stock_from_canifa(sku_list)
cached_results = []
missing_skus = []
try:
from common.cache import redis_cache
client = redis_cache.get_client()
if client:
for sku in sku_list:
item_str = await client.get(f"stock:{sku}")
if item_str:
cached_results.append(json.loads(item_str))
else:
missing_skus.append(sku)
else:
missing_skus = sku_list
except Exception as e:
logger.error(f"Cache check error: {e}")
missing_skus = sku_list
if missing_skus:
# Lấy từ API cho các mã còn thiếu
stock_data = await fetch_stock_from_canifa(missing_skus)
# Gộp chung vào nếu có
if "result" in stock_data and isinstance(stock_data["result"], list):
cached_results.extend(stock_data["result"])
# Trả về format chung giống hệt raw API để AI không bị ảnh hưởng logic
final_data = {"result": cached_results, "source": "redis_cached" if not missing_skus else "mixed"}
return json.dumps(stock_data, ensure_ascii=False)
return json.dumps(final_data, ensure_ascii=False)
import asyncio
import json
import logging
from typing import Optional
from fastapi import APIRouter, Query
from fastapi.responses import StreamingResponse, JSONResponse
from worker.stylist_engine import StylistEngine
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/fashion-matches/simulator", tags=["Fashion Matches Simulator"])
@router.get("/search")
async def search_product(q: str = Query(..., description="Mã SP hoặc Tên SP", min_length=2)):
"""API hỗ trợ tìm kiếm nhanh Product Code cho Simulator"""
try:
engine = StylistEngine()
catalog = engine._get_catalog()
q_lower = q.lower().strip()
results = []
for p in catalog:
code = p.get("code", "").lower()
ref = p.get("internal_ref_code", "").lower()
name = p.get("name", "").lower()
# Simple match logic
if q_lower in code or q_lower in ref or q_lower in name:
results.append({
"code": p.get("code"),
"name": p.get("name"),
"image": p.get("image", ""),
"color": p.get("color", "")
})
if len(results) >= 10:
break
return {"ok": True, "data": results}
except Exception as e:
logger.error("[Simulator] Search error: %s", e)
return JSONResponse({"ok": False, "error": str(e)}, status_code=500)
@router.get("/stream")
async def stream_flow(code: str = Query(..., description="Product code to simulate")):
"""SSE Endpoint cho Realtime Live Simulator"""
async def event_generator():
try:
# --- BƯỚC 1: INIT ---
msg = json.dumps({
"step": 1,
"node": "init",
"status": f"🔧 Khởi chạy Cỗ máy StylistEngine. Mã SP đưa vào: {code}..."
}, ensure_ascii=False)
yield f"data: {msg}\n\n"
await asyncio.sleep(0.8) # Dramatic delay
engine = StylistEngine()
catalog = engine._get_catalog()
source = next((p for p in catalog if p["code"] == code), None)
if not source:
yield f"data: {json.dumps({'error': True, 'status': '❌ Không tìm thấy mã sản phẩm trong Catalog!'})}\n\n"
return
# --- BƯỚC 2: PHÂN TÍCH SP GỐC ---
msg = json.dumps({
"step": 2,
"node": "fetch_product",
"status": f"🔍 Bóc tách SP gốc: {source.get('name')} (Màu: {source.get('color')} / Giới tính: {source.get('gender')})",
"payload": {
"code": source.get("code"),
"name": source.get("name"),
"image": source.get("image"),
"color": source.get("color"),
"category": source.get("product_line")
}
}, ensure_ascii=False)
yield f"data: {msg}\n\n"
await asyncio.sleep(1.0)
# --- BƯỚC 3: KÉO LUẬT DB ---
anchor_cat = source.get("product_line", "")
gender = source.get("gender", "")
db_rules = engine._fetch_rules_with_reason(anchor_cat, gender)
rules_count = len(db_rules)
if rules_count == 0:
status_rules = f"⚠️ Không có luật DB từ [chatbot_fashion_rules] cho '{anchor_cat}'. Rơi vào Fallback Rules."
else:
status_rules = f"📂 Khớp thành công {rules_count} luật (Rules) phối đồ theo Dịp (Occasion) & Role từ Database."
msg = json.dumps({
"step": 3,
"node": "fetch_rules",
"status": status_rules,
"payload": {"rules_count": rules_count}
}, ensure_ascii=False)
yield f"data: {msg}\n\n"
await asyncio.sleep(1.2)
# --- BƯỚC 4: CHẤM ĐIỂM ---
msg = json.dumps({
"step": 4,
"node": "scoring",
"status": "🧮 Khởi động Scoring Engine: Tính điểm Color Synergy (30đ), Material (10đ), Occasion (20đ) cho toàn bộ Catalog..."
}, ensure_ascii=False)
yield f"data: {msg}\n\n"
# (Run the heavy lifting here)
ai_matches = engine.compute_dynamic_rule_matches(code)
await asyncio.sleep(1.5)
# --- BƯỚC 5: HẬU KỲ VÀ PHẨN LOẠI MỞ RỘNG (DEDUPLICATE / SQL CLASSIFICATIONS) ---
msg = json.dumps({
"step": 5,
"node": "dedup",
"status": "📋 Sàng lọc (Deduplication): Loại bỏ kết quả trùng, lấy Top 3 cho mỗi Role (Top/Bottom/Khoác)... Đồng thời nạp Data Phân Loại mở rộng SQL."
}, ensure_ascii=False)
yield f"data: {msg}\n\n"
classifications = engine.compute_super_classifications_sql(code)
await asyncio.sleep(0.8)
# --- BƯỚC 6: FINISH ---
msg = json.dumps({
"step": 6,
"node": "complete",
"status": "✅ HOÀN TẤT! Đẩy khung Outfit JSON ra giao diện.",
"payload": {
"ai_matches": ai_matches,
"classifications": classifications
}
}, ensure_ascii=False)
yield f"data: {msg}\n\n"
except Exception as e:
logger.error("[Simulator] Generator error: %s", e)
msg = json.dumps({"error": True, "status": f"❌ Lỗi Engine: {str(e)}"}, ensure_ascii=False)
yield f"data: {msg}\n\n"
return StreamingResponse(event_generator(), media_type="text/event-stream")
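# A minimal client sketch (not part of the diff) for consuming the simulator SSE stream.
# Base URL/port are assumptions; adjust to wherever the FastAPI app is served.
import json
import httpx

def follow_simulation(code: str, base_url: str = "http://127.0.0.1:5000") -> None:
    url = f"{base_url}/api/fashion-matches/simulator/stream"
    with httpx.stream("GET", url, params={"code": code}, timeout=None) as resp:
        for line in resp.iter_lines():
            if not line.startswith("data: "):
                continue  # ignore blank keep-alive lines
            event = json.loads(line[len("data: "):])
            print(event.get("step"), event.get("node"), event.get("status"))
            if event.get("node") == "complete" or event.get("error"):
                break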
import json
from fastapi import APIRouter, HTTPException
import httpx
import logging
from common.cache import redis_cache
from common.sqlite_db import sqlite_db
router = APIRouter(prefix="/api/stock-cache", tags=["Stock Cache"])
logger = logging.getLogger(__name__)
# Fake list of SKUs for testing
TEST_SKUS = [
"6TW25W005", "6TW25W005-SB118", "8TE24A003", "8TE24A003-SW001",
"6TS24W006", "6DS24W002", "8BA23W001", "8TW23A004", "1TW23W001"
]
@router.post("/sync")
async def sync_stock(skus: list[str] = None):
"""
Worker route to fetch stock from Middleware and push to Redis.
In real usage, this might be triggered by a Cron job and fetch 2000 SKUs.
For this demo, it takes chunks of 100 SKUs safely.
"""
if not skus:
try:
# Fetch real SKUs from the SQLite database
rows = sqlite_db.fetch_all("SELECT DISTINCT product_color_code FROM sr__test_db__magento_product_dimension_with_text_embedding WHERE product_color_code IS NOT NULL LIMIT 2000")
if rows:
skus = [r["product_color_code"] for r in rows]
else:
skus = TEST_SKUS
except Exception as e:
logger.error(f"Failed to fetch SKUs from local DB: {e}")
skus = TEST_SKUS
client = redis_cache.get_client()
if not client:
raise HTTPException(status_code=500, detail="Redis is not enabled or not connected")
chunk_size = 100
total_synced = 0
errors = []
async with httpx.AsyncClient(timeout=10.0) as http_client:
for i in range(0, len(skus), chunk_size):
chunk = skus[i:i + chunk_size]
csv_skus = ",".join(chunk)
try:
url = "https://canifa.com/v1/middleware/stock_get_stock_list_parent"
# API expects GET ?skus=xxx
response = await http_client.get(url, params={"skus": csv_skus})
response.raise_for_status()
data = response.json()
# Assumed CANIFA API structure: {"result": [{"sku": "xxxx", "quantity": 10}, ...]}
pipe = client.pipeline()
if isinstance(data, dict) and "result" in data and isinstance(data["result"], list):
for item in data["result"]:
sku = item.get("sku") or item.get("product_color_code", "")
if sku:
# Save the entire dictionary as a JSON string so check_is_stock can read it directly
pipe.setex(f"stock:{sku}", 300, json.dumps(item, ensure_ascii=False))
total_synced += 1
# Executing pipeline
if total_synced > 0:
await pipe.execute()
except httpx.HTTPStatusError as e:
msg = f"HTTP error {e.response.status_code} fetching chunk {i}: {e.response.text[:200]}"
logger.error(msg)
errors.append(msg)
except httpx.ConnectError as e:
msg = f"Connection error fetching chunk {i}: {repr(e)}"
logger.error(msg)
errors.append(msg)
except httpx.TimeoutException:
msg = f"Timeout fetching chunk {i}"
logger.error(msg)
errors.append(msg)
except Exception as e:
msg = f"Unexpected error fetching chunk {i}: {repr(e)}"
logger.error(msg)
errors.append(msg)
return {
"message": "Sync completed",
"total_skus_processed": min(total_synced, len(skus)), # roughly
"errors": errors
}
@router.get("/status")
async def get_test_stock(sku: str):
"""
Check stock of a specific SKU from Redis (Ultra fast 1ms).
"""
client = redis_cache.get_client()
if not client:
raise HTTPException(status_code=500, detail="Redis is not connected")
val = await client.get(f"stock:{sku}")
if val is None:
return {"sku": sku, "cached": False, "status": 0, "details": None}
try:
data = json.loads(val)
status = data.get("status", 0)
return {"sku": sku, "cached": True, "status": status, "details": data}
except Exception:
# Fallback if old string data is cached
return {"sku": sku, "cached": True, "status": val, "details": None}
@router.get("/keys")
async def get_all_cached_keys():
client = redis_cache.get_client()
if not client:
return {"total": 0, "keys": []}
keys = await client.keys("stock:*")
return {"total": len(keys), "keys": keys}
@router.get("/list")
async def list_stock_cache(limit: int = 500):
"""
List all cached stock data using MGET for performance,
enriched with metadata from SQLite DB.
"""
client = redis_cache.get_client()
if not client:
return []
keys = await client.keys("stock:*")
if not keys:
return []
# Sort keys for consistent display
keys.sort()
# Limit for safety
keys = keys[:limit]
values = await client.mget(keys)
# Extract SKUs to fetch metadata from SQLite in one batch
skus = [key.replace("stock:", "") for key in keys]
# Fetch metadata from SQLite
metadata_map = {}
if skus:
# We search by product_color_code or internal_ref_code
# Using a list of SKUs in a single query
placeholders = ",".join(["?"] * len(skus))
query = f"""
SELECT product_color_code, product_name, product_image_url_thumbnail, sale_price, original_price, product_line_vn
FROM sr__test_db__magento_product_dimension_with_text_embedding
WHERE product_color_code IN ({placeholders})
"""
try:
from common.sqlite_db import sqlite_db
rows = sqlite_db.fetch_all(query, tuple(skus))
for row in rows:
metadata_map[row['product_color_code']] = dict(row)
except Exception as e:
logger.error(f"Error fetching product metadata: {e}")
result = []
for key, val in zip(keys, values):
sku = key.replace("stock:", "")
meta = metadata_map.get(sku, {})
try:
data = json.loads(val) if val else {}
# Normalize structure for frontend
result.append({
"sku": sku,
"status": data.get("status", "0"),
"total_quantity": data.get("total_quantity", 0),
"last_sync": data.get("sync_time", "N/A"),
"product_name": meta.get("product_name") or sku,
"image_url": meta.get("product_image_url_thumbnail"),
"price": meta.get("sale_price"),
"original_price": meta.get("original_price"),
"product_line": meta.get("product_line_vn"),
"raw": data
})
except Exception:
result.append({
"sku": sku,
"status": val,
"total_quantity": 0,
"last_sync": "Legacy",
"product_name": meta.get("product_name") or sku,
"image_url": meta.get("product_image_url_thumbnail"),
"price": meta.get("sale_price"),
"raw": {}
})
return result
import asyncio
async def stock_cache_worker_loop():
"""
Background worker to automatically sync stock to Redis every 3 minutes.
"""
while True:
try:
logger.info("🔄 [Cron] Bắt đầu tự động đồng bộ Stock Cache (mỗi 3 phút)...")
# Fetch all SKUs or a specific hot list from your Database here.
# Currently falling back to sync_stock() with TEST_SKUS list.
result = await sync_stock(None)
logger.info(f"✅ [Cron] Đồng bộ Stock Cache xong! Số mã đã xử lý: {result['total_skus_processed']}")
except asyncio.CancelledError:
logger.info("🛑 [Cron] Tắt cron job đồng bộ Stock Cache")
break
except Exception as e:
logger.error(f"❌ [Cron] Lỗi đồng bộ Stock Cache: {e}")
# Chạy lại sau mỗi 3 phút (180 giây)
await asyncio.sleep(180)
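# Hypothetical wiring sketch (not part of the diff): start the cron loop from the app's
# lifespan so it is created on startup and cancelled on shutdown. The app object and
# module layout are assumptions.
from contextlib import asynccontextmanager, suppress
from fastapi import FastAPI

@asynccontextmanager
async def lifespan(app: FastAPI):
    task = asyncio.create_task(stock_cache_worker_loop())
    try:
        yield
    finally:
        task.cancel()
        with suppress(asyncio.CancelledError):
            await task  # lets the CancelledError branch in the loop log and exit

app = FastAPI(lifespan=lifespan)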
Binary files a/backend/audit_out.txt and /dev/null differ
#!/usr/bin/env python
"""
Batch test: tạo catalog nhỏ từ DB và chạy _compute_matches cho 1 sản phẩm.
"""
import sys, os, json, sqlite3
# MUST set before importing engine
os.environ['USE_LOCAL_SQLITE'] = 'True'
sys.path.insert(0, os.path.dirname(__file__))
if hasattr(sys.stdout, 'reconfigure'): sys.stdout.reconfigure(encoding='utf-8')
from worker.stylist_engine import StylistEngine
print("=== BATCH TEST (Simplified) ===\n")
engine = StylistEngine()
print(f"[+] Engine loaded. Rules: {len(engine.rules['occasions'])} occasions, weights: {engine.rules['score_weights']}")
catalog = engine._get_catalog()
print(f"[+] Catalog size: {len(catalog)} products")
if len(catalog) == 0:
print("[!] Catalog empty - check DB connections (StarRocks/Postgres)")
sys.exit(1)
# Get valid anchor categories from SQLite DB (seeded rules)
import sqlite3
conn_sqlite = sqlite3.connect('database/canifa_ai_dump.sqlite')
cur_sqlite = conn_sqlite.cursor()
cur_sqlite.execute("SELECT DISTINCT anchor_category FROM chatbot_fashion_rules")
valid_anchors = {row[0].lower() for row in cur_sqlite.fetchall()}
cur_sqlite.close()
conn_sqlite.close()
print(f"[+] Valid anchor categories from DB: {len(valid_anchors)} categories")
test_product = None
for p in catalog:
pl = p.get("product_line", "").lower()
if pl in valid_anchors:
test_product = p
break
if not test_product:
print("[!] No catalog product matches any anchored product line in DB rules")
print("[!] Available anchors:", valid_anchors)
sys.exit(1)
print(f"[+] Testing with product code: {test_product['code']}")
print(f" Product line: {test_product['product_line']}")
print(f" Color: {test_product['color']}")
print(f" Gender: {test_product['gender']}")
print("\n[+] Running _compute_matches...")
matches = engine._compute_matches(test_product, catalog[:50])
print(f"\n[+] Result: {len(matches)} occasions generated")
for occ, roles in matches.items():
print(f"\n Occasion: {occ}")
for role, items in roles.items():
print(f" {role}: {len(items)} items")
for item in items[:2]:
print(f" - {item['code']} ({item['product_line']}) score={item['score']}")
print("\n[+] Batch test completed successfully!")
import asyncio
from database.postgres_pool import pool_wrapper
async def f():
await pool_wrapper.init_all()
rows = await pool_wrapper.execute_query_async("SELECT DISTINCT anchor_category FROM dashboard_canifa.chatbot_fashion_rules")
print([r['anchor_category'] for r in rows])
await pool_wrapper.close_all()
asyncio.run(f())
......@@ -11,27 +11,8 @@ def translate_query(query: str) -> str:
Dịch câu SQL đặc thù của Postgres/StarRocks sang dạng mà SQLite hiểu được.
"""
# 1. Thay thế Placeholder của Postgres/StarRocks (%s) thành của SQLite (?)
# Chỉ thay thế %s nếu nó KHÔNG nằm trong một chuỗi LIKE (ví dụ '%sơ mi%')
# Một cách đơn giản là chỉ thay %s nếu nó đứng độc lập hoặc theo sau bởi dấu phẩy/ngoặc
q = re.sub(r'(?<!%)\b%s\b(?!%)', '?', query)
# Tuy nhiên %s trong psycopg thường không phải là word boundary.
# Thử cách khác: chỉ thay %s nếu nó KHÔNG đứng cạnh ký tự chữ/số nào trừ khi là chính nó?
# Thực tế trong project này %s thường dùng làm placeholder cho tham số.
# Cách an toàn nhất cho Mock là replace %s nếu nó đứng sau dấu cách hoặc dấu mở ngoặc.
q = re.sub(r'([ \(\,])%s([ \)\,])', r'\1?\2', query)
# Xử lý trường hợp %s ở cuối câu
q = re.sub(r'([ \(\,])%s$', r'\1?', q)
# Nếu vẫn còn lỗi, ta có thể dùng một cách thủ công hơn cho dự án này:
# Nếu query có dùng LIKE, ta tạm thời không replace %s bừa bãi.
if "LIKE" in query.upper() and "%s" in query:
# Nếu có LIKE, khả năng cao %s là placeholder CỦA LIKE pattern nếu nó đứng cạnh %
# Nhưng ở đây %s là placeholder của psycopg.
# TRICK: TRong SQL query của stylist_engine, ta không dùng params cho LIKE pattern %s.
# Ta dùng f-string. Vậy %s CHẮC CHẮN là pattern hoặc lỗi.
pass
else:
q = q.replace("%s", "?")
# Dùng regex bao quát hơn: %s không đứng cạnh % khác
q = re.sub(r'(?<!%)%s(?!%)', '?', query)
# 2. Thay thế Postgres Schema (dashboard_canifa)
q = re.sub(r'"dashboard_canifa"\."([a-zA-Z0-9_]+)"', r'pg__dashboard_canifa__\1', q)
......@@ -46,18 +27,37 @@ def translate_query(query: str) -> str:
q = re.sub(r'public\.([a-zA-Z0-9_]+)', r'pg__public__\1', q)
# 3. Thay thế cấu trúc của StarRocks
# Mẫu query thường gặp: shared_source.magento_product_dimension_with_text_embedding
# Hoặc test_db.magento_product_...
# Hoặc magento_product_...
q = re.sub(r'([a-zA-Z0-9_]+\.)?`?magento_product_dimension_with_text_embedding`?', r'sr__test_db__magento_product_dimension_with_text_embedding', q)
# 4. Vá các ngữ pháp (Dialect) bị lệch pha
# ANY_VALUE(col) -> MAX(col)
q = re.sub(r'ANY_VALUE\s*\(', r'MAX(', q, flags=re.IGNORECASE)
# MAX_BY(col, score) -> MAX(col)
q = re.sub(r'MAX_BY\s*\(\s*([^,]+)\s*,\s*[^)]+\)', r'MAX(\1)', q, flags=re.IGNORECASE)
# IF(cond, true, false) -> IIF(cond, true, false) của SQLite
# Lưu ý: Tìm chữ IF đứng đầu cẩn thận dính chuỗi
q = re.sub(r'\bIF\s*\(', r'IIF(', q, flags=re.IGNORECASE)
# 5. Handle StarRocks vector similarity
# approx_cosine_similarity(vector, [0.1, ...]) -> 0.5 (dummy)
q = re.sub(r'approx_cosine_similarity\s*\(\s*vector\s*,\s*\[[^\]]+\]\s*\)', r'0.5', q, flags=re.IGNORECASE)
# 6. Handle ORDER BY ... NULLS LAST
q = re.sub(r'NULLS\s+LAST', '', q, flags=re.IGNORECASE)
# 7. Strip PostgreSQL type casts (::jsonb, ::text, ::int, ::timestamp, etc.)
q = re.sub(r'::[a-zA-Z_]+(?:\[\])?', '', q)
# 8. NOW() → CURRENT_TIMESTAMP (SQLite không có NOW())
q = re.sub(r'\bNOW\s*\(\s*\)', 'CURRENT_TIMESTAMP', q, flags=re.IGNORECASE)
# 9. ILIKE → LIKE (SQLite LIKE mặc định case-insensitive cho ASCII)
q = re.sub(r'\bILIKE\b', 'LIKE', q, flags=re.IGNORECASE)
# 10. COALESCE với jsonb → giữ nguyên (SQLite hỗ trợ COALESCE)
# TRUE/FALSE literals → 1/0
q = re.sub(r'\bTRUE\b', '1', q, flags=re.IGNORECASE)
q = re.sub(r'\bFALSE\b', '0', q, flags=re.IGNORECASE)
return q
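# Illustration (not part of the diff): a representative before/after pair for translate_query.
#   in : SELECT ANY_VALUE(name) FROM "dashboard_canifa"."chat_history"
#        WHERE id = %s AND is_active = TRUE ORDER BY created_at DESC NULLS LAST
#   out: SELECT MAX(name) FROM pg__dashboard_canifa__chat_history
#        WHERE id = ? AND is_active = 1 ORDER BY created_at DESC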
class MockCursor:
......
import sys
import json
import logging
try:
from worker.stylist_engine import StylistEngine
except ImportError:
print("❌ LỖI: Không import được StylistEngine.")
sys.exit(1)
logging.basicConfig(level=logging.ERROR)
def dict_to_string(d):
return json.dumps(d, ensure_ascii=False, indent=2)
def main():
engine = StylistEngine()
catalog = engine._get_catalog()
adult_nam = None
adult_nu = None
for p in catalog:
gender = str(p.get("gender") or "").lower()
age = str(p.get("age_group") or "").lower()
name = str(p.get("name") or "").lower()
if not adult_nam and (gender == 'nam' or gender == 'men') and 'bé' not in name and 'kid' not in age:
adult_nam = p
elif not adult_nu and (gender == 'nữ' or gender == 'nu' or gender == 'women') and 'bé' not in name and 'kid' not in age:
adult_nu = p
if adult_nam and adult_nu:
break
print("================ PRODUCT MATCHES EXPORT =================\n")
for label, subj in [("Người lớn (Nam)", adult_nam), ("Người lớn (Nữ)", adult_nu)]:
if not subj: continue
code = subj['code']
print(f"--- TEST CASE: {label} ---")
print(f"Sản phẩm gốc: Tên={subj['name']} | Mã={code} | Giới tính={subj.get('gender')}")
matches = engine.compute_dynamic_rule_matches(code)
print("\n[BẢNG PHỐI ĐỒ (Dịp mặc)]:")
if matches:
for occ, roles in matches.items():
print(f" Dịp: {occ}")
for role, items in roles.items():
print(f" - Role: {role}")
for item in items[:3]:
item_name = str(item.get('name') or "")
item_gender = str(item.get('gender') or "")
print(f" -> {item['code']} | {item_name} (Gender: {item_gender})")
print("\n[PHÂN NHÓM MỞ RỘNG RAW SQL]:")
classifications = engine.compute_super_classifications_sql(code)
if classifications:
for group, g_dict in classifications.items():
for key, items in g_dict.items():
print(f" Nhóm: {group} -> {key}")
for item in items[:2]:
item_name = str(item.get('name') or "")
item_gender = str(item.get('gender_by_product') or "")
print(f" -> {item['magento_ref_code']} | {item_name} (Gender: {item_gender})")
print("\n\n")
if __name__ == "__main__":
main()
I will read the full content of `fashion_output.txt` to evaluate the product matches against the criteria specified in `temp_judge_prompt.txt`.
I will search for the string "PRODUCT MATCHES EXPORT" in the current directory to see if the matches were output to a different file or if there's more content elsewhere.
I will read `dump_matches.py` to understand how the product matches are generated and see if I should run it.
I'll run `dump_matches.py` and redirect its output to `fashion_output.txt` so I have something to evaluate.
I will check the `requirements.txt` file to determine the correct database driver and other dependencies.
I'll attempt to run `dump_matches.py` using the Python interpreter in the `.venv` directory to ensure all dependencies are available.
I'll set the `PYTHONIOENCODING` environment variable to `utf-8` to handle the Vietnamese characters and avoid the `UnicodeEncodeError`.
I'll read the contents of `fashion_output.txt` to evaluate the product matches for any errors.
I will search for the strings "be" and "kid" in the `fashion_output.txt` file to check for any mixed age group errors.
I'll use the `grep_search` tool to search for "be" and "kid" in `fashion_output.txt` to find any age group mismatches.
I will check the age group of the `Vietnam Youngster` products in the catalog to see if they are actually intended for kids.
I'll perform a thorough search for "be", "b├®", and "kid" in `fashion_output.txt` to ensure no age group mismatches were missed.
I will search for any product codes starting with "1" in `fashion_output.txt`, as these are typically kid items and would indicate an age group mismatch.
I'll re-read `fashion_output.txt` to ensure I've captured all the relevant details and errors before reporting them.
Evaluation result from `fashion_output.txt`:
1. **Error 1 (Mixed age groups):** PASS_EVAL. No item containing 'be' or 'kid' was found in the combinations intended for adults. (The "Vietnam Youngster" items were checked and belong to `age_group: adult`.)
2. **Error 2 (Wrong occasion / missing occasion):** **FAIL**.
- **Missing occasion:** The `[BẢNG PHỐI ĐỒ (Dịp mặc)]` (outfit-by-occasion) section is completely empty for both test cases (men and women).
- **Wrong occasion (season):** In the `Nhóm: material -> winter` group, the system suggests `5BS25S002 | Quần soóc người lớn VIETNAM ON THE PITCH` (adult shorts are not suitable for winter).
- **Wrong occasion (work):** In the `Nhóm: occasion -> di_lam` (work) group, the system suggests `5TH25W003 | Áo sơ mi unisex người lớn in hình Demon Slayer` (a shirt printed with cartoon/manga artwork is generally not appropriate for a formal office setting).
**Offending items in detail:**
- `5BS25S002-SA090 | Quần soóc người lớn VIETNAM ON THE PITCH` (wrong occasion: appears in the Winter group).
- `5TH25W003-SA952 | Áo sơ mi unisex người lớn in hình Demon Slayer` (wrong occasion: appears in the "Đi làm" group).
- The entire `[BẢNG PHỐI ĐỒ (Dịp mặc)]` section: empty (missing occasion information).
import sys
import logging
try:
from worker.stylist_engine import StylistEngine
except ImportError:
print("❌ LỖI: Không import được StylistEngine. Chạy script này từ thư mục backend/.")
sys.exit(1)
logging.basicConfig(level=logging.INFO)
def main():
engine = StylistEngine()
catalog = engine._get_catalog()
# 1. Tìm các sản phẩm để test
adult_nam = None
adult_nu = None
kid_sp = None
for p in catalog:
gender = str(p.get("gender") or "").lower()
age = str(p.get("age_group") or "").lower()
name = str(p.get("name") or "").lower()
if not adult_nam and (gender == 'nam' or gender == 'men') and 'bé' not in name and 'kid' not in age:
adult_nam = p
elif not adult_nu and (gender == 'nữ' or gender == 'nu' or gender == 'women') and 'bé' not in name and 'kid' not in age:
adult_nu = p
elif not kid_sp and ('bé' in name or 'kid' in age):
kid_sp = p
if adult_nam and adult_nu and kid_sp:
break
test_subjects = [
("Người lớn (Nam)", adult_nam),
("Người lớn (Nữ)", adult_nu)
]
passed = True
print("\n" + "="*50)
print("🚀 BẮT ĐẦU CHẤM ĐIỂM FASHION MATCHES (EVAL) 🚀")
print("="*50)
for label, subj in test_subjects:
if not subj:
continue
code = subj['code']
print(f"\n👉 Test Case: {label}")
print(f" Sản phẩm gốc: {subj['name']} (Mã: {code})")
print(f" Gender: {subj.get('gender')}")
matches = engine.compute_dynamic_rule_matches(code)
# Test 1: Khắc khe loại trừ đồ trẻ em
found_kid_error = False
if matches:
for occ, roles in matches.items():
for role, items in roles.items():
for item in items:
item_name = str(item.get('name') or "").lower()
item_gender = str(item.get('gender') or "").lower()
item_age = str(item.get('age_group') or "").lower()
if 'bé' in item_name or 'bé' in item_gender or 'be' in item_gender or 'kid' in item_age:
print(f" ❌ FAILED [Nhầm độ tuổi]: Phối đồ {label} nhưng lòi ra đồ trẻ em!")
print(f" - Đồ bị lọt: {item['code']} | {item_name} (Gender: {item_gender})")
found_kid_error = True
passed = False
if not found_kid_error:
print(" ✅ PASS: Không dính đồ trẻ em trong gợi ý phối.")
# Test 2: Super Classifications SQL có lọt đồ trẻ con không?
classifications = engine.compute_super_classifications_sql(code)
found_sql_kid_error = False
if classifications:
for group, g_dict in classifications.items():
for key, items in g_dict.items():
for item in items:
item_name = str(item.get('name') or "").lower()
item_gender = str(item.get('gender_by_product') or "").lower()
if 'bé' in item_name or 'kid' in item_gender:
print(f" ❌ FAILED [Raw SQL SQL]: Bảng màu/chất liệu lôi lộn đồ kids lên.")
print(f" - Đồ bị lọt: {item['magento_ref_code']} | {item_name}")
found_sql_kid_error = True
passed = False
if not found_sql_kid_error:
print(" ✅ PASS: Raw SQL (Phân nhóm Color/Material) đã sạch đồ trẻ em.")
print("\n" + "="*50)
if passed:
print("🎉 TẤT CẢ TEST ĐỀU XANH! BÁO CÁO DONE VỚI RALPH LIỀN!! 🎉")
sys.exit(0)
else:
print("💥 CÒN RÁC! NGỒI ĐÓ MÀ SỬA TIẾP ĐI CLAUDE! 💥")
sys.exit(1)
if __name__ == "__main__":
main()
Binary files a/backend/fashion_output.txt and /dev/null differ
import codecs
import time
filepath = r'd:\cnf\chatbot-canifa-feedback\backend\static\fashion-matches\live-simulator.html'
with open(filepath, 'r', encoding='utf-8') as f:
text = f.read()
count1 = text.count(r'\`')
count2 = text.count(r'\${')
print("Found escaped backticks:", count1)
print("Found escaped dollars:", count2)
text = text.replace(r'\`', '`').replace(r'\${', '${')
with open(filepath, 'w', encoding='utf-8') as f:
f.write(text)
print("Fixed!")
Binary files a/backend/gemini_help.txt and /dev/null differ
Reading canifa_chat.lead_flow_history.sql...
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Reading dashboard_canifa.activity_logs.sql...
Reading dashboard_canifa.admin_users.sql...
Error executing statement in dashboard_canifa.admin_users.sql: no such table: pg__dashboard_canifa__admin_users
Error executing statement in dashboard_canifa.admin_users.sql: no such table: pg__dashboard_canifa__admin_users
Reading dashboard_canifa.ai_outfit_tables.sql...
Reading dashboard_canifa.chatbot_fashion_rules.sql...
Reading dashboard_canifa.chat_history.sql...
Error executing statement in dashboard_canifa.chat_history.sql: no such table: pg__dashboard_canifa__chat_history
Error executing statement in dashboard_canifa.chat_history.sql: no such table: pg__dashboard_canifa__chat_history
Reading dashboard_canifa.desc_field_config.sql...
Error executing statement in dashboard_canifa.desc_field_config.sql: no such table: pg__dashboard_canifa__desc_field_config
Error executing statement in dashboard_canifa.desc_field_config.sql: no such table: pg__dashboard_canifa__desc_field_config
Reading dashboard_canifa.product_size_guide.sql...
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Reading dashboard_canifa.saved_reports.sql...
Error executing statement in dashboard_canifa.saved_reports.sql: no such table: pg__dashboard_canifa__saved_reports
Error executing statement in dashboard_canifa.saved_reports.sql: no such table: pg__dashboard_canifa__saved_reports
Reading dashboard_canifa.sql_trace_sessions.sql...
Error executing statement in dashboard_canifa.sql_trace_sessions.sql: no such table: pg__dashboard_canifa__sql_trace_sessions
Error executing statement in dashboard_canifa.sql_trace_sessions.sql: no such table: pg__dashboard_canifa__sql_trace_sessions
Reading dashboard_canifa.system_settings.sql...
Reading dashboard_canifa.ultra_descriptions.sql...
Reading public.chatbot_fashion_rules.sql...
Error executing statement in public.chatbot_fashion_rules.sql: no such table: pg__public__chatbot_fashion_rules
Error executing statement in public.chatbot_fashion_rules.sql: no such table: pg__public__chatbot_fashion_rules
Reading public.prompt_rules.sql...
Error executing statement in public.prompt_rules.sql: no such table: pg__public__prompt_rules
Error executing statement in public.prompt_rules.sql: no such table: pg__public__prompt_rules
Reading test_db.magento_product_dimension_with_text_embedding.sql...
Migration complete.
# Auto Test & Train the Lead Bot Prompt & Out-of-Stock Display (Infinite Loop)
## 🧠 Context & Design Philosophy (for the AI to understand)
**Why do it this way?**
Instead of a human manually inspecting every Lead Bot error log and copy-pasting fixes into the System Prompt, we fully exploit **Claude's Sub-Agent mechanism** (following Anthropic's Evaluator-Optimizer pattern).
The system runs an **infinite loop**: Test -> Score -> Propose fixes (Prompt/Algorithm) -> Apply -> Re-test. It only stops when the pass rate reaches 100%.
**What makes the Sub-Agent pattern (Claude) different:**
We do not cram everything into one prompt. The work is split across 2 Sub-Agents, each with a specialized prompt.
1. **Sub-Agent 1 (The Evaluator - the examiner):** Takes the `test_cases.json` suite, compares it against the Bot's output, and grades extremely strictly. Returns a detailed error report.
2. **Sub-Agent 2 (The Optimizer - coder/prompt engineer):** Reads only the Evaluator's error report, pulls up the source files (such as `prompts.py` or `lead_search_tool.py`), and decides on its own: *fix the System Prompt or fix the data-retrieval algorithm?* Then it overwrites the file automatically.
---
## 📋 DETAILED SUB-AGENT DESIGN (TO DO)
### 1. The Evaluator Sub-Agent (Role: Diagnosis)
**Task:** Run the 20 test questions.
**Evaluator System Prompt:**
```text
You are a supreme AI Evaluator assessing the quality of a Fashion Retail Chatbot (Lead Bot).
Your task is to take the User input, compare it with the Bot's output, and check it against the actual data (Stock, Product Line).
Scoring criteria:
1. Context Match (0-10): Does the Bot understand the context (e.g. "rainy weather" should suggest windbreakers/water-resistant items, "wedding" should suggest formal/polished outfits)?
2. Data Accuracy (0-10): Does the Bot suggest out-of-stock products WITHOUT warning the customer first?
3. Tool Usage (Pass/Fail): Does the Bot fill the Search Tool parameters correctly?
If you find an error, state clearly whether it is:
- A System Prompt error (missing context instructions).
- A Data/Algorithm error (search tool missing the stock column, wrong category).
Return a strictly formatted JSON containing the Feedback and the list of Failed cases.
```
### 2. The Optimizer Sub-Agent (Role: Prescription & Surgery)
**Task:** Receive the JSON feedback above. Read the current code. Overwrite the files.
**Optimizer System Prompt:**
```text
You are the AI Optimizer. You have received a verdict (Feedback) from the Evaluator showing that the current Lead Bot fails some test cases.
You have the current source of the System Prompt (`prompts.py`) and the Search Algorithm (`lead_search_tool.py`).
Tasks:
- If the error is the Bot reasoning about context incorrectly (e.g. it doesn't know what to wear in the rain): Rewrite `prompts.py`. Add instructions and rules that cover this case.
- If the error is missing data (e.g. it suggests out-of-stock products without knowing): Propose changes to the SQL algorithm in `lead_search_tool.py` (e.g. join `stock_quantity` and hardcode a `[TẠM HẾT HÀNG]` tag into the tool message) so the Bot knows.
You MUST return the complete file contents so I can overwrite the files automatically. Do not break logic that already works.
```
---
## 🚀 ACTION ITEMS
- [ ] Research and update the SQLite mock (`canifa_ai_dump.sqlite`):
- [SQLite only] Check whether a `stock_quantity` column needs to be added to SQLite (the mock of Postgres / StarRocks) via SQL.
- Set a few random SKUs to `0` so some products display an "Out of stock" state. Only touch the DB inside `backend/database`.
- [ ] Adjust the search algorithm (`lead_search_tool.py`):
- If `stock_quantity = 0`, hardcode the note `(Lưu ý: Mẫu này hiện TẠM HẾT HÀNG)` directly into the list of strings returned to the Prompt/AI.
- [ ] Build the infinite loop script:
- Create `auto_train_lead_prompt.py`. Algorithm: `while true: run Evaluator -> if pass 100% break -> else run Optimizer -> overwrite files -> reload server -> sleep(3) -> continue` (see the sketch below).
- [ ] Integrate the entrypoint script:
- Update `D:\cnf\chatbot-canifa-feedback\backend\plan\run_train_lead.ps1` with clear loop logging.
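For orientation, a minimal sketch of the loop body that `auto_train_lead_prompt.py` is meant to implement. The paths, the FAIL/PARTIAL pass-detection heuristic, and the `gemini --yolo -p` call are assumptions borrowed from `run_train_lead.ps1`; the real script may differ.
```python
import subprocess
import time

def train_loop(max_iterations: int = 50) -> None:
    for iteration in range(1, max_iterations + 1):
        # Phase 1: Evaluator sub-agent scores the 20 test cases.
        result = subprocess.run(
            [r".venv\Scripts\python.exe", r"scripts\lead_test\run_eval.py"],
            capture_output=True, text=True,
        )
        report = result.stdout
        if "FAIL" not in report and "PARTIAL" not in report:
            print(f"100% pass after {iteration} iteration(s)")
            break
        # Phase 2: Optimizer sub-agent rewrites prompts.py / lead_search_tool.py.
        subprocess.run(["gemini", "--yolo", "-p", report])
        time.sleep(3)  # give uvicorn --reload time to pick up the rewritten files
```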
$ErrorActionPreference = "Continue" # Tranh bi dung dot ngot khi python in ra stderr
$env:PYTHONPATH = (Join-Path $PSScriptRoot "..")
$BACKEND_DIR = (Join-Path $PSScriptRoot "..")
$env:PYTHONIOENCODING="utf8"
Write-Host "========================================================" -ForegroundColor Magenta
Write-Host " LEAD BOT AUTO-EVAL LOOP (EVALUATOR-OPTIMIZER) " -ForegroundColor Magenta
Write-Host "========================================================" -ForegroundColor Magenta
# ==================== KHOI DONG BACKEND ====================
$PYTHON_EXEC = Join-Path $BACKEND_DIR ".venv\Scripts\python.exe"
Write-Host ""
Write-Host ">> DANG KHOI DONG UVICORN BACKEND (PORT 5000)..." -ForegroundColor Yellow
$backendProc = Start-Process -FilePath $PYTHON_EXEC -ArgumentList "-m uvicorn server:app --port 5000 --reload" -WorkingDirectory $BACKEND_DIR -PassThru -WindowStyle Normal
Write-Host ">> Cho 8s de server Uvicorn khoi dong hoan toan..." -ForegroundColor Gray
Start-Sleep -Seconds 8
# ==================== CLEAN UP ====================
@("DONE_LEAD.flag","tmp_eval_results.txt","tmp_claude_prompt.txt", "tmp_eval_errors.txt") | ForEach-Object {
if (Test-Path $_) { Remove-Item $_ }
}
$ITERATION = 1
$EVAL_SCRIPT = Join-Path $PSScriptRoot "..\scripts\lead_test\run_eval.py"
# ==================== INFINITE LOOP ====================
try {
while (!(Test-Path "DONE_LEAD.flag")) {
Write-Host ""
Write-Host "========================================" -ForegroundColor Cyan
Write-Host " ITERATION NUM: $ITERATION " -ForegroundColor Cyan
Write-Host "========================================" -ForegroundColor Cyan
# ──────────────────────────────────────────────────────
# PHASE 1: EVALUATOR SUB-AGENT (PYTHON SCRIPT RUN TEST)
# ──────────────────────────────────────────────────────
Write-Host ""
Write-Host "=== PHASE 1: EVALUATOR CHAM BAI ===" -ForegroundColor Yellow
# Chay python thong qua Start-Process de luu log an toan, khong lam crash Powershell script
$evalParams = @{
FilePath = $PYTHON_EXEC
ArgumentList = """$EVAL_SCRIPT"""
NoNewWindow = $true
Wait = $true
RedirectStandardOutput = "tmp_eval_results.txt"
RedirectStandardError = "tmp_eval_errors.txt"
}
Start-Process @evalParams
# Kiem tra ket qua file output
if (Test-Path "tmp_eval_results.txt") {
Get-Content "tmp_eval_results.txt" -Tail 15 | Write-Host -ForegroundColor DarkGray
$rawLog = Get-Content "tmp_eval_results.txt" -Raw
if ($rawLog -match "Server khong" -or $rawLog -match "Server không") {
Write-Host " [WARN] Server uvicorn chua san sang hoac bi ngat. Xin cho..." -ForegroundColor Red
$ITERATION++
Start-Sleep -Seconds 3
continue
}
$failedCount = Select-String -Path "tmp_eval_results.txt" -Pattern "FAIL|PARTIAL" -AllMatches
$isAllPass = ($failedCount -eq $null -or $failedCount.Count -eq 0) -and ($rawLog -match "ROUND \d+ DONE|LOOP HOÀN THÀNH|LOOP HOAN THANH")
if ($isAllPass) {
Write-Host " [RESULT] 100% TEST PASS!" -ForegroundColor Green
New-Item -Path "DONE_LEAD.flag" -ItemType File | Out-Null
break
} else {
Write-Host " [RESULT] CON LOI (Fail/Partial: $($failedCount.Count)). Chuyen cho Optimizer..." -ForegroundColor Red
}
} else {
Write-Host " [WARN] Khong tim thay file ket qua evaluatator. Tra thu file errors.txt:" -ForegroundColor Red
if (Test-Path "tmp_eval_errors.txt") {
Get-Content "tmp_eval_errors.txt" | Write-Host -ForegroundColor Red
}
$ITERATION++
Start-Sleep -Seconds 3
continue
}
# ──────────────────────────────────────────────────────
# PHASE 2: OPTIMIZER SUB-AGENT (CLAUDE CODE CLI)
# ──────────────────────────────────────────────────────
Write-Host ""
Write-Host "=== PHASE 2: OPTIMIZER (CLAUDE CODE CLI) ===" -ForegroundColor Yellow
Write-Host ">> Goi Claude Code CLI doc loi va sua file..." -ForegroundColor Gray
$evalOutput = Get-Content "tmp_eval_results.txt" -Raw
$claudePrompt = @"
Hay dong vai mot AI Optimizer (Prompt Engineer & Backend Dev) bac thay.
Duoi day la Bao Cao Danh Gia Loi tu Evaluator Sub-agent cho cuc Lead Bot cua chung ta tren cong 5000:
===== ERROR REPORT =====
$evalOutput
========================
NHIEM VU CUA BAN (La 1 con Sub-agent sua code):
Phan tich nguyen nhan cot loi khien cac case test bi FAIL/PARTIAL o tren. Dua vao nguyen nhan:
1. Tu dong sua file `$BACKEND_DIR/agent/lead_stage_agent/prompts.py` de va System Prompt hoac xu ly loi.
2. Hoac sua thuat toan `$BACKEND_DIR/agent/lead_stage_agent/lead_search_tool.py` (Vi du: kiem tra loi filter).
3. Nghiem cam KHONG DUOC sua file Script Evaluator `run_eval.py` hay de thi `test_cases.json`.
4. Khong can xin phep, hay tu dong sua file. Sau khi sua xong, giai thich ngan gon 1 dong roi dung.
"@
$claudePrompt | Out-File -Encoding utf8 "tmp_claude_prompt.txt"
try {
$promptStr = Get-Content "tmp_claude_prompt.txt" -Raw
Write-Host "[Gemini] Dang phan tich loi va va lo hong... Xin cho..." -ForegroundColor Cyan
# Goi Gemini bang cmd de thuc thi yolo khong can hoi.
$geminiParams = @{
FilePath = "cmd.exe"
ArgumentList = "/c", "cd /d D:\cnf\chatbot-canifa-feedback && gemini --yolo -p ""@backend\plan\tmp_claude_prompt.txt"""
NoNewWindow = $true
Wait = $true
}
Start-Process @geminiParams
Write-Host ">> Gemini Optimizer da thuc thi hoan tat." -ForegroundColor Green
} catch {
Write-Host " [ERROR] Khong the goi toi Gemini CLI." -ForegroundColor Red
}
Write-Host ">> Cho 5s de uvicorn 5000 hot-reload code moi..." -ForegroundColor Gray
Start-Sleep -Seconds 5
$ITERATION++
}
} finally {
# Don dep
Write-Host ""
Write-Host ">> TAT CON SERVER UVICORN PORT 5000..." -ForegroundColor Yellow
if ($backendProc -and !$backendProc.HasExited) {
Stop-Process -Id $backendProc.Id -Force
}
}
# ==================== DONE THÀNH CÔNG ====================
Write-Host ""
Write-Host "========================================" -ForegroundColor Green
Write-Host " [DONE] HOAN TAT AUTO-TRAIN LOOP! " -ForegroundColor Green
Write-Host " Tong so vong lap da chay: $ITERATION " -ForegroundColor Green
Write-Host "========================================" -ForegroundColor Green
@("tmp_eval_results.txt","tmp_claude_prompt.txt", "tmp_eval_errors.txt") | ForEach-Object {
if (Test-Path $_) { Remove-Item $_ }
}
Hay dong vai mot AI Optimizer (Prompt Engineer & Backend Dev) bac thay.
Duoi day la Bao Cao Danh Gia Loi tu Evaluator Sub-agent cho cuc Lead Bot cua chung ta tren cong 5000:
===== ERROR REPORT =====
❌ Server không chạy tại http://127.0.0.1:5000. Hãy start backend trước.
========================
NHIEM VU CUA BAN (La 1 con Sub-agent sua code):
Phan tich nguyen nhan cot loi khien cac case test bi FAIL/PARTIAL o tren. Dua vao nguyen nhan:
1. Tu dong sua file $BACKEND_DIR/agent/lead_stage_agent/prompts.py de va System Prompt hoac xu ly loi.
2. Hoac sua thuat toan $BACKEND_DIR/agent/lead_stage_agent/lead_search_tool.py (Vi du: kiem tra loi filter).
3. Nghiem cam KHONG DUOC sua file Script Evaluator un_eval.py hay de thi est_cases.json.
4. Khong can xin phep, hay tu dong sua file. Sau khi sua xong, giai thich ngan gon 1 dong roi dung.
❌ Server is not running at http://127.0.0.1:5000. Start the backend first.
.\.venv\Scripts\python.exe : [Stylist] DB rule fetch error: 0
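The committed evaluator output above shows how the report can degenerate into a single "server is not running" line, which the optimizer prompt then treats as a test failure. A pre-flight check before each iteration would avoid wasting an optimizer call on that case. The sketch below is an assumption, not part of the existing scripts: it only presumes that something answers HTTP on http://127.0.0.1:5000 (any status, even a 404, proves uvicorn is listening) and uses the Python standard library.

```python
# Hypothetical pre-flight check: skip the eval/optimize pass while the backend is down.
import time
import urllib.error
import urllib.request

def backend_is_up(url: str = "http://127.0.0.1:5000/", timeout: float = 2.0) -> bool:
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # Any HTTP status (even 404) means the server process is listening.
        return True
    except (urllib.error.URLError, OSError):
        return False

def wait_for_backend(retries: int = 10, delay: float = 3.0) -> bool:
    for _ in range(retries):
        if backend_is_up():
            return True
        time.sleep(delay)
    return False

if __name__ == "__main__":
    # Exit non-zero so a calling loop can skip this iteration instead of feeding
    # a "server not running" report to the optimizer.
    raise SystemExit(0 if wait_for_backend() else 1)
```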
#!/bin/bash
# Ralph Wiggum Loop (Gemini Subagent version)
# The loop consists of: 1 Gemini that fixes code + 1 Gemini that grades the result
PLAN_FILE="plan/doing/ai_stylist_plan.md"
SESSION_NAME="fashion_gemini_loop"
if [ -z "$TMUX" ]; then
echo "🛡️ Handing the task off to the tmux background session '$SESSION_NAME'..."
rm -f DONE.flag
tmux new-session -d -s $SESSION_NAME "$0"
echo "✅ Done! The subagent is running in the background inside tmux."
echo "👉 Watch the subagent work: tmux attach -t $SESSION_NAME"
exit 0
fi
echo "--------------------------------------------------------"
echo "🌀 GEMINI AUTO-EVAL LOOP: AI JUDGE & BUILDER 🌀"
echo "--------------------------------------------------------"
ITERATION=1
MAX_ITERATIONS=10
while [ ! -f "DONE.flag" ]; do
if [ $ITERATION -gt $MAX_ITERATIONS ]; then
echo "🚨 Exceeded $MAX_ITERATIONS iterations. Emergency stop!"
exit 1
fi
echo ">>> ITERATION $ITERATION <<<"
# Step 1: call the Gemini Builder subagent (code fixer)
echo ">> 🛠️ 1. CALLING THE GEMINI BUILDER TO FIX THE CODE..."
# Assumes the Antigravity CLI exposes 'gemini prompt'; if your install differs, change it to 'gemini ask' or 'gemini code'
gemini prompt "@${PLAN_FILE} @worker/stylist_engine.py Read the errors from the previous iteration and fix the code in worker/stylist_engine.py"
# Step 2: dump the output
echo ">> 🔎 2. DUMPING THE OUTFIT-MATCHING RESULTS FOR GRADING..."
./.venv/Scripts/python dump_matches.py > fashion_output.txt
# Step 3: call the Gemini Judge subagent -> let the LLM deliver the verdict
echo ">> 🧑‍⚖️ 3. CALLING THE GEMINI JUDGE TO GRADE THE CONTEXT..."
gemini prompt "@fashion_output.txt You are a senior AI Fashion Judge. Carefully check whether the results printed in that file contain either of these two errors:
1. Items mixed across age groups: adult clothing is combined with an item whose name contains 'bé' or 'kid'.
2. Wrong occasion: sleepwear/loungewear suggested for going out... (or the occasion is missing entirely).
If the results are 100% CLEAN, with neither error, print exactly one line: PASS_EVAL.
Otherwise, if any offending item is found, print its details so the error can be reported." > eval_feedback.txt
# Step 4: the AI Judge decides
if grep -q "PASS_EVAL" eval_feedback.txt; then
echo "💯 AI JUDGE VERDICT: PASS !!!"
touch DONE.flag
break
else
echo "❌ AI JUDGE VERDICT: FAIL. ERROR EXCERPT:"
cat eval_feedback.txt
echo "🔄 Sending the Gemini Builder back to fix it..."
sleep 3
# Append this iteration's errors to the plan so the next Builder run can learn from them
echo "--- ERRORS FROM ITERATION $ITERATION ---" >> $PLAN_FILE
cat eval_feedback.txt >> $PLAN_FILE
fi
ITERATION=$((ITERATION+1))
done
echo "🎉 SELF-EVAL LOOP FINISHED SUCCESSFULLY! 🎉"
# Scratch check: which occasion keys the engine returns for a sample SKU,
# and whether they match the OCC_LABELS mapping the frontend expects.
from worker.stylist_engine import StylistEngine

e = StylistEngine()
res = e.compute_dynamic_rule_matches('6TS26A002-SK010')

print("=== Keys returned by the engine ===")
for occ, roles in res.items():
    tot = sum(len(v) for v in roles.values())
    print(f'  occ_key="{occ}" total={tot} roles={list(roles.keys())}')
print()

print("=== Check OCC_LABELS mapping (frontend expects) ===")
OCC_LABELS = {
    'di_choi': 'Đi chơi / dạo phố',
    'cong_so': 'Đi làm công sở',
    'mac_nha': 'Ở nhà / mặc ngủ',
    'du_lich': 'Du lịch',
    'hang_ngay': 'Hàng ngày',
}
for k, v in OCC_LABELS.items():
    items = sum(len(v2) for v2 in res.get(k, {}).values())
    print(f'  "{k}" → "{v}": {items} items')
from worker.stylist_engine import StylistEngine

e = StylistEngine()
catalog = e._get_catalog()

# Check 'Áo phông' (t-shirts) for kids
print("=== Áo phông kids ===")
for p in catalog:
    g = (p.get('gender') or '').lower()
    ag = (p.get('age_group') or '').lower()
    kid_kw = ['boy', 'girl', 'bé', 'trẻ em']
    if p.get('product_line') == 'Áo phông' and any(k in g + ag for k in kid_kw):
        print(f"  {p['code']} | gender={p.get('gender')} | age_group={p.get('age_group')}")
print()

print("=== Áo mặc nhà kids ===")
for p in catalog:
    g = (p.get('gender') or '').lower()
    ag = (p.get('age_group') or '').lower()
    kid_kw = ['boy', 'girl', 'bé', 'trẻ em']
    if p.get('product_line') == 'Áo mặc nhà' and any(k in g + ag for k in kid_kw):
        print(f"  {p['code']} | gender={p.get('gender')} | age_group={p.get('age_group')}")

# Also check what the engine computes for this code
print()
print("=== compute_dynamic_rule_matches for 2LA26S004-FA160 ===")
result = e.compute_dynamic_rule_matches('2LA26S004-FA160')
print(f"Total occasions: {len(result)}")
for occ, roles in result.items():
    for role, items in roles.items():
        print(f"  {occ} / {role}: {len(items)} items")
        for it in items[:2]:
            print(f"    - {it['code']} {it['name'][:30]}")
from collections import Counter

from worker.stylist_engine import StylistEngine

e = StylistEngine()
catalog = e._get_catalog()

# Group product_lines by gender
lines_by_gender = {}
for p in catalog:
    g = p.get('gender', 'unknown')
    pl = p.get('product_line', '')
    if not pl:
        continue
    if g not in lines_by_gender:
        lines_by_gender[g] = Counter()
    lines_by_gender[g][pl] += 1

print("=== WOMEN product_lines ===")
for pl, cnt in sorted(lines_by_gender.get('women', {}).items(), key=lambda x: -x[1]):
    print(f"  {cnt:3d}x {pl}")
print()

print("=== MEN product_lines ===")
for pl, cnt in sorted(lines_by_gender.get('men', {}).items(), key=lambda x: -x[1]):
    print(f"  {cnt:3d}x {pl}")
print()

print("=== UNISEX product_lines ===")
for pl, cnt in sorted(lines_by_gender.get('unisex', {}).items(), key=lambda x: -x[1]):
    print(f"  {cnt:3d}x {pl}")
print()

print("=== ALL unique genders in catalog ===")
print(set(p.get('gender', '') for p in catalog))