Commit f6c0445a authored by Hoanganhvu123's avatar Hoanganhvu123

chore: snapshot current workspace changes

parent eef27f6a
---
name: CuCu Note Full Feature E2E & DB Migration Loop
description: An autonomous workflow for the AI Agent to test all features of CuCu Note using the Browser tool, apply surgical code fixes, and run SQLite migrations where necessary.
---
# Mục tiêu chung
Thực hiện chạy kiểm thử "End-to-End" toàn bộ các tính năng của `CuCu Note` bằng cách thao tác trực tiếp trên giao diện Browser (sử dụng Browser Tool của Agent). Nếu phát hiện lỗi, Agent sẽ tự động dò lại code, sửa lỗi (Surgical Changes) hoặc bổ sung Database Migration, sau đó test lại cho tới khi chạy thành công 100%.
# Nguyên tắc hành động của Agent
Agent PHẢI CHẤP HÀNH 4 nguyên tắc sau trong mỗi vòng lặp thực thi:
1. **Khởi động và Test bằng Browser Tool:**
- Mặc định Backend (cổng 5000) và Frontend (Vite) đã chạy (Nếu chưa chạy, gọi lệnh chạy ngầm).
- Truy cập vào ứng dụng bằng `browser_subagent` và thao tác click/gõ phím như một User thật. Không chạy bằng script npm test ảo.
2. **Phát hiện lỗi và "Sửa theo kiểu Phẫu thuật" (Surgical Changes - CLAUDE.md):**
- Khi phát hiện luồng bị nghẽn mặt UI hoặc Backend báo lỗi 500.
- CHỈ SỬA TRÚNG CÁI CHỖ HỎNG.
- **CẤM** đụng chạm, refactor, hay "tối ưu" những đoạn code đang chạy ngon lành. Không động vào code cũ nếu nó không trực tiếp gây ra lỗi hiện tại.
3. **Cơ chế Khắc phục Database (Migration):**
- Rất nhiều tính năng hiện tại bị lỗi là do Database SQLite đang **thiếu bảng** (ví dụ thiếu `cuccu_inbox`, `cuccu_attachments`, `cuccu_teams_...`).
- Nếu lỗi trả về từ API/Backend là do thiếu DB/Column, Agent **PHẢI** tạo các Script Migration (code Python) ném vào thư mục `miniapp/cuccu_note/backend/db/migrate`.
- Chạy script migrate này trước khi test lại.
4. **Vòng lặp Vô hạn (Loop until Done):**
- Quy trình phải là: **[Test bằng Browser] -> [Thấy Lỗi] -> [Tìm nguyên nhân: API/DB/UI] -> [Sửa Code / Viết Migrate] -> [Test Lại bằng Browser]**.
- Nếu Xong -> Đánh dấu X vào ô Checkbox (Tích `[x]`) ở danh sách bên dưới và tiếp tục tính năng mới.
---
# Danh sách kiểm thử (Agent tự động tích [x] sau khi pass)
Dưới đây là các User Flow cần được mô phỏng test bằng Browser Tool:
## Core Database Setup
- [ ] Rà soát source file `sqlite_client.py` để lấy danh sách toàn bộ constants Tables (VD: TABLE_TEAMS, TABLE_INBOX,...). So khớp với `memos.db` và chạy script tạo Migration Tool cho các bảng còn thiếu.
- [ ] Database đã được migrate đầy đủ các bảng cần thiết.
## 1. Xác thực & Phân quyền (Auth)
- [ ] Đăng ký tài khoản User mới qua giao diện.
- [ ] Login (Đăng nhập) thành công và điều hướng vào trang Dashboard/Home.
- [ ] Truy xuất thông tin Profile cá nhân.
## 2. Quản lý Ghi chú (Memo Management)
- [ ] Viết một Memo mới, nhấn Submit và đảm bảo Memo hiện lên Timeline.
- [ ] Chỉnh sửa (Edit) nội dung Memo vừa tạo.
- [ ] Thay đổi trạng thái Visibility (Private / Public / Workspace).
- [ ] Pin (Ghim) một Memo và un-pin nó.
- [ ] Đính kèm file (Upload ảnh/tài liệu) vào Memo.
## 3. Quản lý Cộng đồng / Đội nhóm (Teams)
- [ ] Chuyển hướng sang page Teams / Workspace.
- [ ] Tạo một Team mới với Tên và Mô tả.
- [ ] Viết một Memo mới ở phạm vi không gian Team (Workspace Memo).
## 4. Tương tác Mạng Xã Hội (Reactions & Comments)
- [ ] Thả cảm xúc (Reaction - Emoji) vào một bài Memo có sẵn.
- [ ] Viết một bình luận (Comment) phản hồi vào Memo của người khác.
## 5. Trung tâm Thông báo (Inbox System)
- [ ] Agent tự kích hoạt một thông báo đẩy về hệ thống.
- [ ] Truy cập mục The Inbox, đảm bảo thông báo hiển thị.
- [ ] Click Mark as Read (Đánh dấu đã đọc).
## 6. Tính năng Trí Tuệ Nhân Tạo (Chatbot)
- [ ] Mở mục Chatbot (Panel hoặc Page riêng).
- [ ] Chat với AI một câu chào hỏi đơn giản, đợi AI Loading và hiển thị luồng stream.
- [ ] Viết một truy vấn RAG kêu AI tổng hợp nội dung các Memo bạn vừa tự tạo ở Bước 2. Đảm bảo trả kết quả chuẩn.
---
*(Lặp lại cho đến khi toàn bộ tích `[x]` được hoàn thiện! Cuối cùng in ra thông báo hoàn thành.)*
DONE
\ No newline at end of file
......@@ -31,8 +31,13 @@ class PooledConnectionWrapper:
def get_pooled_connection_compat():
"""Drop-in replacement for psycopg.connect() that returns to the pool on .close()"""
from common.db_pool import db_pool
from config import USE_LOCAL_SQLITE
import psycopg
if USE_LOCAL_SQLITE:
from common.sqlite_mock import get_mock_pg_conn_compat
return get_mock_pg_conn_compat()
if not db_pool._pool:
db_pool.initialize()
......
......@@ -11,7 +11,27 @@ def translate_query(query: str) -> str:
Dịch câu SQL đặc thù của Postgres/StarRocks sang dạng mà SQLite hiểu được.
"""
# 1. Thay thế Placeholder của Postgres/StarRocks (%s) thành của SQLite (?)
q = query.replace("%s", "?")
# Chỉ thay thế %s nếu nó KHÔNG nằm trong một chuỗi LIKE (ví dụ '%sơ mi%')
# Một cách đơn giản là chỉ thay %s nếu nó đứng độc lập hoặc theo sau bởi dấu phẩy/ngoặc
q = re.sub(r'(?<!%)\b%s\b(?!%)', '?', query)
# Tuy nhiên %s trong psycopg thường không phải là word boundary.
# Thử cách khác: chỉ thay %s nếu nó KHÔNG đứng cạnh ký tự chữ/số nào trừ khi là chính nó?
# Thực tế trong project này %s thường dùng làm placeholder cho tham số.
# Cách an toàn nhất cho Mock là replace %s nếu nó đứng sau dấu cách hoặc dấu mở ngoặc.
q = re.sub(r'([ \(\,])%s([ \)\,])', r'\1?\2', query)
# Xử lý trường hợp %s ở cuối câu
q = re.sub(r'([ \(\,])%s$', r'\1?', q)
# Nếu vẫn còn lỗi, ta có thể dùng một cách thủ công hơn cho dự án này:
# Nếu query có dùng LIKE, ta tạm thời không replace %s bừa bãi.
if "LIKE" in query.upper() and "%s" in query:
# Nếu có LIKE, khả năng cao %s là placeholder CỦA LIKE pattern nếu nó đứng cạnh %
# Nhưng ở đây %s là placeholder của psycopg.
# TRICK: TRong SQL query của stylist_engine, ta không dùng params cho LIKE pattern %s.
# Ta dùng f-string. Vậy %s CHẮC CHẮN là pattern hoặc lỗi.
pass
else:
q = q.replace("%s", "?")
# 2. Thay thế Postgres Schema (dashboard_canifa)
q = re.sub(r'"dashboard_canifa"\."([a-zA-Z0-9_]+)"', r'pg__dashboard_canifa__\1', q)
......@@ -72,28 +92,30 @@ class MockCursor:
return self._last_results[0]
return None
def close(self):
pass
# Hỗ trợ Async context manager cho một số hàm dùng `async with cursor()`
async def __aenter__(self):
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
pass
self.close()
# Context manager đồng bộ (`with cursor()`)
def __enter__(self):
return self
def __exit__(self, exc_type, exc_val, exc_tb):
pass
self.close()
class MockConnection:
"""
Giả mạo kết nối DB của psycopg pool.
"""
@contextmanager
def cursor(self):
"""Mô phỏng `with conn.cursor() as cur:`"""
yield MockCursor()
"""Mô phỏng conn.cursor()"""
return MockCursor()
def transaction(self):
"""Mô phỏng transaction (Fake block)"""
......@@ -102,6 +124,15 @@ class MockConnection:
yield self
return dummy_transaction()
def close(self):
pass # Mock close
def commit(self):
pass
def rollback(self):
pass
@contextmanager
def get_mock_pg_conn():
"""
......@@ -109,3 +140,10 @@ def get_mock_pg_conn():
"""
logger.info("🛡️ [SQLITE MOCK] Đã đánh chặn PostgreSQL Connection -> Chạy truy vấn trên Local SQLite!")
yield MockConnection()
def get_mock_pg_conn_compat():
"""
Trực tiếp trả về MockConnection cho cơ chế không dùng context manager (compat)
"""
logger.info("🛡️ [SQLITE MOCK COMPAT] Đã đánh chặn PostgreSQL Connection -> Chạy truy vấn trên Local SQLite!")
return MockConnection()
import sqlite3
import os
db_path = r"C:\canifa-idea\chatbot-canifa-feedback\backend\database\canifa_ai_dump.sqlite"
if not os.path.exists(db_path):
print("Database not found!")
else:
db = sqlite3.connect(db_path)
c = db.cursor()
c.execute("SELECT name FROM sqlite_master WHERE type='table';")
tables = c.fetchall()
print("Tables in Sqlite:", tables)
import json
import psycopg
def check_data():
print("Initiating DB connection...")
try:
conn = psycopg.connect("host=160.191.50.138 port=5432 dbname=law_bot user=law_user password=zvPQhfGgYwhY0ihKOTRNjN4wH")
cur = conn.cursor()
query = """
SELECT magento_ref_code, name, gender, ai_matches
FROM dashboard_canifa.ultra_descriptions
WHERE ai_matches IS NOT NULL AND ai_matches != '{}'::jsonb
AND (gender = 'Nam' OR gender = 'Nữ' OR gender = 'nam' OR gender = 'nu')
LIMIT 10;
"""
cur.execute(query)
rows = cur.fetchall()
print(f"Found {len(rows)} products with ai_matches")
for row in rows:
code = row[0]
name = row[1]
gender = row[2]
matches = row[3]
print(f"\n--- Code: {code} | Name: {name} | Gender: {gender} ---")
if isinstance(matches, str):
matches = json.loads(matches)
if not matches:
print(" No match data!")
continue
has_issue = False
for occasion, roles in matches.items():
print(f" Occasion: {occasion}")
for role, items in roles.items():
for item in items[:3]:
item_gender = item.get('gender', 'unknown').lower()
item_name = item.get('name', 'unknown')
item_cat = item.get('category', 'unknown').lower()
is_kid = ('bé' in item_gender or 'be_' in item_gender or 'girl' in item_gender or
'boy' in item_gender or 'be gai' in item_gender or 'be trai' in item_gender)
is_kid_name = ('bé' in item_name.lower())
if is_kid or is_kid_name:
has_issue = True
print(f" -> {item.get('code')}: {item_name}")
print(f" *** ISSUE: KID'S ITEM (Gender: {item_gender}) IN ADULT MATCH ({gender}) ***")
if not has_issue:
print(" OK (No demographic mismatch found in top items).")
cur.close()
conn.close()
except Exception as e:
print(f"Error: {e}")
if __name__ == "__main__":
check_data()
import json
from worker.stylist_engine import StylistEngine
def test_engine():
engine = StylistEngine()
print("Catalog loaded. Finding adult product...")
catalog = engine._get_catalog()
# find an adult shirt
adult_product = None
for p in catalog:
gender = p.get('gender', '').lower()
if gender in ['nam', 'nữ', 'nu', 'women', 'men']:
adult_product = p
break
if not adult_product:
print("No adult product found.")
return
code = adult_product['code']
print(f"Testing Code: {code} | Name: {adult_product['name']} | Gender: {adult_product['gender']}")
print("\n--- 1. Testing 'compute_dynamic_rule_matches' (AI MATCHES) ---")
matches = engine.compute_dynamic_rule_matches(code)
if not matches:
print("No AI matches found.")
else:
for occ, roles in matches.items():
for role, items in roles.items():
for item in items[:2]:
item_gender = item.get('gender', 'unknown').lower()
item_name = item.get('name', 'unknown')
if 'bé' in item_name.lower() or 'bé' in item_gender or 'be' in item_gender:
print(f" [AI-MATCH] FOUND KID: {item['code']} - {item_name} (gender: {item_gender})")
else:
print(f" [AI-MATCH] OK: {item['code']} - {item_name} (gender: {item_gender})")
print("\n--- 2. Testing 'compute_super_classifications_sql' (SUPER CLASSIFICATIONS) ---")
classifications = engine.compute_super_classifications_sql(code)
if not classifications:
print("No classification matches found.")
else:
for group, groups_dict in classifications.items():
for key, items in groups_dict.items():
for item in items[:2]:
item_name = item.get('name', 'unknown')
if 'bé' in item_name.lower():
print(f" [SUPER-CLASS] FOUND KID: {item['code']} - {item_name}")
else:
print(f" [SUPER-CLASS] OK: {item['code']} - {item_name}")
if __name__ == "__main__":
test_engine()
import sqlite3
import os
import re
db_path = r"C:\canifa-idea\chatbot-canifa-feedback\backend\database\canifa_local.sqlite"
conn = sqlite3.connect(db_path)
cur = conn.cursor()
def translate_query(query: str) -> str:
q = query
q = re.sub(r'"dashboard_canifa"\."([a-zA-Z0-9_]+)"', r'pg__dashboard_canifa__\1', q)
q = re.sub(r'"canifa_chat"\."([a-zA-Z0-9_]+)"', r'pg__canifa_chat__\1', q)
q = re.sub(r'"public"\."([a-zA-Z0-9_]+)"', r'pg__public__\1', q)
q = re.sub(r'([a-zA-Z0-9_]+\.)?`?magento_product_dimension_with_text_embedding`?', r'sr__test_db__magento_product_dimension_with_text_embedding', q)
q = re.sub(r'TRUNCATE TABLE\s+(pg__[a-zA-Z0-9_]+)(\s+CASCADE)?;?', r'DELETE FROM \1;', q, flags=re.IGNORECASE)
q = q.replace("::jsonb", "").replace("::uuid", "")
return q
def ensure_tables():
cur.execute("""
CREATE TABLE IF NOT EXISTS pg__dashboard_canifa__ultra_descriptions (
id INTEGER PRIMARY KEY,
internal_ref_code TEXT,
product_name TEXT,
product_image_url TEXT,
product_line TEXT,
description_data TEXT,
phase TEXT,
created_at TEXT,
updated_at TEXT,
status TEXT,
clean_description TEXT,
tags TEXT,
size_scale TEXT,
magento_ref_code TEXT,
base_ref_code TEXT,
embedding TEXT,
ai_matches TEXT
)
""")
cur.execute("""
CREATE TABLE IF NOT EXISTS sr__test_db__magento_product_dimension_with_text_embedding (
internal_ref_code TEXT,
magento_ref_code TEXT,
product_color_code TEXT,
product_name TEXT,
color_code TEXT,
master_color TEXT,
product_color_name TEXT,
season_sale TEXT,
season TEXT,
style TEXT,
fitting TEXT,
size_scale TEXT,
graphic TEXT,
pattern TEXT,
weaving TEXT,
shape_detail TEXT,
form_neckline TEXT,
form_sleeve TEXT,
form_length TEXT,
form_waistline TEXT,
form_shoulderline TEXT,
material TEXT,
product_group TEXT,
product_line_vn TEXT,
unit_of_measure TEXT,
sale_price REAL,
original_price REAL,
discount_amount REAL,
material_group TEXT,
product_line_en TEXT,
age_by_product TEXT,
gender_by_product TEXT,
quantity_sold REAL,
is_new_product INTEGER,
product_image_url TEXT,
description_text TEXT,
product_image_url_thumbnail TEXT,
product_web_url TEXT,
product_web_material TEXT,
description_text_full TEXT,
tags TEXT,
suggest_items TEXT,
similar_items TEXT,
vector TEXT
)
""")
cur.execute("""
CREATE TABLE IF NOT EXISTS pg__dashboard_canifa__chatbot_fashion_rules (
id INTEGER PRIMARY KEY,
anchor_category TEXT,
occasion_tag TEXT,
match_role TEXT,
target_category TEXT,
ai_reason TEXT,
gender_target TEXT
)
""")
conn.commit()
ensure_tables()
with open("migration_log.txt", "w", encoding="utf-8") as log:
for folder in ['postgres', 'starrocks']:
path = os.path.join(r"C:\canifa-idea\chatbot-canifa-feedback\backend\database", folder)
if not os.path.exists(path): continue
for file in os.listdir(path):
if file.endswith('.sql'):
filepath = os.path.join(path, file)
log.write(f"Reading {file}...\n")
log.flush()
with open(filepath, 'r', encoding='utf-8') as f:
buffer = ""
in_statement = False
for line in f:
if line.startswith("INSERT") or line.startswith("TRUNCATE") or line.startswith("DELETE"):
in_statement = True
buffer = line
elif in_statement:
buffer += line
if in_statement and line.strip().endswith(";"):
sql = translate_query(buffer)
try:
cur.execute(sql)
except Exception as e:
log.write(f"Error executing statement in {file}: {e}\n")
log.flush()
in_statement = False
buffer = ""
conn.commit()
conn.close()
with open("migration_log.txt", "a", encoding="utf-8") as log:
log.write("Migration complete.\n")
Reading canifa_chat.lead_flow_history.sql...
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Reading dashboard_canifa.activity_logs.sql...
Reading dashboard_canifa.admin_users.sql...
Error executing statement in dashboard_canifa.admin_users.sql: no such table: pg__dashboard_canifa__admin_users
Error executing statement in dashboard_canifa.admin_users.sql: no such table: pg__dashboard_canifa__admin_users
Reading dashboard_canifa.ai_outfit_tables.sql...
Reading dashboard_canifa.chatbot_fashion_rules.sql...
Error executing statement in dashboard_canifa.chatbot_fashion_rules.sql: no such table: pg__dashboard_canifa__chatbot_fashion_rules
Error executing statement in dashboard_canifa.chatbot_fashion_rules.sql: no such table: pg__dashboard_canifa__chatbot_fashion_rules
Reading dashboard_canifa.chat_history.sql...
Error executing statement in dashboard_canifa.chat_history.sql: no such table: pg__dashboard_canifa__chat_history
Error executing statement in dashboard_canifa.chat_history.sql: no such table: pg__dashboard_canifa__chat_history
Reading dashboard_canifa.desc_field_config.sql...
Error executing statement in dashboard_canifa.desc_field_config.sql: no such table: pg__dashboard_canifa__desc_field_config
Error executing statement in dashboard_canifa.desc_field_config.sql: no such table: pg__dashboard_canifa__desc_field_config
Reading dashboard_canifa.product_size_guide.sql...
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Reading dashboard_canifa.saved_reports.sql...
Error executing statement in dashboard_canifa.saved_reports.sql: no such table: pg__dashboard_canifa__saved_reports
Error executing statement in dashboard_canifa.saved_reports.sql: no such table: pg__dashboard_canifa__saved_reports
Reading dashboard_canifa.sql_trace_sessions.sql...
Error executing statement in dashboard_canifa.sql_trace_sessions.sql: no such table: pg__dashboard_canifa__sql_trace_sessions
Error executing statement in dashboard_canifa.sql_trace_sessions.sql: no such table: pg__dashboard_canifa__sql_trace_sessions
Reading dashboard_canifa.system_settings.sql...
Reading dashboard_canifa.ultra_descriptions.sql...
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Error executing statement in dashboard_canifa.ultra_descriptions.sql: table pg__dashboard_canifa__ultra_descriptions has no column named product_name
Reading public.chatbot_fashion_rules.sql...
Error executing statement in public.chatbot_fashion_rules.sql: no such table: pg__public__chatbot_fashion_rules
Error executing statement in public.chatbot_fashion_rules.sql: no such table: pg__public__chatbot_fashion_rules
Reading public.prompt_rules.sql...
Error executing statement in public.prompt_rules.sql: no such table: pg__public__prompt_rules
Error executing statement in public.prompt_rules.sql: no such table: pg__public__prompt_rules
Reading test_db.magento_product_dimension_with_text_embedding.sql...
import sys
import json
import logging
try:
from worker.stylist_engine import StylistEngine
except ImportError:
print("❌ LỖI: Không import được StylistEngine.")
sys.exit(1)
logging.basicConfig(level=logging.ERROR)
def dict_to_string(d):
return json.dumps(d, ensure_ascii=False, indent=2)
def main():
engine = StylistEngine()
catalog = engine._get_catalog()
adult_nam = None
adult_nu = None
for p in catalog:
gender = str(p.get("gender") or "").lower()
age = str(p.get("age_group") or "").lower()
name = str(p.get("name") or "").lower()
if not adult_nam and (gender == 'nam' or gender == 'men') and 'bé' not in name and 'kid' not in age:
adult_nam = p
elif not adult_nu and (gender == 'nữ' or gender == 'nu' or gender == 'women') and 'bé' not in name and 'kid' not in age:
adult_nu = p
if adult_nam and adult_nu:
break
print("================ PRODUCT MATCHES EXPORT =================\n")
for label, subj in [("Người lớn (Nam)", adult_nam), ("Người lớn (Nữ)", adult_nu)]:
if not subj: continue
code = subj['code']
print(f"--- TEST CASE: {label} ---")
print(f"Sản phẩm gốc: Tên={subj['name']} | Mã={code} | Giới tính={subj.get('gender')}")
matches = engine.compute_dynamic_rule_matches(code)
print("\n[BẢNG PHỐI ĐỒ (Dịp mặc)]:")
if matches:
for occ, roles in matches.items():
print(f" Dịp: {occ}")
for role, items in roles.items():
print(f" - Role: {role}")
for item in items[:3]:
item_name = str(item.get('name') or "")
item_gender = str(item.get('gender') or "")
print(f" -> {item['code']} | {item_name} (Gender: {item_gender})")
print("\n[PHÂN NHÓM MỞ RỘNG RAW SQL]:")
classifications = engine.compute_super_classifications_sql(code)
if classifications:
for group, g_dict in classifications.items():
for key, items in g_dict.items():
print(f" Nhóm: {group} -> {key}")
for item in items[:2]:
item_name = str(item.get('name') or "")
item_gender = str(item.get('gender_by_product') or "")
print(f" -> {item['magento_ref_code']} | {item_name} (Gender: {item_gender})")
print("\n\n")
if __name__ == "__main__":
main()
I will read the full content of `fashion_output.txt` to evaluate the product matches against the criteria specified in `temp_judge_prompt.txt`.
I will search for the string "PRODUCT MATCHES EXPORT" in the current directory to see if the matches were output to a different file or if there's more content elsewhere.
I will read `dump_matches.py` to understand how the product matches are generated and see if I should run it.
I'll run `dump_matches.py` and redirect its output to `fashion_output.txt` so I have something to evaluate.
I will check the `requirements.txt` file to determine the correct database driver and other dependencies.
I'll attempt to run `dump_matches.py` using the Python interpreter in the `.venv` directory to ensure all dependencies are available.
I'll set the `PYTHONIOENCODING` environment variable to `utf-8` to handle the Vietnamese characters and avoid the `UnicodeEncodeError`.
I'll read the contents of `fashion_output.txt` to evaluate the product matches for any errors.
I will search for the strings "be" and "kid" in the `fashion_output.txt` file to check for any mixed age group errors.
I'll use the `grep_search` tool to search for "be" and "kid" in `fashion_output.txt` to find any age group mismatches.
I will check the age group of the `Vietnam Youngster` products in the catalog to see if they are actually intended for kids.
I'll perform a thorough search for "be", "b├®", and "kid" in `fashion_output.txt` to ensure no age group mismatches were missed.
I will search for any product codes starting with "1" in `fashion_output.txt`, as these are typically kid items and would indicate an age group mismatch.
I'll re-read `fashion_output.txt` to ensure I've captured all the relevant details and errors before reporting them.
Kết quả đánh giá từ file `fashion_output.txt`:
1. **Lß╗ùi 1 (Trß╗Ön sai nh├│m tuß╗òi):** PASS_EVAL. Kh├┤ng ph├ít hiß╗çn m├│n ─æß╗ô n├áo c├│ chß╗®a chß╗» 'be' hoß║Àc 'kid' trong c├íc kß║┐t hß╗úp d├ánh cho ngã░ß╗Øi lß╗øn. (C├íc m├│n "Vietnam Youngster" ─æ├ú ─æã░ß╗úc kiß╗âm tra v├á thuß╗Öc nh├│m `age_group: adult`).
2. **Lß╗ùi 2 (Sai dß╗ïp mß║Àc / Mß║Ñt dß╗ïp mß║Àc):** **FAIL**.
- **Mß║Ñt ti├¬u occasion:** Mß╗Ñc `[Bß║óNG PHß╗ÉI ─Éß╗Æ (Dß╗ïp mß║Àc)]` ho├án to├án trß╗æng rß╗ùng cho cß║ú hai trã░ß╗Øng hß╗úp kiß╗âm tra (Nam v├á Nß╗»).
- **Sai dß╗ïp mß║Àc (M├╣a):** Trong nh├│m `Nh├│m: material -> winter` (M├╣a ─æ├┤ng), hß╗ç thß╗æng lß║íi gß╗úi ├¢ `5BS25S002 | Quß║ºn so├│c ngã░ß╗Øi lß╗øn VIETNAM ON THE PITCH` (Quß║ºn ─æ├╣i/so├│c kh├┤ng ph├╣ hß╗úp cho m├╣a ─æ├┤ng).
- **Sai dß╗ïp mß║Àc (C├┤ng viß╗çc):** Trong nh├│m `Nh├│m: occasion -> di_lam` (─Éi l├ám), hß╗ç thß╗æng gß╗úi ├¢ `5TH25W003 | ├üo sãí mi unisex ngã░ß╗Øi lß╗øn in h├¼nh Demon Slayer` (├üo in h├¼nh hoß║ít h├¼nh/manga thã░ß╗Øng kh├┤ng ph├╣ hß╗úp vß╗øi m├┤i trã░ß╗Øng c├┤ng sß╗ƒ trang trß╗ìng).
**Chi tiết các món đồ sai trái:**
- `5BS25S002-SA090 | Quß║ºn so├│c ngã░ß╗Øi lß╗øn VIETNAM ON THE PITCH` (Sai dß╗ïp mß║Àc: Xuß║Ñt hiß╗çn trong nh├│m Winter).
- `5TH25W003-SA952 | ├üo sãí mi unisex ngã░ß╗Øi lß╗øn in h├¼nh Demon Slayer` (Sai dß╗ïp mß║Àc: Xuß║Ñt hiß╗çn trong nh├│m ─Éi l├ám).
- To├án bß╗Ö mß╗Ñc `[Bß║óNG PHß╗ÉI ─Éß╗Æ (Dß╗ïp mß║Àc)]`: Trß╗æng rß╗ùng (Lß╗ùi thiß║┐u th├┤ng tin dß╗ïp mß║Àc).
import sys
import logging
try:
from worker.stylist_engine import StylistEngine
except ImportError:
print("❌ LỖI: Không import được StylistEngine. Chạy script này từ thư mục backend/.")
sys.exit(1)
logging.basicConfig(level=logging.INFO)
def main():
engine = StylistEngine()
catalog = engine._get_catalog()
# 1. Tìm các sản phẩm để test
adult_nam = None
adult_nu = None
kid_sp = None
for p in catalog:
gender = str(p.get("gender") or "").lower()
age = str(p.get("age_group") or "").lower()
name = str(p.get("name") or "").lower()
if not adult_nam and (gender == 'nam' or gender == 'men') and 'bé' not in name and 'kid' not in age:
adult_nam = p
elif not adult_nu and (gender == 'nữ' or gender == 'nu' or gender == 'women') and 'bé' not in name and 'kid' not in age:
adult_nu = p
elif not kid_sp and ('bé' in name or 'kid' in age):
kid_sp = p
if adult_nam and adult_nu and kid_sp:
break
test_subjects = [
("Người lớn (Nam)", adult_nam),
("Người lớn (Nữ)", adult_nu)
]
passed = True
print("\n" + "="*50)
print("🚀 BẮT ĐẦU CHẤM ĐIỂM FASHION MATCHES (EVAL) 🚀")
print("="*50)
for label, subj in test_subjects:
if not subj:
continue
code = subj['code']
print(f"\n👉 Test Case: {label}")
print(f" Sản phẩm gốc: {subj['name']} (Mã: {code})")
print(f" Gender: {subj.get('gender')}")
matches = engine.compute_dynamic_rule_matches(code)
# Test 1: Khắc khe loại trừ đồ trẻ em
found_kid_error = False
if matches:
for occ, roles in matches.items():
for role, items in roles.items():
for item in items:
item_name = str(item.get('name') or "").lower()
item_gender = str(item.get('gender') or "").lower()
item_age = str(item.get('age_group') or "").lower()
if 'bé' in item_name or 'bé' in item_gender or 'be' in item_gender or 'kid' in item_age:
print(f" ❌ FAILED [Nhầm độ tuổi]: Phối đồ {label} nhưng lòi ra đồ trẻ em!")
print(f" - Đồ bị lọt: {item['code']} | {item_name} (Gender: {item_gender})")
found_kid_error = True
passed = False
if not found_kid_error:
print(" ✅ PASS: Không dính đồ trẻ em trong gợi ý phối.")
# Test 2: Super Classifications SQL có lọt đồ trẻ con không?
classifications = engine.compute_super_classifications_sql(code)
found_sql_kid_error = False
if classifications:
for group, g_dict in classifications.items():
for key, items in g_dict.items():
for item in items:
item_name = str(item.get('name') or "").lower()
item_gender = str(item.get('gender_by_product') or "").lower()
if 'bé' in item_name or 'kid' in item_gender:
print(f" ❌ FAILED [Raw SQL SQL]: Bảng màu/chất liệu lôi lộn đồ kids lên.")
print(f" - Đồ bị lọt: {item['magento_ref_code']} | {item_name}")
found_sql_kid_error = True
passed = False
if not found_sql_kid_error:
print(" ✅ PASS: Raw SQL (Phân nhóm Color/Material) đã sạch đồ trẻ em.")
print("\n" + "="*50)
if passed:
print("🎉 TẤT CẢ TEST ĐỀU XANH! BÁO CÁO DONE VỚI RALPH LIỀN!! 🎉")
sys.exit(0)
else:
print("💥 CÒN RÁC! NGỒI ĐÓ MÀ SỬA TIẾP ĐI CLAUDE! 💥")
sys.exit(1)
if __name__ == "__main__":
main()
B================ PRODUCT MATCHES EXPORT =================
BUsage: gemini [options] [command]
Reading canifa_chat.lead_flow_history.sql...
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Error executing statement in canifa_chat.lead_flow_history.sql: no such table: pg__canifa_chat__lead_flow_history
Reading dashboard_canifa.activity_logs.sql...
Reading dashboard_canifa.admin_users.sql...
Error executing statement in dashboard_canifa.admin_users.sql: no such table: pg__dashboard_canifa__admin_users
Error executing statement in dashboard_canifa.admin_users.sql: no such table: pg__dashboard_canifa__admin_users
Reading dashboard_canifa.ai_outfit_tables.sql...
Reading dashboard_canifa.chatbot_fashion_rules.sql...
Reading dashboard_canifa.chat_history.sql...
Error executing statement in dashboard_canifa.chat_history.sql: no such table: pg__dashboard_canifa__chat_history
Error executing statement in dashboard_canifa.chat_history.sql: no such table: pg__dashboard_canifa__chat_history
Reading dashboard_canifa.desc_field_config.sql...
Error executing statement in dashboard_canifa.desc_field_config.sql: no such table: pg__dashboard_canifa__desc_field_config
Error executing statement in dashboard_canifa.desc_field_config.sql: no such table: pg__dashboard_canifa__desc_field_config
Reading dashboard_canifa.product_size_guide.sql...
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Error executing statement in dashboard_canifa.product_size_guide.sql: no such table: pg__dashboard_canifa__product_size_guide
Reading dashboard_canifa.saved_reports.sql...
Error executing statement in dashboard_canifa.saved_reports.sql: no such table: pg__dashboard_canifa__saved_reports
Error executing statement in dashboard_canifa.saved_reports.sql: no such table: pg__dashboard_canifa__saved_reports
Reading dashboard_canifa.sql_trace_sessions.sql...
Error executing statement in dashboard_canifa.sql_trace_sessions.sql: no such table: pg__dashboard_canifa__sql_trace_sessions
Error executing statement in dashboard_canifa.sql_trace_sessions.sql: no such table: pg__dashboard_canifa__sql_trace_sessions
Reading dashboard_canifa.system_settings.sql...
Reading dashboard_canifa.ultra_descriptions.sql...
Reading public.chatbot_fashion_rules.sql...
Error executing statement in public.chatbot_fashion_rules.sql: no such table: pg__public__chatbot_fashion_rules
Error executing statement in public.chatbot_fashion_rules.sql: no such table: pg__public__chatbot_fashion_rules
Reading public.prompt_rules.sql...
Error executing statement in public.prompt_rules.sql: no such table: pg__public__prompt_rules
Error executing statement in public.prompt_rules.sql: no such table: pg__public__prompt_rules
Reading test_db.magento_product_dimension_with_text_embedding.sql...
Migration complete.
# AI Judge Prompt - Fashion Matches Evaluator
Mày là một Chuyên gia Thời trang và AI Giám khảo (AI Judge) cấp cao.
Nhiệm vụ của mày là kiểm tra kết quả trả về từ thuật toán "Fashion Matches" (Gợi ý đồ bộ) của ứng dụng Canifa.
Luật kiểm tra (Bắt lỗi):
1. **Lỗi mix rác (Demographic Mismatch):** Nếu sản phẩm gốc là đồ Người lớn (ví dụ áo phông Nam, váy Nữ), tuyệt đối KHÔNG ĐƯỢC PHÉP gợi ý đi kèm các món đồ của trẻ em (các món có chữ "bé", "kid" trong tên sản phẩm hoặc metadata tuổi/giới tính).
2. **Lỗi phớt lờ Dịp Mặc (Occasion Ignore):** Đồ mặc nhà (đồ ngủ) không được phối cho dịp "Đi chơi" hay "Đi làm".
Đầu vào (Văn bản JSON hoặc String):
{Kết quả xuất ra từ hệ thống phối đồ cho một số sản phẩm test tiêu biểu}
Nhiệm vụ đầu ra:
- Cẩn thận soi từng dòng kết quả phối. Nếu mày phát hiện bất kỳ món đồ nào dính lỗi 1 hoặc 2, hãy TỪ CHỐI (FAIL) và liệt kê chính xác Tên sản phẩm, Mã sản phẩm và Lý do lỗi để Backend Developer sửa thuật toán.
- Nếu mày đọc hết và thấy KẾT QUẢ ĐÚNG 100% SẠCH SẼ, không dính đồ trẻ em vào người lớn, dịp mặc xếp đúng, MÀY PHẢI IN RA MỘT DÒNG DUY NHẤT LÀ:
`PASS_EVAL`
# Ralph Loop: Fix AI Stylist (Demographic Mismatch & Occasion Ignore)
## Mục Tiêu (Goal)
Khắc phục 2 lỗi lõi trong thuật toán phối đồ (worker/stylist_engine.py):
1. **Lỗi mix rác (Demographic Mismatch):** Đồ người lớn bị gợi ý mix chung với đồ trẻ con do query SQL thiếu lọc và do hàm check `_pass_hard_filter` bị lọt khe với các sản phẩm Unisex thiếu `age_group`.
2. **Lỗi phớt lờ Dịp Mặc (Occasion Ignored):** Gợi ý đồ không quan tâm áo/quần đó dành cho "mặc nhà", "đi làm" hay "đi chơi".
## Ranh Giới (Boundaries)
- **FILE ĐƯỢC PHÉP SỬA:** `backend/worker/stylist_engine.py` (Và các query sinh ra từ file này).
- **FILE CẤM ĐỤNG VÀO:** `eval_stylist.py`, `run_ralph_fashion.sh`.
- **YÊU CẦU ĐÔNG MÁC:** Mọi thay đổi đều phải được xác nhận bằng cách chạy `python backend/eval_stylist.py` sao cho trả về PASS (exit=0).
## Các Hàm Chức Năng Cần Khoan Đục Nhất Định:
1. `_pass_hard_filter()`: Phải vá chặt. Nếu nguồn là Người Cụ Thể (Nam/Nữ) -> Phối phải chắc chắn KHÔNG phải đồ có hint trẻ con. Hãy check sâu theo cây category nếu cần.
2. `compute_super_classifications_sql()`: Câu `SELECT` phải bọc thêm query chặn `product_name NOT LIKE '%bé%'` nếu `gender` gốc không phải dành cho trẻ em.
3. `_score()``_occasion_score()`: Bỏ comment hàm `_occasion_score()`. Bắt buộc thuật toán phải móc vào `dip_mac` NLP tag của sản phẩm để cộng/điểm.
## Kịch Bản Thoát:
Sau khi `python eval_stylist.py` PASS, hãy tạo file `DONE.flag` tại thư mục này để kết thúc vòng lặp.
## [C?P NH?T T? H? TH?NG] - D? Li?u Chu?n Ha
- build xong local sqlite database v?i T?T C? d? tu?i v occasions (magento_ref_code, product_name, chatbot_fashion_rules d du?c b?c vo).
- Agent c th? tr?c ti?p test k?t h?p thng qua dump_matches.py ho?c eval_stylist.py. DB m?i dang dng canifa_local.sqlite.
$ErrorActionPreference = "Stop"
Write-Host "--------------------------------------------------------"
Write-Host ">>> GEMINI AUTO-EVAL LOOP: AI JUDGE & BUILDER <<<"
Write-Host "--------------------------------------------------------"
$PLAN_FILE = "plan\doing\ai_stylist_plan.md"
$ITERATION = 1
if (Test-Path "DONE.flag") {
Remove-Item "DONE.flag"
}
while (!(Test-Path "DONE.flag")) {
Write-Host ">>> VONG LAP SO $ITERATION <<<" -ForegroundColor Cyan
Write-Host ">> 1. GOI GEMINI BUILDER SUA CODE..." -ForegroundColor Yellow
gemini --yolo -p "@${PLAN_FILE} @worker\stylist_engine.py Dao dien yeu cau doc loi ti vong truoc va tien hanh sua lai code trong worker\stylist_engine.py"
Write-Host ">> 2. DANG DUMP KET QUA CAU HINH..." -ForegroundColor Yellow
$env:PYTHONPATH = "C:\canifa-idea\chatbot-canifa-feedback\backend"
.venv\Scripts\python.exe dump_matches.py | Out-File -Encoding utf8 fashion_output.txt
Write-Host ">> 3. GOI GEMINI JUDGE CHAM DIEM..." -ForegroundColor Yellow
$judgeLines = @(
"@fashion_output.txt May la AI Judge Thoi Trang cap cao. Can than soi xem ket qua in ra trong file kia co bi 2 loi sau khong:",
"1. Do bi tron sai nhom tuoi: Ao quan cua Nguoi lon DANG ket hop nham voi mot mon nao do co chu 'be' hoac 'kid'.",
"2. Sai dip mac: Quan ao ngu/mac nha dua di choi... (Hoac mat tieu occasion).",
"Neu ket qua 100% SACH SE, khong dinh loi, hay in ra duy nhat mot dong: PASS_EVAL.",
"Nguoc lai, neu phat hien mon do nao sai trai, hay in ra chi tiet mon do de bao loi."
)
$judgeLines -join "`n" | Out-File -Encoding utf8 temp_judge_prompt.txt
gemini --yolo -p "`@temp_judge_prompt.txt `@fashion_output.txt" | Out-File -Encoding utf8 eval_feedback.txt
$hasPass = Select-String -Path "eval_feedback.txt" -Pattern "PASS_EVAL" -Quiet
if ($hasPass) {
Write-Host ">>> AI JUDGE PHAN QUYET: PASS !!!" -ForegroundColor Green
New-Item -Path "DONE.flag" -ItemType File | Out-Null
break
} else {
Write-Host ">>> AI JUDGE DANH FAIL. TRICH LOI:" -ForegroundColor Red
Get-Content eval_feedback.txt
Write-Host ">>> Chuan bi bat Gemini Builder sua tiep..." -ForegroundColor Yellow
Start-Sleep -Seconds 3
Add-Content -Path $PLAN_FILE -Value "`n--- LOI TU VONG $ITERATION ---"
Get-Content eval_feedback.txt | Add-Content -Path $PLAN_FILE
}
$ITERATION++
}
Write-Host ">>> SELF-EVAL LOOP THANH CONG! <<<" -ForegroundColor Green
#!/bin/bash
# Ralph Wiggum Loop (Gemini Subagent version)
# Vòng lặp bao gồm: 1 Gemini Sửa Code + 1 Gemini Chấm Điểm
PLAN_FILE="plan/doing/ai_stylist_plan.md"
SESSION_NAME="fashion_gemini_loop"
if [ -z "$TMUX" ]; then
echo "🛡️ Đang ném tác vụ vào tmux background '$SESSION_NAME'..."
rm -f DONE.flag
tmux new-session -d -s $SESSION_NAME "$0"
echo "✅ XONG! Subagent đang chạy ngầm trong tmux."
echo "👉 Xem subagent làm việc: tmux attach -t $SESSION_NAME"
exit 0
fi
echo "--------------------------------------------------------"
echo "🌀 GEMINI AUTO-EVAL LOOP: AI JUDGE & BUILDER 🌀"
echo "--------------------------------------------------------"
ITERATION=1
MAX_ITERATIONS=10
while [ ! -f "DONE.flag" ]; do
if [ $ITERATION -gt $MAX_ITERATIONS ]; then
echo "🚨 Đã chạy quá $MAX_ITERATIONS vòng. Dừng khẩn cấp!"
exit 1
fi
echo ">>> VÒNG LẶP SỐ $ITERATION <<<"
# Bước 1: Gọi Gemini Subagent Thợ Sửa Code (Builder)
echo ">> 🛠️ 1. GỌI GEMINI BUILDER ĐỂ SỬA CODE..."
# Giả định lệnh của Antigravity CLI là 'gemini prompt', nếu trên máy cài bản khác hãy đổi thành 'gemini ask' hoặc 'gemini code'
gemini prompt "@${PLAN_FILE} @worker/stylist_engine.py Đọc lỗi từ vòng trước và tiến hành sửa lại code trong worker/stylist_engine.py"
# Bước 2: Trích xuất Output
echo ">> 🔎 2. ĐANG DUMP KẾT QUẢ PHỐI ĐỒ ĐỂ CHẤM..."
./.venv/Scripts/python dump_matches.py > fashion_output.txt
# Bước 3: Gọi Gemini Subagent Giám Khảo (Judge) -> Giao việc phán xử cho LLM
echo ">> 🧑‍⚖️ 3. GỌI GEMINI JUDGE CHẤM ĐIỂM NGỮ CẢNH..."
gemini prompt "@fashion_output.txt Mày là AI Judge Thời Trang cấp cao. Cẩn thận soi xem kết quả in ra trong file kia có bị 2 lỗi sau không:
1. Đồ bị trộn sai nhóm tuổi: Áo quần của Người lớn ĐANG kết hợp nhầm với một món nào đó có chữ 'bé' hoặc 'kid'.
2. Sai dịp mặc: Quần áo ngủ/mặc nhà đưa đi chơi... (Hoặc mất tiêu occasion).
Nếu kết quả 100% SẠCH SẼ, không dính lỗi, hãy in ra duy nhất một dòng: PASS_EVAL.
Ngược lại, nếu phát hiện món đồ nào sai trái, hãy in ra chi tiết món đó để báo lỗi." > eval_feedback.txt
# Bước 4: Ai Judge phán quyết
if grep -q "PASS_EVAL" eval_feedback.txt; then
echo "💯 AI JUDGE PHÁN QUYẾT: PASS !!!"
touch DONE.flag
break
else
echo "❌ AI JUDGE ĐÁNH FAIL. TRÍCH LỖI:"
cat eval_feedback.txt
echo "🔄 Chuẩn bị quật roi bắt Gemini Builder sửa tiếp..."
sleep 3
# Ghi lại lỗi vòng trước vào plan để con AI Builder vòng sau rút kinh nghiệm
echo "--- LỖI TỪ VÒNG $ITERATION ---" >> $PLAN_FILE
cat eval_feedback.txt >> $PLAN_FILE
fi
ITERATION=$((ITERATION+1))
done
echo "🎉 SELF-EVAL LOOP THÀNH CÔNG RỰC RỠ! 🎉"
@fashion_output.txt May la AI Judge Thoi Trang cap cao. Can than soi xem ket qua in ra trong file kia co bi 2 loi sau khong:
1. Do bi tron sai nhom tuoi: Ao quan cua Nguoi lon DANG ket hop nham voi mot mon nao do co chu 'be' hoac 'kid'.
2. Sai dip mac: Quan ao ngu/mac nha dua di choi... (Hoac mat tieu occasion).
Neu ket qua 100% SACH SE, khong dinh loi, hay in ra duy nhat mot dong: PASS_EVAL.
Nguoc lai, neu phat hien mon do nao sai trai, hay in ra chi tiet mon do de bao loi.
......@@ -595,9 +595,10 @@
},
"_comment_weights": "Trọng số (tổng = 100). Thay đổi để tune theo bản 1.0.2.",
"score_weights": {
"color": 50,
"role": 30,
"material": 20
"color": 40,
"role": 25,
"material": 15,
"occasion": 20
},
"min_score": 35,
"version": "1.0.2",
......
This diff is collapsed.
......@@ -409,6 +409,31 @@ async def complete_memo(
raise HTTPException(status_code=400, detail=str(exc)) from exc
@router.patch("/{memo_id}/pin", summary="Pin/unpin memo", response_model=MemoResponse)
@router.post("/{memo_id}/pin", summary="Pin/unpin memo (compat)", response_model=MemoResponse)
async def pin_memo(
request: Request,
memo_id: str,
payload: dict = Body(default_factory=dict),
pinned: bool | None = Query(default=None),
memo_service=Depends(get_memo_service),
):
"""
Shortcut to pin/unpin a memo.
Supports POST/PATCH with payload {"pinned": true/false} or query ?pinned=true
"""
try:
user_id = get_current_user_id(request)
is_pinned = payload.get("pinned", True) if pinned is None else pinned
return await memo_service.update_memo(
memo_id,
MemoUpdate(pinned=is_pinned),
user_id=user_id,
)
except Exception as exc:
raise HTTPException(status_code=400, detail=str(exc)) from exc
@router.get("/{memo_id}/comments", summary="Get comments for a memo", response_model=List[MemoResponse])
async def list_memo_comments_simple(
request: Request,
......
......@@ -514,12 +514,14 @@ async def get_inbox_unread_count(
if not DISABLE_AUTH:
raise HTTPException(status_code=403, detail="You can only access your own inbox")
# Use count_documents with is_read=False filter
query = {"creator_id": user_id, "is_read": 0}
query = "SELECT COUNT(*) as count FROM inbox WHERE user_id = ? AND is_read = ?"
params = [user_id, 0]
if workspace_id:
query["workspace_id"] = workspace_id
query = "SELECT COUNT(*) as count FROM inbox WHERE user_id = ? AND is_read = ? AND message_type = ?"
params.append(workspace_id)
count = await mongodb_client.count_documents(query)
row = await mongodb_client.fetch_one(query, tuple(params))
count = row["count"] if row else 0
return {"unread_count": count}
except Exception as exc: # pragma: no cover
raise HTTPException(status_code=500, detail=str(exc)) from exc
......
import asyncio
import aiosqlite
async def check():
async with aiosqlite.connect('db/memos.db') as db:
db.row_factory = aiosqlite.Row
async with db.execute('SELECT id, email, username, password_hash FROM cuccu_users') as c:
rows = await c.fetchall()
for r in rows:
d = dict(r)
print(f"id={d['id']} email={d['email']} username={d['username']} hash={d['password_hash'][:20]}...")
asyncio.run(check())
......@@ -39,14 +39,64 @@ class InstanceService:
class AuthService:
async def sign_in(self, payload: schemas.AuthSignInRequest) -> schemas.AuthSignInResponse:
return schemas.AuthSignInResponse(accessToken="stub-token")
"""Real SQLite-backed sign in — verifies password and returns a JWT."""
from common.jwt_auth import verify_password, create_access_token
from .mongodb import mongodb_client as db
email = payload.email
password = payload.password
if not email or not password:
raise ValueError("Email and password are required")
# Try lookup by email first, then by username column
row = await db.fetch_one(
"SELECT * FROM cuccu_users WHERE email = ?", (email,)
)
if not row:
row = await db.fetch_one(
"SELECT * FROM cuccu_users WHERE username = ?", (email,)
)
if not row:
raise ValueError("Incorrect email/username or password")
if not verify_password(password, row["password_hash"]):
raise ValueError("Incorrect email/username or password")
user_id = str(row["id"])
access_token = create_access_token(data={"sub": user_id})
return schemas.AuthSignInResponse(accessToken=access_token)
async def sign_up(self, payload: schemas.AuthSignUpRequest) -> schemas.AuthSignUpResponse:
return schemas.AuthSignUpResponse(user_id="1")
"""Real SQLite-backed sign up — creates user and returns JWT."""
from datetime import datetime, timezone
from common.jwt_auth import get_password_hash, create_access_token
from .mongodb import mongodb_client as db
email = str(payload.email)
# Check duplicate email
existing = await db.fetch_one(
"SELECT id FROM cuccu_users WHERE email = ?", (email,)
)
if existing:
raise ValueError("Email already registered")
hashed = get_password_hash(payload.password)
username = email.split("@")[0]
now = datetime.now(timezone.utc).isoformat()
cursor = await db.execute(
"INSERT INTO cuccu_users (username, email, password_hash, nickname, created_at) VALUES (?, ?, ?, ?, ?)",
(username, email, hashed, username, now),
)
user_id = str(cursor.lastrowid)
access_token = create_access_token(data={"sub": user_id})
return schemas.AuthSignUpResponse(user_id=user_id)
async def get_me(self, token: str | None = None) -> schemas.AuthMeResponse:
from config import DISABLE_AUTH
from common.jwt_auth import decode_token
from .mongodb import mongodb_client as db
if DISABLE_AUTH:
logging.warning("⚠️ DISABLE_AUTH=true -> returning stub user for memo frontend")
......@@ -58,11 +108,22 @@ class AuthService:
try:
payload = decode_token(token)
if payload and "sub" in payload:
user_id = payload.get("sub")
# For JWT auth, we might not have full user profile, so return basic info
user_id = str(payload.get("sub"))
# Look up real user from DB
row = await db.fetch_one(
"SELECT id, email, username, nickname FROM cuccu_users WHERE id = ?",
(user_id,)
)
if row:
return schemas.AuthMeResponse(
id=str(row["id"]),
email=row["email"],
username=row["username"] or row["email"].split("@")[0],
)
# Fallback if user not found
return schemas.AuthMeResponse(
id=str(user_id),
email=f"user{user_id}@example.com",
id=user_id,
email=f"user{user_id}@local",
username=f"user{user_id}",
)
else:
......
......@@ -75,8 +75,23 @@ class CuCuAuthMiddleware:
path = request.url.path
method = request.method
# Temporary bypass: skip auth/rate-limit when DISABLE_AUTH=true
# Temporary bypass: populate mock auth state when DISABLE_AUTH=true
if DISABLE_AUTH:
auth_header = request.headers.get("Authorization")
mock_id = "testuser_local"
if auth_header and auth_header.startswith("Bearer "):
token = auth_header.replace("Bearer ", "")
try:
payload = decode_token(token)
if payload and "sub" in payload:
mock_id = str(payload.get("sub"))
except Exception:
if len(token) < 50:
mock_id = token
scope["state"]["user"] = {"sub": mock_id}
scope["state"]["user_id"] = mock_id
scope["state"]["is_authenticated"] = True
scope["state"]["token"] = auth_header.replace("Bearer ", "") if auth_header else ""
await self.app(scope, receive, send)
return
......
......@@ -130,7 +130,7 @@ class SQLiteClient:
row = await self.fetch_one(sql, tuple(params))
return dict(row) if row else None
async def find(self, query: dict, projection: dict | None = None) -> Any:
def find(self, query: dict, projection: dict | None = None) -> Any:
table = self._current_table
conditions = []
params = []
......@@ -253,7 +253,7 @@ class SQLiteClient:
rows = await self.fetch_all(sql)
return [row[field] for row in rows]
async def aggregate(self, pipeline: list[dict]) -> Any:
def aggregate(self, pipeline: list[dict]) -> Any:
table = self._current_table
match_q = {}
group_q = {}
......@@ -355,8 +355,21 @@ async def init_sqlite():
(TABLE_TEAM_COMMENTS, "id INTEGER PRIMARY KEY AUTOINCREMENT, memo_id TEXT, team_id INTEGER, creator_id TEXT, content TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_TEAM_REACTIONS, "id INTEGER PRIMARY KEY AUTOINCREMENT, memo_id TEXT, user_id TEXT, emoji TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, UNIQUE(memo_id, user_id, emoji)"),
(TABLE_USER_PROFILES, "user_id TEXT PRIMARY KEY, username TEXT, first_name TEXT, last_name TEXT, avatar_url TEXT, cached_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_ACTIVITIES, "id INTEGER PRIMARY KEY AUTOINCREMENT, type TEXT, creator_id TEXT, memo_id TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_CHAT_HISTORY, "id INTEGER PRIMARY KEY, identity_key TEXT, message TEXT, is_human BOOLEAN, timestamp TIMESTAMP")
(TABLE_ACTIVITIES, "id INTEGER PRIMARY KEY AUTOINCREMENT, type TEXT, creator_id TEXT, memo_id TEXT, related_memo_id TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_CHAT_HISTORY, "id INTEGER PRIMARY KEY, identity_key TEXT, message TEXT, is_human BOOLEAN, timestamp TIMESTAMP"),
(TABLE_ATTACHMENTS, "id INTEGER PRIMARY KEY AUTOINCREMENT, uid TEXT UNIQUE, memo_id TEXT, file_name TEXT, file_type TEXT, file_size INTEGER, blob_id TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_MEMO_RELATIONS, "id INTEGER PRIMARY KEY AUTOINCREMENT, memo_id TEXT, related_memo_id TEXT, type TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_REACTIONS, "id INTEGER PRIMARY KEY AUTOINCREMENT, content_id TEXT, creator_id TEXT, reaction_type TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_MEMO_EMBEDDINGS, "id INTEGER PRIMARY KEY AUTOINCREMENT, memo_id TEXT, content TEXT, tags TEXT, date_key TEXT, embedding TEXT, dim INTEGER, model TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_INBOX, "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, message TEXT, is_read BOOLEAN DEFAULT 0, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_USER_SETTINGS, "user_id TEXT PRIMARY KEY, settings_json TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_SHORTCUTS, "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, title TEXT, payload TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_NOTIFICATIONS, "id INTEGER PRIMARY KEY AUTOINCREMENT, recipient_id TEXT, sender_id TEXT, activity_id INTEGER, notification_type TEXT, status TEXT DEFAULT 'UNREAD', created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_MEMO_VERSIONS, "id INTEGER PRIMARY KEY AUTOINCREMENT, memo_id TEXT, version INTEGER, content TEXT, created_by TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_PERSONAL_ACCESS_TOKENS, "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, description TEXT, token_hash TEXT, expires_at TIMESTAMP, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_WEBHOOKS, "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, name TEXT, url TEXT, events TEXT, is_active BOOLEAN DEFAULT 1, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_DOCUMENTS, "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, title TEXT, content TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
(TABLE_REFRESH_TOKENS, "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, token TEXT UNIQUE, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, expires_at TIMESTAMP")
]
for name, schema in tables:
await db.execute(f"CREATE TABLE IF NOT EXISTS {name} ({schema})")
......
......@@ -10,6 +10,7 @@ from __future__ import annotations
import logging
import secrets
import string
import json
from datetime import datetime, timezone
from typing import Any
......@@ -338,7 +339,7 @@ class TeamService:
"team_id": team_id,
"creator_id": user_id,
"content": content,
"tags": tags or [],
"tags": json.dumps(tags or []),
"space": "draft",
"status": "draft",
"pinned": False,
......@@ -377,42 +378,46 @@ class TeamService:
profiles = await self.resolve_user_profiles(list(user_ids))
async def _get_memo_metrics(self, memo_ids: list[str], user_id: str) -> tuple[dict, dict, dict]:
"""Fetch comment counts, reaction counts, and user reactions using SQLite."""
if not memo_ids:
return {}, {}, {}
from backend.common.sqlite_client import sqlite_client
qs = ",".join("?" for _ in memo_ids)
params = tuple(memo_ids)
# 1. Comment counts
comment_counts = {}
rows = await sqlite_client.fetch_all(
f"SELECT memo_id, count(*) as cnt FROM cuccu_team_comments WHERE memo_id IN ({qs}) GROUP BY memo_id", params
)
for r in rows:
comment_counts[str(r["memo_id"])] = r["cnt"]
# 2. Reaction counts
reaction_counts = {}
r_rows = await sqlite_client.fetch_all(
f"SELECT memo_id, emoji, count(*) as cnt FROM cuccu_team_reactions WHERE memo_id IN ({qs}) GROUP BY memo_id, emoji", params
)
for r in r_rows:
mid = str(r["memo_id"])
reaction_counts.setdefault(mid, {})[r["emoji"]] = r["cnt"]
# 3. User reactions
user_reactions = {}
u_rows = await sqlite_client.fetch_all(
f"SELECT memo_id, emoji FROM cuccu_team_reactions WHERE memo_id IN ({qs}) AND user_id = ?", params + tuple([user_id])
)
for r in u_rows:
mid = str(r["memo_id"])
user_reactions.setdefault(mid, []).append(r["emoji"])
return comment_counts, reaction_counts, user_reactions
# Get comment counts + reactions in batch
memo_ids = [str(doc["id"]) for doc in docs]
# Comment counts
comment_counts = {}
if memo_ids:
pipeline = [
{"$match": {"memo_id": {"$in": memo_ids}}},
{"$group": {"_id": "$memo_id", "count": {"$sum": 1}}},
]
comment_results = await mongodb_client.team_comments.aggregate(pipeline).to_list(length=100)
for item in comment_results:
comment_counts[item["_id"]] = item["count"]
# Reaction counts
reaction_counts: dict[str, dict] = {}
reaction_pipeline = [
{"$match": {"memo_id": {"$in": memo_ids}}},
{"$group": {"_id": {"memo_id": "$memo_id", "emoji": "$emoji"}, "count": {"$sum": 1}}},
]
async for item in mongodb_client.team_reactions.aggregate(reaction_pipeline):
mid = item["_id"]["memo_id"]
emoji = item["_id"]["emoji"]
reaction_counts.setdefault(mid, {})[emoji] = item["count"]
# User's own reactions
user_reactions: dict[str, list] = {}
user_reactions_pipeline = [
{"$match": {"memo_id": {"$in": memo_ids}, "user_id": user_id}},
{"$group": {"_id": "$memo_id", "emojis": {"$push": "$emoji"}}},
]
async for item in mongodb_client.team_reactions.aggregate(user_reactions_pipeline):
user_reactions[item["_id"]] = item["emojis"]
else:
reaction_counts = {}
user_reactions = {}
comment_counts, reaction_counts, user_reactions = await self._get_memo_metrics(memo_ids, user_id)
results = []
for doc in docs:
......@@ -448,41 +453,7 @@ class TeamService:
profiles = await self.resolve_user_profiles(list(user_ids))
memo_ids = [str(doc["id"]) for doc in docs]
# Comment counts
comment_counts = {}
if memo_ids:
pipeline = [
{"$match": {"memo_id": {"$in": memo_ids}}},
{"$group": {"_id": "$memo_id", "count": {"$sum": 1}}},
]
comment_results = await mongodb_client.team_comments.aggregate(pipeline).to_list(length=100)
for item in comment_results:
comment_counts[item["_id"]] = item["count"]
# Reaction counts
reaction_counts: dict[str, dict] = {}
reaction_pipeline = [
{"$match": {"memo_id": {"$in": memo_ids}}},
{"$group": {"_id": {"memo_id": "$memo_id", "emoji": "$emoji"}, "count": {"$sum": 1}}},
]
async for item in mongodb_client.team_reactions.aggregate(reaction_pipeline):
mid = item["_id"]["memo_id"]
emoji = item["_id"]["emoji"]
reaction_counts[mid] = reaction_counts.get(mid, {})
reaction_counts[mid][emoji] = item["count"]
# User reactions
user_reactions: dict[str, list] = {}
user_reactions_pipeline = [
{"$match": {"memo_id": {"$in": memo_ids}, "user_id": user_id}},
{"$group": {"_id": "$memo_id", "emojis": {"$push": "$emoji"}}},
]
async for item in mongodb_client.team_reactions.aggregate(user_reactions_pipeline):
user_reactions[item["_id"]] = item["emojis"]
else:
reaction_counts = {}
user_reactions = {}
comment_counts, reaction_counts, user_reactions = await self._get_memo_metrics(memo_ids, user_id)
results = []
for doc in docs:
......@@ -588,7 +559,7 @@ class TeamService:
if content is not None:
update["content"] = content
if tags is not None:
update["tags"] = tags
update["tags"] = json.dumps(tags)
await mongodb_client.team_memos.update_one(
{"id": doc["id"]},
......@@ -931,10 +902,20 @@ class TeamService:
if merged_by:
merged_by_name, _ = self._get_profile(profiles, merged_by)
tags_raw = doc.get("tags", "[]")
tags_list = []
if isinstance(tags_raw, str):
try:
tags_list = json.loads(tags_raw) if tags_raw else []
except Exception:
tags_list = []
elif isinstance(tags_raw, list):
tags_list = tags_raw
return {
"id": str(doc["id"]),
"content": doc.get("content", ""),
"tags": doc.get("tags", []),
"tags": tags_list,
"creator_id": creator_id,
"creator_name": creator_name,
"creator_avatar": creator_avatar,
......
import asyncio
import os
import sys
# Ensure backend dir is in path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))
from common.sqlite_client import sqlite_client, init_sqlite, TABLE_ACTIVITIES
async def run_migration():
print("Running migration for missing tables & columns...")
# Run the core init_sqlite which will create missing tables based on the updated SQLiteClient list
await init_sqlite()
# Need to manually add the 'related_memo_id' column to cuccu_activities if it doesn't exist
db = sqlite_client.db
cursor = await db.execute(f"PRAGMA table_info({TABLE_ACTIVITIES})")
rows = await cursor.fetchall()
cols = [r[1] for r in rows]
if "related_memo_id" not in cols:
print(f"Adding 'related_memo_id' column to {TABLE_ACTIVITIES}...")
await db.execute(f"ALTER TABLE {TABLE_ACTIVITIES} ADD COLUMN related_memo_id TEXT")
await db.commit()
print("Column added successfully.")
else:
print(f"Column 'related_memo_id' already exists in {TABLE_ACTIVITIES}.")
await sqlite_client.close()
print("Migration completed.")
if __name__ == "__main__":
asyncio.run(run_migration())
import asyncio
import sqlite3
import os
import sys
# Thêm đường dẫn project vào sys.path để import
sys.path.append(os.path.abspath(os.path.dirname(__file__)))
from config import MEMO_DB_PATH
from common.sqlite_client import init_sqlite, sqlite_client
async def migrate_missing_columns():
"""
Tự động đọc cấu trúc mong đợi trong init_sqlite, so khớp với DB thật,
nếu thiếu cột thì tự động chạy ALTER TABLE ADD COLUMN.
"""
print(f"[*] Checking database at {MEMO_DB_PATH}...")
# Đảm bảo các bảng cơ bản tồn tại
await init_sqlite()
db = sqlite_client.db
# Đọc raw text từ sqlite_client.py để trích xuất schema
client_path = os.path.join(os.path.dirname(__file__), "common", "sqlite_client.py")
with open(client_path, 'r', encoding='utf-8') as f:
code = f.read()
# Regex cùi bắp để bắt đoạn tables = [ ... ]
import ast
# Parse mảng tables thủ công vì eval nguy hiểm, nhưng ta biết code nằm trong init_sqlite
# Nhanh nhất là dùng sqlite3 để lấy pragma, so cột
tables_definition = [
("cuccu_users", "id INTEGER PRIMARY KEY AUTOINCREMENT, email TEXT UNIQUE, password_hash TEXT, nickname TEXT, username TEXT UNIQUE, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_memos", "id INTEGER PRIMARY KEY AUTOINCREMENT, uid TEXT UNIQUE, creator_id TEXT, content TEXT, visibility TEXT DEFAULT 'PRIVATE', pinned BOOLEAN DEFAULT 0, row_status TEXT DEFAULT 'NORMAL', parent TEXT, payload TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, workspace_id TEXT DEFAULT 'PERSONAL', is_read BOOLEAN DEFAULT 0"),
("cuccu_teams", "id INTEGER PRIMARY KEY AUTOINCREMENT, owner_id TEXT, name TEXT, description TEXT, invite_code TEXT UNIQUE, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_team_members", "id INTEGER PRIMARY KEY AUTOINCREMENT, team_id INTEGER, user_id TEXT, role TEXT DEFAULT 'MEMBER', created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, UNIQUE(team_id, user_id)"),
("cuccu_team_memos", "id INTEGER PRIMARY KEY AUTOINCREMENT, team_id INTEGER, creator_id TEXT, space TEXT, content TEXT, visibility TEXT DEFAULT 'PRIVATE', created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_team_comments", "id INTEGER PRIMARY KEY AUTOINCREMENT, memo_id TEXT, team_id INTEGER, creator_id TEXT, content TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_team_reactions", "id INTEGER PRIMARY KEY AUTOINCREMENT, memo_id TEXT, user_id TEXT, emoji TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, UNIQUE(memo_id, user_id, emoji)"),
("cuccu_user_profiles", "user_id TEXT PRIMARY KEY, username TEXT, first_name TEXT, last_name TEXT, avatar_url TEXT, cached_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_activities", "id INTEGER PRIMARY KEY AUTOINCREMENT, type TEXT, creator_id TEXT, memo_id TEXT, related_memo_id TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_chat_history", "id INTEGER PRIMARY KEY, identity_key TEXT, message TEXT, is_human BOOLEAN, timestamp TIMESTAMP"),
("cuccu_attachments", "id INTEGER PRIMARY KEY AUTOINCREMENT, uid TEXT UNIQUE, memo_id TEXT, file_name TEXT, file_type TEXT, file_size INTEGER, blob_id TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_memo_relations", "id INTEGER PRIMARY KEY AUTOINCREMENT, memo_id TEXT, related_memo_id TEXT, type TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_reactions", "id INTEGER PRIMARY KEY AUTOINCREMENT, content_id TEXT, creator_id TEXT, reaction_type TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_memo_embeddings", "id INTEGER PRIMARY KEY AUTOINCREMENT, memo_id TEXT, content TEXT, tags TEXT, date_key TEXT, embedding TEXT, dim INTEGER, model TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_inbox", "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, message TEXT, is_read BOOLEAN DEFAULT 0, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_user_settings", "user_id TEXT PRIMARY KEY, settings_json TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_shortcuts", "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, title TEXT, payload TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_notifications", "id INTEGER PRIMARY KEY AUTOINCREMENT, recipient_id TEXT, sender_id TEXT, activity_id INTEGER, notification_type TEXT, status TEXT DEFAULT 'UNREAD', created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_memo_versions", "id INTEGER PRIMARY KEY AUTOINCREMENT, memo_id TEXT, version INTEGER, content TEXT, created_by TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_personal_access_tokens", "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, description TEXT, token_hash TEXT, expires_at TIMESTAMP, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_webhooks", "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, name TEXT, url TEXT, events TEXT, is_active BOOLEAN DEFAULT 1, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_documents", "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, title TEXT, content TEXT, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP"),
("cuccu_refresh_tokens", "id INTEGER PRIMARY KEY AUTOINCREMENT, user_id TEXT, token TEXT UNIQUE, created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP, expires_at TIMESTAMP")
]
for table_name, schema in tables_definition:
expected_cols = []
for col_def in schema.split(","):
col_def = col_def.strip()
if col_def.upper().startswith("UNIQUE"):
continue
col_name = col_def.split(" ")[0]
expected_cols.append((col_name, col_def))
cursor = await db.execute(f"PRAGMA table_info({table_name})")
rows = await cursor.fetchall()
existing_cols = [r[1] for r in rows]
for col_name, full_def in expected_cols:
if col_name not in existing_cols:
# Add missing column
print(f"[+] Missing column '{col_name}' in table '{table_name}'. Migrating...")
try:
await db.execute(f"ALTER TABLE {table_name} ADD COLUMN {full_def}")
print(f" -> Added {col_name} to {table_name}")
except Exception as e:
print(f" -> Error adding column: {e}")
await db.commit()
print("[*] Migration completed!")
await sqlite_client.close()
if __name__ == "__main__":
asyncio.run(migrate_missing_columns())
import asyncio
from common.sqlite_client import init_sqlite, close_sqlite, sqlite_client
async def migrate():
await init_sqlite()
try:
queries = [
"ALTER TABLE cuccu_team_memos ADD COLUMN status TEXT DEFAULT 'draft'",
"ALTER TABLE cuccu_team_memos ADD COLUMN pinned INTEGER DEFAULT 0",
"ALTER TABLE cuccu_team_memos ADD COLUMN merged_at TIMESTAMP",
"ALTER TABLE cuccu_team_memos ADD COLUMN merged_by TEXT",
"ALTER TABLE cuccu_team_memos ADD COLUMN review_reason TEXT"
]
for query in queries:
try:
await sqlite_client.execute(query)
print("Executed:", query)
except Exception as e:
print("Failed to execute", query, e)
finally:
await close_sqlite()
if __name__ == "__main__":
asyncio.run(migrate())
import asyncio
from common.sqlite_client import init_sqlite, close_sqlite, sqlite_client
async def migrate():
await init_sqlite()
try:
# Add tags column to team_memos
await sqlite_client.execute("ALTER TABLE cuccu_team_memos ADD COLUMN tags TEXT DEFAULT '[]'")
print("✅ Added tags column to cuccu_team_memos")
except Exception as e:
print("Error during migration:", e)
finally:
await close_sqlite()
if __name__ == "__main__":
asyncio.run(migrate())
"""E2E API Test Script - Test all CuCu Note endpoints after auth fix"""
import asyncio
import sys
import json
import urllib.request
import urllib.error
BASE = "http://localhost:5000"
def request(method, path, body=None, token=None):
url = BASE + path
data = json.dumps(body).encode() if body else None
headers = {"Content-Type": "application/json"}
if token:
headers["Authorization"] = f"Bearer {token}"
req = urllib.request.Request(url, data=data, headers=headers, method=method)
try:
with urllib.request.urlopen(req, timeout=10) as resp:
raw = resp.read()
return resp.status, json.loads(raw) if raw.strip() else {}
except urllib.error.HTTPError as e:
raw = e.read()
try:
return e.code, json.loads(raw) if raw.strip() else {"http_error": e.code}
except Exception:
return e.code, {"raw": raw.decode(errors='replace')[:200]}
except Exception as ex:
return 0, {"error": str(ex)}
def ok(code):
return "PASS" if code in (200, 201) else "FAIL"
results = []
print("=" * 60)
print("CuCu Note API E2E Test Suite")
print("=" * 60)
# =============================================
# 1. AUTH - Sign In
# =============================================
print("\n[1] AUTH - Sign In")
code, data = request("POST", "/api/v1/auth/signin", {"email": "test@example.com", "password": "Password123!"})
token = data.get("accessToken", "")
print(f" POST /auth/signin -> {code} {ok(code)} | token={token[:30] if token else 'NONE'}...")
results.append(("Auth: Sign In", ok(code)))
# =============================================
# 2. AUTH - Get Me (Profile)
# =============================================
print("\n[2] AUTH - Get Me")
code, data = request("GET", "/api/v1/auth/me", token=token)
user_id = data.get("id", "")
print(f" GET /auth/me -> {code} {ok(code)} | user_id={user_id} email={data.get('email')}")
results.append(("Auth: Get Me / Profile", ok(code)))
# =============================================
# 3. MEMOS - Create
# =============================================
print("\n[3] MEMOS - Create new memo")
code, data = request("POST", "/api/v1/memos", {"content": "Hello from E2E test! #test", "visibility": "PRIVATE"}, token=token)
memo_uid = data.get("uid") or data.get("id", "")
memo_id = data.get("id", "")
print(f" POST /memos -> {code} {ok(code)} | memo_id={memo_id} uid={memo_uid}")
results.append(("Memos: Create Memo", ok(code)))
# =============================================
# 4. MEMOS - List
# =============================================
print("\n[4] MEMOS - List memos")
code, data = request("GET", "/api/v1/memos", token=token)
memo_count = len(data) if isinstance(data, list) else data.get("total", "?")
print(f" GET /memos -> {code} {ok(code)} | count={memo_count}")
results.append(("Memos: List Memos", ok(code)))
# =============================================
# 5. MEMOS - Edit
# =============================================
print("\n[5] MEMOS - Edit memo")
if memo_id:
code, data = request("PATCH", f"/api/v1/memos/{memo_id}", {"content": "Updated by E2E test! #test #edited"}, token=token)
print(f" PATCH /memos/{memo_id} -> {code} {ok(code)}")
results.append(("Memos: Edit Memo", ok(code)))
else:
print(" SKIP (no memo_id)")
results.append(("Memos: Edit Memo", "SKIP"))
# =============================================
# 6. MEMOS - Change Visibility
# =============================================
print("\n[6] MEMOS - Change Visibility to PUBLIC")
if memo_id:
code, data = request("PATCH", f"/api/v1/memos/{memo_id}", {"visibility": "PUBLIC"}, token=token)
print(f" PATCH /memos/{memo_id} visibility=PUBLIC -> {code} {ok(code)}")
results.append(("Memos: Change Visibility", ok(code)))
else:
results.append(("Memos: Change Visibility", "SKIP"))
# =============================================
# 7. MEMOS - Pin
# =============================================
print("\n[7] MEMOS - Pin memo")
if memo_id:
code, data = request("PATCH", f"/api/v1/memos/{memo_id}/pin", None, token=token)
if code == 0:
code, data = request("POST", f"/api/v1/memos/{memo_id}/pin", {"pinned": True}, token=token)
print(f" PIN /memos/{memo_id}/pin -> {code} {ok(code)}")
results.append(("Memos: Pin Memo", ok(code)))
else:
results.append(("Memos: Pin Memo", "SKIP"))
# =============================================
# 8. REACTIONS - Add emoji reaction
# =============================================
print("\n[8] REACTIONS - Add emoji reaction")
if memo_id:
code, data = request("POST", f"/api/v1/memos/{memo_id}/reactions", {"reaction_type": "THUMBS_UP"}, token=token)
print(f" POST /memos/{memo_id}/reactions -> {code} {ok(code)} | {data}")
results.append(("Reactions: Add Emoji", ok(code)))
else:
results.append(("Reactions: Add Emoji", "SKIP"))
# =============================================
# 9. COMMENTS - Post a comment
# =============================================
print("\n[9] COMMENTS - Post a comment on memo")
if memo_uid:
code, data = request("POST", "/api/v1/memos", {"content": "This is a test comment!", "parent": str(memo_id)}, token=token)
print(f" POST /memos (comment) -> {code} {ok(code)} | {data.get('id')}")
results.append(("Comments: Post Comment", ok(code)))
else:
results.append(("Comments: Post Comment", "SKIP"))
# =============================================
# 10. TEAMS - List teams
# =============================================
print("\n[10] TEAMS - List teams")
code, data = request("GET", "/api/v1/teams", token=token)
print(f" GET /teams -> {code} {ok(code)} | {data}")
results.append(("Teams: List Teams", ok(code)))
# =============================================
# 11. TEAMS - Create team
# =============================================
print("\n[11] TEAMS - Create a team")
code, data = request("POST", "/api/v1/teams", {"name": "E2E Test Team", "description": "Auto-created by E2E test"}, token=token)
team_id = data.get("id")
print(f" POST /teams -> {code} {ok(code)} | team_id={team_id}")
results.append(("Teams: Create Team", ok(code)))
# =============================================
# 12. INBOX - List notifications
# =============================================
print("\n[12] INBOX - List notifications")
if user_id:
code, data = request("GET", f"/api/v1/users/{user_id}/inbox/unread_count", token=token)
print(f" GET /inbox -> {code} {ok(code)} | {data}")
results.append(("Inbox: List Notifications", ok(code)))
else:
results.append(("Inbox: List Notifications", "SKIP"))
# =============================================
# 13. INSTANCE - Get instance info
# =============================================
print("\n[13] INSTANCE - Get instance info")
code, data = request("GET", "/api/v1/instance")
print(f" GET /instance -> {code} {ok(code)} | {data}")
results.append(("Instance: Get Info", ok(code)))
# =============================================
# SUMMARY
# =============================================
print("\n" + "=" * 60)
print("TEST RESULTS SUMMARY")
print("=" * 60)
passed = sum(1 for _, r in results if r == "PASS")
failed = sum(1 for _, r in results if r == "FAIL")
skipped = sum(1 for _, r in results if r == "SKIP")
for name, result in results:
icon = "[+]" if result == "PASS" else "[-]" if result == "FAIL" else "[~]"
print(f" {icon} {name}: {result}")
print(f"\nTotal: {passed} PASS / {failed} FAIL / {skipped} SKIP")
B============================================================
"""Fix: Update NULL password_hash for test user."""
import asyncio, sys
sys.path.insert(0, '.')
import aiosqlite
from common.jwt_auth import get_password_hash, verify_password
async def fix():
async with aiosqlite.connect('db/memos.db') as db:
db.row_factory = aiosqlite.Row
async with db.execute('SELECT id, email, username, password_hash FROM cuccu_users') as c:
rows = await c.fetchall()
print("=== USERS IN DB ===")
for r in rows:
d = dict(r)
ph = d['password_hash']
print(f"id={d['id']} email={d['email']} username={d['username']} hash={'NULL' if ph is None else ph[:25]}")
new_hash = get_password_hash("Password123!")
await db.execute("UPDATE cuccu_users SET password_hash = ? WHERE email = ?", (new_hash, "test@example.com"))
await db.commit()
print("Updated password_hash for test@example.com")
async with db.execute('SELECT id, password_hash FROM cuccu_users WHERE email = ?', ("test@example.com",)) as c:
row = await c.fetchone()
if row:
ok = verify_password("Password123!", row['password_hash'])
print(f"verify_password OK: {ok}")
asyncio.run(fix())
......@@ -5,6 +5,7 @@ import { MemoFilterProvider } from "./contexts/MemoFilterContext";
import { useUserLocale } from "./hooks/useUserLocale";
import { useUserTheme } from "./hooks/useUserTheme";
import { cleanupExpiredOAuthState } from "./utils/oauth";
import { WorkspaceProvider } from "./contexts/WorkspaceContext";
const App = () => {
const { generalSetting: instanceGeneralSetting } = useInstance();
......@@ -49,9 +50,11 @@ const App = () => {
}, [instanceGeneralSetting.customProfile]);
return (
<MemoFilterProvider>
<Outlet />
</MemoFilterProvider>
<WorkspaceProvider>
<MemoFilterProvider>
<Outlet />
</MemoFilterProvider>
</WorkspaceProvider>
);
};
......
......@@ -159,8 +159,10 @@ export function useInboxUnreadCount() {
return 0;
}
// Call the new endpoint: GET /users/{user_id}/inbox/unread_count
const response = await fetch(`/api/v1/users/${currentUser.name}/inbox/unread_count?workspace_id=${workspaceId}`);
const userId = currentUser.name.replace('users/', '');
const response = await fetch(`/api/v1/users/${userId}/inbox/unread_count?workspace_id=${workspaceId}`);
if (!response.ok) {
if (response.status === 404) return 0;
throw new Error(`Failed to fetch unread count: ${response.statusText}`);
}
const data = await response.json();
......
......@@ -4,7 +4,7 @@ import { defineConfig } from "vite";
import tailwindcss from "@tailwindcss/vite";
// Get dev proxy server from environment variable
const devProxyServer = process.env.DEV_PROXY_SERVER || process.env.VITE_API_BASE_URL || "http://localhost:8000";
const devProxyServer = process.env.DEV_PROXY_SERVER || process.env.VITE_API_BASE_URL || "http://127.0.0.1:8000";
if (process.env.DEV_PROXY_SERVER || process.env.VITE_API_BASE_URL) {
console.log("Using devProxyServer from environment: ", devProxyServer);
}
......
Markdown is supported
0% or
You are about to add 0 people to the discussion. Proceed with caution.
Finish editing this message first!
Please register or to comment