
(feat) init project

Brin AMONKONAN 1 month ago
Commit
c88d399948

+ 103 - 0
README.md

@@ -0,0 +1,103 @@
+# labML – Hybrid Next.js + NestJS + FastAPI + Jupyter stack (Docker Compose)
+
+This project sets up a modern, modular architecture for ML and the web:
+
+- Frontend: Next.js (TypeScript)
+- Application backend: NestJS (TypeScript)
+- ML backend: FastAPI (Python)
+- Notebooks: JupyterLab
+- Orchestration: Docker Compose
+
+Planned chain: Notebook → API (FastAPI) → API (NestJS) → Web UI (Next.js)
+
+## Structure
+
+```
+labML/
+├─ web/                # Next.js frontend (TS)
+├─ app-api/            # NestJS backend (TS)
+├─ ml-api/             # FastAPI backend (Python)
+│  ├─ app/
+│  │  ├─ main.py       # FastAPI app (predict/train)
+│  │  └─ train_iris.py # Training script (Iris)
+│  └─ environment.yml  # conda/mamba environment
+├─ notebooks/          # Jupyter notebooks
+├─ shared/
+│  └─ models/          # Model artifacts (joblib, etc.)
+├─ docker-compose.yml  # Service orchestration
+└─ README.md
+```
+
+## Prerequisites
+
+- Docker Desktop + Docker Compose
+- Node.js (to work outside Docker if needed)
+
+## Quick start
+
+1. Generate the Next.js (web) and NestJS (app-api) applications if they don't exist yet:
+   - Next: `npx create-next-app@latest web --ts --eslint --src-dir --app --import-alias "@/*" --yes`
+   - Nest: `npx @nestjs/cli@latest new app-api -p npm`
+
+2. Start all services (web and app-api sit behind the `dev` profile in `docker-compose.yml`):
+
+```
+docker compose --profile dev up
+```
+
+This will start:
+- Next.js (web) at http://localhost:3000
+- NestJS (app-api) at http://localhost:4000
+- FastAPI (ml-api) at http://localhost:8000
+- JupyterLab at http://localhost:8888 (token disabled for dev)
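+
+Once the containers are up, a quick sanity check is the ML API's `/health` endpoint (defined in `ml-api/app/main.py`). A minimal sketch using only the Python standard library:
+
+```python
+import json
+import urllib.request
+
+# Ping the FastAPI service exposed on port 8000 by docker compose
+with urllib.request.urlopen("http://localhost:8000/health") as resp:
+    print(json.loads(resp.read()))  # expected: {"status": "ok"}
+```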
+
+3. Train a demo model (Iris) from a notebook or via the API:
+   - Notebook: open JupyterLab and run a notebook that calls `ml-api/app/train_iris.py` or reuses its contents.
+   - API: `POST http://localhost:8000/train` (see the request sketch below)
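+
+A minimal sketch of triggering training over HTTP with the standard library (the `/train` route is declared in `ml-api/app/main.py`):
+
+```python
+import json
+import urllib.request
+
+# POST with an empty body starts a training run on the ml-api service
+req = urllib.request.Request("http://localhost:8000/train", data=b"", method="POST")
+with urllib.request.urlopen(req) as resp:
+    print(json.loads(resp.read()))  # e.g. {"status": "ok", "model_path": "..."}
+```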
+
+4. Test prediction:
+   - `POST http://localhost:8000/predict` with the payload (a request sketch follows after this list):
+     ```json
+     {
+       "sepal_length": 5.1,
+       "sepal_width": 3.5,
+       "petal_length": 1.4,
+       "petal_width": 0.2
+     }
+     ```
+   - The frontend (web) will provide a demo page to enter these values and display the prediction, going through NestJS.
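+
+A minimal request sketch for the call above, using only the Python standard library. It assumes the JSON-based `/predict` described here; note that the `/predict` route committed in `ml-api/app/main.py` currently expects an audio file upload instead:
+
+```python
+import json
+import urllib.request
+
+payload = {
+    "sepal_length": 5.1,
+    "sepal_width": 3.5,
+    "petal_length": 1.4,
+    "petal_width": 0.2,
+}
+req = urllib.request.Request(
+    "http://localhost:8000/predict",
+    data=json.dumps(payload).encode("utf-8"),
+    headers={"Content-Type": "application/json"},
+    method="POST",
+)
+with urllib.request.urlopen(req) as resp:
+    print(json.loads(resp.read()))
+```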
+
+## Included Python packages
+
+- jupyterlab, ipykernel
+- matplotlib, seaborn
+- pandas, numpy
+- scikit-learn
+- optuna, mlflow
+- PyTorch (cpuonly), with the option of adding TensorFlow/Keras
+- fastapi, uvicorn, pydantic, joblib
+- streamlit, gradio (quick test interfaces)
+- black, flake8, isort, pytest, pre-commit
+
+Note: the mamba install inside the containers can take a while the first time.
+
+## Quality and local CI
+
+- pre-commit configured (black/flake8/isort for Python; ESLint/Prettier can be added on the TypeScript side)
+- pytest for Python tests
+
+## Environment variables
+
+- Frontend (web): `APP_API_BASE_URL=http://localhost:4000`
+- Backend (app-api): `ML_API_BASE_URL=http://ml-api:8000`
+
+## Development
+
+- The Node services (web, app-api) are mounted as volumes (hot reload via `npm run dev`/`start:dev`).
+- The Python services use mambaforge and install their dependencies at startup (this can be moved into a dedicated Docker image to speed things up).
+
+## Next steps
+
+- Add the Next.js demo page and the NestJS `/predict` endpoint that proxies to FastAPI
+- Tighten security (CORS, Jupyter tokens, etc.) for production
+- Integrate MLflow tracking and Optuna into the training script (see the sketch below)
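+
+As a starting point for the last item, here is a minimal, hypothetical sketch of combining Optuna and MLflow around the Iris logistic regression from `ml-api/app/train_iris.py` (the `objective` function and the trial count are illustrative, not part of the committed code):
+
+```python
+import mlflow
+import optuna
+from sklearn.datasets import load_iris
+from sklearn.linear_model import LogisticRegression
+from sklearn.model_selection import train_test_split
+
+X, y = load_iris(return_X_y=True)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
+
+def objective(trial: optuna.Trial) -> float:
+    # Tune the inverse regularization strength of the logistic regression
+    C = trial.suggest_float("C", 1e-3, 1e2, log=True)
+    clf = LogisticRegression(C=C, max_iter=1000)
+    clf.fit(X_train, y_train)
+    acc = clf.score(X_test, y_test)
+    # Log each trial as an MLflow run (written to ./mlruns by default)
+    with mlflow.start_run(run_name="iris_logreg"):
+        mlflow.log_param("C", C)
+        mlflow.log_metric("accuracy", acc)
+    return acc
+
+study = optuna.create_study(direction="maximize")
+study.optimize(objective, n_trials=20)
+print(study.best_params, study.best_value)
+```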

+ 65 - 0
docker-compose.yml

@@ -0,0 +1,65 @@
+version: "3.9"
+services:
+  web:
+    image: node:18
+    working_dir: /usr/src/app
+    volumes:
+      - ./web:/usr/src/app
+    ports:
+      - "3000:3000"
+    command: bash -lc "npm install && npm run dev"
+    profiles: [dev]
+
+  app-api:
+    image: node:18
+    working_dir: /usr/src/app
+    environment:
+      - ML_API_BASE_URL=http://ml-api:8000
+      - WEB_ORIGIN=http://localhost:3000
+    ports:
+      - "4000:4000"
+    volumes:
+      - ./app-api:/usr/src/app
+    command: bash -lc "npm install && npm run start:dev"
+    profiles: [dev]
+
+  ml-api:
+    image: condaforge/mambaforge
+    working_dir: /work
+    environment:
+      - TARGET_LANG_CODE=lin_Latn
+    volumes:
+      - ./ml-api:/work
+      - ./shared:/work/models
+    ports:
+      - "8000:8000"
+    command: >
+      bash -lc "
+      mamba env update -n base -f environment.yml || true &&
+      mamba install -y -c conda-forge ffmpeg libsndfile &&
+      pip install fastapi uvicorn[standard] python-multipart transformers sentencepiece accelerate langdetect openai-whisper TTS huggingface_hub &&
+      uvicorn app.main:app --host 0.0.0.0 --port 8000
+      "
+
+  jupyter:
+    image: jupyter/base-notebook:latest
+    working_dir: /home/jovyan
+    volumes:
+      - ./notebooks:/home/jovyan/work
+    ports:
+      - "8888:8888"
+    command: start-notebook.sh --NotebookApp.token=''
+
+  redis:
+    image: redis:7
+    ports:
+      - "6379:6379"
+
+  postgres:
+    image: postgres:15
+    environment:
+      - POSTGRES_USER=ml
+      - POSTGRES_PASSWORD=ml
+      - POSTGRES_DB=ml
+    ports:
+      - "5432:5432"

+ 33 - 0
ml-api/Untitled.ipynb

@@ -0,0 +1,33 @@
+{
+ "cells": [
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "80cf82f2-4fd7-4773-815b-a363165f06b4",
+   "metadata": {},
+   "outputs": [],
+   "source": []
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3 (ipykernel)",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.12.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}

+ 16 - 0
ml-api/app/detect_lang.py

@@ -0,0 +1,16 @@
+from typing import Optional
+from langdetect import detect
+
+
+def detect_lang(text: str, whisper_lang: Optional[str] = None) -> str:
+    """
+    Detect the language of the text. Use the Whisper result first if provided, otherwise fall back to langdetect.
+    Return a simple code ('fr' for French, otherwise an approximate ISO code).
+    """
+    if whisper_lang:
+        return whisper_lang
+    try:
+        code = detect(text)
+        return code
+    except Exception:
+        return "unknown"

+ 55 - 0
ml-api/app/main.py

@@ -0,0 +1,55 @@
+from fastapi import FastAPI, UploadFile, File
+from fastapi.responses import JSONResponse
+from fastapi.middleware.cors import CORSMiddleware
+from starlette.staticfiles import StaticFiles
+from pathlib import Path
+
+from .translate_pipeline import process_audio_file, AUDIO_DIR
+from .train_translate import train_from_local_dataset
+
+
+app = FastAPI(title="ML API - Speech-to-Speech")
+
+# CORS (can be adjusted via the WEB_ORIGIN env variable if needed)
+app.add_middleware(
+    CORSMiddleware,
+    allow_origins=["*"],
+    allow_credentials=True,
+    allow_methods=["*"],
+    allow_headers=["*"],
+)
+
+# Static files for generated audio
+AUDIO_DIR.mkdir(parents=True, exist_ok=True)
+app.mount("/audio", StaticFiles(directory=str(AUDIO_DIR)), name="audio")
+
+
+@app.get("/health")
+def health():
+    return {"status": "ok"}
+
+
+@app.post("/translate")
+async def translate(file: UploadFile = File(...)):
+    data = await file.read()
+    result = process_audio_file(data, file.filename)
+    return JSONResponse(result)
+
+
+@app.post("/train")
+def train():
+    out_dir = train_from_local_dataset()
+    return {"status": "ok", "model_path": out_dir}
+
+
+@app.post("/predict")
+async def predict_compat(file: UploadFile = File(None)):
+    """
+    Compatibility endpoint: if an audio file is received, it is handled like /translate.
+    Otherwise, a guidance message is returned.
+    """
+    if file is not None:
+        data = await file.read()
+        result = process_audio_file(data, file.filename)
+        return JSONResponse(result)
+    return JSONResponse({"detail": "Use /translate for speech translation."}, status_code=400)

+ 49 - 0
ml-api/app/stt.py

@@ -0,0 +1,49 @@
+from pathlib import Path
+from typing import Tuple
+import tempfile
+import subprocess
+
+import whisper
+
+
+_WHISPER_MODEL = None
+
+
+def _get_model(name: str = "base"):
+    global _WHISPER_MODEL
+    if _WHISPER_MODEL is None:
+        _WHISPER_MODEL = whisper.load_model(name)
+    return _WHISPER_MODEL
+
+
+def ensure_wav(input_path: Path) -> Path:
+    """Convert the audio to WAV if needed, using ffmpeg."""
+    if input_path.suffix.lower() == ".wav":
+        return input_path
+    out = Path(tempfile.mkstemp(suffix=".wav")[1])
+    cmd = [
+        "ffmpeg",
+        "-i",
+        str(input_path),
+        "-ar",
+        "16000",
+        "-ac",
+        "1",
+        str(out),
+        "-y",
+    ]
+    subprocess.run(cmd, check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    return out
+
+
+def transcribe(audio_path: Path) -> Tuple[str, str]:
+    """
+    Transcribe the audio file with Whisper.
+    Return (text, language_detected_by_whisper).
+    """
+    model = _get_model()
+    wav_path = ensure_wav(audio_path)
+    result = model.transcribe(str(wav_path))
+    text = result.get("text", "").strip()
+    lang = result.get("language", "")  # e.g., 'fr'
+    return text, lang

+ 32 - 0
ml-api/app/train_iris.py

@@ -0,0 +1,32 @@
+"""Iris training script usable outside the API, from a notebook or the console.
+Saves the model to /work/models/iris_model.pkl.
+"""
+
+from pathlib import Path
+import joblib
+from sklearn.datasets import load_iris
+from sklearn.model_selection import train_test_split
+from sklearn.linear_model import LogisticRegression
+
+MODELS_DIR = Path("/work/models")
+MODEL_PATH = MODELS_DIR / "iris_model.pkl"
+
+
+def main():
+    iris = load_iris()
+    X = iris.data
+    y = iris.target
+
+    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
+
+    clf = LogisticRegression(max_iter=1000)
+    clf.fit(X_train, y_train)
+    acc = clf.score(X_test, y_test)
+
+    MODELS_DIR.mkdir(parents=True, exist_ok=True)
+    joblib.dump(clf, MODEL_PATH)
+    print({"status": "trained", "accuracy": float(acc), "model_path": str(MODEL_PATH)})
+
+
+if __name__ == "__main__":
+    main()

+ 26 - 0
ml-api/app/train_translate.py

@@ -0,0 +1,26 @@
+import os
+from pathlib import Path
+
+from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+
+DATASET_PATH = Path("/work/models/data/dataset.json")
+OUTPUT_DIR = Path("/work/models/nllb-custom")
+BASE_MODEL = "facebook/nllb-200-distilled-600M"
+
+
+def train_from_local_dataset() -> str:
+    """
+    NLLB fine-tuning placeholder. For a fast, CPU-only setup, this simply prepares
+    the custom directory with the base tokenizer and model so the pipeline can be
+    tested end to end. In a GPU environment, replace this with real training
+    (Trainer, datasets, etc.).
+    """
+    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
+
+    tok = AutoTokenizer.from_pretrained(BASE_MODEL)
+    mdl = AutoModelForSeq2SeqLM.from_pretrained(BASE_MODEL)
+
+    tok.save_pretrained(OUTPUT_DIR)
+    mdl.save_pretrained(OUTPUT_DIR)
+    return str(OUTPUT_DIR)

+ 47 - 0
ml-api/app/translate.py

@@ -0,0 +1,47 @@
+from functools import lru_cache
+from typing import Optional
+
+from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+
+BASE_MODEL_NAME = "facebook/nllb-200-distilled-600M"
+CUSTOM_MODEL_DIR = "/work/models/nllb-custom"
+
+
+@lru_cache(maxsize=1)
+def get_tokenizer(model_name_or_path: Optional[str] = None):
+    name = model_name_or_path or BASE_MODEL_NAME
+    return AutoTokenizer.from_pretrained(name)
+
+
+@lru_cache(maxsize=1)
+def get_model(model_name_or_path: Optional[str] = None):
+    name = model_name_or_path or BASE_MODEL_NAME
+    return AutoModelForSeq2SeqLM.from_pretrained(name)
+
+
+def load_custom_or_base():
+    """Load the custom model if it exists, otherwise the base model."""
+    import os
+
+    if os.path.isdir(CUSTOM_MODEL_DIR):
+        tok = get_tokenizer(CUSTOM_MODEL_DIR)
+        mdl = get_model(CUSTOM_MODEL_DIR)
+    else:
+        tok = get_tokenizer(BASE_MODEL_NAME)
+        mdl = get_model(BASE_MODEL_NAME)
+    return tok, mdl
+
+
+def translate_text(text: str, src_lang: str, tgt_lang: str) -> str:
+    """
+    Translate text from src_lang to tgt_lang with NLLB.
+    NLLB language codes look like 'fra_Latn', 'lin_Latn', etc.
+    """
+    tok, mdl = load_custom_or_base()
+    # NLLB tokenizers expect the source language to be set before encoding, and the
+    # target language to be forced at generation time via its language token.
+    tok.src_lang = src_lang
+    inputs = tok(text, return_tensors="pt")
+    generated_tokens = mdl.generate(
+        **inputs,
+        forced_bos_token_id=tok.convert_tokens_to_ids(tgt_lang),
+        max_new_tokens=200,
+    )
+    out = tok.batch_decode(generated_tokens, skip_special_tokens=True)
+    return out[0] if out else ""

+ 54 - 0
ml-api/app/translate_pipeline.py

@@ -0,0 +1,54 @@
+from pathlib import Path
+from typing import Dict
+import os
+import tempfile
+
+from .stt import transcribe
+from .detect_lang import detect_lang
+from .translate import translate_text
+from .tts import synthesize
+
+
+AUDIO_DIR = Path("/work/models/audio")
+
+
+def map_to_nllb(code: str) -> str:
+    """
+    Map a code like 'fr' to the NLLB code 'fra_Latn'.
+    For the target language, TARGET_LANG_CODE is read from the environment.
+    """
+    if code.startswith("fr"):
+        return "fra_Latn"
+    target = os.getenv("TARGET_LANG_CODE", "lin_Latn")
+    return target
+
+
+def process_audio_file(file_bytes: bytes, filename: str) -> Dict:
+    # Temporary save of the uploaded bytes
+    tmp_path = Path(tempfile.mkstemp(suffix=os.path.splitext(filename)[1] or ".wav")[1])
+    with open(tmp_path, "wb") as f:
+        f.write(file_bytes)
+
+    # STT
+    text, whisper_lang = transcribe(tmp_path)
+
+    # Language detection
+    code = detect_lang(text, whisper_lang)
+
+    # Determine the translation direction
+    src_nllb = map_to_nllb(code)
+    tgt_nllb = "fra_Latn" if src_nllb != "fra_Latn" else os.getenv("TARGET_LANG_CODE", "lin_Latn")
+
+    # Translation
+    translated = translate_text(text, src_nllb, tgt_nllb)
+
+    # TTS
+    tts_lang = "fr" if tgt_nllb == "fra_Latn" else "xx"  # 'xx' = generic code
+    out_path = synthesize(translated, tts_lang, AUDIO_DIR)
+
+    return {
+        "source_text": text,
+        "detected_lang": src_nllb,
+        "translated_text": translated,
+        "audio_url": f"/audio/{out_path.name}",
+    }

+ 35 - 0
ml-api/app/tts.py

@@ -0,0 +1,35 @@
+from pathlib import Path
+from typing import Optional
+import shutil
+import tempfile
+
+from TTS.api import TTS
+
+
+_TTS_CACHE = {}
+
+
+def _get_tts_model(lang_code: str) -> TTS:
+    """
+    Return a TTS model depending on the language.
+    - French: French VITS model
+    - Otherwise: generic multilingual model
+    """
+    if lang_code not in _TTS_CACHE:
+        if lang_code.startswith("fr"):
+            model_name = "tts_models/fr/css10/vits"
+        else:
+            # Multilingual by default
+            model_name = "tts_models/multilingual/multi-dataset/your_tts"
+        _TTS_CACHE[lang_code] = TTS(model_name=model_name)
+    return _TTS_CACHE[lang_code]
+
+
+def synthesize(text: str, lang_code: str, out_dir: Path) -> Path:
+    out_dir.mkdir(parents=True, exist_ok=True)
+    tmp = Path(tempfile.mkstemp(suffix=".wav")[1])
+    tts = _get_tts_model(lang_code)
+    tts.tts_to_file(text=text, file_path=str(tmp))
+    # Move to a stable name in out_dir (shutil.move works across filesystems,
+    # unlike Path.rename, since the temp file may live on a different mount)
+    target = out_dir / f"output_{tmp.stem}.wav"
+    shutil.move(str(tmp), str(target))
+    return target

+ 25 - 0
ml-api/environment.yml

@@ -0,0 +1,25 @@
+name: ml-api
+channels:
+  - conda-forge
+  - pytorch
+dependencies:
+  - python=3.10
+  - pip
+  - ffmpeg
+  - libsndfile
+  - pytorch::pytorch
+  - pytorch::torchvision
+  - pytorch::torchaudio
+  - cpuonly
+  - numpy
+  - librosa
+  - pip:
+      - fastapi
+      - uvicorn[standard]
+      - transformers
+      - sentencepiece
+      - accelerate
+      - langdetect
+      - openai-whisper
+      - TTS
+      - huggingface_hub

+ 14 - 0
ml-api/mlruns/0/1cfebc88bc8142aa88bf5c26ce196f61/meta.yaml

@@ -0,0 +1,14 @@
+artifact_uri: file:///work/mlruns/0/1cfebc88bc8142aa88bf5c26ce196f61/artifacts
+end_time: 1760613866781
+entry_point_name: ''
+experiment_id: '0'
+lifecycle_stage: active
+run_id: 1cfebc88bc8142aa88bf5c26ce196f61
+run_name: iris_logreg
+source_name: ''
+source_type: 4
+source_version: ''
+start_time: 1760613866127
+status: 3
+tags: []
+user_id: root

+ 1 - 0
ml-api/mlruns/0/1cfebc88bc8142aa88bf5c26ce196f61/metrics/accuracy

@@ -0,0 +1 @@
+1760613866646 1.0 0

+ 1 - 0
ml-api/mlruns/0/1cfebc88bc8142aa88bf5c26ce196f61/tags/mlflow.runName

@@ -0,0 +1 @@
+iris_logreg

+ 1 - 0
ml-api/mlruns/0/1cfebc88bc8142aa88bf5c26ce196f61/tags/mlflow.source.name

@@ -0,0 +1 @@
+/opt/conda/bin/uvicorn

+ 1 - 0
ml-api/mlruns/0/1cfebc88bc8142aa88bf5c26ce196f61/tags/mlflow.source.type

@@ -0,0 +1 @@
+LOCAL

+ 1 - 0
ml-api/mlruns/0/1cfebc88bc8142aa88bf5c26ce196f61/tags/mlflow.user

@@ -0,0 +1 @@
+root

+ 14 - 0
ml-api/mlruns/0/efce7f2174214cf582a6f964129ec345/meta.yaml

@@ -0,0 +1,14 @@
+artifact_uri: file:///work/mlruns/0/efce7f2174214cf582a6f964129ec345/artifacts
+end_time: 1760613876914
+entry_point_name: ''
+experiment_id: '0'
+lifecycle_stage: active
+run_id: efce7f2174214cf582a6f964129ec345
+run_name: iris_logreg
+source_name: ''
+source_type: 4
+source_version: ''
+start_time: 1760613876341
+status: 3
+tags: []
+user_id: root

+ 1 - 0
ml-api/mlruns/0/efce7f2174214cf582a6f964129ec345/metrics/accuracy

@@ -0,0 +1 @@
+1760613876808 1.0 0

+ 1 - 0
ml-api/mlruns/0/efce7f2174214cf582a6f964129ec345/tags/mlflow.runName

@@ -0,0 +1 @@
+iris_logreg

+ 1 - 0
ml-api/mlruns/0/efce7f2174214cf582a6f964129ec345/tags/mlflow.source.name

@@ -0,0 +1 @@
+/opt/conda/bin/uvicorn

+ 1 - 0
ml-api/mlruns/0/efce7f2174214cf582a6f964129ec345/tags/mlflow.source.type

@@ -0,0 +1 @@
+LOCAL

+ 1 - 0
ml-api/mlruns/0/efce7f2174214cf582a6f964129ec345/tags/mlflow.user

@@ -0,0 +1 @@
+root

+ 6 - 0
ml-api/mlruns/0/meta.yaml

@@ -0,0 +1,6 @@
+artifact_location: file:///work/mlruns/0
+creation_time: 1760613865693
+experiment_id: '0'
+last_update_time: 1760613865693
+lifecycle_stage: active
+name: Default

+ 1 - 0
notebooks/.gitkeep

@@ -0,0 +1 @@
+# Placeholder to version the notebooks directory

+ 84 - 0
notebooks/iris_demo.ipynb

@@ -0,0 +1,84 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Iris demo: training and prediction\n",
+    "This notebook trains an Iris model and saves it to `shared/models/iris_model.pkl` via the Python script."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Run the training script (Iris)\n",
+    "import sys, subprocess\n",
+    "subprocess.run([sys.executable, '/work/app/train_iris.py'], check=True)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# Check that the model was saved and run a prediction\n",
+    "from pathlib import Path\n",
+    "import joblib\n",
+    "import numpy as np\n",
+    "model_path = Path('/work/models/iris_model.pkl')\n",
+    "assert model_path.exists(), 'The model was not saved.'\n",
+    "model = joblib.load(model_path)\n",
+    "X = np.array([5.1, 3.5, 1.4, 0.2]).reshape(1, -1)\n",
+    "pred = model.predict(X)\n",
+    "pred"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": "Python 3",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.13.3"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}

+ 1 - 0
shared/models/.gitkeep

@@ -0,0 +1 @@
+# Model artifacts (joblib, pkl, etc.) will be stored here.

+ 1 - 0
web

@@ -0,0 +1 @@
+Subproject commit 0ab0bef8c1e150776a8b89229c04ce0c181d529d