feat: voice confirmation, integration-testing and ops enhancements

- Voice: ordinal parsing ("the first" / "the second" etc.), parse-failure counting and `detail.retry_remaining` in the API; Baidu ASR `dev_pid` pinned to Mandarin; `SurgeryPipelineError` supports `extra` merged into the HTTP `detail`.
- Demo: demo routes and fake RTSP, client index and README; `BackendResolver` and config adjustments.
- Observability: consumables TSV log, voice-file logging, terminal Markdown helper; related tests and dependency updates.
- Note: `.env` is still gitignored; local secrets are not part of this commit.

Made-with: Cursor
@@ -6,10 +6,77 @@
```
scripts/demo_client/
  server.py                # stdlib-based static server; additionally exposes /labels.json
  index.html               # single-file page (vanilla JS, zero build dependencies)
  fake_rtsp_from_file.py   # no real camera: loop a local video as an RTSP feed (ffmpeg + Docker MediaMTX)
```

## Debugging: no real camera — simulate RTSP from a recorded video

The monitoring service **only pulls streams from an RTSP URL** (`cv2.VideoCapture`); there is **no** HTTP endpoint for uploading a video file. Without changing the Python backend, the only option is to make the "camera address" point to an RTSP source that is **actually reachable**.

Recommended approach: on the **local machine**, publish a video file with **ffmpeg** to a local **RTSP server** (the script starts [MediaMTX](https://github.com/bluenviron/mediamtx) via Docker), yielding `rtsp://127.0.0.1:<port>/<path>`, then point the backend at it via **environment variables** (**config changes only — no backend code in the repo is modified**):
**Single stream** (one file, one `camera_id`; backward compatible with the old command):

```bash
# Requires: ffmpeg, Docker (pulls bluenviron/mediamtx on first run)
cd /path/to/operation-room-monitor-server
python3 scripts/demo_client/fake_rtsp_from_file.py /path/to/recording.mp4 --port 18554 --path demo
```
**Two streams** (two different videos, two `camera_id`s; **one** MediaMTX, **two** ffmpeg processes; a different `RTSP_PATH` per stream):

```bash
python3 scripts/demo_client/fake_rtsp_from_file.py --port 18554 \
  --stream 'or-cam-01|./a.mp4|demo1' \
  --stream 'or-cam-02|./b.mp4|demo2'
```
The `--stream` format is `CAMERA_ID|FILE|RTSP_PATH` (pipe-separated; quote the whole argument); the generated `VIDEO_RTSP_URLS_JSON` will contain both `or-cam-01` and `or-cam-02`.
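As a quick illustration, the three pipe-separated fields can be split in Python the same way the script's own parser does (the helper name here is hypothetical, for the example only):

```python
from pathlib import Path


def parse_stream_spec(spec: str) -> tuple[str, Path, str]:
    """Split 'CAMERA_ID|FILE|RTSP_PATH' into its three fields (illustrative)."""
    # split("|", 2) yields at most 3 parts; unpacking raises ValueError on malformed input
    cam, file_part, rtsp_path = spec.split("|", 2)
    return cam.strip(), Path(file_part.strip()).expanduser(), rtsp_path.strip().strip("/")


print(parse_stream_spec("or-cam-01|./a.mp4|demo1"))
```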
In **another terminal**, `source` or manually `export` the variables above before starting the monitoring service, so that the `camera_ids` used in `POST /client/surgeries/start` (e.g. `or-cam-01,or-cam-02`) resolve to the corresponding URLs. The demo page's "将 camera_id 填到开始手术" button syncs both ids in one click.
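For reference, `VIDEO_RTSP_URLS_JSON` is a flat camera-id → URL JSON object; a minimal sketch of producing the same `export` line the script prints:

```python
import json

# Hypothetical mapping matching the two-stream example above
url_map = {
    "or-cam-01": "rtsp://127.0.0.1:18554/demo1",
    "or-cam-02": "rtsp://127.0.0.1:18554/demo2",
}
# Compact separators keep the value shell-friendly inside single quotes
value = json.dumps(url_map, ensure_ascii=False, separators=(",", ":"))
print(f"export VIDEO_RTSP_URLS_JSON='{value}'")
```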
### Monitor in Docker, fake RTSP on the host (recommended integration topology)

A common setup: the **fake-camera script** (`fake_rtsp_from_file.py` + ffmpeg + MediaMTX) runs in a **host** terminal and publishes to `rtsp://127.0.0.1:<port>/...`, while the **monitoring API service** runs in a **Docker container**. For the containerized process to reach the host's RTSP server, use:
- **macOS / Windows Docker Desktop**: `rtsp://host.docker.internal:<port>/<path>`
- **Linux**: `host.docker.internal` may not be preconfigured; pick either:
  - add `--add-host=host.docker.internal:host-gateway` to the service container (Docker 20.10+), or
  - write the URL with a host LAN IP reachable on the **docker0 / bridge network** (e.g. `192.168.x.x`), and verify with `curl`/`ffprobe` from inside the container
In `docker-compose`, `VIDEO_RTSP_URLS_JSON` can go into `environment:` or an env file; do **not** write `127.0.0.1` in container-side config to refer to RTSP on the host (inside a container, `127.0.0.1` is the container itself).
If the monitor and the fake RTSP **both run directly on the same host system** (no container), `rtsp://127.0.0.1:...` works as-is; otherwise use the container-to-host forms above.
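The loopback rewrite the script itself prints as a hint can be sketched as follows (illustrative helper, not part of the backend):

```python
def to_container_url(url: str, host: str = "host.docker.internal") -> str:
    """Rewrite a loopback RTSP URL so a containerized server can reach the host."""
    # Replace only the first occurrence, i.e. the host part of the URL
    return url.replace("127.0.0.1", host, 1)


print(to_container_url("rtsp://127.0.0.1:18554/demo1"))
```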
If publishing fails, try transcoding the input before pushing (example; adjust as needed):
```bash
ffmpeg -re -stream_loop -1 -i recording.mp4 -c:v libx264 -pix_fmt yuv420p -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:18554/demo
```
(You still need to start MediaMTX or an equivalent RTSP server yourself first.)
In the demo page's "调试:两路视频" section you can **pick** or **drag-and-drop** a file for stream 1 / stream 2, then use the **one-click recording** below to upload — no hand-copying of `python3` / `export` commands from the page. If you must run `fake_rtsp_from_file.py` entirely by hand, use the command examples above plus `export VIDEO_RTSP_URLS_JSON=...` in your own terminal.
## One-click recording (no more hand-copied commands)
After checking **"一键联调"** in §4.1, pick a video for each of **stream 1 / stream 2** under "调试", then click **开始手术**. The browser uploads both videos as **multipart to the monitoring API** (`POST /internal/demo/orchestrate-and-start`), and the service process then:
1. Writes both videos to a temporary directory
2. Starts MediaMTX via Docker plus two ffmpeg RTSP publishers (equivalent to `fake_rtsp_from_file.py`)
3. Writes `{"or-cam-01":"rtsp://127.0.0.1:…","or-cam-02":"rtsp://127.0.0.1:…"}` to `VIDEO_RTSP_URLS_JSON_FILE` (recording and stream pulling happen in the same process, so the loopback host is fixed; `DEMO_ORCHESTRATOR_RTSP_JSON_HOST` only affects the **manually configured** fake-stream case where a different process reads the JSON)
4. Invokes the same logic as a normal recording start
**All of the following must hold**:
- `DEMO_ORCHESTRATOR_ENABLED=true` in `.env` (and the API restarted)
- `VIDEO_RTSP_URLS_JSON_FILE` set and pointing to a **writable** JSON file; under Docker, **bind-mount** it to the same path inside the container
- the **process running `main.py`** can execute local `docker` and `ffmpeg` (same as running `fake_rtsp_from_file` manually). If **only the API is in Docker and `/var/run/docker.sock` is not mounted**, the container usually cannot start MediaMTX on the host for you; keep using the manual fake-stream approach in that case.
Because every resolution re-reads `video_rtsp_url_map()`, overwriting the JSON takes effect for the next recording start **without restarting** the main service.
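This no-restart behavior follows from re-reading the file on every lookup; a minimal sketch of the pattern, assuming the map lives in the file named by `VIDEO_RTSP_URLS_JSON_FILE` (the real `video_rtsp_url_map()` implementation may differ):

```python
import json
import os
from pathlib import Path


def video_rtsp_url_map() -> dict:
    """Re-read the camera-id -> RTSP URL map from disk on every call.

    Nothing is cached, so overwriting the JSON file takes effect on the
    next lookup without restarting the service.
    """
    path = os.environ.get("VIDEO_RTSP_URLS_JSON_FILE")
    if not path:
        return {}
    return json.loads(Path(path).read_text(encoding="utf-8"))
```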
## How to run

```bash
@@ -35,6 +102,7 @@ open http://localhost:38081/
- §4.3 `GET /client/surgeries/{id}/result` — renders `details` and `summary` as tables
- §4.4 `GET /client/surgeries/{id}/pending-confirmation` — supports manual fetch and 2s auto-polling
- §4.5 `POST .../resolve` — local mic recording → 16 kHz mono WAV → `multipart/form-data` upload
- **Debug: no camera** — per-stream video selection and `camera_id`; one-click integration described above; for manual fake streams see `fake_rtsp_from_file.py` and the debugging section of this README
The "响应日志" (response log) panel on the right lists each request's method/url/status/body in reverse chronological order, handy for integration screenshots.
266  scripts/demo_client/fake_rtsp_from_file.py  Normal file
@@ -0,0 +1,266 @@
#!/usr/bin/env python3
"""Publish local video file(s) as looping RTSP stream(s) (fake camera) for local dev.

The Operation Room server only opens RTSP URLs (OpenCV); there is no video-upload API.
This script does NOT change the application backend: it runs ffmpeg + a small
RTSP server (MediaMTX) so you can point VIDEO_RTSP_URLS_JSON to rtsp://.../yourpath.

Requires:
- ffmpeg in PATH
- Docker, with the image pulled: bluenviron/mediamtx (recommended), OR a local
  `mediamtx` binary in PATH (advanced).

Single stream (legacy)::

    python3 scripts/demo_client/fake_rtsp_from_file.py /path/to/video.mp4
    python3 scripts/demo_client/fake_rtsp_from_file.py video.mp4 --port 18554 --path demo

Multiple streams (one MediaMTX, one ffmpeg per camera; different RTSP path per stream)::

    python3 scripts/demo_client/fake_rtsp_from_file.py --port 18554 \\
        --stream 'or-cam-01|./a.mp4|demo1' \\
        --stream 'or-cam-02|./b.mp4|demo2'

--stream format: ``CAMERA_ID|FILE|RTSP_PATH`` (use quotes in shell; RTSP path is
the last segment, e.g. ``demo1`` -> ``rtsp://127.0.0.1:<port>/demo1``).
"""

from __future__ import annotations

import argparse
import atexit
import json
import os
import signal
import shutil
import subprocess
import sys
import time
from pathlib import Path

MEDIAMTX_IMAGE = os.environ.get("MEDIAMTX_DOCKER_IMAGE", "bluenviron/mediamtx:latest")
CONTAINER_NAME = "orm-fake-rtsp-mediamtx"


def _has_docker() -> bool:
    return shutil.which("docker") is not None


def _has_ffmpeg() -> bool:
    return shutil.which("ffmpeg") is not None


def _stop_mediamtx_container() -> None:
    if not _has_docker():
        return
    try:
        subprocess.run(
            ["docker", "rm", "-f", CONTAINER_NAME],
            capture_output=True,
            check=False,
            timeout=30,
        )
    except (OSError, subprocess.SubprocessError):
        pass


def _start_mediamtx_docker(host_port: int) -> bool:
    _stop_mediamtx_container()
    cmd = [
        "docker", "run", "-d",
        "--name", CONTAINER_NAME,
        "-p", f"127.0.0.1:{host_port}:8554",
        MEDIAMTX_IMAGE,
    ]
    print("[fake-rtsp] Starting MediaMTX:", " ".join(cmd), file=sys.stderr)
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
    except (OSError, subprocess.SubprocessError) as exc:
        print(f"[fake-rtsp] docker run failed: {exc}", file=sys.stderr)
        return False
    if proc.returncode != 0:
        err = (proc.stderr or proc.stdout or "").strip()
        print(f"[fake-rtsp] docker run exit {proc.returncode}: {err}", file=sys.stderr)
        return False
    atexit.register(_stop_mediamtx_container)
    return True


def _parse_stream_arg(spec: str) -> tuple[str, Path, str]:
    parts = spec.split("|", 2)
    if len(parts) != 3:
        raise ValueError(
            f"Invalid --stream {spec!r}; expected CAM|FILE|RTSP_PATH (three fields separated by |)"
        )
    cam = parts[0].strip()
    fpath = Path(parts[1].strip()).expanduser()
    rpath = parts[2].strip().strip("/")
    if not cam:
        raise ValueError("empty camera id in --stream")
    if not rpath:
        rpath = "demo"
    return cam, fpath, rpath


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Loop video file(s) to RTSP URL(s) (dev fake camera; no backend code change).",
    )
    parser.add_argument(
        "video",
        nargs="?",
        type=Path,
        default=None,
        help="(single-stream mode) Path to a video file",
    )
    parser.add_argument(
        "--path",
        default="demo",
        help="(single-stream mode) RTSP path segment (rtsp://host:port/<path>)",
    )
    parser.add_argument(
        "--port",
        type=int,
        default=18554,
        help="Host port mapped to MediaMTX RTSP (container internal 8554). Default: 18554",
    )
    parser.add_argument(
        "--stream",
        action="append",
        default=None,
        help=(
            "Multi-stream mode. Repeat for each camera. "
            "Format: CAM|FILE|RTSP_PATH e.g. or-cam-01|./a.mp4|demo1"
        ),
    )
    parser.add_argument(
        "--no-docker",
        action="store_true",
        help="Do not start Docker; run MediaMTX yourself on the host port mapping.",
    )
    args = parser.parse_args()

    if not _has_ffmpeg():
        print("ffmpeg not found in PATH. Install ffmpeg and retry.", file=sys.stderr)
        return 1

    streams: list[tuple[str, Path, str]] = []
    if args.stream:
        for s in args.stream:
            try:
                streams.append(_parse_stream_arg(s))
            except ValueError as exc:
                print(f"[fake-rtsp] {exc}", file=sys.stderr)
                return 1
    elif args.video is not None:
        fpath = args.video.resolve()
        sp = (args.path or "demo").strip().strip("/") or "demo"
        streams = [("or-cam-01", fpath, sp)]
    else:
        parser.error("Provide a video file (single mode) or one or more --stream CAM|FILE|RTSP_PATH")

    for cam, fpath, rpath in streams:
        rp_file = fpath.resolve()
        if not rp_file.is_file():
            print(f"File not found: {rp_file} (camera {cam!r})", file=sys.stderr)
            return 1
        for ch in rpath:
            if ch not in "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.-":
                print(
                    f"[fake-rtsp] RTSP path segment {rpath!r} for {cam!r} should be "
                    r"[a-zA-Z0-9_.-] only; adjust --path/--stream",
                    file=sys.stderr,
                )
                return 1

    host_port: int = args.port
    if not args.no_docker:
        if not _has_docker():
            print("Docker not found. Use --no-docker and start MediaMTX manually.", file=sys.stderr)
            return 1
        if not _start_mediamtx_docker(host_port):
            return 1
        print("[fake-rtsp] MediaMTX container started. Waiting for RTSP…", file=sys.stderr)
        time.sleep(1.0)
    else:
        print(
            f"[fake-rtsp] --no-docker: ensure an RTSP server is listening for publish on port {host_port}.",
            file=sys.stderr,
        )

    procs: list[subprocess.Popen] = []
    url_map: dict[str, str] = {}

    for cam, fpath, stream_path in streams:
        fp = fpath.resolve()
        dest_url = f"rtsp://127.0.0.1:{host_port}/{stream_path}"
        url_map[cam] = dest_url
        publish_cmd: list[str] = [
            "ffmpeg",
            "-hide_banner", "-loglevel", "info",
            "-re",
            "-stream_loop", "-1",
            "-i", str(fp),
            "-c", "copy",
            "-f", "rtsp",
            "-rtsp_transport", "tcp",
            dest_url,
        ]
        print("---", file=sys.stderr)
        print(f"Publish {cam} -> {dest_url}", file=sys.stderr)
        print("  " + " ".join(publish_cmd), file=sys.stderr)
        p = subprocess.Popen(publish_cmd)  # noqa: S603
        procs.append(p)

    j_compact = json.dumps(url_map, ensure_ascii=False, separators=(",", ":"))
    print("---", file=sys.stderr)
    print("RTSP mapping (set on monitoring server):", file=sys.stderr)
    for k, u in url_map.items():
        print(f"  {k}: {u}", file=sys.stderr)
    print("", file=sys.stderr)
    print("export (same machine as monitoring server, env snippet):", file=sys.stderr)
    print(f"  export VIDEO_RTSP_URLS_JSON='{j_compact}'", file=sys.stderr)
    print("", file=sys.stderr)
    print("If the server runs in Docker on Mac/Win, use host.docker.internal, e.g.:", file=sys.stderr)
    for cam, u in url_map.items():
        h = u.replace("127.0.0.1", "host.docker.internal", 1)
        print(f"  {cam}: {h}", file=sys.stderr)
    print("---", file=sys.stderr)
    print("Fake RTSP running (Ctrl+C to stop; MediaMTX container removed on exit).", file=sys.stderr)

    def on_sigint(_sig: int, _frame) -> None:
        for p in procs:
            if p.poll() is None:
                p.terminate()
        _stop_mediamtx_container()
        raise SystemExit(130)

    signal.signal(signal.SIGINT, on_sigint)
    signal.signal(signal.SIGTERM, on_sigint)

    try:
        while True:
            time.sleep(0.5)
            for p in procs:
                if p.poll() is not None:
                    print(
                        f"[fake-rtsp] ffmpeg ended (code {p.returncode}), stopping all.",
                        file=sys.stderr,
                    )
                    raise KeyboardInterrupt
    except KeyboardInterrupt:
        pass
    finally:
        for p in procs:
            if p.poll() is None:
                p.terminate()
            try:
                p.wait(timeout=5)
            except subprocess.TimeoutExpired:
                p.kill()
        _stop_mediamtx_container()

    return 0


if __name__ == "__main__":
    raise SystemExit(main())
@@ -137,6 +137,25 @@
      white-space: pre-wrap;
      word-break: break-word;
    }
    .log-hint {
      margin-top: 6px;
      padding: 6px 8px;
      font-size: 11px;
      line-height: 1.4;
      color: #fcd34d;
      background: rgba(245, 158, 11, 0.12);
      border: 1px solid rgba(245, 158, 11, 0.35);
      border-radius: 4px;
    }
    #orch-status-banner { border: 1px solid var(--border); }
    .callout-ok {
      background: rgba(34, 197, 94, 0.12);
      border: 1px solid rgba(34, 197, 94, 0.4);
      border-radius: 8px;
      padding: 10px 12px;
      margin: 0 0 10px;
      line-height: 1.5;
    }
    .log-time { color: var(--muted); font-size: 11px; }
    .badge {
      display: inline-block;
@@ -171,6 +190,7 @@
    .muted { color: var(--muted); }
    .err { color: var(--danger); }
    .ok { color: var(--accent-2); }
    .warn { color: var(--warn); }
    .small { font-size: 12px; }
    .grow { flex: 1; }
    audio { width: 100%; margin-top: 8px; }
@@ -180,6 +200,18 @@
      .layout { grid-template-columns: 1fr; }
      .log { position: static; height: auto; max-height: 50vh; }
    }
    pre.cmd {
      background: var(--panel-2);
      border: 1px solid var(--border);
      border-radius: 6px;
      padding: 10px 12px;
      font-size: 11px;
      line-height: 1.45;
      overflow-x: auto;
      margin: 8px 0 0;
      white-space: pre-wrap;
      word-break: break-all;
    }
  </style>
</head>
<body>
@@ -187,6 +219,7 @@
  <main>
    <section class="card">
      <h1>Operation Room Monitor · Demo Client</h1>
      <p id="orch-status-banner" class="small" style="display:none;margin:8px 0 0;padding:8px 10px;border-radius:6px"></p>
      <p class="muted small">手动触发 <code>/client/*</code> 5 个接口;本地麦克风录音后生成 WAV 上传语音确认接口。</p>
      <div class="row" style="margin-top:10px">
        <div>
@@ -200,16 +233,74 @@
      </div>
      <div class="actions">
        <button id="btn-health" class="secondary">GET /health</button>
        <button type="button" class="secondary" id="btn-orch-status" title="检查一键联调接口是否已注册">GET 联调状态</button>
        <span id="health-status" class="small muted"></span>
      </div>
    </section>

    <section class="card">
      <h2>调试:两路视频(与一键联调 / 无真摄像头)</h2>
      <p class="callout-ok small">
        在<strong>路1 / 路2</strong>选好视频、§4.1 勾选「一键联调」后点「开始手术」即可;服务端会起假 RTSP 并写 <code>VIDEO_RTSP_URLS_JSON_FILE</code>。无法使用一键时,请按 <code>scripts/demo_client/README.md</code> 在宿主机手跑
        <code>fake_rtsp_from_file.py</code> 并配置环境变量。
      </p>
      <h3>两路视频(为 §4.1 一键选文件;两路 <code>RTSP_PATH</code> / <code>camera_id</code> 须与 API 配置一致,如 <code>demo1</code> / <code>demo2</code>)</h3>
      <div class="row" style="margin-top:10px; align-items:stretch; grid-template-columns:1fr 1fr">
        <div class="debug-stream" id="debug-stream-1" style="border:1px solid var(--border); border-radius:8px; padding:10px">
          <h3 style="margin:0 0 8px; color:var(--accent)">路 1</h3>
          <label>视频(一键上传优先;可选手填本地路径作备注)</label>
          <input id="debug-vpath-1" type="text" placeholder="/path/a.mp4 或 ./a.mp4" />
          <div class="actions" style="margin-top:6px; align-items:center">
            <input type="file" id="debug-vfile-1" accept="video/*" hidden />
            <button type="button" class="secondary" id="btn-dbg-pick-1">选择…</button>
            <span id="debug-hint-1" class="small muted"></span>
          </div>
          <div class="row" style="margin-top:8px">
            <div>
              <label>RTSP 路径名 <code>RTSP_PATH</code>(URL 最后一段,两路须不同,如 <code>demo1</code>)</label>
              <input id="debug-rpath-1" type="text" value="demo1" />
            </div>
            <div>
              <label>camera_id</label>
              <input id="debug-cam-1" type="text" value="or-cam-01" />
            </div>
          </div>
        </div>
        <div class="debug-stream" id="debug-stream-2" style="border:1px solid var(--border); border-radius:8px; padding:10px">
          <h3 style="margin:0 0 8px; color:var(--accent)">路 2</h3>
          <label>视频(一键上传优先;可选手填本地路径作备注)</label>
          <input id="debug-vpath-2" type="text" placeholder="/path/b.mp4 或 ./b.mp4" />
          <div class="actions" style="margin-top:6px; align-items:center">
            <input type="file" id="debug-vfile-2" accept="video/*" hidden />
            <button type="button" class="secondary" id="btn-dbg-pick-2">选择…</button>
            <span id="debug-hint-2" class="small muted"></span>
          </div>
          <div class="row" style="margin-top:8px">
            <div>
              <label>RTSP 路径名 <code>RTSP_PATH</code></label>
              <input id="debug-rpath-2" type="text" value="demo2" />
            </div>
            <div>
              <label>camera_id</label>
              <input id="debug-cam-2" type="text" value="or-cam-02" />
            </div>
          </div>
        </div>
      </div>
      <p id="debug-file-note" class="muted small" style="margin:8px 0 0">
        一键联调会<strong>直接上传</strong>你在此为路1/路2选择的文件。选文件时会把框内填成 <code>./文件名</code>,仅作展示;真正上传以文件选择器为准,无需在框里改路径。
      </p>
      <div class="actions" style="margin-top:8px">
        <button type="button" class="secondary" id="btn-debug-apply-cams" title="把两路 camera_id 写进 §4.1 的 camera_ids">将 camera_id 填到开始手术</button>
      </div>
    </section>

    <section class="card">
      <h2>§4.1 开始手术</h2>
      <div class="row">
        <div>
          <label>camera_ids(逗号分隔,至少一个)</label>
-         <input id="camera-ids" type="text" value="or-cam-01" />
+         <input id="camera-ids" type="text" value="or-cam-01,or-cam-02" />
        </div>
        <div>
          <label>candidate_consumables<span id="labels-hint" class="badge">loading…</span></label>
@@ -218,8 +309,14 @@
          </div>
        </div>
      </div>
      <p class="small muted" style="margin:8px 0 0">
        <label style="display:inline-flex;align-items:flex-start;gap:8px;cursor:pointer;max-width:52rem">
          <input type="checkbox" id="orch-oneclick" style="margin-top:2px" />
          <span><strong>一键联调</strong>:点下面按钮时上传 §「调试」里为<strong>路1/路2</strong>选好的两个视频,由监控服务在<strong>能执行 docker+ffmpeg 的环境</strong>里自动起假 RTSP、写 <code>VIDEO_RTSP_URLS_JSON_FILE</code> 并开录(需 <code>DEMO_ORCHESTRATOR_ENABLED=true</code> 且该文件为可写挂载;详见 README)。不勾选时仍为普通 JSON 开录(需自行先起假流)。</span>
        </label>
      </p>
      <div class="actions">
-       <button id="btn-start">POST /client/surgeries/start</button>
+       <button id="btn-start">开始手术</button>
        <button id="btn-load-all-labels" class="secondary" type="button">载入全部标签</button>
        <button id="btn-clear-labels" class="secondary" type="button">清空</button>
      </div>
@@ -245,10 +342,14 @@
      <div class="actions">
        <button id="btn-pending" class="secondary">拉一条待确认</button>
        <label class="small" style="display:flex;align-items:center;gap:6px;cursor:pointer">
-         <input id="auto-poll" type="checkbox" /> 自动轮询(2s)
+         <input id="auto-poll" type="checkbox" checked /> 自动轮询(10s)
        </label>
        <label class="small" style="display:flex;align-items:center;gap:6px;cursor:pointer" title="拉取到待确认时朗读 prompt(百度 TTS 或浏览器)">
          <input id="tts-pending" type="checkbox" checked /> 有待确认时 TTS
        </label>
        <span id="voice-status" class="small muted"></span>
      </div>
      <p id="voice-pipeline-hint" class="small muted" style="margin:6px 0 0">默认策略:<strong>Top1 置信度 &lt; 0.9</strong> 且达语音下沿时多会<strong>入队待确认</strong>;≥ <code>VIDEO_AUTO_CONFIRM_CONFIDENCE</code>(默认 0.9)且标签在 <code>candidate_consumables</code> 内则<strong>直接记 vision</strong>,拉取待确认为 404。可在环境变量中调整 <code>VIDEO_AUTO_CONFIRM_CONFIDENCE</code>。确认时在「语音确认(录音)」上传 WAV 即可。</p>
      <div id="pending-render" class="pending-box" hidden></div>
    </section>
@@ -296,7 +397,7 @@
const surgeryId = () => $("surgery-id").value.trim();

const logEl = $("log");
-function addLog(method, url, status, body, { error = false } = {}) {
+function addLog(method, url, status, body, { error = false, hint = "" } = {}) {
  const item = document.createElement("div");
  item.className = "log-item";
  const time = new Date().toLocaleTimeString();
@@ -318,6 +419,12 @@
    catch { bodyEl.textContent = String(body); }
  }
  item.appendChild(bodyEl);
  if (hint) {
    const h = document.createElement("div");
    h.className = "log-hint";
    h.textContent = hint;
    item.appendChild(h);
  }
  logEl.insertBefore(item, logEl.children[1] ?? null);
}
@@ -345,6 +452,40 @@
  return { res, body: parsed };
}

async function apiMultipart(path, formData) {
  const url = baseUrl() + path;
  const bu = baseUrl();
  console.info("[demo-client] orchestrate request", { baseUrl: bu, path, fullUrl: url });
  let res;
  try {
    res = await fetch(url, { method: "POST", body: formData });
  } catch (e) {
    console.error("[demo-client] orchestrate network error", e);
    const netHint = "无法连接 " + url + "。请确认「服务端 Base URL」指向监控 API(默认 :38080),且本页在 :38081 打开时勿把 Base URL 填成 demo 页自身。";
    addLog("POST (orchestrate)", url, "NETWORK", String(e), { error: true, hint: netHint });
    throw e;
  }
  const text = await res.text();
  let parsed;
  try { parsed = text ? JSON.parse(text) : null; } catch { parsed = text; }
  const err = !res.ok;
  let hint = "";
  if (res.status === 404) {
    hint = "HTTP 404:本路径在服务端未注册。常见原因:1) 未设 DEMO_ORCHESTRATOR_ENABLED=true 并重启主进程,POST /internal/demo/orchestrate-and-start 未挂载;2)「服务端 Base URL」填错(须指向主 API 如 http://127.0.0.1:38080,不是本 demo 静态站 :38081)。可点「GET 联调状态」或打开浏览器控制台查看 [demo-client] 日志。";
  } else if (res.status === 400 && parsed && (parsed.detail || "").toString().indexOf("VIDEO_RTSP") >= 0) {
    hint = "需配置可写的 VIDEO_RTSP_URLS_JSON_FILE,且 Docker 下请 bind-mount 到容器内同路径。";
  } else if (res.status === 503) {
    hint = "合成假 RTSP 或开录失败,请见响应体与主服务终端 log(demo orchestrate-and-start / ffmpeg / docker)。";
  }
  if (err) {
    console.error("[demo-client] orchestrate response", { status: res.status, statusText: res.statusText, body: parsed, url });
  } else {
    console.info("[demo-client] orchestrate ok", { status: res.status, url });
  }
  addLog("POST (orchestrate)", url, res.status, parsed, { error: err, hint });
  return { res, body: parsed };
}

// ============================================================
// Surgery ID validation
// ============================================================
@@ -427,6 +568,42 @@
};
$("btn-clear-labels").onclick = () => { tags = []; renderTags(); };

// ============================================================
// 联调状态(不依赖一键开关,用于诊断 404)
// ============================================================
async function refreshOrchStatus() {
  const b = $("orch-status-banner");
  const url = baseUrl() + "/internal/demo/orchestrator-status";
  try {
    const res = await fetch(url);
    const text = await res.text();
    let data;
    try { data = text ? JSON.parse(text) : null; } catch { data = { raw: text }; }
    console.info("[demo-client] GET orchestrator-status", { url, httpStatus: res.status, data });
    addLog("GET (联调状态)", url, res.status, data, { error: !res.ok });
    b.style.display = "block";
    if (!res.ok) {
      b.style.background = "rgba(239, 68, 68, 0.1)";
      b.style.color = "var(--text)";
      b.textContent = "无法拉取 " + url + "(HTTP " + res.status + ")。请把「服务端 Base URL」设为主 API(如 http://127.0.0.1:38080)。";
      return;
    }
    const on = data.orchestrator_enabled === true;
    const fset = data.video_rtsp_urls_json_file_set === true;
    b.style.background = on && fset ? "rgba(34, 197, 94, 0.1)" : "rgba(245, 158, 11, 0.12)";
    b.style.color = "var(--text)";
    const fp = data.video_rtsp_urls_json_file || "(未设)";
    b.innerHTML = on
      ? ("一键 <code>POST " + (data.orchestrate_path || "/internal/demo/orchestrate-and-start") + "</code>:" + (fset ? "已开放;RTSP 映射文件 " : "未设 ") + "<code>" + fp + "</code>")
      : ("一键开录 <strong>未注册</strong>:请在主服务 .env 设 <code>DEMO_ORCHESTRATOR_ENABLED=true</code> 并<strong>重启</strong>。当前 " + (data.orchestrate_path || "") + " 会 404。");
  } catch (e) {
    console.error("[demo-client] orchestrator-status failed", e);
    b.style.display = "block";
    b.style.background = "rgba(239, 68, 68, 0.1)";
    b.textContent = "联调状态请求失败: " + e;
  }
}

// ============================================================
// §health
// ============================================================
@@ -435,6 +612,7 @@
  $("health-status").textContent = `HTTP ${res.status}`;
  $("health-status").className = "small " + (res.ok ? "ok" : "err");
};
$("btn-orch-status").onclick = () => { refreshOrchStatus(); };

// ============================================================
// §4.1 start
// ============================================================
@@ -442,6 +620,30 @@
$("btn-start").onclick = async () => {
  const sid = ensureSurgeryId();
  if (!sid) return;
  if ($("orch-oneclick") && $("orch-oneclick").checked) {
    const f1 = $("debug-vfile-1").files[0];
    const f2 = $("debug-vfile-2").files[0];
    if (!f1 || !f2) {
      alert("请先在上方「调试」里为 路1 / 路2 各「选择…」一个视频文件。");
      return;
    }
    const fd = new FormData();
    fd.append("video1", f1, f1.name);
    fd.append("video2", f2, f2.name);
    fd.append("surgery_id", sid);
    fd.append("camera_1", ($("debug-cam-1").value || "or-cam-01").trim() || "or-cam-01");
    fd.append("camera_2", ($("debug-cam-2").value || "or-cam-02").trim() || "or-cam-02");
    fd.append("rtsp_path_1", ($("debug-rpath-1").value || "demo1").trim() || "demo1");
    fd.append("rtsp_path_2", ($("debug-rpath-2").value || "demo2").trim() || "demo2");
    fd.append("candidate_consumables_json", JSON.stringify([...tags]));
    const { res, body } = await apiMultipart("/internal/demo/orchestrate-and-start", fd);
    if (!res.ok) {
      const detail = (body && (body.detail !== undefined)) ? body.detail : body;
      const errText = (typeof detail === "object" && detail !== null) ? JSON.stringify(detail, null, 2) : String(detail || body || "错误");
      alert("一键开录失败 HTTP " + res.status + "\n\n" + errText);
    }
    return;
  }
  const camera_ids = $("camera-ids").value.split(",").map(s => s.trim()).filter(Boolean);
  if (camera_ids.length === 0) { alert("camera_ids 至少要 1 个"); return; }
  await apiJson("POST", "/client/surgeries/start", {
@@ -508,14 +710,111 @@
};

// ============================================================
-// §4.4 pending-confirmation
+// §4.4 pending-confirmation + 可选 TTS
// ============================================================
let pollTimer = null;
let lastTtsConfirmationId = null;

function pickZhTtsVoice() {
  if (!window.speechSynthesis) return null;
  const vs = window.speechSynthesis.getVoices() || [];
  return (
    vs.find((v) => /^zh/i.test((v.lang || "") + (v.voiceURI || ""))) ||
    vs.find((v) => (v.lang || "").startsWith("zh")) ||
    null
  );
}

function speakTextPromise(text) {
  return new Promise((resolve, reject) => {
    if (!text || !window.speechSynthesis) {
      resolve();
      return;
    }
    try {
      window.speechSynthesis.cancel();
      const u = new SpeechSynthesisUtterance(text);
      u.lang = "zh-CN";
      const v = pickZhTtsVoice();
      if (v) u.voice = v;
      u.rate = 0.95;
      u.onend = () => resolve();
      u.onerror = (ev) => reject(ev.error || new Error("tts"));
      window.speechSynthesis.speak(u);
    } catch (e) {
      reject(e);
    }
  });
}

/** 优先 GET /prompt-audio 播放百度 MP3,失败时 speechSynthesis */
async function playPromptTts(surgeryId, confirmationId, textFallback) {
  const path = `/client/surgeries/${surgeryId}/pending-confirmation/${encodeURIComponent(confirmationId)}/prompt-audio`;
  const u = baseUrl() + path;
  try {
    const res = await fetch(u);
    if (res.ok) {
      const blob = await res.blob();
      const o = URL.createObjectURL(blob);
      return new Promise((resolve, reject) => {
        const a = new Audio();
        a.preload = "auto";
        a.src = o;
        a.onended = () => {
          URL.revokeObjectURL(o);
          resolve();
        };
        a.onerror = () => {
          URL.revokeObjectURL(o);
          reject(new Error("Audio 元素播放失败"));
        };
        const p = a.play();
        if (p && typeof p.catch === "function") {
          p.catch((err) => {
            URL.revokeObjectURL(o);
            reject(err);
          });
        }
      });
    }
  } catch (e) {
    console.warn("[demo-client] prompt-audio 不可用,回退浏览器 TTS", e);
  }
  return speakTextPromise((textFallback || "").trim());
}

if (window.speechSynthesis) {
  window.speechSynthesis.addEventListener("voiceschanged", () => {});
}

$("surgery-id").addEventListener("input", () => {
  lastTtsConfirmationId = null;
});

async function fetchPendingOnce() {
  const sid = surgeryId();
  if (!/^\d{6}$/.test(sid)) return;
-  const { res, body } = await apiJson("GET", `/client/surgeries/${sid}/pending-confirmation`);
+  const path = `/client/surgeries/${sid}/pending-confirmation`;
+  const url = baseUrl() + path;
+  let res;
+  try {
+    res = await fetch(url);
+  } catch (e) {
+    addLog("GET", url, "NETWORK", String(e), { error: true });
+    return;
+  }
+  const raw = await res.text();
+  let body;
+  try {
+    body = raw ? JSON.parse(raw) : null;
+  } catch {
+    body = raw;
+  }
+  if (res.status === 404) {
+    // 无待确认为常态,不写入右侧「响应日志」,减少刷屏
+  } else {
+    addLog("GET", url, res.status, body);
+  }
  const box = $("pending-render");
  if (res.status === 200 && body && body.confirmation_id) {
    box.hidden = false;
@@ -528,6 +827,12 @@
      <div style="margin-top:4px"><strong>prompt_text:</strong> ${body.prompt_text || ""}</div>
      <div style="margin-top:4px"><strong>Top1:</strong> ${body.model_top1_label} <span class="muted">(${(body.model_top1_confidence * 100).toFixed(1)}%)</span></div>
      <div style="margin-top:6px"><strong>options:</strong>${opts || '<div class="muted">(无)</div>'}</div>`;
    const pt = (body.prompt_text || "").trim();
    const ttsOn = $("tts-pending") && $("tts-pending").checked;
    if (ttsOn && pt && body.confirmation_id !== lastTtsConfirmationId) {
      lastTtsConfirmationId = body.confirmation_id;
      void playPromptTts(sid, body.confirmation_id, pt).catch((e) => console.warn(e));
    }
  } else if (res.status === 404) {
    box.hidden = false;
    box.innerHTML = '<span class="muted">暂无待确认项。</span>';
@@ -538,16 +843,20 @@
}

$("btn-pending").onclick = fetchPendingOnce;
function applyAutoPoll() {
  if (pollTimer) { clearInterval(pollTimer); pollTimer = null; }
  if ($("auto-poll") && $("auto-poll").checked) {
    $("voice-status").textContent = "自动轮询中…";
    pollTimer = setInterval(fetchPendingOnce, 10000);
    fetchPendingOnce();
  } else {
    $("voice-status").textContent = "";
  }
}
$("auto-poll").onchange = applyAutoPoll;
if ($("auto-poll") && $("auto-poll").checked) {
  applyAutoPoll();
}

// ============================================================
// §4.5 Recording (mic → WAV 16kHz mono PCM)
@@ -706,12 +1015,91 @@
  let parsed;
  try { parsed = text ? JSON.parse(text) : null; } catch { parsed = text; }
  addLog("POST (multipart)", url, res.status, parsed);
  if (res.ok) {
    recordingWav = null;
    $("btn-resolve").disabled = true;
    $("audio-preview").hidden = true;
    $("btn-download").style.display = "none";
    lastTtsConfirmationId = null;
    $("rec-info").textContent = "已提交,正在拉取下一条待确认…";
    $("rec-info").className = "ok small";
    await fetchPendingOnce();
    if ($("auto-poll") && $("auto-poll").checked) {
      $("voice-status").textContent = "自动轮询中…";
    }
  } else if (res.status === 422 && parsed && parsed.detail && typeof parsed.detail === "object") {
    const d = parsed.detail;
    if (d.message) {
      let line = "解析未通过:" + d.message;
      if (typeof d.retry_remaining === "number") {
        line += "(retry_remaining=" + d.retry_remaining + ")";
      }
      $("rec-info").textContent = line;
      $("rec-info").className = "warn small";
    }
  }
};

// ============================================================
// Debug: two streams for one-click upload (stream 1 / stream 2)
// ============================================================
$("btn-dbg-pick-1").onclick = () => $("debug-vfile-1").click();
$("debug-vfile-1").addEventListener("change", (e) => {
  const f = e.target.files && e.target.files[0];
  if (!f) return;
  $("debug-vpath-1").value = "./" + f.name;
  $("debug-hint-1").textContent = "已选: " + f.name;
});
$("btn-dbg-pick-2").onclick = () => $("debug-vfile-2").click();
$("debug-vfile-2").addEventListener("change", (e) => {
  const f = e.target.files && e.target.files[0];
  if (!f) return;
  $("debug-vpath-2").value = "./" + f.name;
  $("debug-hint-2").textContent = "已选: " + f.name;
});

$("btn-debug-apply-cams").onclick = () => {
  const a = ($("debug-cam-1").value || "or-cam-01").trim() || "or-cam-01";
  const b = ($("debug-cam-2").value || "or-cam-02").trim() || "or-cam-02";
  $("camera-ids").value = a + "," + b;
};

(function setupDebugVideoDrop() {
  function bindStreamCard(el, vpathId, hintId) {
    if (!el) return;
    el.addEventListener("dragover", (ev) => {
      ev.preventDefault();
      el.style.outline = "1px dashed var(--accent)";
    });
    el.addEventListener("dragleave", () => {
      el.style.outline = "";
    });
    el.addEventListener("drop", (ev) => {
      ev.preventDefault();
      el.style.outline = "";
      const f = ev.dataTransfer && ev.dataTransfer.files && ev.dataTransfer.files[0];
      const looksVideo =
        f &&
        (/^video\//.test(f.type || "") ||
          /\.(mp4|mov|mkv|avi|webm|m4v|mpeg|mpg)$/i.test(f.name || ""));
      if (!looksVideo) {
        $(hintId).textContent = "请拖入视频文件";
        return;
      }
      $(vpathId).value = "./" + f.name;
      $(hintId).textContent = "已选: " + f.name + "(拖放)";
    });
  }
  bindStreamCard($("debug-stream-1"), "debug-vpath-1", "debug-hint-1");
  bindStreamCard($("debug-stream-2"), "debug-vpath-2", "debug-hint-2");
})();

// ============================================================
// Boot
// ============================================================
loadLabels();
$("base-url").addEventListener("change", () => { refreshOrchStatus(); });
refreshOrchStatus();
</script>
</body>
</html>
scripts/start_fresh.py (new file, 63 lines)
@@ -0,0 +1,63 @@
#!/usr/bin/env python3
"""Truncate the PostgreSQL business tables this app writes (dev use; schema is kept).

Run directly: ``uv run python scripts/start_fresh.py``

``./start_fresh.sh`` matches ``./start.sh`` except it runs this script before
starting uvicorn.
"""

from __future__ import annotations

import asyncio
import os
import sys

# Allow running as `uv run python scripts/start_fresh.py` from any cwd
_REPO_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
if _REPO_ROOT not in sys.path:
    sys.path.insert(0, _REPO_ROOT)

from sqlalchemy import text

from app.config import settings
from app.database import engine, init_db_schema


# Keep in sync with app/db/models.py; child tables (FK side) come first
_TABLES = (
    "surgery_result_details",
    "surgery_final_results",
    "voice_confirmation_audits",
)

_TRUNCATE_SQL = text(
    "TRUNCATE TABLE " + ", ".join(_TABLES) + " RESTART IDENTITY"
)


async def _run() -> None:
    # Make sure a fresh database has the tables too
    await init_db_schema()
    async with engine.begin() as conn:
        await conn.execute(_TRUNCATE_SQL)
    dsn = settings.sqlalchemy_database_url
    safe = dsn
    if "@" in dsn and "://" in dsn:
        # Mask user:pass in the printed DSN
        at = dsn.rfind("@")
        parts = dsn.split("://", 1)
        safe = f"{parts[0]}://***@{dsn[at + 1:]}"
    print("已清空表:", ", ".join(_TABLES))
    print("数据库:", safe)


def main() -> None:
    asyncio.run(_run())
    print("完成。")


if __name__ == "__main__":
    main()
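The user:pass masking inside `_run()` can be lifted into a standalone helper so it is testable without a database connection; a minimal sketch (the `mask_dsn` name is hypothetical, not part of the repo):

```python
def mask_dsn(dsn: str) -> str:
    """Hide the user:pass segment of a DSN for safe logging.

    Mirrors the inline masking in start_fresh.py: keep the scheme and the
    part after the last '@'; leave DSNs without credentials untouched.
    """
    if "@" in dsn and "://" in dsn:
        scheme, _rest = dsn.split("://", 1)
        at = dsn.rfind("@")
        return f"{scheme}://***@{dsn[at + 1:]}"
    return dsn


print(mask_dsn("postgresql+asyncpg://app:secret@db.local:5432/monitor"))
# → postgresql+asyncpg://***@db.local:5432/monitor
```

Using `rfind("@")` matters: passwords may themselves contain `@`, so only the last one separates credentials from the host.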