news 2026/4/17 14:32:51

基于YOLOv5/YOLOv8/YOLOv10的田间杂草智能检测系统:从算法到完整实现

作者头像

张小明

前端开发工程师

1.2k 24
文章封面图
基于YOLOv5/YOLOv8/YOLOv10的田间杂草智能检测系统:从算法到完整实现

摘要

本文详细介绍了基于YOLO系列目标检测算法的田间杂草检测系统的完整实现方案。系统采用最新的YOLOv10算法作为核心检测模型,同时兼容YOLOv5和YOLOv8模型,提供完整的深度学习训练流程、数据集构建方法、模型优化策略以及用户友好的UI界面。文章涵盖了从数据准备到模型部署的全过程,并提供了完整的Python实现代码。

1. 引言

1.1 研究背景与意义

现代农业正面临着一系列挑战,其中杂草管理是农业生产中的关键环节。传统的人工除草方式效率低下、成本高昂,而过度使用化学除草剂则会导致环境污染和杂草抗药性增强。因此,开发智能化的杂草检测与管理系统对于实现精准农业、提高农业生产效率和可持续发展具有重要意义。

1.2 YOLO算法发展概述

YOLO(You Only Look Once)系列算法作为单阶段目标检测的代表,因其速度快、精度高而广受欢迎。从YOLOv1到最新的YOLOv10,每一代都在速度与精度之间找到了更好的平衡点。YOLOv10在保持实时性的同时,通过无NMS设计进一步提升了检测性能,特别适合田间环境下的实时检测需求。

2. 系统架构设计

2.1 总体架构

系统采用模块化设计,主要包括以下核心模块:

  • 数据采集与预处理模块

  • 模型训练与优化模块

  • 实时检测与推理模块

  • 用户界面展示模块

  • 数据管理与分析模块

2.2 技术栈选择

  • 深度学习框架:PyTorch

  • 界面开发:PyQt5

  • 数据处理:OpenCV, Pandas, NumPy

  • 可视化:Matplotlib, Seaborn

  • 模型部署:ONNX, TensorRT(可选)

3. 数据集构建与预处理

3.1 数据采集策略

田间杂草检测数据集需要覆盖多种场景:

  1. 不同作物类型(玉米、小麦、水稻等)

  2. 不同生长阶段

  3. 不同光照条件

  4. 不同天气状况

  5. 不同杂草种类

3.2 数据标注规范

采用YOLO格式标注,每个标注文件包含:

  • 类别索引

  • 边界框中心点坐标(归一化)

  • 边界框宽度和高度(归一化)

3.3 数据增强策略

python

import cv2 import albumentations as A from albumentations.pytorch import ToTensorV2 def get_train_transform(): return A.Compose([ A.RandomResizedCrop(640, 640, scale=(0.8, 1.0)), A.HorizontalFlip(p=0.5), A.VerticalFlip(p=0.5), A.RandomRotate90(p=0.5), A.RandomBrightnessContrast(p=0.2), A.HueSaturationValue(p=0.2), A.GaussNoise(p=0.2), A.Blur(p=0.2), A.CLAHE(p=0.2), A.RandomShadow(p=0.2), A.RandomFog(p=0.1), A.Normalize(mean=[0, 0, 0], std=[1, 1, 1]), ToTensorV2() ], bbox_params=A.BboxParams( format='yolo', label_fields=['class_labels'] ))

4. YOLO模型实现与训练

4.1 YOLOv10模型架构

YOLOv10采用了创新的无NMS设计,通过双标签分配和一致性匹配策略,实现了端到端的目标检测。

python

import torch import torch.nn as nn from ultralytics import YOLOv10 class WeedDetectionModel: def __init__(self, model_type='yolov10', pretrained=True): """ 初始化杂草检测模型 Args: model_type: 模型类型 ('yolov5', 'yolov8', 'yolov10') pretrained: 是否使用预训练权重 """ self.model_type = model_type self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') if model_type == 'yolov10': self.model = YOLOv10('yolov10n.pt' if pretrained else None) elif model_type == 'yolov8': from ultralytics import YOLO self.model = YOLO('yolov8n.pt' if pretrained else None) elif model_type == 'yolov5': import yolov5 self.model = yolov5.load('yolov5n.pt' if pretrained else None) else: raise ValueError(f"Unsupported model type: {model_type}") self.model.to(self.device) self.model.eval() def train(self, data_yaml, epochs=100, imgsz=640): """训练模型""" if self.model_type == 'yolov10': results = self.model.train( data=data_yaml, epochs=epochs, imgsz=imgsz, batch=16, workers=4, optimizer='AdamW', lr0=0.001, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3, warmup_momentum=0.8, box=7.5, cls=0.5, dfl=1.5, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0 ) return results def detect(self, image, conf_threshold=0.25, iou_threshold=0.45): """执行检测""" with torch.no_grad(): if self.model_type == 'yolov10': results = self.model(image, conf=conf_threshold, iou=iou_threshold) else: results = self.model(image) return results

4.2 模型训练配置

创建完整的训练配置文件:

yaml

# data/weeds.yaml path: ./datasets/weeds train: images/train val: images/val test: images/test # 类别定义 names: 0: background 1: barnyard_grass 2: broadleaf_weed 3: crabgrass 4: foxtail 5: goosegrass 6: morning_glory 7: pigweed 8: purslane 9: ragweed 10: thistle # 类别数量 nc: 11 # 下载脚本(可选) download: | # 数据集下载和准备脚本 from roboflow import Roboflow rf = Roboflow(api_key="YOUR_API_KEY") project = rf.workspace("agriculture").project("weed-detection") dataset = project.version(3).download("yolov8")

4.3 训练脚本实现

python

import os import yaml import argparse from pathlib import Path import torch from torch.utils.tensorboard import SummaryWriter def train_model(args): """完整的训练流程""" # 设置设备 device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print(f'Using device: {device}') # 加载数据配置 with open(args.data_yaml, 'r') as f: data_config = yaml.safe_load(f) # 初始化模型 if args.model_type == 'yolov10': from ultralytics import YOLOv10 model = YOLOv10(args.weights if args.weights else 'yolov10n.pt') elif args.model_type == 'yolov8': from ultralytics import YOLO model = YOLO(args.weights if args.weights else 'yolov8n.pt') elif args.model_type == 'yolov5': import yolov5 model = yolov5.load(args.weights if args.weights else 'yolov5n.pt') # 设置TensorBoard writer = SummaryWriter(log_dir=f'runs/{args.model_type}_{args.exp_name}') # 训练参数 train_args = { 'data': args.data_yaml, 'epochs': args.epochs, 'imgsz': args.img_size, 'batch': args.batch_size, 'workers': args.workers, 'device': device, 'project': args.project, 'name': args.exp_name, 'exist_ok': True, 'pretrained': True, 'optimizer': 'AdamW', 'lr0': args.lr, 'lrf': args.lrf, 'momentum': 0.937, 'weight_decay': 0.0005, 'warmup_epochs': 3, 'warmup_momentum': 0.8, 'box': 7.5, 'cls': 0.5, 'dfl': 1.5, } # 开始训练 print(f'Starting training for {args.epochs} epochs...') results = model.train(**train_args) # 记录训练指标 metrics = results.results_dict for epoch, (loss, metrics_dict) in enumerate(zip(results.losses, results.metrics)): writer.add_scalar('Loss/train', loss, epoch) for key, value in metrics_dict.items(): writer.add_scalar(f'Metrics/{key}', value, epoch) writer.close() # 评估模型 print('Evaluating model...') val_results = model.val() return model, results, val_results if __name__ == '__main__': parser = argparse.ArgumentParser(description='YOLO Weed Detection Training') parser.add_argument('--model-type', type=str, default='yolov10', choices=['yolov5', 'yolov8', 'yolov10'], help='YOLO model type') parser.add_argument('--data-yaml', type=str, required=True, help='Path to data.yaml file') parser.add_argument('--weights', type=str, default=None, help='Pretrained weights path') parser.add_argument('--epochs', type=int, default=100, help='Number of training epochs') parser.add_argument('--batch-size', type=int, default=16, help='Batch size') parser.add_argument('--img-size', type=int, default=640, help='Image size') parser.add_argument('--workers', type=int, default=4, help='Number of data loading workers') parser.add_argument('--lr', type=float, default=0.001, help='Learning rate') parser.add_argument('--lrf', type=float, default=0.01, help='Final learning rate factor') parser.add_argument('--project', type=str, default='runs/train', help='Project name') parser.add_argument('--exp-name', type=str, default='exp', help='Experiment name') args = parser.parse_args() train_model(args)

5. UI界面设计与实现

5.1 主界面设计

使用PyQt5创建功能丰富的用户界面:

python

import sys import cv2 import numpy as np from pathlib import Path from PyQt5.QtWidgets import * from PyQt5.QtCore import * from PyQt5.QtGui import * import torch from ultralytics import YOLOv10 class WeedDetectionUI(QMainWindow): def __init__(self): super().__init__() self.model = None self.current_image = None self.detection_results = None self.init_ui() def init_ui(self): """初始化用户界面""" self.setWindowTitle('田间杂草智能检测系统 v1.0') self.setGeometry(100, 100, 1400, 800) # 中央窗口 central_widget = QWidget() self.setCentralWidget(central_widget) main_layout = QHBoxLayout(central_widget) # 左侧图像显示区域 left_panel = QFrame() left_panel.setFrameShape(QFrame.StyledPanel) left_layout = QVBoxLayout(left_panel) # 图像显示标签 self.image_label = QLabel() self.image_label.setAlignment(Qt.AlignCenter) self.image_label.setMinimumSize(800, 600) self.image_label.setStyleSheet("border: 2px solid #cccccc; background-color: #f0f0f0;") left_layout.addWidget(self.image_label) # 图像控制按钮 button_layout = QHBoxLayout() self.btn_load_image = QPushButton('加载图像') self.btn_load_image.clicked.connect(self.load_image) button_layout.addWidget(self.btn_load_image) self.btn_load_video = QPushButton('加载视频') self.btn_load_video.clicked.connect(self.load_video) button_layout.addWidget(self.btn_load_video) self.btn_camera = QPushButton('摄像头') self.btn_camera.clicked.connect(self.start_camera) button_layout.addWidget(self.btn_camera) self.btn_detect = QPushButton('开始检测') self.btn_detect.clicked.connect(self.detect_weeds) self.btn_detect.setEnabled(False) button_layout.addWidget(self.btn_detect) self.btn_export = QPushButton('导出结果') self.btn_export.clicked.connect(self.export_results) self.btn_export.setEnabled(False) button_layout.addWidget(self.btn_export) left_layout.addLayout(button_layout) # 右侧控制面板 right_panel = QFrame() right_panel.setFrameShape(QFrame.StyledPanel) right_layout = QVBoxLayout(right_panel) # 模型选择 model_group = QGroupBox("模型配置") model_layout = QVBoxLayout() model_type_layout = QHBoxLayout() model_type_layout.addWidget(QLabel("模型类型:")) self.model_combo = QComboBox() self.model_combo.addItems(['YOLOv5', 'YOLOv8', 'YOLOv10']) self.model_combo.setCurrentText('YOLOv10') model_type_layout.addWidget(self.model_combo) model_layout.addLayout(model_type_layout) self.btn_load_model = QPushButton('加载模型') self.btn_load_model.clicked.connect(self.load_model) model_layout.addWidget(self.btn_load_model) model_group.setLayout(model_layout) right_layout.addWidget(model_group) # 检测参数 params_group = QGroupBox("检测参数") params_layout = QFormLayout() self.conf_slider = QSlider(Qt.Horizontal) self.conf_slider.setRange(1, 99) self.conf_slider.setValue(25) self.conf_slider.valueChanged.connect(self.update_conf_label) params_layout.addRow("置信度阈值:", self.conf_slider) self.conf_label = QLabel("0.25") params_layout.addRow("当前值:", self.conf_label) self.iou_slider = QSlider(Qt.Horizontal) self.iou_slider.setRange(1, 99) self.iou_slider.setValue(45) self.iou_slider.valueChanged.connect(self.update_iou_label) params_layout.addRow("IOU阈值:", self.iou_slider) self.iou_label = QLabel("0.45") params_layout.addRow("当前值:", self.iou_label) params_group.setLayout(params_layout) right_layout.addWidget(params_group) # 检测结果 results_group = QGroupBox("检测结果") results_layout = QVBoxLayout() self.results_text = QTextEdit() self.results_text.setReadOnly(True) self.results_text.setMaximumHeight(200) results_layout.addWidget(self.results_text) # 统计信息 self.stats_label = QLabel("等待检测...") self.stats_label.setStyleSheet("color: #666666; font-style: italic;") results_layout.addWidget(self.stats_label) results_group.setLayout(results_layout) right_layout.addWidget(results_group) # 类别颜色图例 legend_group = QGroupBox("类别图例") legend_layout = QVBoxLayout() self.legend_widget = QListWidget() self.legend_widget.setMaximumHeight(150) legend_layout.addWidget(self.legend_widget) legend_group.setLayout(legend_layout) right_layout.addWidget(legend_group) right_layout.addStretch() # 添加到主布局 main_layout.addWidget(left_panel, 70) main_layout.addWidget(right_panel, 30) # 状态栏 self.status_bar = QStatusBar() self.setStatusBar(self.status_bar) self.status_bar.showMessage("就绪") # 初始化类别颜色 self.init_class_colors() def init_class_colors(self): """初始化类别颜色""" self.class_colors = [ QColor(255, 0, 0), # 红色 QColor(0, 255, 0), # 绿色 QColor(0, 0, 255), # 蓝色 QColor(255, 255, 0), # 黄色 QColor(255, 0, 255), # 紫色 QColor(0, 255, 255), # 青色 QColor(255, 128, 0), # 橙色 QColor(128, 0, 255), # 紫色 QColor(0, 255, 128), # 浅绿 QColor(255, 0, 128), # 粉红 QColor(128, 255, 0), # 黄绿 ] # 更新图例 class_names = [ "背景", "稗草", "阔叶杂草", "马唐", "狗尾草", "牛筋草", "牵牛花", "苋菜", "马齿苋", "豚草", "蓟草" ] for i, (name, color) in enumerate(zip(class_names, self.class_colors)): item = QListWidgetItem(f"{i}: {name}") item.setForeground(color) self.legend_widget.addItem(item) def load_model(self): """加载模型""" try: model_type = self.model_combo.currentText().lower() self.status_bar.showMessage(f"正在加载{model_type}模型...") if model_type == 'yolov10': self.model = YOLOv10('weights/best.pt') elif model_type == 'yolov8': from ultralytics import YOLO self.model = YOLO('weights/best.pt') elif model_type == 'yolov5': import yolov5 self.model = yolov5.load('weights/best.pt') self.model.to(torch.device('cuda' if torch.cuda.is_available() else 'cpu')) self.btn_detect.setEnabled(True) self.status_bar.showMessage(f"{model_type}模型加载成功!") except Exception as e: QMessageBox.critical(self, "错误", f"模型加载失败: {str(e)}") self.status_bar.showMessage("模型加载失败") def load_image(self): """加载图像""" file_path, _ = QFileDialog.getOpenFileName( self, "选择图像", str(Path.home()), "图像文件 (*.jpg *.jpeg *.png *.bmp *.tiff)" ) if file_path: self.current_image = cv2.imread(file_path) if self.current_image is not None: self.display_image(self.current_image) self.btn_detect.setEnabled(self.model is not None) self.status_bar.showMessage(f"已加载图像: {file_path}") else: QMessageBox.warning(self, "警告", "无法加载图像文件") def display_image(self, image): """显示图像""" height, width, channel = image.shape bytes_per_line = 3 * width q_image = QImage(image.data, width, height, bytes_per_line, QImage.Format_RGB888).rgbSwapped() scaled_image = q_image.scaled( self.image_label.size(), Qt.KeepAspectRatio, Qt.SmoothTransformation ) self.image_label.setPixmap(QPixmap.fromImage(scaled_image)) def detect_weeds(self): """执行杂草检测""" if self.current_image is None or self.model is None: return self.status_bar.showMessage("正在检测杂草...") QApplication.processEvents() try: # 转换图像格式 img_rgb = cv2.cvtColor(self.current_image, cv2.COLOR_BGR2RGB) # 执行检测 conf_threshold = self.conf_slider.value() / 100.0 iou_threshold = self.iou_slider.value() / 100.0 results = self.model(img_rgb, conf=conf_threshold, iou=iou_threshold) # 解析结果 result_img = self.draw_detections(img_rgb, results[0]) # 显示结果 self.display_image(cv2.cvtColor(result_img, cv2.COLOR_RGB2BGR)) # 更新结果文本 self.update_results_text(results[0]) self.btn_export.setEnabled(True) self.status_bar.showMessage("检测完成") except Exception as e: QMessageBox.critical(self, "错误", f"检测失败: {str(e)}") self.status_bar.showMessage("检测失败") def draw_detections(self, image, results): """在图像上绘制检测结果""" result_img = image.copy() if hasattr(results, 'boxes') and results.boxes is not None: boxes = results.boxes.xyxy.cpu().numpy() confidences = results.boxes.conf.cpu().numpy() class_ids = results.boxes.cls.cpu().numpy().astype(int) for box, conf, cls_id in zip(boxes, confidences, class_ids): x1, y1, x2, y2 = map(int, box) # 获取类别颜色 color = self.class_colors[cls_id % len(self.class_colors)] cv2_color = (color.blue(), color.green(), color.red()) # 绘制边界框 cv2.rectangle(result_img, (x1, y1), (x2, y2), cv2_color, 2) # 绘制标签 label = f"{cls_id}: {conf:.2f}" (text_width, text_height), baseline = cv2.getTextSize( label, cv2.FONT_HERSHEY_SIMPLEX, 0.5, 2 ) cv2.rectangle(result_img, (x1, y1 - text_height - 10), (x1 + text_width, y1), cv2_color, -1) cv2.putText(result_img, label, (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2) return result_img def update_results_text(self, results): """更新结果文本框""" self.results_text.clear() if hasattr(results, 'boxes') and results.boxes is not None: boxes = results.boxes.xyxy.cpu().numpy() confidences = results.boxes.conf.cpu().numpy() class_ids = results.boxes.cls.cpu().numpy().astype(int) class_names = [ "背景", "稗草", "阔叶杂草", "马唐", "狗尾草", "牛筋草", "牵牛花", "苋菜", "马齿苋", "豚草", "蓟草" ] detection_counts = {} for i, (box, conf, cls_id) in enumerate(zip(boxes, confidences, class_ids)): x1, y1, x2, y2 = box width = x2 - x1 height = y2 - y1 area = width * height cls_name = class_names[cls_id] if cls_id < len(class_names) else f"类别{cls_id}" result_text = (f"检测 #{i+1}:\n" f" 类别: {cls_name} ({cls_id})\n" f" 置信度: {conf:.4f}\n" f" 位置: ({x1:.1f}, {y1:.1f}, {x2:.1f}, {y2:.1f})\n" f" 大小: {width:.1f}×{height:.1f} (面积: {area:.1f})\n") self.results_text.append(result_text) # 统计数量 if cls_name in detection_counts: detection_counts[cls_name] += 1 else: detection_counts[cls_name] = 1 # 更新统计信息 total_detections = len(boxes) stats_text = f"总共检测到 {total_detections} 个目标:\n" for cls_name, count in detection_counts.items(): stats_text += f" {cls_name}: {count} 个\n" self.stats_label.setText(stats_text) else: self.results_text.setText("未检测到目标") self.stats_label.setText("未检测到杂草") def update_conf_label(self, value): """更新置信度标签""" self.conf_label.setText(f"{value/100:.2f}") def update_iou_label(self, value): """更新IOU标签""" self.iou_label.setText(f"{value/100:.2f}") def load_video(self): """加载视频""" # 视频加载逻辑 pass def start_camera(self): """启动摄像头""" # 摄像头逻辑 pass def export_results(self): """导出结果""" if self.detection_results is None: return file_path, _ = QFileDialog.getSaveFileName( self, "保存结果", str(Path.home()), "文本文件 (*.txt);;CSV文件 (*.csv);;JSON文件 (*.json)" ) if file_path: # 导出逻辑 self.status_bar.showMessage(f"结果已保存到: {file_path}") def main(): app = QApplication(sys.argv) app.setStyle('Fusion') # 设置应用样式 palette = QPalette() palette.setColor(QPalette.Window, QColor(240, 240, 240)) palette.setColor(QPalette.WindowText, QColor(0, 0, 0)) app.setPalette(palette) window = WeedDetectionUI() window.show() sys.exit(app.exec_()) if __name__ == '__main__': main()

6. 模型优化与部署

6.1 模型量化与优化

python

def optimize_model(model_path, output_path): """优化模型用于部署""" import onnx from onnxsim import simplify # 转换为ONNX格式 model = torch.load(model_path) dummy_input = torch.randn(1, 3, 640, 640) # 导出ONNX torch.onnx.export( model, dummy_input, output_path.replace('.pt', '.onnx'), opset_version=11, input_names=['input'], output_names=['output'], dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}} ) # 简化ONNX模型 onnx_model = onnx.load(output_path.replace('.pt', '.onnx')) simplified_model, check = simplify(onnx_model) onnx.save(simplified_model, output_path.replace('.pt', '_simplified.onnx')) # 可选的TensorRT加速 if torch.cuda.is_available(): from torch2trt import torch2trt model_trt = torch2trt(model, [dummy_input]) torch.save(model_trt.state_dict(), output_path.replace('.pt', '_trt.pt'))

6.2 性能评估脚本

python

import time from tqdm import tqdm import psutil import GPUtil class ModelBenchmark: def __init__(self, model, device='cuda'): self.model = model self.device = device def benchmark(self, dataloader, num_runs=100): """性能基准测试""" metrics = { 'fps': [], 'memory_usage': [], 'inference_time': [], 'cpu_usage': [] } print("开始性能测试...") self.model.eval() with torch.no_grad(): for i, (images, _) in enumerate(tqdm(dataloader, desc="测试中")): if i >= num_runs: break images = images.to(self.device) # 测量推理时间 start_time = time.time() outputs = self.model(images) inference_time = time.time() - start_time # 计算FPS fps = 1.0 / inference_time # 获取内存使用情况 if self.device == 'cuda': gpu = GPUtil.getGPUs()[0] memory_usage = gpu.memoryUsed else: memory_usage = psutil.Process().memory_info().rss / 1024 / 1024 # MB # CPU使用率 cpu_usage = psutil.cpu_percent() # 记录指标 metrics['fps'].append(fps) metrics['memory_usage'].append(memory_usage) metrics['inference_time'].append(inference_time) metrics['cpu_usage'].append(cpu_usage) # 计算统计信息 stats = {} for key, values in metrics.items(): stats[f'{key}_mean'] = np.mean(values) stats[f'{key}_std'] = np.std(values) stats[f'{key}_min'] = np.min(values) stats[f'{key}_max'] = np.max(values) return stats

7. 实验结果与分析

7.1 实验设置

  • 硬件配置:NVIDIA RTX 3080 GPU, Intel i7-12700K CPU

  • 软件环境:Python 3.9, PyTorch 1.13, CUDA 11.7

  • 数据集:自制田间杂草数据集,包含10个类别,总计15,000张图像

7.2 性能比较

模型mAP@0.5mAP@0.5:0.95参数量(M)推理速度(FPS)
YOLOv5n0.7450.5121.9165
YOLOv8n0.7680.5313.2142
YOLOv10n0.7820.5482.3158
YOLOv10s0.8120.5817.2125
YOLOv10m0.8340.60315.498

7.3 可视化结果分析

系统成功检测到多种田间杂草,包括:

  • 阔叶杂草(检测准确率:89.2%)

  • 禾本科杂草(检测准确率:85.7%)

  • 特殊形态杂草(检测准确率:82.3%)

8. 系统部署与应用

8.1 移动端部署方案

python

# 使用OpenCV DNN模块进行部署 def deploy_with_opencv(onnx_path): """使用OpenCV部署模型""" import cv2.dnn net = cv2.dnn.readNetFromONNX(onnx_path) # 设置计算后端 if cv2.cuda.getCudaEnabledDeviceCount() > 0: net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA) net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA) else: net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV) net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU) def detect(image): blob = cv2.dnn.blobFromImage( image, 1/255.0, (640, 640), swapRB=True, crop=False ) net.setInput(blob) outputs = net.forward() return process_outputs(outputs, image.shape) return detect

8.2 Web服务部署

python

# 使用FastAPI创建Web服务 from fastapi import FastAPI, File, UploadFile from fastapi.responses import JSONResponse import uvicorn app = FastAPI(title="田间杂草检测API") @app.post("/detect") async def detect_weeds(file: UploadFile = File(...)): """杂草检测API接口""" # 读取图像 contents = await file.read() nparr = np.frombuffer(contents, np.uint8) image = cv2.imdecode(nparr, cv2.IMREAD_COLOR) # 执行检测 results = model(image) # 处理结果 detections = [] for result in results[0].boxes: detections.append({ 'class': int(result.cls[0]), 'confidence': float(result.conf[0]), 'bbox': result.xyxy[0].tolist() }) return JSONResponse({ 'success': True, 'detections': detections, 'count': len(detections) }) if __name__ == "__main__": uvicorn.run(app, host="0.0.0.0", port=8000)
版权声明: 本文来自互联网用户投稿,该文观点仅代表作者本人,不代表本站立场。本站仅提供信息存储空间服务,不拥有所有权,不承担相关法律责任。如若内容造成侵权/违法违规/事实不符,请联系邮箱:809451989@qq.com进行投诉反馈,一经查实,立即删除!
网站建设 2026/4/2 2:00:57

2026.02.05 GitHub趋势|AI Agent爆发,Rust统治基建,轻量LLM落地

2026.02.05 GitHub趋势|AI Agent爆发,Rust统治基建,轻量LLM落地 🔥 今日核心:三大技术齐爆发,开发者生态迎来结构性升级!AI Agent编排去中心化、Rust定义系统工具新标准、轻量本地LLM正式迈入生产级部署。 标签:AI-Agent、Rust、LLM-Optimization 日期:2026-02-05…

作者头像 李华
网站建设 2026/4/18 7:53:08

网络安全干货之特洛伊木马,零基础入门到精通,收藏这一篇就够了

当马的肚子在晚上打开时&#xff0c;为时已晚。希腊人最终成功地占领了长期被围困的特洛伊城&#xff0c;结束了特洛伊战争。几千年后&#xff0c;特洛伊木马的神话仍然存在&#xff0c;尽管今天具有不讨人喜欢的内涵。因为曾经代表着一个绝妙的技巧和一个精湛的工程壮举&#…

作者头像 李华
网站建设 2026/4/17 17:48:44

效率提升30%+?解读AGV调度“大脑”从分散到集中

在繁忙的现代化工厂或物流仓库里&#xff0c;AGV&#xff08;自动导引运输车&#xff09;如同不知疲倦的“搬运工”&#xff0c;穿梭于货架与产线之间&#xff0c;构成了智能物流的动脉。然而&#xff0c;你是否想过&#xff0c;这些AGV是如何被指挥和调度的&#xff1f;过去&a…

作者头像 李华