A PyQt-based visual interface for YOLOv5 + DeepSORT that supports target tracking, model swapping, result saving, and trajectory hiding.
The code examples below are for reference only; they cover both the interface design and the core functionality. Planned features: selecting which class to track, swapping in your own model, saving results, and hiding trajectories.

## How to Train on a Vehicle/Pedestrian Detection Dataset

Training a YOLOv5 model to recognize two classes (vehicles and pedestrians) and tracking them involves several steps: preparing the data, configuring YOLOv5, training the model, and then using DeepSORT for tracking. The following is a reference guide.

### 1. Prepare the Data

Make sure your dataset is in YOLO format: every image file has a corresponding `.txt` label file, with one line per object:

```
class_index x_center y_center width height
```

- `class_index`: the class index, e.g. `0` for person, `1` for car.
- `x_center`, `y_center`: the bounding-box center, as a fraction of image width and height.
- `width`, `height`: the bounding-box size, as a fraction of image width and height.

Assume your data is laid out as follows:

```
dataset/
├── images/
│   ├── train/
│   └── val/
└── labels/
    ├── train/
    └── val/
```

### 2. Configure YOLOv5

Create a dataset configuration file named `data.yaml`:

```yaml
train: ./dataset/images/train  # training images
val: ./dataset/images/val      # validation images
nc: 2                          # number of classes (person and car)
names: [person, car]           # class names
```

### 3. Start Training

In the YOLOv5 directory, run the following command to start training:

```bash
python train.py --img 640 --batch 16 --epochs 100 --data data.yaml --weights yolov5s.pt
```

- `--img 640`: resize input images to 640x640.
- `--batch 16`: number of images per batch.
- `--epochs 100`: number of training epochs.
- `--data data.yaml`: the dataset configuration file created above.
- `--weights yolov5s.pt`: fine-tune from the small YOLOv5 checkpoint (yolov5s).
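Malformed label files are a common cause of silent training failures. The helper below is a minimal sketch (the function name and the two-class assumption are mine, not part of YOLOv5) that validates one YOLO-format label file against the format described above:

```python
from pathlib import Path

def check_label_file(path, num_classes=2):
    """Validate one YOLO-format label file; return a list of error strings."""
    errors = []
    for lineno, line in enumerate(Path(path).read_text().splitlines(), 1):
        parts = line.split()
        if len(parts) != 5:
            errors.append(f"{path}:{lineno}: expected 5 fields, got {len(parts)}")
            continue
        cls, *coords = parts
        # Class index must be an integer in [0, num_classes)
        if not cls.isdigit() or int(cls) >= num_classes:
            errors.append(f"{path}:{lineno}: bad class index {cls!r}")
        # All coordinates are fractions of image size, so they must lie in [0, 1]
        for name, value in zip(("x_center", "y_center", "width", "height"), coords):
            try:
                v = float(value)
            except ValueError:
                errors.append(f"{path}:{lineno}: {name} is not a number")
                continue
            if not 0.0 <= v <= 1.0:
                errors.append(f"{path}:{lineno}: {name}={v} outside [0, 1]")
    return errors
```

Running it over `dataset/labels/train/*.txt` before training catches bad class indices and unnormalized pixel coordinates early.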
### 4. Tracking with DeepSORT

Once training finishes, combine YOLOv5 with DeepSORT for target tracking. The example below loads the trained YOLOv5 weights and integrates them with DeepSORT for detection and tracking in a video:

```python
import cv2
import numpy as np
import torch

from deep_sort_pytorch.utils.parser import get_config
from deep_sort_pytorch.deep_sort import DeepSort
from yolov5.models.experimental import attempt_load
from yolov5.utils.general import non_max_suppression, scale_coords

# Load the YOLOv5 model
weights = 'runs/train/exp/weights/best.pt'  # path to the trained weights
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = attempt_load(weights, map_location=device)
model.to(device).eval()

# Load DeepSORT
cfg = get_config()
cfg.merge_from_file('deep_sort_pytorch/configs/deep_sort.yaml')
deepsort = DeepSort(cfg.DEEPSORT.REID_CKPT,
                    max_dist=cfg.DEEPSORT.MAX_DIST,
                    min_confidence=cfg.DEEPSORT.MIN_CONFIDENCE,
                    nms_max_overlap=cfg.DEEPSORT.NMS_MAX_OVERLAP,
                    max_iou_distance=cfg.DEEPSORT.MAX_IOU_DISTANCE,
                    max_age=cfg.DEEPSORT.MAX_AGE,
                    n_init=cfg.DEEPSORT.N_INIT,
                    nn_budget=cfg.DEEPSORT.NN_BUDGET,
                    use_cuda=True)

def process_video(video_path):
    cap = cv2.VideoCapture(video_path)
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # Preprocess: BGR to RGB, HWC to CHW, normalize to [0, 1]
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = img.transpose((2, 0, 1))
        img = np.ascontiguousarray(img)
        img = torch.from_numpy(img).to(device)
        img = img.float() / 255.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)

        pred = model(img)[0]
        pred = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)

        for i, det in enumerate(pred):  # detections per image
            if len(det):
                # Rescale boxes from model input size back to frame size
                det[:, :4] = scale_coords(img.shape[2:], det[:, :4], frame.shape).round()
                outputs = deepsort.update(det.cpu(), frame)
                for output in outputs:
                    bbox = output[:4]
                    track_id = output[-1]
                    cv2.rectangle(frame,
                                  (int(bbox[0]), int(bbox[1])),
                                  (int(bbox[2]), int(bbox[3])),
                                  (0, 255, 0), 2)
                    cv2.putText(frame, f'ID: {track_id}',
                                (int(bbox[0]), int(bbox[1]) - 10),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

        cv2.imshow('Tracking', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    video_path = 'path/to/video.mp4'
    process_video(video_path)
```

This script reads the given video file, runs YOLOv5 detection on every frame, and uses the DeepSORT algorithm to track the detected targets. Each detection is drawn with a bounding box and its tracking ID.

## Summary

We trained a YOLOv5 model that recognizes vehicles and pedestrians and combined it with DeepSORT for multi-object tracking. The rest of this section builds the PyQt interface around it.

### 1. Install Dependencies

First, make sure the required libraries are installed:

```bash
pip install PyQt5 opencv-python torch torchvision deep_sort_pytorch
```

### 2. Create the Main Window

Use PyQt5 to build a simple GUI:

```python
import sys
import cv2
import numpy as np
import torch
from PyQt5.QtWidgets import (QApplication, QMainWindow, QLabel, QPushButton,
                             QVBoxLayout, QWidget, QFileDialog, QCheckBox,
                             QHBoxLayout)
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtCore import Qt, QTimer

from yolov5.models.experimental import attempt_load
from yolov5.utils.general import non_max_suppression, scale_coords
from deep_sort_pytorch.utils.parser import get_config
from deep_sort_pytorch.deep_sort import DeepSort


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle('YOLOv5 + DeepSORT')
        self.setGeometry(100, 100, 800, 600)

        # Initialize UI components
        self.initUI()

        # Initialize YOLOv5 and DeepSORT state
        self.model = None
        self.deepsort = None
        self.tracks = {}           # track_id -> list of trajectory points
        self.tracking_enabled = True
        self.show_trajectory = True
        self.selected_class = 'car'

        # Timer driving frame-by-frame video processing
        self.timer = QTimer(self)
        self.timer.timeout.connect(self.process_frame)
        self.cap = None

    def initUI(self):
        # Layout setup
        layout = QVBoxLayout()
        widget = QWidget()
        widget.setLayout(layout)
        self.setCentralWidget(widget)

        # Captions for the input and output views
        layout.addWidget(QLabel('Input'))
        layout.addWidget(QLabel('Output'))

        # Image labels for displaying frames
        self.input_label = QLabel(self)
        self.output_label = QLabel(self)
        layout.addWidget(self.input_label)
        layout.addWidget(self.output_label)

        # Buttons and checkboxes
        button_layout = QHBoxLayout()
        open_button = QPushButton('Open')
        open_button.clicked.connect(self.open_video)
        adapt_window_button = QPushButton('Fit to Window')
        adapt_window_button.clicked.connect(self.adapt_window)
        predict_button = QPushButton('Predict')
        predict_button.clicked.connect(self.start_prediction)
        stop_button = QPushButton('Stop')
        stop_button.clicked.connect(self.stop_prediction)
        trajectory_checkbox = QCheckBox('Draw Trajectory')
        trajectory_checkbox.setChecked(True)
        trajectory_checkbox.stateChanged.connect(self.toggle_trajectory)
        person_checkbox = QCheckBox('person')
        car_checkbox = QCheckBox('car')
        person_checkbox.setChecked(False)
        car_checkbox.setChecked(True)
        person_checkbox.stateChanged.connect(lambda: self.select_class('person'))
        car_checkbox.stateChanged.connect(lambda: self.select_class('car'))

        button_layout.addWidget(open_button)
        button_layout.addWidget(adapt_window_button)
        button_layout.addWidget(predict_button)
        button_layout.addWidget(stop_button)
        button_layout.addWidget(trajectory_checkbox)
        button_layout.addWidget(QLabel('Track class:'))
        button_layout.addWidget(person_checkbox)
        button_layout.addWidget(car_checkbox)
        layout.addLayout(button_layout)

    def open_video(self):
        options = QFileDialog.Options()
        file_name, _ = QFileDialog.getOpenFileName(
            self, 'Select Video File', '',
            'Video Files (*.mp4 *.avi);;All Files (*)', options=options)
        if file_name:
            self.cap = cv2.VideoCapture(file_name)
            self.load_model()

    def load_model(self):
        # Load the YOLOv5 model (CPU here; use 'cuda' if available)
        self.model = attempt_load('yolov5s.pt', map_location='cpu')
        self.model.to('cpu').eval()

        # Load DeepSORT
        cfg = get_config()
        cfg.merge_from_file('deep_sort_pytorch/configs/deep_sort.yaml')
        self.deepsort = DeepSort(cfg.DEEPSORT.REID_CKPT,
                                 max_dist=cfg.DEEPSORT.MAX_DIST,
                                 min_confidence=cfg.DEEPSORT.MIN_CONFIDENCE,
                                 nms_max_overlap=cfg.DEEPSORT.NMS_MAX_OVERLAP,
                                 max_iou_distance=cfg.DEEPSORT.MAX_IOU_DISTANCE,
                                 max_age=cfg.DEEPSORT.MAX_AGE,
                                 n_init=cfg.DEEPSORT.N_INIT,
                                 nn_budget=cfg.DEEPSORT.NN_BUDGET,
                                 use_cuda=True)

    def start_prediction(self):
        if self.cap is not None:
            self.timer.start(30)  # 30 ms delay between frames

    def stop_prediction(self):
        self.timer.stop()

    def process_frame(self):
        ret, frame = self.cap.read()
        if not ret:
            self.timer.stop()
            return

        # Preprocess: BGR to RGB, HWC to CHW, normalize to [0, 1]
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = img.transpose((2, 0, 1))
        img = np.ascontiguousarray(img)
        img = torch.from_numpy(img).to('cpu')
        img = img.float() / 255.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)

        pred = self.model(img)[0]
        pred = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45)

        for i, det in enumerate(pred):  # detections per image
            if len(det):
                det[:, :4] = scale_coords(img.shape[2:], det[:, :4], frame.shape).round()
                outputs = self.deepsort.update(det.cpu(), frame)
                for output in outputs:
                    bbox = output[:4]
                    track_id = output[-1]
                    center = (int((bbox[0] + bbox[2]) // 2),
                              int((bbox[1] + bbox[3]) // 2))
                    # Accumulate trajectory points across frames
                    if track_id not in self.tracks:
                        self.tracks[track_id] = []
                    self.tracks[track_id].append(center)

                    if self.tracking_enabled and self.selected_class in ['person', 'car']:
                        cv2.rectangle(frame,
                                      (int(bbox[0]), int(bbox[1])),
                                      (int(bbox[2]), int(bbox[3])),
                                      (0, 255, 0), 2)
                        cv2.putText(frame, f'ID: {track_id}',
                                    (int(bbox[0]), int(bbox[1]) - 10),
                                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
                        if self.show_trajectory:
                            for j in range(1, len(self.tracks[track_id])):
                                cv2.line(frame,
                                         self.tracks[track_id][j - 1],
                                         self.tracks[track_id][j],
                                         (0, 0, 255), 2)

        # Display the processed frame (convert to RGB for Qt)
        frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        height, width, channel = frame_rgb.shape
        bytes_per_line = 3 * width
        q_img = QImage(frame_rgb.data, width, height, bytes_per_line,
                       QImage.Format_RGB888)
        self.output_label.setPixmap(QPixmap.fromImage(q_img))

    def toggle_trajectory(self, state):
        self.show_trajectory = state == Qt.Checked

    def select_class(self, class_name):
        self.selected_class = class_name

    def adapt_window(self):
        self.input_label.setScaledContents(True)
        self.output_label.setScaledContents(True)


if __name__ == '__main__':
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())
```
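One caveat on the `deepsort.update(...)` calls above: depending on the installed `deep_sort_pytorch` version, `DeepSort.update` typically expects center-format boxes `(x_center, y_center, w, h)` plus a separate confidence vector, not raw `(x1, y1, x2, y2)` detections. A small conversion helper (the name `xyxy_to_xywh` is mine, not part of either library) covers this:

```python
def xyxy_to_xywh(boxes):
    """Convert [x1, y1, x2, y2] corner boxes to [x_center, y_center, w, h].

    Accepts any iterable of 4-element boxes; returns a list of lists.
    """
    return [[(x1 + x2) / 2.0,  # x_center
             (y1 + y2) / 2.0,  # y_center
             x2 - x1,          # width
             y2 - y1]          # height
            for x1, y1, x2, y2 in boxes]
```

If your version of the library needs this format, the call would look roughly like `deepsort.update(np.array(xyxy_to_xywh(det[:, :4].cpu().numpy())), det[:, 4].cpu().numpy(), frame)`; check the signature in your installed copy before relying on it.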