Multi-modal target detection method and system suitable for modal deficiency

A target detection and multi-modal technology, applied to neural learning methods, character and pattern recognition, biological neural network models, etc. It addresses problems such as information loss, noise in detection results, and missing modal data, with the effects of improving consistency and reducing the amount of calculation and complexity.

Active Publication Date: 2022-04-15
HEFEI UNIV OF TECH

AI Technical Summary

Benefits of technology

This patented technology virtually generates features for missing modalities rather than reconstructing the full missing modal data, which avoids introducing noise and losing feature information. Because the generation and comparison are performed at the feature level across the available modalities, the calculation amount and complexity of the model are greatly reduced, and the representation consistency of the generated modalities is improved.

Problems solved by technology

Existing multi-modal detection methods assume that all modal data are available, but in practice imagers and sensors can lose some modal data during operation due to factors such as environmental changes or equipment degradation. Such modal deficiency introduces noise into the detection results and causes errors in accurately identifying targets.


Examples


Embodiment 1

[0082] Embodiment 1: A multi-modal target detection method suitable for modal deficiency. In the following embodiment, H is height, W is width, and D is depth. Taking two modalities as an example, namely the two-dimensional RGB image data collected by a camera and the three-dimensional space point cloud data collected by a lidar, the method is described in detail and includes the following steps:

[0083] S1. Real modality generation stage:

[0084] 1) Based on the open-source multi-modal dataset KITTI, obtain the lidar's 3D space point cloud data and the 2D RGB image data captured at the same time and in the same space;

[0085] 2) The two-dimensional RGB image data is expressed mathematically as a modal feature tensor of size (H, W, 3), whose dimensions represent the height, the width, and the three RGB channels of the image. A ResNet network is used as the feature-extraction unit for the two-dimensional RGB image data to further improve extraction accuracy, and finally …
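The tensor representation in step 2) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the image is held as an (H, W, 3) NumPy array, and the hypothetical `extract_features` function stands in for the ResNet feature-extraction unit (here a simple average pooling over a coarse grid, since the actual network is not disclosed at this level):

```python
import numpy as np

# A 2D RGB image as a modal feature tensor of size (H, W, 3):
# height, width, and the three RGB channels.
H, W = 375, 1242  # a typical KITTI image resolution
rgb_image = np.random.rand(H, W, 3).astype(np.float32)

def extract_features(image: np.ndarray, grid: int = 8) -> np.ndarray:
    """Hypothetical stand-in for the ResNet feature-extraction unit:
    average-pools the image into a coarse (grid, grid, 3) feature map."""
    h_step, w_step = image.shape[0] // grid, image.shape[1] // grid
    # Crop to a multiple of the grid size, then block-average each cell.
    cropped = image[: h_step * grid, : w_step * grid]
    pooled = cropped.reshape(grid, h_step, grid, w_step, 3).mean(axis=(1, 3))
    return pooled

features = extract_features(rgb_image)
print(features.shape)  # (8, 8, 3)
```

In a real pipeline the pooling would be replaced by a pretrained ResNet backbone producing a much deeper channel dimension; only the (H, W, 3) input convention comes from the text above.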



Abstract

The invention discloses a multi-modal target detection method and system suitable for modal deficiency. The method comprises the following steps: S1, training a neural network unit with the multi-modal data in a data set, inputting all modal data of the data set into the trained neural network unit for detection, and storing the detection results; S2, extracting modal data of other dimensions, generating a pseudo-modal feature tensor with the generative network unit, splicing it, and inputting it into the attention network unit, the information fusion unit, and the discrimination network unit until training of the generative network unit is complete; and S3, collecting modal data in real time through data-collection equipment and generating the type and identifier of each target with the trained neural network unit. On the premise of avoiding noise introduction and feature-information loss, the method virtually generates the missing modal data, greatly reduces the calculation amount and complexity of the model, and improves the representation consistency of the generated modalities.
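The pseudo-modal generation and splicing described in stages S2 and S3 can be sketched as a skeleton. Everything below is illustrative: the names `generate_pseudo_modal` and `splice` are hypothetical, and plain NumPy arrays stand in for the generative, attention, fusion, and discrimination network units, which the abstract does not specify:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_pseudo_modal(available: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the generative network unit: maps an
    available modality's feature tensor to a pseudo-modal feature tensor
    of the same spatial size (here via a fixed random channel projection)."""
    h, w, c = available.shape
    projection = rng.standard_normal((c, c)).astype(np.float32)
    return available @ projection

def splice(real: np.ndarray, pseudo: np.ndarray) -> np.ndarray:
    """Concatenate real and pseudo-modal feature tensors along the
    channel axis before they enter the attention and fusion units."""
    return np.concatenate([real, pseudo], axis=-1)

# Inference with a missing modality (stage S3): only the RGB-derived
# features are available; the point-cloud features are generated virtually.
rgb_features = rng.random((8, 8, 16)).astype(np.float32)
pseudo_pointcloud = generate_pseudo_modal(rgb_features)
fused_input = splice(rgb_features, pseudo_pointcloud)
print(fused_input.shape)  # (8, 8, 32)
```

The design point this captures is that the generated modality lives in feature space with the same spatial layout as the real one, so splicing is a plain channel-wise concatenation rather than a cross-modal alignment step.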

