Violent video recognition method for bimodal task learning based on attention mechanism

A video recognition and attention technology applied in the field of violent video recognition. It addresses the problem that existing methods ignore the interdependence between features, which limits the generalization ability of classifiers, and achieves the effects of improving generalization ability, enhancing the expression of relevant features, and suppressing the expression of irrelevant features.

Pending Publication Date: 2020-11-06
COMMUNICATION UNIVERSITY OF CHINA


Problems solved by technology

Existing research methods basically only use video labels as supervisory signals, building and training network structures to obtain video violence/non-violence labels, but they ignore the interdependence between features, which affects the generalization ability of the classifier.


Examples


Example Embodiment

[0024] Embodiment 1: As shown in Figures 1 to 4, the violent video recognition method for bimodal task learning based on the attention mechanism includes the following steps:

[0025] Step 1: Add an attention mechanism module to the spatial-stream deep neural network to capture the interdependence between the violent features of static frame images and form the attention weights;

[0026] Step 2: Add an attention mechanism module to the temporal-stream deep neural network to capture the interdependence between the violent features of the optical flow sequence and form the attention weights;

[0027] Step 3: Extract the feature information of the violent video from single frame images, and establish a violent video recognition model based on single frame images;

[0028] Step 4: Extract the feature information of the violent video from the motion optical flow, and establish a violent video recognition model based on the motion optical flow.
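The patent does not disclose the internal architecture of the attention module, but Steps 1 and 2 describe it as capturing the interdependence between features and turning that interdependence into weights. A minimal numpy sketch of one common realization of this idea, a Gram-matrix (channel-interdependence) attention with a residual connection, is shown below; the function names and shapes are illustrative assumptions, not the patented design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feat):
    """feat: (C, H*W) feature map, one row per channel.

    Builds a (C, C) channel-interdependence (Gram) matrix,
    normalizes each row into attention weights, and reweights
    the channels, adding the result back residually."""
    gram = feat @ feat.T               # (C, C) pairwise channel similarity
    weights = softmax(gram, axis=-1)   # attention weights, rows sum to 1
    attended = weights @ feat          # mix channels by their dependence
    return feat + attended             # residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 49))       # 8 channels over a 7x7 spatial map
y = channel_attention(x)
print(y.shape)                         # (8, 49)
```

The same module can be applied unchanged to spatial-stream features (Step 1) and temporal-stream features (Step 2), since both are channel-by-position feature maps.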

Example Embodiment

[0059] Embodiment 2: As shown in Figures 1 to 4, the violent video recognition method for bimodal task learning based on the attention mechanism includes the following steps:

[0060] Step S101, adding an attention mechanism module to the deep neural network to capture interdependence between violent features;

[0061] Step S102, using a deep neural network with an attention mechanism to extract the features of the violent video on a single frame image;

[0062] Step S103, using a deep neural network with an attention mechanism to extract the features of the violent video on the motion optical flow;

[0063] Step S104, building a violence recognition system based on a post-fusion multi-feature average fusion strategy.
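Step S104's post-fusion averaging can be sketched in a few lines: each stream is first converted into class probabilities, and the probabilities are then averaged. The class ordering and logit values below are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def late_fusion_average(spatial_logits, temporal_logits):
    """Post-fusion averaging: convert each stream's logits to
    probabilities first, then average the probabilities."""
    p_s = softmax(np.asarray(spatial_logits, dtype=float))
    p_t = softmax(np.asarray(temporal_logits, dtype=float))
    return (p_s + p_t) / 2.0

# classes: [non-violent, violent]
fused = late_fusion_average([0.2, 1.5], [1.0, 2.0])
label = "violent" if fused[1] > fused[0] else "non-violent"
print(label)  # violent
```

Averaging after the softmax (rather than averaging raw logits) keeps each stream's contribution on the same probability scale, so neither modality dominates merely because its logits have a larger magnitude.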

[0064] The first basic convolutional neural network used is the TSN network, which is composed of a spatial-stream convolutional neural network and a temporal-stream convolutional neural network. The attention mechanism module is added to both streams.
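TSN (Temporal Segment Networks), named above as the backbone, samples a few short snippets across the video and aggregates their per-snippet class scores into one video-level prediction through a segmental consensus function. A minimal sketch of that consensus step, using averaging, is shown below; the snippet count and scores are illustrative, and the patent excerpt does not specify the TSN configuration used.

```python
import numpy as np

def tsn_consensus(snippet_scores):
    """TSN-style segmental consensus: aggregate per-snippet class
    scores (K snippets x num_classes) into one video-level score,
    here by averaging across snippets."""
    return np.mean(np.asarray(snippet_scores, dtype=float), axis=0)

# 3 snippets, 2 classes: [non-violent, violent]
scores = [[0.1, 0.9],
          [0.3, 0.7],
          [0.2, 0.8]]
video_score = tsn_consensus(scores)
print(video_score)
```

Each of the two streams (spatial and temporal) produces its own consensus score, and those two scores are then combined by the late-fusion averaging of Step S104.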



Abstract

The invention discloses a violent video recognition method for bimodal task learning based on an attention mechanism, belonging to the technical field of natural interaction and intelligent image recognition. The method first analyses the characteristics of violent-scene video and extracts spatio-temporally correlated video features suited to describing violent scenes. It then establishes an attention mechanism module for violent video features, guided by the principle of capturing global feature information. Finally, it fuses spatial and temporal features carrying a global attention relationship so as to realize multi-modal information complementation, and develops a multi-task learning procedure that combines attention over violent video features with violent video classification, forming a complete violent video recognition and detection framework. The method realizes intelligent and effective detection of violent video.

