Video abstraction method based on progressive generative adversarial network

A progressive video summarization technique, applied in the field of information processing, which can solve problems such as redundant information, high memory requirements, and long training times that reduce the efficiency and speed of video summarization, the inability to learn video features, and high training costs, achieving good scalability, a wide range of applications, and reduced size.

Active Publication Date: 2020-05-15
BOYA XINAN TECH (BEIJING) CO LTD +1

AI Technical Summary

Problems solved by technology

Among existing technologies, first, a video synthesized from single images can contain tens of millions of pictures, which means a surge in data volume that places heavy training pressure on deep learning models: excessive redundant information, high memory requirements, and overly long training times seriously reduce the efficiency and speed of video summarization.
Secondly, for video data, existing techniques often read and train at high resolution to learn more accurate features, but the training cost is too high; simply lowering the resolution for training blurs the images, so the model cannot learn the video's features.


Examples


Embodiment 1

[0019] In this embodiment, a video summarization method based on a progressive generative adversarial network, as shown in Figure 1, includes the following steps:

[0020] Step 1. Sample frames from the video to be summarized at a fixed frequency, dividing the video frame by frame into a collection of pictures;

[0021] A video is composed of many pictures. The difference between adjacent pictures is very small, and the information extracted from them is essentially the same; the main information of the video is therefore contained not only in one picture but also in the adjacent pictures. Sampling the video at an appropriate frequency thus does not lose the content expression of the video. For a video R, let the first image be f_1; then sample R at an interval I to finally obtain N pictures. The collection of these pictures F = {f_i, i ∈ [1, N]} can represent the main content contained in t...
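The fixed-interval sampling described above can be sketched as follows. This is an illustrative fragment, not code from the patent; the function name `sample_frame_indices` is a made-up stand-in for Step 1.

```python
# Step 1 sketch: sample a video R at a fixed interval I, producing the
# index set of the pictures F = {f_1, ..., f_N} (0-based indices here).
def sample_frame_indices(total_frames: int, interval: int) -> list:
    """Return indices of frames kept when sampling every `interval` frames."""
    return list(range(0, total_frames, interval))

# Example: a 300-frame clip sampled every 10 frames yields N = 30 pictures.
indices = sample_frame_indices(300, 10)
assert len(indices) == 30
assert indices[0] == 0 and indices[-1] == 290
```

Because adjacent frames are nearly identical, the interval I trades off summary fidelity against data volume: a larger I discards more redundant frames.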

Embodiment 2

[0037] In this embodiment, the following video is taken as the video to be summarized, and it is summarized using the video summarization method based on a progressive generative adversarial network of the present invention, wherein:

[0038] Video to be summarized: a 2-megapixel surveillance video of an underground parking lot, which needs to be summarized every 10 minutes for safety management and early warning.

[0039] Summary task: extract the key frames of the video, save them, and compress the length of the video.

[0040] Summary method:

[0041] Step 1. First, the video is frame-sampled at a rate of 10 frames per second (10 fps), dividing the video R into 6000 frames, that is, 6000 pictures. After segmentation, the video data is read as frame vectors and arranged as ordered real-valued vectors, which is
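A quick sanity check of the numbers in this step: a 10-minute summarization window sampled at 10 fps does yield 6000 pictures. The frame-to-vector reading is shown with NumPy as an assumed representation; the patent text does not specify the library.

```python
import numpy as np

# 10-minute window at 10 frames per second -> 6000 sampled pictures.
duration_s = 10 * 60
fps = 10
num_frames = duration_s * fps
assert num_frames == 6000

# Each sampled 2-MP RGB frame can be flattened into an ordered real vector.
frame = np.zeros((1080, 1920, 3), dtype=np.float32)
vector = frame.reshape(-1)            # ordered frame vector
assert vector.shape[0] == 1080 * 1920 * 3
```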

[0042] Step 2. The 2-megapixel video has a resolution of 1920*1080. Correspondingly, build a 9-layer progressive generative adversarial netw...
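One plausible reading of the 9-layer figure: progressive-growing GANs commonly start from a small base (e.g. 4x4) and double the working resolution with each added stage, so nine stages reach 1024, the largest power-of-two step below the 1080-pixel short side of the frame. This schedule is an assumption for illustration; the patent's exact layer definition is truncated in this excerpt.

```python
# Hypothetical progressive-resolution schedule: base resolution doubled
# once per stage, as in common progressive-growing GAN training.
def resolution_schedule(stages: int, base: int = 4) -> list:
    """Working resolution (one side) at each of the `stages` growth steps."""
    return [base * 2 ** s for s in range(stages)]

schedule = resolution_schedule(9)
assert schedule == [4, 8, 16, 32, 64, 128, 256, 512, 1024]
```

Training proceeds stage by stage from the lowest resolution upward, which is how the method avoids the high cost of training at full 1920*1080 resolution from the start.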



Abstract

The invention provides a video abstraction method based on a progressive generative adversarial network, and relates to the technical field of information processing. The method comprises the steps of: firstly, segmenting a video into a set of pictures according to a certain frame rate, converting video data into picture data; then establishing a progressive generative adversarial network model, gradually increasing the network layers of the model, training from low resolution to high resolution, and extracting key frames; meanwhile, selecting a precision mode or a convergence mode to determine in which mode the model stops training at a given resolution; and finally, assigning labels to all frames of the video so as to mark its key frames. The key frames can be extracted using the labels, and abstract short videos are synthesized. The video abstraction method provided by the invention adopts an unsupervised training mode, so manual marking of key frames in the video is not needed; at the same time, progressive training is carried out, so that local information can be fully utilized, the training complexity is reduced, and the stability of the training results is improved.

Description

Technical field

[0001] The invention relates to the technical field of information processing, in particular to a video summarization method based on a progressive generative adversarial network.

Background technique

[0002] With the technological advancement and cost reduction of handheld video shooting equipment and the rise of video social platforms, the application of video in social media, surveillance, advertising, and other fields has become increasingly deep, and the amount of video data has also increased sharply. Like text, pictures, and voice, video is an important form of information media, especially of visual information. The difference is that video, especially unedited (or otherwise unprocessed) video, records facts more vividly and objectively than text and pictures. It is therefore especially important to develop computer vision techniques that can extract valid facts from large amounts of video data. As an upstream task of video analysis, video summarization can remove ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N21/44; H04N21/8549; G06N3/04
CPC: H04N21/44008; H04N21/8549; G06N3/045
Inventor 简维凤吴振豪陈钟李青山杨可静兰云飞吴琛李洪生王晓青
Owner BOYA XINAN TECH (BEIJING) CO LTD