10 results about "Optical flow" patented technology

Optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and the scene. Optical flow can also be defined as the distribution of apparent velocities of brightness patterns in an image. The concept was introduced by the American psychologist James J. Gibson in the 1940s to describe the visual stimulus provided to animals moving through the world. Gibson stressed the importance of optic flow for affordance perception, the ability to discern possibilities for action within the environment. Followers of Gibson and his ecological approach to psychology have further demonstrated the role of the optical flow stimulus in the perception of the observer's own movement through the world; the perception of the shape, distance, and movement of objects in the world; and the control of locomotion.
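
All of the patents below start from a dense flow field, so the following is a minimal sketch of how one is commonly computed in practice, using OpenCV's Farneback estimator; the file name video.mp4 is a placeholder, and this is an illustration, not any particular patent's method.

```python
import cv2

cap = cv2.VideoCapture("video.mp4")  # placeholder input video
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # flow[y, x] = (dx, dy): apparent displacement of the brightness pattern
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean apparent speed (px/frame):", magnitude.mean())
    prev_gray = gray

cap.release()
```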

Device and method for monitoring and early warning of flood damages based on multi-resource integration

Active · CN105894741A · Improve accuracy · Exact approximation to the true solution · Human health protection · Alarms · Multi-resource · Radar
The invention discloses a device and method for flood-damage monitoring and early warning based on multi-resource integration. The device comprises a hydrological actual-state module, a flow forecast module, a weather forecast module, a radar analysis module, a forecast integration module, and a module for product display and early-warning release. The method comprises the steps that: (1) the hydrological actual-state module reads and monitors monitoring-point data; (2) the weather forecast module reads input data and parameters to carry out numerical weather-model forecasting; (3) the flow forecast module carries out flow forecasting according to the weather-model forecast and the monitoring-point data provided by the hydrological actual-state module; (4) the radar analysis module reads basic radar data, estimates precipitation, and performs optical-flow extrapolation; (5) the forecast integration module integrates and outputs the information from the flow forecast module and the radar analysis module; and (6) the display-and-release module completes product display and early-warning release. The device and method combine the accuracy of radar extrapolation in forecasting the position of a precipitation system with the strengths of numerical-model forecasting, so the limitations of either single method are avoided.
Owner:NANJING UNIV OF INFORMATION SCI & TECH
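
The optical-flow extrapolation in step (4) can be sketched as follows: estimate a motion field between two successive radar frames and advect the latest field forward step by step. This is a minimal illustration assuming the reflectivity frames are already scaled to 8-bit grayscale arrays; it is not the patented forecast-integration method itself.

```python
import cv2
import numpy as np

def extrapolate_radar(frame_prev, frame_curr, n_steps=6):
    """Advect the latest radar field along the estimated motion field.

    frame_prev, frame_curr: radar reflectivity scaled to uint8 grayscale.
    Returns a list of n_steps extrapolated fields.
    """
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_curr, None,
                                        0.5, 3, 25, 3, 7, 1.5, 0)
    h, w = frame_curr.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward warp: the value arriving at (x, y) came from (x - u, y - v).
    map_x = (grid_x - flow[..., 0]).astype(np.float32)
    map_y = (grid_y - flow[..., 1]).astype(np.float32)

    forecasts = []
    field = frame_curr.astype(np.float32)
    for _ in range(n_steps):
        field = cv2.remap(field, map_x, map_y, cv2.INTER_LINEAR)
        forecasts.append(field.copy())
    return forecasts
```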

Behavior recognition method and system based on attention mechanism double-flow network

Inactive · CN111462183A · Take advantage of · Improve the accuracy of behavior recognition · Image enhancement · Image analysis · Time domain · RGB image
The invention provides a behavior recognition method and system based on an attention-mechanism double-flow network, belonging to the technical field of behavior recognition. The method comprises the steps of: dividing an obtained video into a plurality of segments of equal length, extracting an RGB image and an optical-flow grayscale image for each frame of each video segment, and preprocessing the RGB images and the optical-flow grayscale images; randomly sampling the preprocessed images to obtain an RGB image and an optical-flow grayscale image for each video clip; extracting appearance features and temporal dynamic features of the sampled images with a double-flow network model that introduces an attention mechanism; fusing the appearance features and the temporal dynamic features within the time-domain network and the space-domain network respectively; and performing weighted fusion of the time-domain network's and the space-domain network's fusion results to obtain a recognition result for the whole video. The invention makes full use of the video data, better extracts local key features of the video frames, highlights the foreground region where the action occurs, suppresses the influence of irrelevant background information, and improves behavior recognition accuracy.
Owner:SHANDONG UNIV
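
A minimal sketch of the late-fusion idea described above: two backbone CNNs (assumed here to return feature maps with a known channel count), each followed by a channel-attention head, with a weighted sum of the two streams' class scores. The module names, the squeeze-and-excitation-style attention, and the alpha weight are illustrative assumptions, not the patent's architecture.

```python
import torch.nn as nn

class AttentionHead(nn.Module):
    """Channel attention that re-weights feature maps to highlight the foreground."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1) weights
        return x * w

class TwoStreamNet(nn.Module):
    def __init__(self, spatial_cnn, temporal_cnn, channels, n_classes, alpha=0.6):
        super().__init__()
        self.spatial, self.temporal = spatial_cnn, temporal_cnn
        self.att_s, self.att_t = AttentionHead(channels), AttentionHead(channels)
        self.head_s = nn.Linear(channels, n_classes)
        self.head_t = nn.Linear(channels, n_classes)
        self.alpha = alpha  # assumed weight of the spatial stream in late fusion

    def forward(self, rgb, flow_stack):
        fs = self.att_s(self.spatial(rgb)).mean(dim=(2, 3))           # appearance
        ft = self.att_t(self.temporal(flow_stack)).mean(dim=(2, 3))   # motion
        return self.alpha * self.head_s(fs) + (1 - self.alpha) * self.head_t(ft)
```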

Real-time gesture recognition method

Inactive · CN107958218A · Improve dynamic gesture recognition rate · Improve recognition rate · Input/output for user-computer interaction · Character and pattern recognition · Support vector machine · Feature vector
The invention discloses a real-time gesture recognition method which comprises the steps of: (1) decomposing an obtained gesture video into image sequences sorted in chronological order, preprocessing the obtained images, and then carrying out hand-region segmentation; (2) extracting a hand-shape feature of the hand region in each image and using an SVM (support vector machine) to classify the hand-shape feature as a corresponding gesture value; (3) combining the gesture value of each image and the direction feature of the motion trajectory obtained by an iterative LK pyramid optical-flow algorithm into a feature vector for each dynamic gesture image; (4) executing steps (2) and (3) in a loop until all images of the current video are processed, so as to obtain a complete feature-vector sequence; (5) creating a gesture template library; and (6) carrying out optimized DTW matching between the obtained feature-vector sequence and all templates in the template library and calculating the distortion of the match, where recognition fails if the distortion exceeds a distortion threshold and a recognition result is output if it falls below the threshold.
Owner:NANJING UNIV OF POSTS & TELECOMM
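
Step (6) relies on DTW matching with a distortion threshold. Below is a plain-Python sketch of that matching logic; the length normalisation and the exact threshold semantics are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def dtw_distortion(seq_a, seq_b):
    """Dynamic-time-warping cost between two feature-vector sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)  # length-normalised distortion (assumption)

def recognise(sequence, templates, threshold):
    """Return the label of the best-matching template, or None on failure."""
    label, best = None, np.inf
    for name, template in templates.items():
        d = dtw_distortion(sequence, template)
        if d < best:
            label, best = name, d
    return label if best <= threshold else None
```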

Feature point tracking method based on event camera

The invention discloses a feature point tracking method based on an event camera, aiming to improve feature point tracking precision. According to the technical scheme, the event-camera-based feature point tracking system is composed of a data acquisition module, an initialization module, an event set selection module, a matching module, a feature point updating module, and a template edge updating module. The initialization module extracts feature points and an edge map from the image frame. The event set selection module selects an event set S from the event stream around each feature point. The matching module matches S with the template edge around the feature points to solve the optical flow set Gk of the n feature points at time tk. The feature point updating module calculates the position set FDk+1 of the n feature points at time tk+1 according to Gk, and the template edge updating module updates the positions in the template-edge position set PBDk using IMU data to obtain the position set PBDk+1 of the template edges corresponding to the n feature points at time tk+1. The method improves the precision of tracking feature points on the event stream and extends the average tracking time of the feature points.
Owner:NAT UNIV OF DEFENSE TECH
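
The event-set selection and the position update can be illustrated as below. The structured-array layout for events and the constant-velocity update FDk+1 = FDk + Gk·dt are assumptions made for illustration; the patent's matching step against template edges is not reproduced here.

```python
import numpy as np

def select_events(events, feature_xy, radius, t_start, t_end):
    """Collect the event set S in a spatio-temporal window around one feature.

    Assumes `events` is a NumPy structured array with fields 'x', 'y', 't'.
    """
    x, y = feature_xy
    mask = ((np.abs(events["x"] - x) <= radius) &
            (np.abs(events["y"] - y) <= radius) &
            (events["t"] >= t_start) & (events["t"] < t_end))
    return events[mask]

def update_features(F_k, G_k, dt):
    """Propagate the n feature positions: FDk+1 = FDk + Gk * dt.

    F_k: (n, 2) positions at time tk; G_k: (n, 2) optical flow per feature.
    """
    return F_k + G_k * dt
```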

Multi-area real-time motion detection method based on monitoring video

Active · CN108764148A · Realize space-time position detection · Real-time processing · Image analysis · Character and pattern recognition · Optical flow · Test phase
The invention discloses a multi-area real-time motion detection method based on a monitoring video. The method comprises a model training phase and a testing phase. The model training phase comprises: acquiring training data and building a labeled database of specific actions; calculating the dense optical flow of each video sequence in the training data to obtain its optical flow sequence, and labeling the optical flow images in that sequence; and training the target detection model YOLOv3 on the video sequences and the optical flow sequences to obtain an RGB YOLOv3 model and an optical-flow YOLOv3 model. The method achieves spatiotemporal position detection of specific actions in the monitoring video and enables real-time processing of the monitoring stream.
Owner:NORTHEASTERN UNIV
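
Training an optical-flow YOLOv3 model presupposes rendering flow fields as images. Below is a minimal sketch of that preprocessing; the HSV encoding of flow direction and magnitude is one common convention, not necessarily the one used in the patent.

```python
import cv2
import numpy as np

def flow_to_image(flow):
    """Render a dense flow field as an HSV-encoded BGR image for training."""
    h, w = flow.shape[:2]
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((h, w, 3), dtype=np.uint8)
    hsv[..., 0] = ang * 90 / np.pi  # direction -> hue (OpenCV hue is 0..180)
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255,
                                cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

def build_flow_sequence(frames):
    """Dense optical flow images for consecutive grayscale (uint8) frames."""
    out = []
    for prev, curr in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        out.append(flow_to_image(flow))
    return out
```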

Unmanned aerial vehicle visual angle video semantic segmentation method based on deep learning

Pending · CN113269133A · Easy to use and flexible · Reduction procedure · Character and pattern recognition · Neural architectures · Image segmentation · Optical flow
The invention belongs to the field of unmanned aerial vehicle vision and relates to a UAV-perspective video semantic segmentation method based on deep learning. To solve the image semantic segmentation problem, an asymmetric encoder-decoder network structure is designed: in the encoder stage, channel split and channel recombination are fused into an improved Bottleneck structure to carry out down-sampling and feature extraction; in the decoder stage, rich features are extracted and fused by a spatial-pyramid multi-feature fusion module; and finally, up-sampling is performed to obtain the segmentation result. Then, for the video semantic segmentation problem, the image segmentation model designed by the invention is used as the segmentation module of video semantic segmentation, with an improved key-frame selection strategy and feature transfer via an optical-flow method, reducing redundancy and accelerating video segmentation.
Owner:DALIAN UNIV OF TECH
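
A minimal sketch of a Bottleneck block that fuses channel split and channel recombination, in the spirit of the encoder stage described above; the exact layer layout is an assumption modeled on ShuffleNet-style blocks, not the patent's design, and assumes an even channel count.

```python
import torch
import torch.nn as nn

def channel_shuffle(x, groups=2):
    """Recombine channels after a split so information mixes across branches."""
    b, c, h, w = x.size()
    x = x.view(b, groups, c // groups, h, w)
    return x.transpose(1, 2).reshape(b, c, h, w)

class SplitShuffleBottleneck(nn.Module):
    """Bottleneck: split channels, transform one half, concat, then shuffle."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2  # assumes `channels` is even
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU())

    def forward(self, x):
        a, b = x.chunk(2, dim=1)      # channel split
        out = torch.cat([a, self.branch(b)], dim=1)
        return channel_shuffle(out)   # channel recombination
```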

Zinc flotation working condition identification method based on long-time-history depth features

The invention discloses a zinc flotation working-condition identification method based on long-time-history depth features. The method comprises the following steps: first, employing a 3D convolutional network as the base network and using knowledge distillation so that part of the RGB-stream network's structure mimics an optical-flow network, allowing the RGB-stream network to learn the motion information of the optical flow without extracting optical flow at test time; segmenting a single video, performing frame-level feature extraction on each segment with the trained RGB-stream network, and inputting the extracted frame-level features of each segment into an LSTM network to further extract video-level global spatial-temporal features; and supplementing the network with a 2D convolutional network to extract complementary appearance features, splicing the global spatial-temporal features and the enhanced appearance features together, and feeding the spliced features into a multi-layer perceptron for final working-condition identification. The invention combines the advantages of convolutional and recurrent neural networks and can identify the zinc flotation working condition quickly and accurately, effectively guiding reagent dosing.
Owner:CENT SOUTH UNIV
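
The distillation idea, where the RGB stream learns motion information from a flow stream so that no optical flow is needed at test time, can be sketched as a combined loss. The interface in which the student returns both features and logits, and the weight beta, are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def distill_step(rgb_net, flow_net, rgb_clip, flow_clip, labels, beta=0.5):
    """One training step: the RGB stream (student) mimics the frozen
    optical-flow stream (teacher) in feature space while also classifying."""
    with torch.no_grad():
        flow_feat = flow_net(flow_clip)       # teacher features, no gradients
    rgb_feat, logits = rgb_net(rgb_clip)      # student returns (features, logits)
    loss = (F.cross_entropy(logits, labels)
            + beta * F.mse_loss(rgb_feat, flow_feat))
    return loss
```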