Virtual processor scheduling method based on NUMA high-performance network processor loads

A virtual processor and network processor technology, applied in software simulation/interpretation/emulation, resource allocation, multi-programming devices, etc., to achieve the effects of low system overhead and reduced impact.

Status: Inactive
Publication Date: 2015-05-13
Owner: SHANGHAI JIAO TONG UNIV

Problems solved by technology

[0007] In view of the above-mentioned defects in the prior art, the technical problem to be solved by the present invention is to provide a method that, based on the affinity between the current virtual machine network card buffer and the NUMA nodes, analyzes the load on the physical processors (PCPUs) of each NUMA node and uses both pieces of information to schedule the virtual processor (VCPU) that handles the network traffic.
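
The following sketch is an illustration only, not the patented implementation: it estimates which NUMA node backs most of a virtual machine's virtual-function receive buffer pages. The inputs rx_buffer_frames and frame_to_numa_node are hypothetical placeholders for the buffer-frame and topology information a VMM would expose, for example through a hypercall.

```python
# Illustrative sketch, not the patent's implementation: estimate which NUMA node
# backs most of a VM's virtual-function (VF) RX buffer pages. The inputs
# `rx_buffer_frames` and `frame_to_numa_node` are hypothetical placeholders for
# information the VMM would expose, e.g. via a hypercall.
from collections import Counter

def nic_buffer_affinity(rx_buffer_frames, frame_to_numa_node):
    """Return the NUMA node that holds the majority of the NIC RX buffer frames."""
    counts = Counter(frame_to_numa_node(frame) for frame in rx_buffer_frames)
    node, _ = counts.most_common(1)[0]
    return node

# Tiny self-contained example: frames 0-3 live on node 0, frame 4 on node 1.
if __name__ == "__main__":
    frames = [0, 1, 2, 3, 4]
    print(nic_buffer_affinity(frames, lambda f: 0 if f < 4 else 1))  # -> 0
```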

Examples

Example Embodiment

[0021] The concept, specific structure, and technical effects of the present invention are further described below in conjunction with the accompanying drawings, so that the purpose, characteristics, and effects of the present invention can be fully understood.

[0022] Figure 1 is a schematic diagram of the laboratory server architecture, which illustrates the classic problems of the NUMA architecture.

[0023] Figure 2 is a schematic diagram of the operation of the SR-IOV high-performance network card used in the present invention; IOMMU refers to the I/O memory management unit.

[0024] Figure 3 shows the position of the vNAB system within the overall VMM framework of the present invention. vNAB obtains the network card buffer information from the VM monitor through a hypercall; at the same time, DOM0 collects the real-time VCPU loads of the VM and of itself, and from these derives the PCPU load situation, which provides the information that supports the vNAB scheduling module.
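
A minimal sketch of this load-gathering step is given below, under the assumption that DOM0 can read per-VCPU utilisation and the VCPU-to-PCPU mapping from its monitoring interface; the dictionaries used are hypothetical stand-ins, not an actual Xen or vNAB API.

```python
# Illustrative sketch of the load-gathering step: DOM0 collects per-VCPU
# utilisation and, using the VCPU->PCPU mapping, aggregates it into per-PCPU and
# per-NUMA-node load. All dictionaries here are hypothetical stand-ins for what a
# real monitoring interface would report; they are not an actual Xen/vNAB API.
from collections import defaultdict

def aggregate_loads(vcpu_load, vcpu_to_pcpu, pcpu_to_node):
    """vcpu_load maps VCPU id -> utilisation in [0, 1]; returns (pcpu_load, node_load)."""
    pcpu_load = defaultdict(float)
    for vcpu, load in vcpu_load.items():
        pcpu_load[vcpu_to_pcpu[vcpu]] += load

    node_load = defaultdict(float)
    for pcpu, load in pcpu_load.items():
        node_load[pcpu_to_node[pcpu]] += load
    return dict(pcpu_load), dict(node_load)

# Example: VCPUs 0 and 1 run on PCPU 0 (node 0), VCPU 2 runs on PCPU 2 (node 1).
if __name__ == "__main__":
    pcpu_load, node_load = aggregate_loads(
        vcpu_load={0: 0.5, 1: 0.25, 2: 0.8},
        vcpu_to_pcpu={0: 0, 1: 0, 2: 2},
        pcpu_to_node={0: 0, 1: 0, 2: 1, 3: 1},
    )
    print(pcpu_load)  # {0: 0.75, 2: 0.8}
    print(node_load)  # {0: 0.75, 1: 0.8}
```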

[0025]

Abstract

The invention discloses a virtual processor scheduling method based on Non-Uniform Memory Access Architecture (NUMA) high-performance network processor loads. The method comprises the steps that when a virtual machine starts a virtual function network card distributed by a Single-Root I/O Virtualization (SR-IOV) high-performance network card, the affinity situation of a virtual machine network card buffer and an NUMA node is obtained by monitoring the running of the virtual machine network card; then, the VCPU running load situation is obtained for the VM and a DOM0, and the PCPU running loads on the NUMA node are obtained through the mapping relation between the VCPU and the PCPU; finally, according to the affinity of the NUMA node and the buffer and real-time load information of a processor of the NUMA node, the final scheduling result is obtained. The VCPU processing network data packets can have the advantages of the affinity and processor resources, and therefore the network performance of a high-performance network server under the virtualization environment can be obviously improved.
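
As an illustration of the decision flow summarized above, and not of the patented implementation itself, the sketch below prefers a PCPU on the NUMA node holding the network card buffer and falls back to the least-loaded PCPU when that node is saturated; the 0.75 busy threshold and the fallback policy are assumptions chosen for the example, since the abstract does not specify the exact rule.

```python
# Illustrative sketch of the final scheduling decision: prefer a PCPU on the NUMA
# node that holds the NIC buffer, but fall back to the globally least-loaded PCPU
# when that node is saturated. The 0.75 threshold and the fallback policy are
# assumptions for illustration only; the abstract does not give the exact rule.
def choose_pcpu(affinity_node, node_pcpus, pcpu_load, busy_threshold=0.75):
    """node_pcpus maps node -> list of PCPU ids; pcpu_load maps PCPU -> utilisation in [0, 1]."""
    def least_loaded(pcpus):
        return min(pcpus, key=lambda p: pcpu_load.get(p, 0.0))

    preferred = least_loaded(node_pcpus[affinity_node])
    if pcpu_load.get(preferred, 0.0) < busy_threshold:
        return preferred  # affinity wins: packet data stays local to the NIC buffer

    # Affinity node is saturated: pick the least-loaded PCPU across all nodes.
    return least_loaded([p for pcpus in node_pcpus.values() for p in pcpus])

# Example: node 0 holds the buffer but both of its PCPUs are busy, so the VCPU
# handling network packets is placed on PCPU 2 of node 1 instead.
if __name__ == "__main__":
    print(choose_pcpu(
        affinity_node=0,
        node_pcpus={0: [0, 1], 1: [2, 3]},
        pcpu_load={0: 0.9, 1: 0.8, 2: 0.2, 3: 0.4},
    ))  # -> 2
```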


