Server management apparatus and server management method

A server management apparatus and method, applied in the field of server virtualization technologies, which address the problems of increased power consumption and reduced response performance, so as to reduce migration costs, increase the number and speed of requests processed, and reduce power consumption.

Status: Inactive
Publication Date: 2011-04-28
Owner: HITACHI LTD

AI Technical Summary

Benefits of technology

The technology described in this patent reduces the migration costs incurred during scale-in/scale-out and workload consolidation of cluster systems, reduces the power consumption of the physical server group, and increases the number and speed of requests the cluster can process.

Problems solved by technology

This patent addresses the problem of managing the power consumption of large groups of servers that run at varying fractions of their maximum capacity. Existing approaches have limitations: (1) consolidating virtual servers onto fewer physical servers requires migrations whose cost is not taken into account, leading to increased overhead, and (2) scale-in/scale-out decisions made without regard to load variation can increase power consumption or reduce response performance.

Method used



Examples


Embodiment 1

[0053]FIG. 1 is a diagram showing a cluster system in accordance with embodiment 1. This cluster system is generally made up of a server management apparatus 101, a group of physical servers 102 which are the objects to be managed and each of which runs a group of virtual servers 103, and a workload dispersion device (load balancer) 104. The load balancer 104 is communicably connected via a network 105 to client equipment 106.

[0054]The server management apparatus 101 has the function of executing scale-in/scale-out processing and may be realized using a currently available standard computer. The server management apparatus 101 includes a memory 112, a central processing unit (CPU) 113, a communications device 114, a storage device 111 such as a hard disk drive (HDD), an input device 115, and a display device 116. The server management apparatus 101 is coupled to the physical server group 102, virtual server group 103 and load balancer 104 via the communications device 114.
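To make the FIG. 1 arrangement concrete, the following is a minimal sketch in Python, assuming a simple object model; the class and field names (PhysicalServer, VirtualServer, ServerManagementApparatus) are illustrative assumptions and are not taken from the patent.

```python
# Minimal sketch of the FIG. 1 topology; all names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualServer:
    name: str
    cluster: str                      # cluster system this virtual server serves

@dataclass
class PhysicalServer:                 # one member of the managed group 102
    name: str
    powered_on: bool = True
    virtual_servers: List[VirtualServer] = field(default_factory=list)

@dataclass
class ServerManagementApparatus:      # corresponds to apparatus 101
    physical_servers: List[PhysicalServer] = field(default_factory=list)

    def operative_vm_count(self, ps: PhysicalServer) -> int:
        """Number of operative virtual servers on one physical server."""
        return len(ps.virtual_servers)
```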

Embodiment 2

[0102]An embodiment 2 is similar to the server management apparatus 101 of embodiment 1 but adds a means for selecting the physical server of the scale-out target by leveraging the degree of load variation similarity of each of a plurality of cluster systems at the time of scale-out judgment. The load variation similarity as used herein is the degree of coincidence of the time periods in which the number of virtual servers constituting a cluster system increases, decreases, or stays constant and, further, of the virtual server number change rate at those times. Cluster systems that coincide with each other in load variation similarity are migrated so that their numbers of constituent virtual servers become smaller or greater at the same times. To derive this load variation similarity through computation, the time periods during which the number of cluster system-constituting virtual servers stays constant and information on the virtual server number change rate are used.
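As a rough illustration of how such a similarity could be computed, here is a hedged sketch in Python; the scoring rule (direction agreement weighted by change-rate agreement per sampling interval) is an assumption, since the patent describes the similarity only qualitatively, and load_variation_similarity is a hypothetical helper name.

```python
# Hedged sketch: score the "load variation similarity" of two cluster systems
# from their virtual-server-count histories (equal-length, equally spaced).
from typing import Sequence

def load_variation_similarity(counts_a: Sequence[int],
                              counts_b: Sequence[int]) -> float:
    assert len(counts_a) == len(counts_b) and len(counts_a) >= 2
    matches = 0.0
    intervals = len(counts_a) - 1
    for i in range(intervals):
        da = counts_a[i + 1] - counts_a[i]
        db = counts_b[i + 1] - counts_b[i]
        # Same direction of change (both rise, both fall, or both stay constant)?
        same_direction = (da > 0) == (db > 0) and (da < 0) == (db < 0)
        if same_direction:
            if da == db:
                matches += 1.0
            else:
                # Weight by how closely the change rates agree (1.0 when equal).
                matches += 1.0 - abs(da - db) / max(abs(da), abs(db))
    return matches / intervals   # 1.0 = identical load variation

# Two clusters that scale up and down at the same times score high.
print(load_variation_similarity([2, 3, 3, 2], [4, 5, 5, 4]))   # 1.0
print(load_variation_similarity([2, 3, 3, 2], [4, 4, 5, 5]))   # lower
```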

Embodiment 3

[0127]An embodiment 3 adds to the server management apparatus 101 of embodiment 1 a means for performing migration of a virtual server(s) after execution of the scale-in processing, thereby executing virtual server workload consolidation and making it possible to turn off the power supply of a surplus physical server(s). As for the workload consolidation here, control is provided so that this processing is executed only when a physical server can be powered down by migrating its virtual server(s) to another server, while preventing execution of any load consolidation having no power consumption reducing effect.

[0128]FIG. 10 is a diagram showing a cluster system in accordance with the embodiment 3. As shown in FIG. 10, respective software programs of a workload consolidation judgment unit 1001, a workload consolidation target virtual server selection unit 1002 and a workload consolidation execution unit 1003 are added to the storage device 111 of the server management apparatus 101.
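A minimal sketch of that judgment follows, assuming each physical server can host up to a fixed number of virtual servers (`capacity`); the function name, the capacity model, and the fewest-virtual-servers-first ordering are illustrative assumptions, not the patent's algorithm.

```python
# Hedged sketch of the workload-consolidation judgment: consolidate only when
# every virtual server on some physical server fits into the spare capacity of
# the remaining servers, so that server can be powered off.
from typing import Dict, List, Optional

def pick_power_off_candidate(placement: Dict[str, List[str]],
                             capacity: int) -> Optional[str]:
    """Return a physical server whose virtual servers can all migrate away."""
    # Check lightly loaded servers first to keep migration cost low (assumption).
    for ps, vms in sorted(placement.items(), key=lambda kv: len(kv[1])):
        spare = sum(capacity - len(other_vms)
                    for other_ps, other_vms in placement.items()
                    if other_ps != ps)
        if vms and len(vms) <= spare:
            return ps             # migrating these frees ps for power-off
    return None                   # no consolidation with a power-saving effect

# Example: host2's single virtual server fits on host1, so host2 can power down.
placement = {"host1": ["vm1", "vm2"], "host2": ["vm3"]}
print(pick_power_off_candidate(placement, capacity=4))    # -> "host2"
```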



Abstract

A server management apparatus that lowers migration costs during scale-in/scale-out and workload consolidation of a cluster system(s) and thereby reduces power consumption is disclosed. The apparatus manages a physical server group that renders a virtual server group operative thereon and, when putting into practice a cluster system including a plurality of virtual servers placed in the physical server group, manages the layout state of the virtual servers pursuant to the load state of the virtual server group. When executing scale-in, a virtual server operating on the physical server with the minimum number of operative virtual servers is specified as the shutdown target. When executing scale-out, the workload variation is predicted and the scale-out destination of the cluster system is controlled so that cluster systems with similar load variations gather on the same physical server. The scale-in execution timing is delayed if the predicted load variation tends to rise and accelerated if it tends to fall.
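The two rules stated here (shutdown target chosen on the physical server with the fewest operative virtual servers, and scale-in timing adjusted by the predicted load trend) can be sketched as follows; the concrete types, the `base_delay` value, and the doubling/halving factors are assumptions for illustration only.

```python
# Hedged sketch of the scale-in target selection and timing adjustment.
from typing import Dict, List, Tuple

def choose_scale_in_target(placement: Dict[str, List[str]]) -> Tuple[str, str]:
    """Pick a shutdown-target virtual server on the physical server that
    hosts the fewest operative virtual servers of this cluster."""
    ps = min((p for p, vms in placement.items() if vms),
             key=lambda p: len(placement[p]))
    return ps, placement[ps][0]

def scale_in_delay(predicted_trend: float, base_delay: float = 60.0) -> float:
    """Delay scale-in when the predicted load variation rises, and bring it
    forward when it falls (the scaling factors are assumptions)."""
    if predicted_trend > 0:        # load expected to rise: wait longer
        return base_delay * 2
    if predicted_trend < 0:        # load expected to fall: act sooner
        return base_delay / 2
    return base_delay

placement = {"host1": ["vm1", "vm2", "vm3"], "host2": ["vm4"]}
print(choose_scale_in_target(placement))       # ('host2', 'vm4')
print(scale_in_delay(predicted_trend=-0.3))    # 30.0
```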

Description


Claims


Application Information

Owner: HITACHI LTD