GUPTA MECHANICAL


Wednesday, 28 September 2022

[Solution] Topology-Aware VM Placement Codeforces Solution



A. Topology-Aware VM Placement
time limit per test: 15 seconds
memory limit per test: 512 megabytes
input: standard input
output: standard output

Cloud computing has now formed a large-scale ecosystem of technologies, products and services. An Infrastructure as a Service (IaaS) provider offers on-demand access to computing resources in the form of virtual machines, disks and networks on a pay-per-usage basis. The global IaaS market grew 41.4% in 2021, to total $90.9 billion, up from $64.3 billion in 2020, according to Gartner, Inc. With the further expansion of the cloud computing market, the competition among major cloud service providers is also becoming increasingly fierce. In this context, the reduction of cloud computing costs has become the key to staying ahead of the competition.

To win the competition, major cloud service providers are focusing on improving the utilization of cloud resources, as this will allow them to reduce the cost of cloud services. It is estimated that even a 1% increase in cloud service resource utilization can save millions of dollars in costs. The optimization of resource utilization, in turn, requires the development of advanced algorithms and technologies for cloud resource management. The virtual machine placement algorithm considered in this contest is one of these critical algorithms, which is used to select a physical machine (PM or host) for running a user's virtual machine (VM). Since VM requests arrive dynamically, this algorithm works in an online fashion.

The increase in the scale and complexity of cloud services and deployed applications also leads to new user requirements. In particular, users often need to deploy a set of VMs with specific requirements on the quality of their placement. For example, to ensure high availability each VM must be placed in a separate fault domain, a set of PMs that share a single point of failure. In other situations, it may be preferable to place VMs "close" to each other in terms of network topology to reduce the communication latency. Basic algorithms that place each VM independently cannot adapt to such complex requirements. Your goal will be to devise an algorithm that can both satisfy new user requirements and ensure efficient resource utilization.

Resource Pool

In this contest, we consider the placement of VMs inside a single resource pool consisting of PMs from one or multiple datacenters. The resource pool size is fixed, and no PMs are added or removed during the tests. Initially, all PMs are empty.

Each PM is characterized by its resource capacity in terms of CPUs and memory. Since all current mainstream servers use the Non-Uniform Memory Access (NUMA) architecture, each PM is split into multiple (2 or 4) NUMA nodes. Each node holds some (not necessarily equal) share of the CPU and memory resources of the PM. In this contest, it is assumed that all PMs in the resource pool have identical configurations.

The resource pool is organized in the following hierarchical topology.

PMs are mounted in racks, each of which can hold around a dozen PMs. Each rack forms a separate fault domain, because the failure of the rack's network switch or power supply leads to the failure of all PMs in the rack.

Finally, racks are grouped into network domains. Each rack and recursively its PMs belong to exactly one network domain. The network communication between PMs from the same network domain is much faster than between PMs from different network domains.
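
For concreteness, here is a minimal C++ sketch of one way this hierarchy could be modeled in a solution; the type and field names (NumaNode, Host, and so on) are illustrative choices of mine, not part of the problem statement.

#include <vector>

// One NUMA node of a PM: the resources still available on it.
struct NumaNode {
    int cpuFree = 0;
    int memFree = 0;
};

// One physical machine (PM) consisting of K NUMA nodes.
struct Host {
    std::vector<NumaNode> nodes;
};

// The pool as a nested structure: network domain -> rack -> PM.
// pool[n][r][s] would be PM s of rack r in network domain n (0-based
// internally; answers are printed with 1-based indexes).
using Rack = std::vector<Host>;
using NetworkDomain = std::vector<Rack>;
using ResourcePool = std::vector<NetworkDomain>;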

VM Requests and Placement Groups

A cloud user can create a VM from the predefined set of VM types. Each VM type is characterized by its NUMA count and its CPU and memory usage per NUMA node. The NUMA count, which can be 1 or 2, defines the number of NUMA nodes required by the VM.

A user can request the creation of multiple VMs of the same type at once. A user can request the deletion of any previously created VM at any time. The deletion time is unknown during the VM creation.

Each VM is created in the context of some placement group (PG). The main role of a PG is to define the requirements for the placement of VMs belonging to this group, the so-called placement constraints. The constraints can be hard (must be satisfied) or soft (can be violated, but the algorithm is rewarded for fulfilling them). Each constraint is expressed as an affinity (put all VMs in a single topology domain) or anti-affinity (spread VMs across different domains) relation at a particular topology level (PM, rack, network).

The following placement constraints can be used:

  • Hard rack anti-affinity with partitions: VMs are distributed into k partitions, and a rack cannot contain VMs from different partitions of this PG.
  • Soft PM anti-affinity: each PM should contain at most a VMs from this PG, where a is a parameter of the group.
  • Soft (hard) network affinity: All VMs should (must) be placed in the same network domain.
  • Soft (hard) rack affinity: All VMs should (must) be placed in the same rack.

PGs are created by users according to their requirements and, therefore, can use different combinations of constraints and parameters (e.g. partition count). A PG must be created by the user before any VMs can be created in it.
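
As a sketch (field names are mine, mirroring the constraint parameters introduced later in the Interaction section), a solution might keep per-group state along these lines:

#include <vector>

// Internal record of one placement group and its constraints.
struct PlacementGroup {
    int partitionCount  = 0;  // >0: hard rack anti-affinity with this many partitions
    int maxVmsPerPm     = 0;  // >0: soft PM anti-affinity limit per PM
    int networkAffinity = 0;  // 0 = none, 1 = soft, 2 = hard
    int rackAffinity    = 0;  // 0 = none, 1 = soft, 2 = hard
    std::vector<int> vmIds;   // VMs of this group that are currently alive
};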

According to the above description, the algorithm processes a sequence of user requests of the following types:

  • PG creation,
  • VM creation in the specified PG,
  • VM deletion.

There is also a special termination request which designates the end of the request sequence.

VM Placement Algorithm

The algorithm takes as input the resource pool configuration and the VM types. After that, it starts to process user requests. The requests are provided one by one, in an online fashion: the algorithm must output its decision for the current request without any knowledge of future requests. This is exactly how such an algorithm runs in a real cloud.
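
A possible shape of the main loop, shown as a hedged C++ sketch; the handler functions are placeholders of mine, and the only firm requirements are those stated here (read requests one by one, answer VM creation requests, stop on termination).

#include <iostream>

// Placeholder handlers; the actual bookkeeping and placement logic go inside.
void handlePgCreation()  { /* read and store the PG description */ }
bool handleVmCreation()  { /* read the request, print placements or -1 */ return true; }
void handleVmDeletion()  { /* read VM indexes and release their resources */ }

int main() {
    // The resource pool configuration and VM types are read before this loop.
    int type;
    while (std::cin >> type) {
        if (type == 1)      handlePgCreation();
        else if (type == 2) { if (!handleVmCreation()) break; }  // printed -1: stop
        else if (type == 3) handleVmDeletion();
        else                break;                               // type 4: terminate
    }
    return 0;
}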

Among the described requests, only the VM creation request needs an explicit output from the algorithm. The output should contain the placement decision for each VM — a particular PM and its NUMA nodes where the VM is placed. Each placement decision should not violate the following constraints:

  • For each NUMA node the sum of resource requirements of all VMs placed on it must not exceed the capacity of the node;
  • VM with NUMA count = 2 must be placed on two different NUMA nodes belonging to the same PM (using nodes from different PMs is not allowed);
  • Hard constraints from the corresponding PG.

If the algorithm cannot place all VMs from the request so that all these constraints are met, it must terminate as described in the Interaction section.
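
To illustrate the first two constraints, here is a hedged sketch (names are mine) of a per-PM feasibility check: a VM with NUMA count 1 needs one node with enough free CPU and memory, and a VM with NUMA count 2 needs two different nodes of the same PM.

#include <vector>

struct NumaNode { int cpuFree, memFree; };
struct Host { std::vector<NumaNode> nodes; };

// Returns 0-based indexes of NUMA nodes of `host` that can hold the VM,
// or an empty vector if the VM does not fit on this PM. `numaCount` is
// 1 or 2; `cpu` and `mem` are the per-node requirements of the VM type.
std::vector<int> findNodesForVm(const Host& host, int numaCount, int cpu, int mem) {
    std::vector<int> chosen;
    for (int i = 0; i < (int)host.nodes.size() && (int)chosen.size() < numaCount; ++i) {
        // Both nodes (for NUMA count 2) must belong to this same PM.
        if (host.nodes[i].cpuFree >= cpu && host.nodes[i].memFree >= mem)
            chosen.push_back(i);
    }
    if ((int)chosen.size() < numaCount) chosen.clear();
    return chosen;
}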

PG creation requests should be processed by the algorithm by creating an internal representation of the PG to process future requests referring to it. No output is expected.

VM deletion requests should be processed by the algorithm by releasing the PM resources used by the corresponding VMs. No output is expected.

Optimization Goals

The main goal is to maximize the number of successfully placed VMs. Since the algorithm runs on a resource pool of fixed size, the more requests it places, the fewer resources are wasted and the more resource-efficient the algorithm is.

A secondary goal is to maximize the number of VMs placed without violation of soft constraints, as it improves the cloud user experience.

Interaction

The interaction with your program is organized via standard input and output streams.

Upon startup, the program is provided with resource pool configuration and VM types as follows.

The first line of the input contains four integers:

  • N: the number of network domains (1 ≤ N ≤ 8),
  • R: the number of racks in each network domain (1 ≤ R ≤ 32),
  • S: the number of PMs in each rack (1 ≤ S ≤ 16),
  • K: the number of NUMA nodes per PM (2 or 4).

Each of the next K lines contains two integers:

  • Ck: CPU capacity of NUMA node k (1 ≤ Ck ≤ 200),
  • Mk: memory capacity of NUMA node k (1 ≤ Mk ≤ 500).

The next line contains one integer F — the number of VM types (1 ≤ F ≤ 20).

The next F lines describe the VM types. Each line contains three integers:

  • nf: NUMA count of VM type f (1 or 2),
  • cf: CPU usage per NUMA node of VM type f (1 ≤ cf ≤ 100),
  • mf: memory usage per NUMA node of VM type f (1 ≤ mf ≤ 100).

Then the program starts to receive and process a sequence of requests. The total number of requests does not exceed 13500, and the total number of created VMs does not exceed 235000.
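
Parsing this configuration block is straightforward; the following is a sketch under the input format above, with variable names of my own choosing.

#include <iostream>
#include <vector>

struct VmType { int numaCount, cpu, mem; };

int main() {
    int N, R, S, K;
    std::cin >> N >> R >> S >> K;            // network domains, racks, PMs, NUMA nodes

    std::vector<int> cpuCap(K), memCap(K);   // capacities of each NUMA node
    for (int k = 0; k < K; ++k) std::cin >> cpuCap[k] >> memCap[k];

    int F;
    std::cin >> F;                           // number of VM types
    std::vector<VmType> types(F);
    for (auto& t : types) std::cin >> t.numaCount >> t.cpu >> t.mem;

    // ... the request processing loop follows here ...
    return 0;
}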

In each interaction round your program needs to read and process one request.

The first line of each request i contains a single integer ti (1 ≤ ti ≤ 4) which denotes the type of the request:

  • 1: PG creation,
  • 2: VM creation,
  • 3: VM deletion,
  • 4: termination.

The following input and expected output depend on the request type.

PG Creation

The next two lines describe the constraints for this placement group.

The first line contains three integers:

  • j: the index of new placement group (groups are indexed with consecutive integers starting from 1 in order of their appearance in the input),
  • kj: the number of partitions for hard rack anti-affinity with partitions (0 ≤ kj ≤ 7),
    • 0 means there is no such constraint for this PG
  • aj: the maximum number of VMs per PM for soft PM anti-affinity (0 ≤ aj ≤ 5),
    • 0 means there is no such constraint for this PG

The second line describes the affinity constraints. It contains two integers:

  • nj: defines the network affinity requirement (0 ≤ nj ≤ 2)
    • 0 means there is no such constraint
    • 1 means that the constraint is soft
    • 2 means that the constraint is hard
  • rj: defines the rack affinity requirement (0 ≤ rj ≤ 2)
    • 0 means there is no such constraint
    • 1 means that the constraint is soft
    • 2 means that the constraint is hard

It is guaranteed that nj ≥ rj. Also, if kj ≠ 0 then rj = 0.

After reading this request your program should create the new PG and read the next request without writing anything.
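
A sketch of reading and storing such a request (the struct and map names are mine):

#include <iostream>
#include <map>

struct PlacementGroup { int partitionCount, maxVmsPerPm, networkAffinity, rackAffinity; };
std::map<int, PlacementGroup> groups;   // PG index -> its constraints

void handlePgCreation() {
    int j, k, a, n, r;
    std::cin >> j >> k >> a;    // group index, partition count, per-PM limit
    std::cin >> n >> r;         // network and rack affinity levels (0, 1 or 2)
    groups[j] = {k, a, n, r};   // nothing is written in response to this request
}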

VM Creation

The next line contains four integers:

  • li: the number of created VMs (1 ≤ li ≤ 100),
  • fi: the type of created VMs (1 ≤ fi ≤ F),
  • ji: the index of (previously created) PG,
  • pi: the index of the partition in this PG (-1 ≤ pi ≤ kj)
    • 0 means there is no such hard anti-affinity constraint in this PG
    • pi>0 means that all VMs must be placed in partition pi
    • -1 means that each VM must be placed in a separate partition as follows: i-th VM must be placed in partition i (in this case it is guaranteed that li=kj)

The next line contains a sequence of li integers — indexes of new VMs. The VMs are indexed with consecutive integers starting from 1 in order of their appearance in the input.

After reading this request your program should try to place VMs from the request without violating the hard constraints of the corresponding PG.

If it succeeds to place all VMs from the request, it should write li lines to the standard output. Each line must contain the placement decision for a single VM, and the output order should match the VM order in the request. The placement decision for VM v should contain the following integers:

  • nv: the index of the network domain (1 ≤ nv ≤ N),
  • rv: the index of the rack inside this network domain (1 ≤ rv ≤ R),
  • sv: the index of the PM in this rack (1 ≤ sv ≤ S),
  • the indexes of the NUMA nodes in this PM where the VM is placed: 1 or 2 indexes, depending on the NUMA count of the VM type (indexes start from 1 and are separated by spaces).

If your program cannot place all VMs from the request so that all hard constraints are met and no NUMA node runs out of resources, it should write -1 to standard output and terminate.

Do not output the partial placement if you cannot place all VMs from the request.

Note that there can be multiple VM creation requests associated with the same PG. The only exception is PGs with rack affinity constraints (rj>0) — for such groups, only a single VM creation request is submitted.
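
The I/O side of a VM creation request could look like the sketch below; choosePlacement is a placeholder of mine for the actual search, and a real solution would also roll back any resources reserved for earlier VMs of the request before printing -1. Flushing after the answer matters because the protocol is interactive.

#include <iostream>
#include <vector>

// Hypothetical result of the placement search for one VM.
struct Placement { int domain, rack, pm; std::vector<int> numaNodes; };

// Placeholder: a real solution searches the pool, reserves resources and
// fills `out`; returning false means no valid placement was found.
bool choosePlacement(int vmId, int type, int pg, int partition, Placement& out) {
    (void)vmId; (void)type; (void)pg; (void)partition; (void)out;
    return false;
}

bool handleVmCreation() {
    int l, f, j, p;
    std::cin >> l >> f >> j >> p;
    std::vector<int> vmIds(l);
    for (int& id : vmIds) std::cin >> id;

    std::vector<Placement> result(l);
    for (int i = 0; i < l; ++i) {
        int part = (p == -1) ? i + 1 : p;    // p = -1: the i-th VM goes to partition i
        if (!choosePlacement(vmIds[i], f, j, part, result[i])) {
            std::cout << -1 << std::endl;    // no partial output, just -1
            return false;                    // the caller then terminates
        }
    }
    for (const auto& d : result) {           // print only after all VMs are placed
        std::cout << d.domain << ' ' << d.rack << ' ' << d.pm;
        for (int node : d.numaNodes) std::cout << ' ' << node;
        std::cout << '\n';
    }
    std::cout.flush();                       // interactive protocol: flush the answer
    return true;
}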

VM Deletion

The next line contains a positive integer li and a sequence of li unique integers — the number of deleted VMs followed by their indexes. It is guaranteed that all these VMs were created and were not deleted before this request. It is also guaranteed that all these VMs are from the same PG.

After reading this request your program should delete VMs from this request, release the corresponding resources, and read the next request without writing anything.
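
A sketch of the bookkeeping for deletions (the VmRecord structure and map are mine; the essential point is returning the VM's CPU and memory to the NUMA nodes it occupied):

#include <iostream>
#include <unordered_map>
#include <vector>

// Hypothetical record saved when a VM is placed.
struct VmRecord { int domain, rack, pm, cpu, mem; std::vector<int> numaNodes; };
std::unordered_map<int, VmRecord> liveVms;   // VM index -> where it lives

void handleVmDeletion() {
    int l;
    std::cin >> l;
    for (int i = 0; i < l; ++i) {
        int id;
        std::cin >> id;
        // A real solution adds cpu/mem back to each NUMA node recorded here
        // before erasing the record. Nothing is written for this request type.
        liveVms.erase(id);
    }
}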

Termination

After reading this request your program should terminate.

Local Test Kit

The local test kit is provided to facilitate the development of problem solutions. It includes the local interactor and the sample tests. Check out the zip archive in the contest materials (right menu).

Scoring

Your solution will be judged based on multiple tests. The test set is fixed during the whole contest, no retesting of solutions on extra tests will be made.

For each test t two scores are collected:

  • PCt: total number of VMs for all successfully processed VM creation requests (the placement score),
  • SCt: total number of VMs placed without violation of soft constraints (the soft score).

The soft score is awarded for each VM creation request X with soft constraints as follows. After X is processed, the constraints are checked. If there is a soft affinity constraint and it is violated, the score for X is 0. Otherwise the score is set to the number of VMs in X; however, if there is a soft PM anti-affinity constraint and it is violated on some PMs, the VMs from X placed on those PMs are excluded from the score. The detailed example of soft score calculation is included in the Note section.

If on some test the solution tries to make an impossible action (for example, makes some NUMA node exceed its capacity or violates some hard constraint) or runs longer than the time limit, then both scores for this test case are set to 0.

The overall score for a test is computed as St = 1000 · wt · (0.8 · PCt / PC*t + 0.2 · SCt / SC*t), where PC*t and SC*t are the corresponding scores achieved by the baseline solution and wt is the weight of this test.

The total solution score is computed as S = Σt St, the sum of St over all tests.
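
For instance, with purely hypothetical numbers: if the baseline achieves PC*t = 1000 and SC*t = 800 on a test with weight wt = 1, and a solution achieves PCt = 950 and SCt = 700, then St = 1000 · 1 · (0.8 · 950/1000 + 0.2 · 700/800) = 1000 · (0.76 + 0.175) = 935.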
