Algo De Cloud Computing


The rapid development of Cloud Computing (CC) has led to the release of many services in the cloud environment. Quality of Service (QoS)-aware service composition is a significant challenge in CC. A single service in the cloud environment cannot respond to the complex requests and diverse requirements of the real world; when one service cannot fulfill a user's needs, different services must be combined to meet those requirements. Because the many available services offer widely varying QoS, selecting and composing them is an NP-hard optimization problem, and one of the significant challenges in CC is integrating existing services to meet the intricate necessities of different types of users. Due to the NP-hard complexity of service composition, many metaheuristic algorithms have been applied to it. This article presents a hybrid of the Artificial Bee Colony and Genetic Algorithm (ABCGA) as a metaheuristic algorithm to achieve the desired goals. If the fitness of the services selected by the Genetic Algorithm (GA) is suitable, the resulting set of services is passed to the Artificial Bee Colony (ABC) algorithm, which chooses the appropriate service from it according to each user's needs. The proposed solution is evaluated through experiments using CloudSim simulation, and the numerical results demonstrate the efficiency of the proposed method with respect to reliability, availability, and cost.
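The GA-then-ABC pipeline described above can be sketched as follows. This is a minimal illustrative example, not the paper's exact formulation: the QoS table, fitness function, population sizes, and iteration counts are all simplifying assumptions.

```python
# Illustrative sketch of the ABCGA idea: a GA evolves candidate service
# compositions, and the fittest ones seed an ABC-style neighbourhood search.
# The QoS model (reliability, availability, cost) and all parameters are
# assumptions for demonstration only.
import random

random.seed(1)

N_TASKS = 5          # abstract tasks in the composite request
N_CANDIDATES = 8     # candidate services per task

# Hypothetical QoS table: qos[task][service] = (reliability, availability, cost)
qos = [[(random.uniform(0.8, 1.0), random.uniform(0.8, 1.0), random.uniform(1, 10))
        for _ in range(N_CANDIDATES)] for _ in range(N_TASKS)]

def fitness(plan):
    """Higher is better: reward reliability/availability, penalise cost."""
    rel = avail = 1.0
    cost = 0.0
    for t, s in enumerate(plan):
        r, a, c = qos[t][s]
        rel *= r; avail *= a; cost += c
    return rel + avail - 0.05 * cost

def crossover(p1, p2):
    cut = random.randrange(1, N_TASKS)
    return p1[:cut] + p2[cut:]

def mutate(plan):
    plan = plan[:]
    plan[random.randrange(N_TASKS)] = random.randrange(N_CANDIDATES)
    return plan

# --- GA phase: evolve a population of compositions ---
pop = [[random.randrange(N_CANDIDATES) for _ in range(N_TASKS)] for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]

# --- ABC phase: employed bees refine the GA's fittest "food sources" ---
sources = sorted(pop, key=fitness, reverse=True)[:5]
for _ in range(50):
    i = random.randrange(len(sources))
    neighbour = mutate(sources[i])            # explore a nearby composition
    if fitness(neighbour) > fitness(sources[i]):
        sources[i] = neighbour                # greedy replacement

best = max(sources, key=fitness)
print("best composition:", best, "fitness:", round(fitness(best), 3))
```

The key structural point is the hand-off: the GA's top solutions become the ABC food sources, so the bee phase only refines compositions that already passed the GA's fitness filter.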







Cloud computing algorithms include resource management algorithms and workflow task scheduling algorithms. A resource management scheme determines how to rent resources out to cloud users on a pay-per-use basis so as to maximize profit by achieving high resource utilization; Madni et al. investigate resource management schemes and algorithms, and analyze and evaluate these schemes [12, 13]. Workflow task scheduling algorithms form a branch of cloud computing scheduling algorithms [14]; they map task nodes to suitable servers and order the task nodes on each server so as to satisfy some performance criterion. Madni et al. present a comparison of heuristic algorithms for task scheduling [15]. The task-graph scheduling problem is an NP-hard optimization problem, and it is difficult to achieve an optimal schedule [16]. In recent years, researchers have proposed many effective and feasible scheduling algorithms. Classical scheduling algorithms include GBLCA (Global League Championship Algorithm) [17] (Abdulhamid, S. M. et al.), the dynamic clustering league championship algorithm (DCLCA) [18] (Abdulhamid, S. I. M. et al.), HEFT & CPOP (Heterogeneous Earliest-Finish-Time & Critical-Path-on-a-Processor) [19] (Topcuouglu, H. et al.), DLS (Dynamic Level Scheduling) [20] (Sih, G. C. et al.), DSH (Duplication Scheduling Heuristic) [21] (Badawi, A. A. et al.), FCBWTS (Workflow Task Scheduling Based on Fuzzy Clustering) [22] (Guo, F. Y. et al.), GA (Genetic Algorithm) [23] (Bonyadi, M. R. et al.), and SA (Simulated Annealing) [24] (Dai, M. et al.). These algorithms optimize a single QoS parameter, namely minimizing makespan. In a cloud computing system, however, there are several important objectives, such as minimizing makespan and minimizing execution cost. Cloud servers differ in QoS parameters such as CPU type and memory size, and their prices differ accordingly: a server with a faster CPU and more memory has a higher price, while a slower one is cheaper.
The scheduler must therefore consider a time-cost trade-off when selecting servers to schedule workflow tasks, i.e., multi-objective task-graph scheduling in the cloud computing system. To address this multi-objective scheduling problem, many effective and feasible scheduling algorithms have been proposed, classified into heuristic and metaheuristic solutions [25]. The main idea of a heuristic solution is that a feasible solution is given for a problem under special conditions; its time and space complexity are acceptable, but it is difficult to achieve an optimal solution. A metaheuristic solution is a general heuristic that solves the problem without special conditions, so it is widely applicable. Classical metaheuristic solutions include PSO (Particle Swarm Optimization) (Verma, A. et al.) [26], ACO (Ant Colony Optimization) (Daun, W. J. et al.) [27], GA (Genetic Algorithm) (Verma, A. et al.) [28], SA (Simulated Annealing) (Jian, C. F. et al.) [29], and CSO (Cat Swarm Optimization) (Bilgaiyan, S. et al.) [30]. These algorithms have high time complexity and are very time-consuming, so they are rarely applied to real cloud computing systems.


Recently, many effective and feasible heuristic solutions have been proposed. Their main idea is that a reasonable scheduling order of the task nodes is derived from an analysis of the task graph's properties, under special constraints such as a deadline or budget, and each task node is then mapped to a corresponding server. Classical heuristic solutions include IC-PCP & IC-PCPD2 (IaaS Cloud Partial Critical Paths & IaaS Cloud Partial Critical Paths with Deadline Distribution) (Abrishami, S. et al.) [31], DCCP (Deadline Constrained Critical Path) (Vahid, A. et al.) [32], Deadline-MDP (Deadline-Markov Decision Process) (Jia, Y. et al.) [33], and CD-PCP (Cost-Driven Partial Critical Paths) (Abrishami, S. et al.) [34]. However, these algorithms consider only the task graph and the servers themselves: they sort all task nodes and select the execution servers prior to the actual scheduling. They do not account for changes in sub-deadlines and execution cost during the scheduling process, nor for the actual computation time (and cost) on the execution server during scheduling.


This paper converts a workflow into the DAG graph shown in Fig 3. The computation times on the three different (heterogeneous) server types are given in Table 1. It is assumed that three server types (S1, S2, S3) are used to schedule the DAG graph, that all servers are connected by communication links of the same capacity, and that many servers of each type are available. The communication time between task nodes is denoted by the edges of the DAG graph in Fig 3. The unit prices of S1, S2, and S3 are 5, 2, and 1, respectively. The deadline of the workflow in Fig 3 is 40 time units. We demonstrate the implementation process of the Deadline-DDEP algorithm.
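Since Fig 3 and Table 1 are not reproduced here, the sketch below uses a small hypothetical DAG with the same server setup (unit prices 5, 2, 1) to show how such an instance can be represented: edges carry communication times, and each task has a per-server-type computation time. All concrete task numbers are invented for illustration.

```python
# A small hypothetical instance in the spirit of Fig 3 / Table 1 (not reproduced
# in this text): a DAG whose edges carry communication times, plus per-server-type
# computation times and unit prices. The task and edge numbers are illustrative.

# Communication time between task nodes (edge -> time units)
edges = {("t1", "t2"): 4, ("t1", "t3"): 6, ("t2", "t4"): 3, ("t3", "t4"): 5}

# Computation time of each task on server types S1 (fast), S2, S3 (slow, cheap)
comp_time = {
    "t1": {"S1": 2, "S2": 4, "S3": 7},
    "t2": {"S1": 3, "S2": 5, "S3": 9},
    "t3": {"S1": 2, "S2": 6, "S3": 8},
    "t4": {"S1": 4, "S2": 7, "S3": 11},
}

unit_price = {"S1": 5, "S2": 2, "S3": 1}   # unit prices as stated in the text
DEADLINE = 40                               # workflow deadline in time units

def schedule_cost(assignment):
    """Execution cost of a task->server assignment: time * unit price, summed."""
    return sum(comp_time[t][s] * unit_price[s] for t, s in assignment.items())

# Cheapest possible assignment: every task on the cheapest server type S3
cheapest = {t: "S3" for t in comp_time}
print("cost on cheapest servers:", schedule_cost(cheapest))   # 35

# Fastest assignment: every task on S1 (higher cost, shorter computation times)
fastest = {t: "S1" for t in comp_time}
print("cost on fastest servers:", schedule_cost(fastest))     # 55
```

The cheapest/fastest pair makes the time-cost trade-off of the example concrete: the same workflow costs 35 units on S3-only servers but 55 on S1-only servers.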


The DDEP algorithm is a scheduling algorithm for deadline-constrained workflows in the cloud computing system and contains four major phases: (1) the computation time phase, (2) the communication time phase, (3) the dynamic essential path phase, and (4) the pre-scheduling task node phase.


1. The time complexity of the dynamic sub-deadline strategy comprises three parts. An adjacency matrix is used to store the relationships (communication times) between task nodes in the task graph; with n task nodes, the matrix has size n * n. The first part is the search for the pre-scheduling task node, which takes n steps. The second part is computing the dynamic essential path of all task nodes: since the current task node has at most n predecessor nodes, computing its dynamic essential path takes at most n steps, so doing this for all task nodes takes at most n * n steps. The third part is computing the dynamic sub-deadline for all task nodes, which takes at most n steps. The total is n + n * n + n = n^2 + 2 * n, so the time complexity of the dynamic sub-deadline strategy is O(n^2).
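The n * n term in the argument above can be made concrete with a sketch that computes a longest-path ("essential path") value per node from an adjacency matrix: each node inspects all n possible predecessors, giving the double loop. The node weights, edge times, and the assumption that nodes are topologically indexed are all simplifications for illustration.

```python
# Minimal sketch of the O(n^2) part of the complexity argument: computing a
# longest-path ("essential path") value for each task node from an n x n
# adjacency matrix. Node weights and topological index order are assumptions.
n = 5

# adj[i][j] = communication time from node i to node j (None = no edge);
# nodes are assumed to be indexed in topological order
adj = [
    [None, 4,    6,    None, None],
    [None, None, None, 3,    None],
    [None, None, None, 5,    None],
    [None, None, None, None, 2],
    [None, None, None, None, None],
]

weight = [2, 3, 2, 4, 1]  # hypothetical computation time of each node

# path[j] = weight[j] + max over predecessors i of (path[i] + adj[i][j]);
# the double loop touches every (i, j) pair once, hence the n * n step count
path = [0] * n
for j in range(n):
    best_pred = 0
    for i in range(n):
        if adj[i][j] is not None:
            best_pred = max(best_pred, path[i] + adj[i][j])
    path[j] = weight[j] + best_pred

print(path)
```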


2. The time complexity of the quality assessment of optimization cost strategy comprises two parts. The first part is sorting all task nodes in descending order of their dynamic essential path, whose time complexity is O(n log n). The second part is selecting the scheduling server for each task node according to this order and the QM values; computing the QM values for all task nodes takes at most k * n steps, where k is the number of server types. Since n is far greater than k, this part's time complexity is O(k * n) = O(n). The overall time complexity of the quality assessment of optimization cost strategy is therefore O(n log n) + O(n) = O(n log n).
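The two counted parts can be sketched directly: an O(n log n) sort by dynamic essential path, then a k-way scan per task. The QM score below (speed per unit price) is a hypothetical stand-in for the paper's quality measure, which is not defined in this excerpt; the essential-path values and server parameters are also invented.

```python
# Sketch of the two parts counted above: an O(n log n) sort of task nodes by
# dynamic essential path, then a k * n scan scoring each of k server types per
# task. The QM function is a hypothetical stand-in, not the paper's measure.
essential_path = {"t1": 22, "t2": 19, "t3": 10, "t4": 9, "t5": 2}

servers = {"S1": {"speed": 3.0, "price": 5},
           "S2": {"speed": 2.2, "price": 2},
           "S3": {"speed": 1.0, "price": 1}}

def qm(task, server):
    # hypothetical quality measure: speed per unit price
    # (the task argument is kept for generality; a real QM would use it)
    return servers[server]["speed"] / servers[server]["price"]

# Part 1: O(n log n) sort, descending by dynamic essential path
order = sorted(essential_path, key=essential_path.get, reverse=True)

# Part 2: k QM evaluations per task -> O(k * n) overall
assignment = {t: max(servers, key=lambda s: qm(t, s)) for t in order}

print(order)
print(assignment)
```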


where Ccheapest is the execution cost when all task nodes are executed on the cheapest server. A smaller NC value indicates better algorithm performance, and a larger NC value indicates worse performance. The NC values averaged over several DAG graphs are used in our experiments.


where SuccessfulPlanningNumber is the number of task graphs scheduled successfully within the defined deadline. A smaller SPR value indicates worse algorithm performance, whereas a larger SPR value indicates better performance. The SPR values averaged over several DAG graphs are used in our experiments.
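As a concrete reading of the two metrics, the sketch below computes NC and SPR from assumed example numbers; the exact normalizations follow the definitions given above (NC normalized by the all-cheapest-server cost, SPR as a success ratio), while the input values are invented, not experimental data.

```python
# Sketch of the NC and SPR metrics described above. Input numbers are assumed
# examples, not experimental results.

def normalized_cost(schedule_cost, cheapest_cost):
    """NC = schedule cost / cost of running every task on the cheapest server."""
    return schedule_cost / cheapest_cost

def success_rate(successful, total):
    """SPR = fraction of task graphs scheduled within their defined deadline."""
    return successful / total

# Example: a schedule costing 70 against an all-cheapest-server cost of 35
print(normalized_cost(70, 35))    # 2.0  (lower is better)
# Example: 18 of 20 task graphs met their deadline
print(success_rate(18, 20))       # 0.9  (higher is better)
```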


For CyberShake, LIGO, and SIPHT task graphs of the same size and the same deadline, the entry task node has a longer dynamic essential path, so the proposed algorithm assigns every task node a tight dynamic sub-deadline. To meet these sub-deadlines, the proposed algorithm selects servers with faster CPUs and higher prices to schedule each task node, which makes the total execution cost higher.


Cloud computing systems are a kind of shared parallel structure that has been in demand since its inception. In these systems, clients can access existing services based on their needs, without knowing where a service is located or how it is delivered, and pay only for the services they use. Like other systems, cloud computing systems face challenges. Given the wide array of clients and the variety of services available, scheduling, and with it energy consumption, is an essential challenge of these systems. Services should therefore be provided to users in a way that minimizes both the provider's and the consumer's cost as well as the energy consumption, which requires an optimal scheduling algorithm. In this paper, we present a two-step hybrid method for energy- and time-aware task scheduling that combines a Genetic Algorithm with an Energy-Conscious Scheduling Heuristic. The first step prioritizes tasks, and the second step assigns tasks to processors. We prioritized tasks to generate the initial chromosomes, and used the Energy-Conscious Scheduling Heuristic, an energy-aware model, to assign tasks to processors. The simulation results demonstrate that the proposed algorithm outperforms other methods.
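The two-step structure described above can be sketched as follows: step 1 searches over task priority orders (the "chromosomes"), and step 2 greedily assigns each task to the processor that looks best under a combined time-and-energy score. The task workloads, processor parameters, energy model, and weighting are all simplifying assumptions, not the paper's ECS formulation.

```python
# Minimal sketch of the two-step idea: (1) evolve a task priority order,
# (2) assign each task to a processor with an energy-conscious greedy score.
# Workloads, processor parameters, and the energy model are assumptions.
import random

random.seed(7)

tasks = {"t1": 4, "t2": 3, "t3": 6, "t4": 2}            # task -> workload
procs = {"p1": {"speed": 2.0, "power": 4.0},            # fast but power-hungry
         "p2": {"speed": 1.0, "power": 1.5}}            # slow but frugal

def makespan_energy(order):
    """Greedy energy-conscious assignment for a given task priority order."""
    ready = {p: 0.0 for p in procs}                     # per-processor finish time
    energy = 0.0
    for t in order:
        # pick the processor minimising (finish time + weighted energy) for t
        def score(p):
            exec_t = tasks[t] / procs[p]["speed"]
            return (ready[p] + exec_t) + 0.5 * exec_t * procs[p]["power"]
        p = min(procs, key=score)
        exec_t = tasks[t] / procs[p]["speed"]
        ready[p] += exec_t
        energy += exec_t * procs[p]["power"]
    return max(ready.values()), energy

# Step 1: "chromosomes" are task priority orders, evolved here by swap mutation
def mutate(order):
    order = order[:]
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]
    return order

best = list(tasks)
best_score = sum(makespan_energy(best))
for _ in range(100):
    cand = mutate(best)
    m, e = makespan_energy(cand)
    if m + e < best_score:                              # keep strictly better orders
        best, best_score = cand, m + e

print("order:", best, "makespan+energy:", round(best_score, 2))
```

A full GA would add crossover and a population; the single-individual mutation loop here is only meant to show how a priority ordering feeds the energy-conscious assignment step.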

