Dr. Aly, O.
Computer Science
Abstract
The purpose of this paper is to analyze the performance of IaaS clouds with an emphasis on stochastic models. The paper begins with a brief discussion of Cloud Computing and its service models of IaaS, PaaS, and SaaS. It then discusses the three available options for performance analysis of an IaaS cloud: experiment-based, discrete-event-simulation-based, and stochastic-model-based, focusing on the most feasible approach, the stochastic model. The performance analysis covers the proposed CTMC sub-model of the Resource Provisioning Decision Engine (RPDE), the Hot Physical Machine (PM) Sub-Model, the Closed-Form Solution for the Hot PM Sub-Model, the Warm PM Sub-Model, and the Cold PM Sub-Model, as well as the interactions among these sub-models and their impact on performance. The Monolithic Model is also discussed and analyzed. The findings compare the scalability and accuracy of the interacting sub-models with the one-level monolithic model. The results show that when the number of PMs in each pool increases beyond three and the number of VMs per PM increases beyond 38, the monolithic model runs into a memory overflow problem. The results also indicate that the state space size of the monolithic model grows quickly and becomes too large to construct the reachability graph even for a small number of PMs and VMs. With the interacting sub-models, the reduced number of states and nonzero entries leads to a concomitant reduction in solution time. The findings also indicate that the values of the probabilities (Ph, Pw, Pc) that at least one PM in a pool can accept a job differ between the monolithic ("exact") model and the interacting ("approximate") sub-models.
Keywords: IaaS Performance Analysis, Stochastic Model, Monolithic Model, CTMC.
Cloud Computing
Cloud Computing has attracted the attention of both the IT industry and academia because it represents a new computing paradigm and a new business model (Xiao & Xiao, 2013). The key concept of Cloud Computing is not new (Botta, de Donato, Persico, & Pescapé, 2016; Kaufman, 2009; Kim, Kim, Lee, & Lee, 2009; Zhang, Cheng, & Boutaba, 2010). According to Kaufman (2009), the technology behind Cloud Computing has been evolving for decades, "more than 40 years." Licklider introduced the term "intergalactic computer network" back in the 1960s at the Advanced Research Projects Agency (Kaufman, 2009; Timmermans, Stahl, Ikonen, & Bozdag, 2010). The term "cloud" goes back to the 1990s, when the telecommunication world was emerging (Kaufman, 2009). Virtual private network (VPN) services were also introduced with telecommunication (Kaufman, 2009). Although VPNs maintained the same bandwidth as "fixed networks," bandwidth efficiency increased and network utilization was balanced because these "fixed networks" supported "dynamic routing" (Kaufman, 2009). Telecommunication with VPNs and bandwidth efficiency through dynamic routing led to a technology that was coined the "telecom cloud" (Kaufman, 2009). Cloud Computing is similar to the "telecom cloud" in that it also provides computing services using virtual environments that are dynamically allocated as required by consumers (Kaufman, 2009).
Also, the underlying concept of Cloud Computing was introduced by John McCarthy, the "MIT computer scientist and Turing Award winner," in 1961 (Jadeja & Modi, 2012; Kaufman, 2009). McCarthy predicted that "computation may someday be organized as a public utility" (Foster, Zhao, Raicu, & Lu, 2008; Jadeja & Modi, 2012; Joshua & Ogwueleka, 2013; Khan, Khan, & Galibeen, 2011; Mokhtar, Ali, Al-Sharafi, & Aborujilah, 2013; Qian, Luo, Du, & Guo, 2009; Timmermans et al., 2010). Besides, Douglas F. Parkhill, as cited in (Adebisi, Adekanmi, & Oluwatobi, 2014), in his book "The Challenge of the Computer Utility," also predicted that the computer industry would provide services similar to a public utility, "in which many remotely located users are connected via communication links to a central computing facility" (Adebisi et al., 2014).
NIST (Mell & Grance, 2011) identifies three essential Cloud Computing Service Models as follows:
- Infrastructure as a Service (IaaS): this layer provides consumers with the capability to provision storage, processing, networks, and other fundamental computing resources. Using IaaS, the consumer can deploy and run "arbitrary" software, which can include operating systems and applications. When using IaaS, consumers do not manage or control the "underlying cloud infrastructure." However, they have control over the storage, the operating systems, and the deployed applications, and "possibly limited control of selected networking components such as host firewall" (Mell & Grance, 2011).
- Platform as a Service (PaaS): this layer allows consumers to deploy applications that are created using programming languages, libraries, services, and tools supported by the provider. Using PaaS, consumers do not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage. However, they have control over the deployed applications and possibly over configuration settings for the application-hosting environment (Mell & Grance, 2011).
- Software as a Service (SaaS): this layer allows consumers to use the provider's applications running on the cloud infrastructure. Users can access the applications from various client devices through either a thin client interface, such as web-based email in a web browser, or a program interface. Using SaaS, consumers do not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings (Mell & Grance, 2011).
Performance of IaaS Cloud
The management of Big Data requires computing capacity. This requirement is met by IaaS clouds, which are regarded as the major enabler of data-intensive cloud applications (Ghosh, Longo, Naik, & Trivedi, 2013; Sakr & Gaber, 2014). Under the IaaS service model, instances of virtual machines (VMs) deployed on physical machines (PMs) are provided to users for their computing needs (Ghosh et al., 2013; Sakr & Gaber, 2014). Providing the basic functionality for processing Big Data is important, but the performance of the cloud is another important factor (Ghosh et al., 2013; Sakr & Gaber, 2014). IaaS cloud providers offer Service Level Agreements (SLAs) to guarantee availability (Ghosh et al., 2013; Sakr & Gaber, 2014); however, a performance SLA is as important as an availability SLA (Ghosh et al., 2013; Sakr & Gaber, 2014). Performance analysis of the cloud is a complex process because performance is affected by many factors: hardware components such as CPU speed and disk properties, software such as the nature of the hypervisor, the workload such as the arrival rate, and the placement policies (Ghosh et al., 2013; Sakr & Gaber, 2014).
There are three major techniques that can be used to evaluate the performance of the cloud (Ghosh et al., 2013; Sakr & Gaber, 2014). The first technique is experimentation for measurement-based performance quantification (Ghosh et al., 2013; Sakr & Gaber, 2014). However, this approach is not practical because, at the scale of the cloud, measurement-based analysis becomes prohibitive in terms of time and cost (Ghosh et al., 2013; Sakr & Gaber, 2014). The second approach is discrete-event simulation (Ghosh et al., 2013; Sakr & Gaber, 2014). This approach is not practical either, because the simulation can take a long time to produce statistically significant results (Ghosh et al., 2013; Sakr & Gaber, 2014). The third approach is the stochastic modeling technique, a low-cost option whose model solution time is much less than that of the experimental and simulation approaches (Ghosh et al., 2013; Sakr & Gaber, 2014). However, a stochastic model may not scale given the complexity and size of the cloud (Ghosh et al., 2013; Sakr & Gaber, 2014). A scalable stochastic modeling approach that preserves accuracy is therefore important (Ghosh et al., 2013; Sakr & Gaber, 2014).
As indicated in (Ghosh et al., 2013; Sakr & Gaber, 2014), three pools are identified in the cloud architecture: hot, warm, and cold. The hot pool contains PMs that are busy and running (running status), the warm pool contains PMs that are turned on but not ready, thereby saving power, and the cold pool contains PMs that are turned off (Ghosh et al., 2013; Sakr & Gaber, 2014). There is no provisioning delay with the hot pool, a small delay with the warm pool, and a larger delay with the cold pool (Ghosh et al., 2013; Sakr & Gaber, 2014). When a request arrives, the Resource Provisioning Decision Engine (RPDE) tries to find a physical machine in the hot pool that can accept the request (Ghosh et al., 2013; Sakr & Gaber, 2014). If all machines in the hot pool are busy, the RPDE tries to find a physical machine in the warm pool (Ghosh et al., 2013; Sakr & Gaber, 2014). If the warm pool cannot meet the request, the RPDE turns to the cold pool to meet that request (Ghosh et al., 2013; Sakr & Gaber, 2014).
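The following is a minimal sketch, not the authors' implementation, of the pool-search policy just described: try the hot pool first, then the warm pool, then the cold pool, and reject the request only if no PM in any pool can accept it. The pool and PM structures and the can_accept predicate are illustrative assumptions.

```python
def find_pm(hot_pool, warm_pool, cold_pool, can_accept):
    """Return (pool_name, pm) for the first PM that can accept the job,
    or None if every pool is exhausted (the job is dropped)."""
    # Pools are searched in the order described above: hot, then warm, then cold.
    for pool_name, pool in (("hot", hot_pool), ("warm", warm_pool), ("cold", cold_pool)):
        for pm in pool:
            if can_accept(pm):
                return pool_name, pm
    return None  # insufficient capacity: the job is rejected (P_drop)
```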
Interacting sub-models are used for this performance analysis. A scalable approach based on interacting stochastic sub-models is proposed, in which an overall solution is composed by iterating over the individual sub-model solutions (Ghosh et al., 2013; Sakr & Gaber, 2014).
1. The RPDE Sub-Model of the Continuous-Time Markov Chain (CTMC)
The first model is the RPDE sub-model, a continuous-time Markov chain (CTMC) designed to capture the resource provisioning decision process (Ghosh et al., 2013; Sakr & Gaber, 2014). In this sub-model, a finite-length decision queue is considered in which decisions are made on a first-come, first-served (FCFS) basis (Ghosh et al., 2013; Sakr & Gaber, 2014). Closed-form solutions are available for the RPDE sub-model and for the VM provisioning sub-models.
1.1 The Closed-Form Solution for the RPDE Sub-Model
Using this closed-form sub-model, a numerical solution can be obtained in two steps. The first step starts with some value of π(0,0) and computes all the state probabilities as functions of π(0,0) (Ghosh et al., 2013; Sakr & Gaber, 2014). The second step computes the actual steady-state probabilities by normalization (Ghosh et al., 2013; Sakr & Gaber, 2014). The detailed calculation of the steady state is given in (Ghosh et al., 2013; Sakr & Gaber, 2014). Using the Markov reward approach, the outputs of the RPDE sub-model are obtained by assigning an appropriate reward rate to each state of the CTMC and then computing the expected reward rate in the steady state (Ghosh et al., 2013; Sakr & Gaber, 2014). There are three outputs of this RPDE sub-model: the job rejection probability, the mean queuing delay, and the mean decision delay. The calculation of each of these outputs can be found in detail in (Ghosh et al., 2013; Sakr & Gaber, 2014).
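As a minimal sketch of this two-step procedure and of the Markov reward approach, the snippet below solves a generic finite birth-death CTMC rather than the actual RPDE chain of (Ghosh et al., 2013): each state probability is first expressed as a multiple of π(0), the vector is then normalized, and an output measure is computed as an expected reward rate. The rates and reward vector are illustrative assumptions.

```python
import numpy as np

def birth_death_steady_state(birth, death):
    """birth[k]: rate from state k to k+1; death[k]: rate from k+1 to k."""
    unnorm = [1.0]                       # step 1: express pi_k as a multiple of pi_0
    for b, d in zip(birth, death):
        unnorm.append(unnorm[-1] * b / d)
    pi = np.array(unnorm)
    return pi / pi.sum()                 # step 2: normalize so the probabilities sum to 1

def expected_reward(pi, rewards):
    """Markov reward approach: expected reward rate in the steady state."""
    return float(np.dot(pi, rewards))

# Illustrative use: with reward 1 assigned only to the full-buffer state,
# the expected reward rate is the blocking (rejection) probability.
pi = birth_death_steady_state(birth=[2.0, 2.0, 2.0], death=[3.0, 3.0, 3.0])
p_block = expected_reward(pi, rewards=[0, 0, 0, 1])
```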
1.2 The VM Provisioning Sub-Models
The VM provisioning sub-models capture the instantiation, configuration, and provisioning of a VM on a physical machine (PM) (Ghosh et al., 2013; Sakr & Gaber, 2014). For each of the hot, warm, and cold physical machine pools, there is a CTMC that keeps track of the number of assigned and running VMs (Ghosh et al., 2013; Sakr & Gaber, 2014). The VM provisioning sub-models comprise: (1) the hot PM sub-model, (2) the closed-form solution for the hot PM sub-model, (3) the warm PM sub-model, and (4) the cold PM sub-model (Ghosh et al., 2013; Sakr & Gaber, 2014).
1.2.1 The Hot PM Sub-Model
In the hot PM sub-model, the overall hot pool is modeled by a set of independent hot PM sub-models (Ghosh et al., 2013; Sakr & Gaber, 2014). The VMs are assumed to be provisioned serially (Ghosh et al., 2013; Sakr & Gaber, 2014).
1.2.2 The Closed-Form Solution for Hot PM Sub-Model
The closed-form solution for the hot PM sub-model is derived for the steady-state probabilities of the hot PM CTMC, where the hot PM is modeled as a two-stage tandem network of queues (Ghosh et al., 2013; Sakr & Gaber, 2014). In this model, the queuing system consists of two nodes, node A and node B. Node A has a single server with service rate βh, while node B has infinitely many servers, each with service rate µ (Ghosh et al., 2013; Sakr & Gaber, 2014). The server in node A represents the provisioning engine within the PM, while the servers in node B represent the running VMs. The service time distribution at both nodes A and B is exponential (Ghosh et al., 2013; Sakr & Gaber, 2014). The calculation of the external arrival process is demonstrated in (Ghosh et al., 2013; Sakr & Gaber, 2014). If the buffer of a PM is full, the PM cannot accept a job for provisioning (Ghosh et al., 2013; Sakr & Gaber, 2014). The steady-state probabilities can be computed after solving the hot PM sub-model (Ghosh et al., 2013; Sakr & Gaber, 2014).
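The snippet below is a minimal numerical sketch of such a tandem-queue CTMC: node A (a single provisioning server with rate beta_h) feeds node B (an infinite-server stage of running VMs, each completing at rate mu). The buffer size n, the per-PM VM capacity m, and the acceptance rule are illustrative assumptions, not the exact state space or blocking rules of (Ghosh et al., 2013).

```python
import numpy as np
from itertools import product

def hot_pm_steady_state(lam, beta_h, mu, n=3, m=3):
    """Steady-state probabilities over states (i, j): i jobs queued or being
    provisioned at node A, j VMs running at node B, with i + j <= m."""
    states = [(i, j) for i, j in product(range(n + 1), range(m + 1)) if i + j <= m]
    idx = {s: k for k, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for (i, j), k in idx.items():
        if i < n and i + j < m:                  # arriving job accepted for provisioning
            Q[k, idx[(i + 1, j)]] += lam
        if i > 0:                                # provisioning completes, VM starts running
            Q[k, idx[(i - 1, j + 1)]] += beta_h
        if j > 0:                                # one of the j running VMs finishes
            Q[k, idx[(i, j - 1)]] += j * mu
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # Solve pi Q = 0 together with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.append(np.zeros(len(states)), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return dict(zip(states, pi))

def accept_probability(pi_map, n=3, m=3):
    """Probability that this PM can accept a job (the per-PM ingredient of P_h)."""
    return sum(p for (i, j), p in pi_map.items() if i < n and i + j < m)
```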
1.2.3 The Warm PM Sub-Model
In the warm PM sub-model, there are three main differences from the hot PM sub-model. The first difference is the effective arrival rate: jobs arrive at the warm PM pool only if they could not be provisioned on any of the hot PMs (Ghosh et al., 2013; Sakr & Gaber, 2014). The second difference is the time required to provision a VM. When no VM is running or being provisioned, a warm PM is turned on but not yet ready for use; upon a job arrival in this state, the warm PM requires some additional startup time, which adds delay, before it is ready to use (Ghosh et al., 2013; Sakr & Gaber, 2014). The time to make a warm PM ready for use is assumed to be exponentially distributed. The third difference is that the mean time to provision the first VM deployed on a warm PM is 1/βw; the mean time to provision VMs for subsequent jobs is the same as on a hot PM (Ghosh et al., 2013; Sakr & Gaber, 2014). After solving the warm PM sub-model, the steady-state probabilities are computed as detailed in (Ghosh et al., 2013; Sakr & Gaber, 2014).
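A minimal arithmetic sketch of the second and third differences: the first VM on a warm PM pays both the PM startup delay and the slower initial provisioning rate, while a hot PM provisions at rate βh immediately. The startup-rate symbol gamma_w and all numeric values are illustrative assumptions, not parameters from (Ghosh et al., 2013).

```python
gamma_w = 1 / 30.0   # assumed warm-PM startup rate -> mean startup delay of 30 s
beta_w  = 1 / 60.0   # assumed provisioning rate of the first VM on a warm PM
beta_h  = 1 / 10.0   # assumed provisioning rate on a hot PM (and for later VMs on a warm PM)

# Mean time until the first VM is ready on a warm PM vs. on a hot PM.
mean_first_vm_warm = 1 / gamma_w + 1 / beta_w   # 30 s startup + 60 s provisioning = 90 s
mean_first_vm_hot  = 1 / beta_h                 # 10 s
```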
1.2.4 The Cold PM Sub-Model
The cold PM sub-model differs from the hot and warm PM sub-models discussed above in several respects (Ghosh et al., 2013; Sakr & Gaber, 2014): the effective arrival rate, the rate at which startup is executed, the initial VM provisioning rate, and the buffer size (Ghosh et al., 2013; Sakr & Gaber, 2014). The detailed computations for these factors are provided in (Ghosh et al., 2013; Sakr & Gaber, 2014).
Once the job is successfully provisioned on a hot, warm, or cold PM, it utilizes the resources until its execution is completed (Ghosh et al., 2013; Sakr & Gaber, 2014). The run-time sub-model is used to determine the mean time for job completion. A discrete-time Markov chain (DTMC) captures the details of job execution (Ghosh et al., 2013; Sakr & Gaber, 2014). The job completes its execution with probability P0 or goes for some local I/O operations with probability (1 − P0) (Ghosh et al., 2013; Sakr & Gaber, 2014). The full calculation is detailed in (Ghosh et al., 2013; Sakr & Gaber, 2014).
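The snippet below is a minimal sketch of the completion-or-I/O idea just described: with probability p0 the job finishes after an execution burst, otherwise it performs an I/O operation and executes again, so the number of bursts is geometric. The burst and I/O durations are illustrative assumptions; the actual run-time model in (Ghosh et al., 2013) is more detailed.

```python
def mean_completion_time(p0, t_exec, t_io):
    """Expected job completion time for the simple complete-or-I/O cycle."""
    n_bursts = 1.0 / p0          # expected number of execution bursts (geometric)
    n_ios = (1.0 - p0) / p0      # expected number of I/O operations before completion
    return n_bursts * t_exec + n_ios * t_io

# The resulting mean service time would play the role of the 1/mu input fed to
# the VM provisioning sub-models (values below are illustrative).
mean_service_time = mean_completion_time(p0=0.4, t_exec=5.0, t_io=2.0)
```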
2. The Interactions Among Sub-Models
The sub-models discussed above interact with one another. The interactions among these sub-models are illustrated in Figure 1, adapted from (Ghosh et al., 2013).

Figure 1: Interactions among the sub-models, adapted from (Ghosh et al., 2013).
In (Ghosh et al., 2013), this interaction is discussed briefly. The run-time sub-model yields the mean service time (1/µ), which is needed as an input parameter to each type (hot, warm, or cold) of VM provisioning sub-model (Ghosh et al., 2013). The VM provisioning sub-models compute the steady-state probabilities (Ph, Pw, and Pc) that at least one PM in the hot pool, warm pool, or cold pool, respectively, can accept a job for provisioning (Ghosh et al., 2013). These probabilities are used as input parameters to the RPDE sub-model (Ghosh et al., 2013). From the RPDE sub-model, the rejection probability due to a full buffer (Pblock), the rejection probability due to insufficient capacity (Pdrop), and their sum (Preject) are obtained (Ghosh et al., 2013). (Pblock) is, in turn, an input parameter to the three VM provisioning sub-models discussed above. Moreover, the Mean Response Delay (MRD) is computed from the overall performance model (Ghosh et al., 2013). Two of its components, the Mean Queuing Delay (MQD) in the RPDE and the Mean Decision Delay (MDD), are obtained from the RPDE sub-model (Ghosh et al., 2013). Two more components, the MQD in a PM and the Mean Provisioning Delay (MPD), are obtained from the VM provisioning sub-models (Ghosh et al., 2013). There are thus dependencies among the sub-models: (Pblock), computed in the RPDE sub-model, is used as an input parameter in the VM provisioning sub-models (Ghosh et al., 2013), while solving the RPDE sub-model requires the outputs (Ph, Pw, Pc) of the VM provisioning sub-models as input parameters (Ghosh et al., 2013). This cyclic dependency is resolved by fixed-point iteration using a variant of the successive substitution method (Ghosh et al., 2013).
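A minimal sketch of that fixed-point (successive substitution) loop is shown below. The functions solve_vm_pools and solve_rpde stand in for the actual sub-model solvers of (Ghosh et al., 2013); only the iteration structure is of interest here, and the initial guess, tolerance, and iteration cap are assumptions.

```python
def fixed_point(solve_rpde, solve_vm_pools, tol=1e-6, max_iter=100):
    """Iterate between the RPDE sub-model and the VM provisioning sub-models
    until the blocking probability stops changing."""
    p_block = 0.0                                        # assumed initial guess
    for _ in range(max_iter):
        p_h, p_w, p_c = solve_vm_pools(p_block)          # VM sub-models need P_block
        new_p_block, p_drop = solve_rpde(p_h, p_w, p_c)  # RPDE needs Ph, Pw, Pc
        if abs(new_p_block - p_block) < tol:             # successive substitutions converged
            return new_p_block, p_drop, (p_h, p_w, p_c)
        p_block = new_p_block
    raise RuntimeError("fixed-point iteration did not converge")
```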
3. The Monolithic Model
In (Ghosh et al., 2013), a monolithic model for the IaaS cloud is constructed using a variant of stochastic Petri nets (SPNs) called stochastic reward nets (SRNs). SRNs are extensions of generalized stochastic Petri nets (GSPNs) (Ajmone Marsan, Conte, & Balbo, 1984); their key features are described in (Ghosh et al., 2013).
Using this monolithic model, (Ghosh et al., 2013) obtained the outputs by assigning an appropriate reward rate to each marking of the SRN and then computing the expected reward rate in the steady state. The measures used by (Ghosh et al., 2013) are the Job Rejection Probability (Preject) and the Mean Number of Jobs in the RPDE (E(NRPDE)). As discussed earlier, (Preject) has two components: (Pblock), the rejection probability due to a full buffer, and (Pdrop), the rejection probability due to insufficient capacity, corresponding to the cases in which the RPDE buffer is full and in which all (hot, warm, cold) PMs are busy, respectively (Ghosh et al., 2013). The (E(NRPDE)), a measure of mean response delay, is given by the sum of the number of jobs waiting in the RPDE queue and the job currently undergoing a provisioning decision (Ghosh et al., 2013).
4. The Findings
In (Ghosh et al., 2013; Sakr & Gaber, 2014), the SHARPE software package is used to solve the interacting sub-models and compute two main performance measures: (1) the Job Rejection Probability and (2) the Mean Response Delay (MRD) (Ghosh et al., 2013; Sakr & Gaber, 2014). The results of (Ghosh et al., 2013; Sakr & Gaber, 2014) showed that the job rejection probability increases with longer Mean Service Time (MST). Moreover, if the PM capacity in each pool is increased, the job rejection probability decreases at a given value of mean service time (Ghosh et al., 2013; Sakr & Gaber, 2014). The results also showed that, for a fixed number of PMs in each pool, the MRD increases with increasing MST (Ghosh et al., 2013; Sakr & Gaber, 2014).
In comparison with the one-level monolithic model, the scalability and accuracy of the approach proposed by (Ghosh et al., 2013; Sakr & Gaber, 2014) were evaluated. The results showed that when the number of PMs in each pool increases beyond three and the number of VMs per PM increases beyond 38, the monolithic model runs into a memory overflow problem (Ghosh et al., 2013; Sakr & Gaber, 2014). The results also indicated that the state space size of the monolithic model grows quickly and becomes too large to construct the reachability graph even for a small number of PMs and VMs (Ghosh et al., 2013; Sakr & Gaber, 2014). The findings of (Ghosh et al., 2013; Sakr & Gaber, 2014) also showed that, for a given number of PMs and VMs, the number of non-zero elements in the infinitesimal generator matrix of the underlying CTMC of the monolithic model is hundreds to thousands of times larger than that of the interacting sub-models. With the interacting sub-models, the reduced number of states and nonzero entries leads to a concomitant reduction in solution time (Ghosh et al., 2013; Sakr & Gaber, 2014). As demonstrated by (Ghosh et al., 2013; Sakr & Gaber, 2014), the solution time for the monolithic model increases almost exponentially with model size, while the solution time for the interacting sub-models remains almost constant as the model size grows. Thus, the findings of (Ghosh et al., 2013; Sakr & Gaber, 2014) indicated that the proposed approach is scalable and tractable compared with the one-level monolithic model.
In comparison with the monolithic modeling approach, the accuracy of the interacting sub-models was also assessed: when the arrival rate and the maximum number of VMs per PM are varied, the outputs obtained from the two modeling approaches are nearly identical for the two performance measures of Job Rejection Probability and Mean Number of Jobs in the RPDE (Ghosh et al., 2013; Sakr & Gaber, 2014). Thus, the errors introduced by the decomposition of the monolithic model are negligible, and the interacting sub-models approach preserves accuracy while being scalable (Ghosh et al., 2013; Sakr & Gaber, 2014). These errors result from solving only one model for all the PMs in each pool and aggregating the obtained results to approximate the behavior of the pool as a whole (Ghosh et al., 2013; Sakr & Gaber, 2014). The findings of (Ghosh et al., 2013; Sakr & Gaber, 2014) indicated that the values of the probabilities (Ph, Pw, Pc) that at least one PM in a pool can accept a job differ between the monolithic ("exact") model and the interacting ("approximate") sub-models (Ghosh et al., 2013; Sakr & Gaber, 2014).
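As a minimal illustration of how such an accuracy comparison can be quantified, the snippet below computes the relative difference between an "exact" monolithic output and the corresponding "approximate" interacting sub-model output. The numeric values are purely illustrative, not results from (Ghosh et al., 2013).

```python
def relative_error(exact, approx):
    """Relative deviation of the approximate (interacting sub-model) value
    from the exact (monolithic) value."""
    return abs(exact - approx) / exact

# e.g. comparing the job rejection probability from the two approaches (made-up numbers)
err_reject = relative_error(exact=0.0125, approx=0.0121)   # about 3 percent
```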
Conclusion
The purpose of this project was to analyze the performance of IaaS clouds with an emphasis on stochastic models. The project began with a brief discussion of Cloud Computing and its service models of IaaS, PaaS, and SaaS. It then discussed the three available options for performance analysis of the IaaS cloud: experiment-based, discrete-event-simulation-based, and stochastic-model-based, focusing on the most feasible approach, the stochastic model. The discussion of the performance analysis also included the proposed sub-models: the RPDE CTMC sub-model, the Hot PM Sub-Model, the Closed-Form Solution for the Hot PM Sub-Model, the Warm PM Sub-Model, and the Cold PM Sub-Model. The discussion and analysis also covered the interactions among these sub-models and their impact on performance. The Monolithic Model was also discussed and analyzed. The findings compared the scalability and accuracy of the interacting sub-models with the one-level monolithic model. The results showed that when the number of PMs in each pool increases beyond three and the number of VMs per PM increases beyond 38, the monolithic model runs into a memory overflow problem. The results also indicated that the state space size of the monolithic model grows quickly and becomes too large to construct the reachability graph even for a small number of PMs and VMs. The findings also showed that, for a given number of PMs and VMs, the number of non-zero elements in the infinitesimal generator matrix of the underlying CTMC of the monolithic model is hundreds to thousands of times larger than that of the interacting sub-models. With the interacting sub-models, the reduced number of states and nonzero entries leads to a concomitant reduction in solution time (Ghosh et al., 2013; Sakr & Gaber, 2014). As demonstrated in (Ghosh et al., 2013; Sakr & Gaber, 2014), the solution time for the monolithic model increases almost exponentially with model size, while the solution time for the interacting sub-models remains almost constant. Thus, the findings indicated that the proposed approach is scalable and tractable compared with the one-level monolithic model. The findings of (Ghosh et al., 2013; Sakr & Gaber, 2014) also indicated that the values of the probabilities (Ph, Pw, Pc) that at least one PM in a pool can accept a job differ between the monolithic ("exact") model and the interacting ("approximate") sub-models.
References
Adebisi, A. A., Adekanmi, A. A., & Oluwatobi, A. E. (2014). A Study of Cloud Computing in the University Enterprise. International Journal of Advanced Computer Research, 4(2), 450-458.
Ajmone Marsan, M., Conte, G., & Balbo, G. (1984). A class of generalized stochastic Petri nets for the performance evaluation of multiprocessor systems. ACM Transactions on Computer Systems (TOCS), 2(2), 93-122.
Botta, A., de Donato, W., Persico, V., & Pescapé, A. (2016). Integration of Cloud Computing and Internet of Things: A Survey. Future Generation Computer Systems, 56, 684-700.
Foster, I., Zhao, Y., Raicu, I., & Lu, S. (2008). Cloud Computing and Grid Computing 360-Degree Compared. Paper presented at the 2008 Grid Computing Environments Workshop.
Ghosh, R., Longo, F., Naik, V. K., & Trivedi, K. S. (2013). Modeling and performance analysis of large-scale IaaS clouds. Future Generation Computer Systems, 29(5), 1216-1234.
Jadeja, Y., & Modi, K. (2012). Cloud Computing-Concepts, Architecture and Challenges. Paper presented at the Computing, Electronics and Electrical Technologies (ICCEET), 2012 International Conference on.
Joshua, A., & Ogwueleka, F. (2013). Cloud Computing with Related Enabling Technologies. International Journal of Cloud Computing and Services Science, 2(1), 40. doi:10.11591/closer.v2i1.1720
Kaufman, L. M. (2009). Data Security in the World of Cloud Computing. IEEE Security & Privacy, 7(4), 61-64.
Khan, S., Khan, S., & Galibeen, S. (2011). Cloud Computing an Emerging Technology: Changing Ways of Libraries Collaboration. International Research: Journal of Library and Information Science, 1(2).
Kim, W., Kim, S. D., Lee, E., & Lee, S. (2009). Adoption Issues for Cloud Computing. Paper presented at the Proceedings of the 7th International Conference on Advances in Mobile Computing and Multimedia.
Mell, P., & Grance, T. (2011). The NIST Definition of Cloud Computing.
Mokhtar, S. A., Ali, S. H. S., Al-Sharafi, A., & Aborujilah, A. (2013). Cloud Computing in Academic Institutions. Paper presented at the Proceedings of the 7th International Conference on Ubiquitous Information Management and Communication.
Qian, L., Luo, Z., Du, Y., & Guo, L. (2009). Cloud Computing: an Overview. Paper presented at the IEEE International Conference on Cloud Computing.
Sakr, S., & Gaber, M. (2014). Large Scale and Big Data: Processing and Management: CRC Press.
Timmermans, J., Stahl, B. C., Ikonen, V., & Bozdag, E. (2010). The Ethics of Cloud Computing: A Conceptual Review.
Xiao, Z., & Xiao, Y. (2013). Security and Privacy in Cloud Computing. IEEE Communications Surveys & Tutorials, 15(2), 843-859.
Zhang, Q., Cheng, L., & Boutaba, R. (2010). Cloud Computing: State-of-the-Art and Research Challenges. Journal of Internet Services and Applications, 1(1), 7-18.

