
Elasticity On a Schedule

lmacvittie · August 9, 2012 · 4 Comments

#cloud The benefits of auto-scaling, when applied to enterprise applications, are more about scheduled elasticity than about immediate demand

When we talk about elasticity of applications, we usually evoke images of application demand spiking up and down in rapid, unpredictable waves. Indeed, this is the scenario for which cloud and virtualization are touted as the most effective solution. But elasticity of enterprise-class applications is more like an eclipse than a supernova – it’s slow, gradual, and fairly predictable.

Let’s face it – sudden demand for most mission-critical (internal-facing) applications doesn’t generally happen unless it’s the 8am rush to log in at the beginning of the day (or any well-known shift-start time). Demand rises, stays consistent throughout the day, and then drops again suddenly when everyone logs off for the day. Demand on the weekend, for most apps, is almost non-existent, with the most obvious exception being call centers operating 24×7.

So for most enterprise applications, the lure of cloud is most certainly not going to be focused on elasticity. Or is it?

We often infer that the term elasticity not only describes the scaling out and back of applications, but that such scaling is rapid and frequent. We assume that elasticity is for applications with a constant fluctuation in demand that can best be met through the use of virtualization and cloud computing.

But nowhere in the definition of elasticity is there a requirement that the implied fluctuations must happen within a very short period of time. Indeed, the notion of elasticity is simply the ability to scale out and back, on demand. That demand may be frequent or infrequent, predictable or unpredictable. In the case of predictably infrequent elasticity, enterprises may find that cloud and virtualization models can indeed reduce costs.
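
Purely as an illustration (this sketch is not from the original post), predictable elasticity can be expressed as scheduled scaling actions rather than reactive triggers. The example below uses AWS Auto Scaling scheduled actions via boto3; the group name, times, and sizes are hypothetical.

    # Minimal sketch: scheduled elasticity via AWS Auto Scaling scheduled actions.
    # The group name, recurrence times (UTC), and sizes are illustrative assumptions.
    import boto3

    autoscaling = boto3.client("autoscaling")

    # Scale the (hypothetical) customer-service group up before the morning login rush...
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="customer-service-app",
        ScheduledActionName="scale-up-before-shift-start",
        Recurrence="30 7 * * 1-5",   # Unix cron syntax, evaluated in UTC
        MinSize=4,
        MaxSize=12,
        DesiredCapacity=8,
    )

    # ...and back down after everyone has logged off for the day.
    autoscaling.put_scheduled_update_group_action(
        AutoScalingGroupName="customer-service-app",
        ScheduledActionName="scale-down-after-hours",
        Recurrence="0 19 * * 1-5",
        MinSize=1,
        MaxSize=12,
        DesiredCapacity=2,
    )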

SHARING the LOAD

Within the enterprise there are myriad applications and processes that occur on specific schedules. In the past, this has been a function of processing availability – particularly within organizations making heavy use of mainframe technology (yes, even today). The excessive load placed on shared systems – whether on the mainframe or from extensive querying of master databases (think ETL and BI-related processing) – required that such processing occur after hours, when systems were either not in use or more lightly used, and thus the additional load would have a relatively minimal impact.

These considerations do not evaporate when cloud and virtualization are introduced into the mix. ETL and BI-related processing still stresses a database to the point that applications requiring the data may be negatively impacted, which in turn reduces productivity and degrades business performance. These are undesirable results that must be considered, even more so in a broadly shared infrastructure model. Thus, predictable, schedule-based processing for many applications and processes will continue in the enterprise, regardless of the operating model adopted.

This provides an opportunity to architect systems such that scheduled elasticity is the norm. Customer service-related applications are scaled up in the morning (optimally before the rush to log in for the day) and scaled back in the evening. Resources freed at this time can then be allocated to heavy-lifting workloads such as ETL and BI-related processing, and then reassigned to business applications again in the morning.
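
To make that architecture concrete, here is a minimal sketch (not from the original post) of a time-based capacity plan that shares one fixed pool of servers between a customer-facing application and overnight ETL/BI work; the pool size, hours, and workload names are assumptions.

    # Minimal sketch: a time-based capacity plan that shares one fixed pool
    # between a customer-facing app and overnight ETL/BI batch processing.
    # Pool size, business hours, and workload names are illustrative assumptions.
    from datetime import datetime

    POOL_SIZE = 20                   # total servers shared across time
    BUSINESS_HOURS = range(7, 19)    # 07:00-18:59 local time

    def capacity_plan(now: datetime) -> dict:
        """Return how many servers each workload gets at a given time."""
        if now.weekday() < 5 and now.hour in BUSINESS_HOURS:
            # Weekday daytime: favor the customer-service application.
            return {"customer_service": POOL_SIZE - 2, "etl_bi_batch": 2}
        # Nights and weekends: hand most of the pool to heavy-lifting batch jobs.
        return {"customer_service": 2, "etl_bi_batch": POOL_SIZE - 2}

    if __name__ == "__main__":
        print(capacity_plan(datetime.now()))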

Sharing resources across these equally business-critical applications can reduce overall costs by allowing infrastructure to be shared across time rather than dedicating resources permanently. The availability of "extra" resources for processing-intensive workloads may also alleviate those situations in which overnight "batch" processing runs into normal business hours, negatively impacting the network and system responsiveness required to maintain acceptable business KPIs against which business users are measured.
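
A back-of-envelope comparison (illustrative numbers only, not from the post) shows where the savings come from: time-sharing one pool halves the server-hours that two permanently dedicated pools would consume.

    # Illustrative arithmetic only: dedicated pools vs. one pool shared across time.
    servers = 20
    dedicated_hours = servers * 24 * 2   # separate daytime and overnight pools, both always on
    shared_hours = servers * 24          # one pool reused: apps by day, ETL/BI by night
    print(dedicated_hours, shared_hours) # 960 vs. 480 server-hours per day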

BOTTOM LINE: Organizations that believe the benefits of elasticity do not apply because their applications do not have sudden spikes or are not public facing should re-evaluate their position. The benefits of cloud and virtualization can certainly apply to internal-facing business applications.




Comments

  1. Ron Lee says

    August 10, 2012 at 3:48 am

    Good article and point of view. Not well understood but I very much agree. Our cloud application for shipping (Pacejet) is an enterprise app that has unique usage patterns where scheduled scaling really helps and is valuable for our users. For example, shipping tends to ramp up in the afternoon which of course is different for east, west coast, etc… During the holidays, some of our users ship 2x or 3x the volume. We know and can predict these spikes and so schedule more capacity to come online rather than waiting for constraints to trigger it. It’s an area to keep working on but our goals are to keep performance as strong and consistent for users as we can but make efficient use of resources so we can keep prices low.

    So as I think you were saying, knowing a bit about how and when users run various transactions can help you create a capacity plan that’s aligned with the user experience you want. So elasticity matters, even if it’s not the pure dynamic form we sometimes think of with cloud platforms.

  2. Lori MacVittie says

    August 17, 2012 at 5:02 am

    That’s exactly it, Ron. In many ways, “scheduled” elasticity is easier to plan for and automate than on-demand elasticity and provides an opportunity to more quickly prove a return on investment in the tools and frameworks used to do so, which can then translate to dynamic elasticity initiatives if necessary.

  3. Strategic Blue says

    August 23, 2012 at 8:16 am

    If we are really to treat cloud computing as a utility, then we should expect the features of a traded commodity market to develop. It is already possible to prebook whole months of capacity on various cloud providers through a cloud broker-dealer like http://www.strategic-blue.com. As understanding of this grows, it will ultimately be possible for one application to be booked onto a particular cloud infrastructure for the mornings, another for the afternoon, and another overnight. We are even seeing contracts where a user gets a cheaper price in exchange for the provider / cloud broker-dealer being able to displace them under certain agreed circumstances.


Trackbacks

  1. Observations from Cloud Connect Chicago | Rishidot Research says:
    September 17, 2012 at 7:11 am

    […] well, however, no matter how far afield the definition of cloud might have gotten. The notion of scheduled elasticity fits with these interests, as enterprises desire the flexibility of cloud as a way to address […]

