
cloud computing

On Robustness And Resiliency

Krishnan Subramanian · January 24, 2013 · 10 Comments

When I talk to enterprises about cloud computing and explain how the cloud requires a different approach to designing applications, that is where I get the biggest pushback. Most large enterprises are used to the idea that expensive, powerful hardware that seldom fails is the only way to build robustness into their IT (and thereby ensure business continuity), so they are appalled by the new way of designing applications for the cloud. They feel they are being forced to subscribe to a completely new paradigm just to take advantage of the cloud. In spite of the marketing gimmicks from the traditional vendors, they understand that cloud computing is more about resiliency than robustness, and that bothers many enterprise IT managers. They have real difficulty changing their mindset from “failure is not an option” to “failure is not a problem”.

I was recently watching one of Clay Shirky’s talks at Singularity University, in which he used the example of cell phone towers to explain the difference between robustness and resiliency in the crowdsourced world. He was contrasting the robustness needed for the survival of Encyclopedia Britannica with the resiliency that sustains Wikipedia. It got me excited to make another attempt at explaining to the enterprise community why it needs to shift its thinking from robustness to resiliency. In short, I want to argue that this mental shift is not a new paradigm that cloud forces upon enterprise IT; instead, it is an old, well-tested idea for dealing with scale.

The example I am going to pick is the cell phone tower example Clay Shirky used in his talk. Look at the construction of a cell phone tower: its base is broad and built with robustness in mind but, after a certain height, the tower is no longer built for robustness. Instead, it is built for resiliency against heavy winds. Ensuring robustness at such heights is not just expensive but practically impossible. Once construction engineers understood this difficulty, they relied on the idea of resiliency to build tall towers, and those towers have become the backbone of mission-critical networks. They didn’t give up on tall towers just because they faced a mission-critical problem; rather, they figured out how to make the towers resilient against winds. It is a no-brainer that the construction industry was trained to focus on robustness while building smaller towers and buildings. But when it came to taller structures (scale), they realized it makes more sense to focus on resiliency than on robustness. In short, it not only saved tons of money (economics) but also let them innovate faster (agility) than waiting for technology to improve enough to achieve robustness at such heights.

The point I am trying to highlight is that it doesn’t make sense to stay married to the traditional IT paradigm of robustness. It worked well for legacy applications. For applications at scale, however, it doesn’t make sense to wait for infrastructure that offers both scale and the robustness of the traditional world. I am not saying we can never succeed in building robust infrastructure at scale. I am just arguing that it doesn’t make sense to wait for such infrastructure when designing applications for resiliency can help any organization innovate faster. Nor am I advocating relying on hardware or data centers that fail every other hour. I am only emphasizing the need to make the mental shift, as history has already shown us that one can trust resiliency over robustness for mission-critical needs.

Again, I make it a point to emphasize that it is perfectly ok to shop around for service providers who offer some level of robustness through SLAs. Even then, it is important to understand and accept the fact that servers fail, and designing apps for failure is the right approach to building modern applications. Whether we like it or not, legacy applications are on their way out. The globalized nature of the economy, along with mobile and social, is pushing organizations to slowly (but eventually) move away from legacy applications toward modern applications. It is important for enterprise IT managers to understand and accept this fact, change their mindset quickly, and start embracing “modernity” in their IT. Organizations waiting patiently for robust infrastructure at scale will eventually end up getting disrupted.
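To make “designing for failure” a bit more concrete, here is a minimal sketch of the kind of pattern resilient applications lean on: retry a flaky dependency with exponential backoff and jitter, then degrade gracefully to a cached response rather than assuming the server never fails. The URL handling, cache, and fallback behavior are illustrative assumptions, not a prescription for any particular stack.

```python
import random
import time

import requests  # any HTTP client works; requests is assumed here for brevity

CACHE = {}  # last known-good responses, keyed by URL

def fetch_with_resiliency(url, retries=4, base_delay=0.5, timeout=2.0):
    """Fetch a URL while assuming the backend *will* fail occasionally."""
    for attempt in range(retries):
        try:
            response = requests.get(url, timeout=timeout)
            response.raise_for_status()
            CACHE[url] = response.text  # remember the good result for later
            return response.text
        except requests.RequestException:
            # Transient failure: back off exponentially, with jitter so that
            # many clients do not retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    # Degrade gracefully instead of crashing: serve stale data if we have it.
    if url in CACHE:
        return CACHE[url]
    raise RuntimeError(f"{url} unavailable and no cached copy exists")
```

The snippet itself is beside the point; the mindset is what matters. The code assumes failure will happen and plans for it, which is exactly the shift from robustness to resiliency.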

Anuta Networks Unveils its nCloudX Platform

lmacvittie · January 22, 2013 · Leave a Comment

Anuta Networks unveils a solution to more effectively manage cloud and SDN-enabled networks

Anuta Networks, a Silicon Valley-based self-proclaimed "network services virtualization" startup, is entering the market with its inaugural offering, the Anuta nCloudX Platform.

The solution essentially provides network service abstraction and orchestration for cloud service providers and large enterprises engaged in the implementation of private clouds.

Anuta says of its initial offering:

The breakthrough Anuta Networks nCloudX platform simplifies and automates the complete lifecycle of network services in complex, heterogeneous networks across multi-vendor, multi-device, multi-protocol, and multi-hypervisor offerings for physical and virtual devices in both private cloud and public cloud deployments. The solution reduces network services delivery time from weeks to a few hours, building efficient software automation on top of a combination of both the existing hardware-controlled networks and the new generation of SDN technologies.

One of the concerns with regard to potential SDN adoption is, of course, the prospect of having to overhaul the entire network. Many SDN solutions today are highly disruptive, which may earn them no more than a tepid welcome in established data centers.

Solutions such as Anuta’s nCloudX Platform, however, aim to alleviate the pain-points associated with managing data centers comprised of multiple technologies and platforms by abstracting services from legacy and programmable network devices. The platform then provides integration points with Cloud Management Platforms and SDN controllers to enable implementation of more comprehensive cloud and data center orchestration flows.

Anuta’s nCloudX Platform follows the popular "class of provisioning" and "cataloging" schemes prevalent in Cloud Management Platform offerings and adds drag-and-drop topological design capabilities to assist in designing flexible, virtual networks from top to bottom. Its integration with OpenStack takes the form of a Quantum plug-in, essentially taking over the duties of multiple Quantum plug-ins for legacy and programmable devices. Given the pace at which support for many network devices in Quantum is occurring, alternative methods of integration may provide early adopters with more complete options for implementation.

Its ability to integrate with SDN controllers from Nicira (VMware) and Big Switch Networks enables organizations to pursue a transitional, hybrid strategy with respect to SDN, minimizing potential disruption and providing a clear migration path toward virtualized networks. Its cataloging and design services enable rapid on-boarding of new tenants and offer a simpler means of designing what are otherwise complex, virtualized networks.

While much of Anuta Networks’ focus lies on cloud, its ability to virtualize network services is likely to provide value to organizations seeking to automate their network provisioning and management systems, regardless of their adoption of cloud or SDN-related technologies.


Anuta Networks was founded by Chandu Guntakala (President & CEO), Srini Beereddy (CTO), and Praveen Vengalam (Vice President of Engineering). The company focuses on the delivery of network services virtualization in the context of cloud and SDN-related architectures.

Homomorphic Encryption finds a Home in the cloud

lmacvittie · November 1, 2012 · 2 Comments

Porticor, which earlier this year unveiled its split-key encryption technology for securing cloud data, has taken the next step in its quest to assure users of the security of data in the cloud. In addition to adding VMware private clouds to its portfolio of supported environments (previously it supported only Amazon environments), it announced that it has introduced homomorphic encryption into the equation, further securing one of the most often overlooked (and yet most important) aspects of cryptography: the security of cryptographic keys.

Where split-key technology assures the security of data by allowing the full (and secret) key to be derived only algorithmically from the two key halves, homomorphic encryption ensures that the actual keys are no longer stored anywhere. Joining the key halves is accomplished algorithmically and produces an encrypted symmetric key that is specific to a single resource, such as a disk volume or an S3 object.
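As a rough illustration of the split-key idea only (a toy sketch, not Porticor’s actual scheme, which additionally keeps the joined key encrypted through its homomorphic techniques), the example below splits a symmetric key into two shares that reveal nothing on their own; the full key exists only transiently in memory when a resource is encrypted or decrypted. The use of Fernet and the variable names are assumptions made for the illustration.

```python
import base64
import os

from cryptography.fernet import Fernet  # pip install cryptography

def split_key():
    """Generate a 32-byte master key and split it into two XOR shares.

    Neither share reveals anything about the key on its own; both are
    required to reconstruct it (a 2-of-2 secret-sharing toy example).
    """
    master = os.urandom(32)
    customer_share = os.urandom(32)  # held by the customer
    provider_share = bytes(a ^ b for a, b in zip(master, customer_share))  # held by the key service
    return customer_share, provider_share

def join_key(customer_share, provider_share):
    """Recombine the shares in memory only; the full key is never stored."""
    master = bytes(a ^ b for a, b in zip(customer_share, provider_share))
    return base64.urlsafe_b64encode(master)  # Fernet expects a base64 key

# Encrypt and decrypt a per-resource blob without ever persisting the full key.
customer_share, provider_share = split_key()
token = Fernet(join_key(customer_share, provider_share)).encrypt(b"disk volume contents")
plain = Fernet(join_key(customer_share, provider_share)).decrypt(token)
```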

Porticor can secure a fairly impressive list of data objects, including:

  • EBS
  • VMDK
  • MySQL
  • Oracle
  • SQL Server
  • MongoDB
  • Cassandra
  • Linux, Unix (NFS)
  • Windows (CIFS)
  • AWS S3


The split-key technology is used when data is stored, and homomorphic techniques are used when data is accessed. Keys are always encrypted in the cloud, and control is maintained by the customer – not the key management or cloud service provider.

The addition of partially homomorphic encryption techniques brings two very important security features to Porticor’s portfolio of cloud encryption services:

1. The master key is never exposed, making it nigh unto impossible to steal

2. A compromise involving one object does not afford attackers access to other objects as each is secured using its own unique encrypted symmetric key 
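The second point can be pictured with a generic envelope-encryption sketch (an illustration of the idea, not Porticor’s implementation): each object gets its own data key, and only a wrapped (encrypted) copy of that key is stored alongside the object, so cracking one object’s key tells an attacker nothing about the others.

```python
from cryptography.fernet import Fernet  # pip install cryptography

master = Fernet(Fernet.generate_key())  # stands in for the protected master key

def encrypt_object(payload: bytes):
    """Encrypt one object under its own data key; store only the wrapped key."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(payload)
    wrapped_key = master.encrypt(data_key)  # per-object key, never stored in the clear
    return ciphertext, wrapped_key

def decrypt_object(ciphertext, wrapped_key):
    return Fernet(master.decrypt(wrapped_key)).decrypt(ciphertext)

c1, k1 = encrypt_object(b"volume A")
c2, k2 = encrypt_object(b"volume B")  # a different data key than volume A
assert decrypt_object(c1, k1) == b"volume A"
```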

This second benefit is important, particularly because access to systems is often gained via a breach of a single internal system. Gaining access to or control over one system in a larger network has long been a primary means of establishing a foothold "inside" from which to reach the intended target, often data stores. The 2012 Data Breach Investigations Report noted that "94% of all data compromised involved servers." The 18% increase in that statistic over the previous year’s findings makes the security of individual systems – against inside agents as well as outside ones – a significant factor in data breaches and one in need of serious attention.

While Porticor’s techniques are new to the security scene and relatively untested against the rigorous attention and zealous cracking attempts that other cryptographic algorithms have withstood, the company offers analysis and proof of its homomorphic approach via Dr. Alon Rosen, a cryptography expert from the School of Computer Science at the Herzliya Interdisciplinary Center.

Regardless, the problems Porticor is attempting to address are real. Key management in the cloud is too often overlooked, and storing full keys anywhere – even on-premise in the data center – can be a breach waiting to happen. By splitting key management responsibility while assigning control to the customer, Porticor provides a higher level of trust than traditional techniques within the overarching cryptographic framework required to securely store and manage data in public cloud computing environments.

Observations from Cloud Connect Chicago

lmacvittie · September 17, 2012 · Leave a Comment

Last week saw the inauguration of Cloud Connect Chicago, and it was great to see both established and newer speakers taking the stage. The event felt a lot like the inaugural event in Santa Clara: more intimate, more buy-side than sell-side, and, of course, focused on cloud.

Some general observations from the event:

SDN is BUBBLING into CLOUD

It’s not necessarily an overt message, but it’s there. SDN – or at least its core "decouple and abstract" premise – is definitely rising through the layers of cloud. Speaking with ProfitBricks, for example, showed how the assumptions we draw upon to design L2 architectures may be the most disrupted by SDN, while the L3 (IP) network architecture might remain largely untouched. While many vendors are approaching SDN with new L3 architectures and protocols, ProfitBricks has run with the idea that the same "decouple and abstract" premise that provides value up the stack can also provide significant advantages down the stack.

Given that many of the challenges SDN is designed to address are more pronounced in cloud computing environments than in traditional data centers, this is no surprise. SDN is quickly moving up the stack in terms of hype, so expect marketing in the cloud computing demesne to start taking advantage of its somewhat nebulous definition as well.

CLOUD CONFUSION CONTINUES

There is still a lot of confusion attached to the word "cloud" on the buy-side, especially when it is prefixed by modifiers like "private", "public", and "hybrid". Customers are being inundated with self-serving definitions that, while based loosely on the NIST definitions, fall outside what most experts would consider typical. Even associated terms like "elasticity", long considered a staple benefit of cloud, are being stretched thin to include processes that clearly fall outside the implied definition of "just in time" flexible capacity.

Faster provisioning and reduced operational complexity resonated well, however, no matter how far afield the definition of cloud might have drifted. The notion of scheduled elasticity fits with these interests, as enterprises want the flexibility of cloud as a way to address periodic (and anticipated) increases in capacity without maintaining, and paying for, an over-provisioned infrastructure.
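As a hypothetical sketch of what scheduled elasticity can look like, the example below maps anticipated demand windows to instance counts; set_instance_count is a stand-in for whatever provisioning API an organization actually uses, not a real library call.

```python
import datetime

# Anticipated demand windows: (start_hour, end_hour, day_type, instances).
SCHEDULE = [
    (8, 18, "weekday", 20),  # business-hours peak
    (0, 24, "weekend", 4),   # weekend baseline
]
DEFAULT_INSTANCES = 6

def desired_capacity(now: datetime.datetime) -> int:
    """Return the instance count for the current time based on the schedule."""
    day_type = "weekend" if now.weekday() >= 5 else "weekday"
    for start, end, kind, count in SCHEDULE:
        if kind == day_type and start <= now.hour < end:
            return count
    return DEFAULT_INSTANCES

def set_instance_count(count: int) -> None:
    # Placeholder: in practice this would call an IaaS SDK or orchestration tool.
    print(f"scaling application tier to {count} instances")

if __name__ == "__main__":
    set_instance_count(desired_capacity(datetime.datetime.now()))
```

Run from a scheduler (cron or similar), a few lines like these provision capacity ahead of known peaks and release it afterward, which is precisely the appeal described above: pay only for what you anticipate needing.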

IDENTITY and ACCESS CONTROL

There continues to be awareness of the issues surrounding identity and access control, particularly as they apply to SaaS, and of the need to integrate such services with existing data center processes. While IaaS adoption remains less broad, SaaS usage continues to expand, with a significant majority of customers taking advantage of SaaS in some aspect of business and operations. This is leading to increased awareness of the risks and challenges of managing access to these systems, and to a growing desire among customers to reassert their governance.

Anticipate the near-term arrival of turn-key solutions in the form of cloud brokers that streamline the management of identity and access control for SaaS, as demand continues to escalate in the face of continued SaaS adoption.

 

If you missed the event, you can enjoy the keynote presentations online.

Cloud Brokers: Services versus Architecture

lmacvittie · September 6, 2012 · Leave a Comment

#ccevent #Cloud brokers and the difference between choice and connectivity

The notion of a cloud brokerage, an intermediate service that essentially compares and ultimately chooses a cloud provider based on customer-specific parameters, is not a new one. Many may recall James Urquhart’s efforts around what he termed a "pCard" more than two years ago, an effort aimed at defining interfaces that would facilitate the brokering of services between competing clouds based on characteristics such as price, performance, and other delivery-related attributes.

As we look toward a future in which federated clouds play a larger and more impactful role, we necessarily must include the concept of a cloud service broker that can intermediate on our behalf to assist in the choice of which cloud will meet an application’s needs at any given time.

But we cannot overlook the growing adoption of hybrid clouds and the need to broker certain processes through systems over which the enterprise has control, such as identity management. The ability to broker – to intermediate – authentication and authorization for off-premise applications (SaaS) is paramount to ensuring access to corporate data stored externally is appropriately gated through authoritative identity systems and processes.

Doing so requires careful collaboration between enterprise and off-premise systems achieved through an architectural solution: cloud brokers.

Such brokers are architecturally focused in nature, not service-focused, and thus serve a different market and different purpose. One facilitates choice in a federated cloud ecosystem while the other enables the connectivity required to architect a hybrid cloud ecosystem.

Both cloud brokers and cloud service brokers will be key components in future architectures, but we should not conflate the two as they serve very different purposes despite their very similar nomenclature. 


I’ll be presenting "Bridges, Brokers, and Gateways: Exploring Hybrid Cloud Architectural Models" at Cloud Connect Chicago next week in which we’ll explore the notion of architectural brokers (as well as bridges and gateways) in more depth.





© 2021 · Rishidot Research