Modern Enterprise Newsletter


Posts by lmacvittie:

    Anuta Networks Unveils its nCloudX Platform

    January 22nd, 2013

    Anuta Networks unveils a solution to more effectively manage cloud and SDN-enabled networks

    Anuta Networks, a Silicon Valley-based self-proclaimed "network services virtualization" startup, is entering the market with its inaugural offering, the Anuta nCloudX Platform.

    The solution essentially provides network service abstraction and orchestration for cloud service providers and large enterprises engaged in the implementation of private clouds.

    Anuta says of its initial offering:

    The breakthrough Anuta Networks nCloudX platform simplifies and automates the complete lifecycle of network services in complex, heterogeneous networks across multi-vendor, multi-device, multi-protocol, and multi-hypervisor offerings for physical and virtual devices in both private cloud and public cloud deployments. The solution reduces network services delivery time from weeks to a few hours, building efficient software automation on top of a combination of both the existing hardware-controlled networks and the new generation of SDN technologies.

    One of the concerns with regard to potential SDN adoption is, of course, the potential requirement to overhaul the entire network. Many SDN solutions today are highly disruptive, which may result in a more than tepid welcome in established data centers.

    Solutions such as Anuta’s nCloudX Platform, however, aim to alleviate the pain-points associated with managing data centers comprised of multiple technologies and platforms by abstracting services from legacy and programmable network devices. The platform then provides integration points with Cloud Management Platforms and SDN controllers to enable implementation of more comprehensive cloud and data center orchestration flows.

    Anuta’s nCloudX Platform follows the popular "class of provisioning" and "cataloging" schemes prevalent in Cloud Management Platform offerings and adds drag-and-drop topological design capabilities to assist in designing flexible, virtual networks from top to bottom. Its integration with OpenStack takes the form of a Quantum plug-in, essentially taking over the duties of multiple Quantum plug-ins for legacy and programmable devices. Given the pace at which support for many network devices in Quantum is occurring, alternative methods of integration may provide early adopters with more complete options for implementation.
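
    To make the catalog-and-abstraction idea a bit more concrete, here is a minimal Python sketch of how a multi-vendor service catalog might dispatch an abstract "L2 segment" request to either a legacy, CLI-driven device or an SDN controller. The class and driver names are invented for illustration; they are not Anuta's actual API or its Quantum plug-in.

    ```python
    # Hypothetical sketch of catalog-driven network service abstraction.
    # Names and structure are illustrative only, not Anuta's nCloudX API.

    LEGACY = "legacy"      # hardware-controlled devices (CLI/SNMP-managed)
    PROGRAMMABLE = "sdn"   # devices reachable through an SDN controller


    class DeviceDriver:
        """Vendor- or class-specific driver that knows how to realize a service."""
        def apply(self, params: dict) -> None:
            raise NotImplementedError


    class CliVlanDriver(DeviceDriver):
        def apply(self, params: dict) -> None:
            print(f"push CLI config: vlan {params['vlan_id']} on {params['device']}")


    class ControllerVlanDriver(DeviceDriver):
        def apply(self, params: dict) -> None:
            print(f"call controller API: segment {params['vlan_id']} on {params['device']}")


    class ServiceCatalog:
        """Maps an abstract service type to per-device-class drivers."""
        def __init__(self):
            self._drivers = {}

        def register(self, service_type: str, device_class: str, driver: DeviceDriver):
            self._drivers[(service_type, device_class)] = driver

        def provision(self, service_type: str, device_class: str, **params):
            # The tenant asks for "an L2 segment"; the catalog picks the driver.
            self._drivers[(service_type, device_class)].apply(params)


    catalog = ServiceCatalog()
    catalog.register("l2_segment", LEGACY, CliVlanDriver())
    catalog.register("l2_segment", PROGRAMMABLE, ControllerVlanDriver())
    catalog.provision("l2_segment", LEGACY, device="switch-01", vlan_id=100)
    catalog.provision("l2_segment", PROGRAMMABLE, device="ovs-12", vlan_id=100)
    ```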

    Its ability to integrate with SDN controllers from Nicira (VMware) and Big Switch Networks enables organizations to pursue a transitional, hybrid strategy with respect to SDN, minimizing potential disruption and providing a clear migration path toward virtualized networks. Its cataloging and design services enable rapid on-boarding of new tenants and a simpler means of designing what are otherwise complex, virtualized networks.

    While much of Anuta Networks’ focus lies on cloud, its ability to virtualize network services is likely to provide value to organizations seeking to automate their network provisioning and management systems, regardless of their adoption of cloud or SDN-related technologies.


    Anuta Networks was founded by Chandu Guntakala, President & CEO, Srini Beereddy, CTO, and Praveen Vengalam, Vice President of Engineering. The company focuses on the delivery of network services virtualization in the context of cloud and SDN-related architectures.


    How CA is Automating the Cloud

    December 13th, 2012

    CA Technologies’ AppLogic Suite enables Automated Service Delivery, #Cloud Style #devops

    I had an opportunity to sit down with CA Technologies and talk cloud with a focus on cloud management last month and discovered a robust portfolio of automation and orchestration solutions.

    CA has been a staple name in the enterprise for more than a decade, so it’s no surprise the company understands the need for solutions that are as turnkey as possible. Pre-integrated and pre-packaged with plenty of easy buttons is desirable, as long as there remains room for customization. The fine folks at CA are focused on hybrid service delivery these days, comprising physical and virtual devices across public and private environments unified by "one point of control." This includes a catalog of services (a registry, if you’re coming from a SOA perspective) as well as pre-integrated and pre-tested process flows that enable rapid deployment of automated processes.

    AppLogic is its cloud management platform (CMP) designed as a turnkey solution for service providers. It relies on an environment comprised of commodity components and attempts to minimize the amount of code required to build, maintain and use a cloud environment.

    Its framework is capable of executing scripts to perform mundane deployment and configuration tasks such as mounting a volume or configuring a VLAN. Its model is similar to that of CloudStack and OpenStack, but it uses its own language, ADL (Application Definition Language). The advantage for CA here is its visual editor tool for constructing flows and tasks, something lacking in other efforts.
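
    To give a sense of the kind of mundane tasks such a framework automates, here is a generic Python sketch that wraps the usual shell commands for mounting a volume and creating a VLAN sub-interface. AppLogic itself expresses this sort of thing in CA’s own ADL and visual editor; the function names, device, and interface below are illustrative assumptions, not CA’s tooling.

    ```python
    # Generic automation wrapper for the kinds of tasks mentioned above.
    # Requires root privileges; device names and VLAN IDs are examples only.
    import subprocess

    def run(cmd: list[str]) -> None:
        """Run a provisioning command and fail loudly if it errors."""
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def mount_volume(device: str, mountpoint: str) -> None:
        run(["mkdir", "-p", mountpoint])
        run(["mount", device, mountpoint])

    def configure_vlan(parent_if: str, vlan_id: int) -> None:
        vlan_if = f"{parent_if}.{vlan_id}"
        run(["ip", "link", "add", "link", parent_if, "name", vlan_if,
             "type", "vlan", "id", str(vlan_id)])
        run(["ip", "link", "set", vlan_if, "up"])

    if __name__ == "__main__":
        mount_volume("/dev/xvdf", "/data")   # hypothetical device/mountpoint
        configure_vlan("eth0", 100)          # hypothetical interface/VLAN
    ```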

    It claims to provide an entire "cloud stack" including:

    • Server management
    • Embedded SAN
    • SDN
    • Resource quotas
    • Design studio
    • Metering
    • Security

    Its claim to SDN is much like that of other, pre-existing solutions that leverage an automated and dynamic configuration paradigm. It creates connections automatically between virtual machines and the Internet and leverages its own DPI to provide security and consistent latency both ingress and egress from the virtual machine. Network virtualization is really CA’s SDN game here, as the fabric created by AppLogic takes advantage of its own packet encapsulation to enable route domain isolation, security, and bandwidth enforcement.
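
    As a toy illustration of that overlay idea (the header layout below is invented, not CA’s wire format), the following sketch tags each packet with a route-domain identifier so that tenant traffic can be kept isolated on shared links:

    ```python
    # Toy encapsulation: prefix each payload with a route-domain (tenant) ID
    # so traffic can be isolated and policed per domain on shared links.
    import struct

    HDR = struct.Struct("!IH")   # 32-bit route-domain ID, 16-bit payload length

    def encapsulate(route_domain_id: int, payload: bytes) -> bytes:
        return HDR.pack(route_domain_id, len(payload)) + payload

    def decapsulate(frame: bytes) -> tuple:
        domain, length = HDR.unpack_from(frame)
        return domain, frame[HDR.size:HDR.size + length]

    frame = encapsulate(42, b"tenant traffic")
    assert decapsulate(frame) == (42, b"tenant traffic")
    ```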

    Its embedded SAN was interesting in that it’s a completely software-based construct that simulates a block-level SAN composed of DAS.

    While CA is aiming at service providers with AppLogic, it’s easy to believe that a large enough enterprise would find value in a cloud management platform that is more fully fleshed out than competing offerings.


    Homomorphic Encryption Finds a Home in the Cloud

    November 1st, 2012

    Porticor, which earlier this year unveiled its split-key encryption technology for securing cloud data, has taken the next step in its quest to assure users of the security of data in the cloud. In addition to adding VMware private cloud to its portfolio of supported environments (previously it supported only Amazon environments), it announced that it has introduced homomorphic encryption into the equation, which further secures one of the most often overlooked (and yet most important) aspects of cryptography – the security of cryptographic keys.

    Where split-key technology assured the security of data by allowing the full (and secret) key to be derived only algorithmically from the two key halves, homomorphic encryption ensures that the actual keys are no longer stored anywhere. Joining the keys is accomplished algorithmically and produces an encrypted symmetric key that is specific to a single resource, such as a disk volume or S3 object.
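
    A heavily simplified sketch of those two ideas – a master key that is never stored whole, and a unique key per protected object – might look like the following. To be clear, this is not Porticor’s algorithm (its key joining relies on homomorphic techniques rather than a simple XOR split); it only illustrates splitting, joining, and per-object derivation.

    ```python
    # Simplified stand-in: split a master key so neither half reveals it,
    # and derive a unique key per object so one compromise stays contained.
    import secrets, hmac, hashlib

    def split_key(master: bytes) -> tuple:
        """XOR-split: neither half alone reveals anything about the master key."""
        half_a = secrets.token_bytes(len(master))
        half_b = bytes(m ^ a for m, a in zip(master, half_a))
        return half_a, half_b            # the halves are stored in separate places

    def join_key(half_a: bytes, half_b: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(half_a, half_b))

    def per_object_key(master: bytes, object_id: str) -> bytes:
        """Each volume or S3 object gets its own derived key."""
        return hmac.new(master, object_id.encode(), hashlib.sha256).digest()

    master = secrets.token_bytes(32)
    a, b = split_key(master)
    assert join_key(a, b) == master
    assert per_object_key(master, "ebs-vol-0a1b2c") != per_object_key(master, "s3://bucket/object")
    ```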

    Porticor can secure a fairly impressive list of data objects, including:

    • EBS
    • VMDK
    • MySQL
    • Oracle
    • SQL Server
    • MongoDB
    • Cassandra
    • Linux, Unix (NFS)
    • Windows (CIFS)
    • AWS S3


    The split-key technology is used when data is stored, and homomorphic techniques are used when data is accessed. Keys are always encrypted in the cloud, and control is maintained by the customer – not the key management or cloud service provider.

    The addition of partially homomorphic encryption techniques adds two very important security features to its portfolio of cloud encryption services:

    1. The master key is never exposed, making it nigh unto impossible to steal

    2. A compromise involving one object does not afford attackers access to other objects as each is secured using its own unique encrypted symmetric key 

    This second benefit is important, particularly as access to systems is often accomplished via a breach of a single, internal system. Gaining access to or control over one system in a larger network has been a primary means of gaining a foothold "inside" as a means to further access the intended target, often data stores. The 2012 Data Breach Investigations Report noted that "94% of all data compromised involved servers." The 18% increase in this statistic over the previous year’s findings makes the security of individual systems – not just from outside agents but from inside agents as well – a significant factor in data breaches and one in need of serious attention.

    While its approach is new to the security scene and relatively untested compared with cryptographic algorithms and techniques that have withstood years of rigorous attention and zealous attempts to crack them, Porticor offers analysis and proof of its homomorphic techniques from Dr. Alon Rosen, a cryptography expert from the School of Computer Science at the Herzliya Interdisciplinary Center.

    Regardless, the problems Porticor is attempting to address are real. Key management in the cloud is too often overlooked, and storing full keys anywhere – even on-premise in the data center – can be a breach waiting to happen. By splitting key management responsibility but assigning control to the customer, Porticor provides a higher level of trust than traditional techniques in the overarching cryptographic framework required to securely store and manage data in public cloud computing environments.


    Intel DPDK and Cloud

    October 2nd, 2012

    If you follow Intel and the nuggets of information it releases regarding progress on its processors, you might be familiar with its DPDK (Data Plane Development Kit). The idea behind the DPDK is to enable bypass of software-based network stacks and allow access directly to the data plane, enabling a nearly zero-copy environment from network to CPU. Without getting too deep in the weeds, the ability to bypass the OS processing that forwards packets through a system is a lot like what network manufacturers have done for years – enabling direct access to hardware, which results in much higher performance. 

    This brings to mind one of the arguments for (and against) certain hypervisor implementations. Those that are paravirtualized require a modified version of the operating system, which can be off-putting to some customers. In much the same way, applications desiring to take advantage of Intel’s DPDK will need a modified version of the operating system that includes the necessary drivers and libraries.

    Not an insurmountable requirement, by any stretch of the imagination, but one that should be taken into consideration nonetheless when weighing whether the performance gains of having (more) direct access to bare metal are worth it, particularly for networking-oriented applications.

    How much gain? Errata Security claims a 100 to 1 difference, while Intel has claimed even higher gains:

    You might be skeptical at this point, especially if you’ve benchmarked a commodity Intel x86 system recently. They come nowhere near network wirespeeds. The problem here isn’t the hardware, but the software. The network stack in today’s operating systems is painfully slow, incurring thousands of clock cycles of overhead. This overhead is unnecessary. Custom drivers (like PF_RING or DPDK) incur nearly zero cycles of overhead. Even the simple overhead of 100 cycles for reading the packet from memory into the CPU cache is avoided.

    A standard Linux system that struggles at forwarding 500,000 packets-per-second can forward packets at a rate of 50 million packets-per-second using these custom drivers. Intel has the benchmarks to prove this. This is a 100 to 1 performance difference, an unbelievable number until you start testing it yourself.

    — Errata Security: Software networks: commodity x86 vs. network processors
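
    Some rough, back-of-the-envelope arithmetic puts those numbers in perspective. Assuming a single 3 GHz core (my assumption, purely for illustration; the packet rates come from the quote above), the claimed forwarding rates translate into per-packet cycle budgets like so:

    ```python
    # Rough arithmetic behind the 100:1 claim, assuming a 3 GHz core.
    CLOCK_HZ = 3.0e9

    for label, pps in (("kernel network stack", 500_000),
                       ("DPDK/PF_RING-style driver", 50_000_000)):
        print(f"{label:26s} {pps:>12,} pps  = {CLOCK_HZ / pps:>8,.0f} cycles/packet")

    # For scale: 10 GbE line rate with minimum-size (64-byte) frames is roughly
    # 14.88 million packets per second per port, so a budget of ~6,000 cycles
    # per packet cannot keep up, while ~60 cycles per packet plausibly can.
    print(f"ratio: {50_000_000 / 500_000:.0f}x")
    ```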

    The (POTENTIAL) IMPACT on CLOUD

    While the Intel DPDK is easily interpreted as a boon for SDN and network virtualization proponents as well as telecommunications, the question of how the Intel DPDK will interact with virtualization technology must be asked because of its potential impact on cloud. The premise of virtualization implies a layer of abstraction between the hardware and applications deployed within the environment, the purpose being to allow sharing of hardware. This runs counter to the premise of DPDK, which is to bypass such intervening software and allow direct access to the hardware. Cloud, which is almost universally built atop some form of virtualization technology, seems inherently incompatible, then, with Intel’s DPDK. It turns out the two can work together, with some minor modifications.

    A recent post by Scott Lowe explains.

    To help with intra-VM communication, Intel DPDK offers several benefits. First, DPDK provides optimized pass-through support. Second, DPDK offers SR-IOV support and allows L2 switching in hardware on Intel’s network interface cards (estimated to be 5-6x more performant than the soft switch). Third, DPDK provides optimized vNIC drivers for Xen, KVM, and VMware. [emphasis added]

    What Intel has been working on is repliace [sic] the VirtIO drivers with DPDK-enabled VirtIO drivers, and use DPDK to replace the Linux bridging utilities with a DPDK-enabled forwarding application. The DPDK-enabled forwarding application is a “zero copy” operation, thus reducing latency and processing load when forwarding packets. Intel is also creating shims between DPDK and Open vSwitch, so that an OVS controller can update Open vSwitch, which can then update the DPDK forwarding app to modify or manipulate forwarding tables.

    The former resolves the issue of sharing hardware (and likely produces smaller performance gains than having sole access, but improvement nonetheless), and the latter addresses what would become the next bottleneck once the former is implemented: the virtual switch.

    Ultimately, however, what such modifications do is introduce dependence on specific hardware resources in order to accommodate a very real need – performance. Not all hardware is capable of supporting DPDK – it is Intel-specific, after all – which means careful attention to hardware capabilities would be required in order to manage myriad workload types in a cloud computing environment. In a cloud environment built on the premise of commoditized and broadly applicable hardware and software, this has the effect of reversing commoditization.

    Cloud’s benefits of efficiency and lower costs are highly dependent on the ability to leverage commoditized hardware, available for repurposing across a wide variety of workload types. The introduction of hardware and images specifically designed to enable DPDK-capable services would complicate the provisioning process and threaten to decrease efficiency while increasing costs.

    Cloud could still benefit from such progress, however, if providers take advantage of DPDK-enabled network appliances to build their underlying infrastructure – their supporting foundation. Without the pressures of seemingly random provisioning requirements and with a more controlled environment, providers could certainly benefit from improved performance that might allow consolidation or higher throughput. This could become a selling point – a differentiator – for a cloud built on the premise of "bare metal access" and its performance-enhancing characteristics.

    Given the right impetus, they might also find a premium cloud service market hungry for the performance gains and, more importantly, willing to pay for them.


    Observations from Cloud Connect Chicago

    September 17th, 2012

    Last week saw the inauguration of Cloud Connect Chicago, and it was great to see both established and newer speakers taking the stage. The event felt a lot like the inaugural event in Santa Clara: more intimate, more buy-side than sell-side, and of course focused on cloud.

    Some general observations from the event:

    SDN is BUBBLING into CLOUD

    It’s not necessarily an overt message, but it’s there. SDN – or at least its core "decouple and abstract" premise – is definitely rising through the layers of cloud. Speaking with ProfitBricks, for example, showed how the assumptions we draw upon to design L2 architectures may be the most disrupted by SDN, while the L3 (IP) network architecture might remain largely untouched. While many vendors are approaching SDN with new L3 architectures and protocols, ProfitBricks has run with the idea that the same "decouple and abstract" premise of SDN that provides value up the stack can also provide significant advantages down the stack.

    Given that many of the challenges SDN is designed to address are more pronounced in cloud computing environments than in traditional data centers, this is no surprise. SDN is quickly moving up the stack in terms of hype, so expect marketing in the cloud computing demesne, at least, to start taking advantage of its somewhat nebulous definition as well.

    CLOUD CONFUSION CONTINUES

    There is still a lot of confusion attached to the word "cloud" on the buy-side, especially when prefixed by modifiers like "private," "public," and "hybrid." Customers are being inundated with self-serving definitions that, while based loosely on NIST definitions, fall outside what most experts would consider typical. Even associated terms like "elasticity," long considered a staple benefit of cloud, are being stretched thin to include processes that clearly fall outside the implied definition of "just in time" flexible capacity.

    Faster provisioning and reduced operational complexity resonated well, however, no matter how far afield the definition of cloud might have gotten. The notion of scheduled elasticity fits with these interests, as enterprises desire the flexibility of cloud as a way to address periodic (and anticipated) increases in capacity needs without maintaining and incurring the costs of an over-provisioned infrastructure.

    IDENTITY and ACCESS CONTROL

    There continues to be awareness of the issues surrounding identity and access control, particularly as it applies to SaaS, and the need to integrate such services with existing data center processes. While adoption of IaaS remains less broad, SaaS usage continues to expand, with a significant majority of customers taking advantage of SaaS in some aspect of business and operations. This is leading to an increased awareness of the potential risks and challenges of managing access to these systems, creating a desire among customers to reassert their governance.

    Anticipate the arrival of turn-key solutions in the form of cloud brokers that streamline managing identity and access control for SaaS in the near future as demand continues to escalate in the face of continued SaaS adoption.

     

    If you missed the event, you can enjoy the keynote presentations online.


    Cloud Brokers: Services versus Architecture

    September 6th, 2012

    #ccevent #Cloud brokers and the difference between choice and connectivity

    The notion of a cloud brokerage – an intermediate service that essentially compares and ultimately chooses a cloud provider based on customer-specific parameters – is not a new one. Many may recall James Urquhart‘s efforts around what he termed a "pCard" more than two years ago, an effort aimed at defining interfaces that would facilitate the brokering of services between competing clouds based on characteristics such as price, performance, and other delivery-related services.

    As we look toward a future in which federated clouds play a larger and more impactful role, we necessarily must include the concept of a cloud service broker that can intermediate on our behalf to assist in the choice of which cloud will meet an application’s needs at any given time.

    But we cannot overlook the growing adoption of hybrid clouds and the need to broker certain processes through systems over which the enterprise has control, such as identity management. The ability to broker – to intermediate – authentication and authorization for off-premise applications (SaaS) is paramount to ensuring that access to corporate data stored externally is appropriately gated through authoritative identity systems and processes.

    Doing so requires careful collaboration between enterprise and off-premise systems achieved through an architectural solution: cloud brokers.
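
    To illustrate the sort of intermediation involved, here is a deliberately minimal Python sketch in which SaaS access is gated through the enterprise’s authoritative directory, which then mints a short-lived, signed assertion for the SaaS application. Real deployments would use SAML or OAuth; the directory, shared secret, and token format below are invented for illustration.

    ```python
    # Hypothetical identity-brokering check: authorize against the corporate
    # directory, then issue a short-lived signed assertion the SaaS app can verify.
    import base64, hashlib, hmac, json, time
    from typing import Optional

    BROKER_SECRET = b"shared-with-the-saas-provider"      # assumed pre-shared key
    CORPORATE_DIRECTORY = {"alice": {"groups": ["sales"], "active": True}}

    def broker_saas_login(username: str, saas_app: str) -> Optional[str]:
        user = CORPORATE_DIRECTORY.get(username)
        if not user or not user["active"]:
            return None                                   # denied by the enterprise
        claims = {"sub": username, "aud": saas_app,
                  "groups": user["groups"], "exp": int(time.time()) + 300}
        body = base64.urlsafe_b64encode(json.dumps(claims).encode())
        sig = hmac.new(BROKER_SECRET, body, hashlib.sha256).hexdigest()
        return f"{body.decode()}.{sig}"

    token = broker_saas_login("alice", "crm.example.com")  # None means access denied
    ```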

    Such brokers are architecturally focused in nature, not service-focused, and thus serve a different market and different purpose. One facilitates choice in a federated cloud ecosystem while the other enables the connectivity required to architect a hybrid cloud ecosystem.

    Both cloud brokers and cloud service brokers will be key components in future architectures, but we should not conflate the two as they serve very different purposes despite their very similar nomenclature. 


    I’ll be presenting "Bridges, Brokers, and Gateways: Exploring Hybrid Cloud Architectural Models" at Cloud Connect Chicago next week, where we’ll explore the notion of architectural brokers (as well as bridges and gateways) in more depth.


    The Misconception about Control and the Cloud

    August 21st, 2012


    Cloud adoption appears to have reached a plateau, which means a resurgence of punditry regarding why you should make the leap, why your fears are unfounded, and why you’re simply still "not getting it" if you haven’t migrated to the cloud yet.

    And while some of these discussions will have value and advance adoption, some simply oversimplify the concerns still held by many organizations.

    Consider the immediate distillation of "control" in the cloud down to "data at rest". The bulk of these discussions with respect to control revolves around data at rest – where it is, how secure it is, how it grows. There is no discussion of data in flight, or applications, or the infrastructure necessary to provide for the delivery and reception of data to and from origination points (end-users, applications).

    Control is not just about where the data ends up, or where it resides, or who’s handling it. The driver of a car doesn’t just control where the passengers end up, he controls the entire journey – from end-to-end. He’s got control over how fast he drives, when and if he chooses to adhere to signals and signs, and which direction he goes.

    That’s control, and that’s the loss of control implicit in outsourcing your entire IT infrastructure to someone else.

    Like public transportation, it is shared and thus costs less. It has pre-defined routes (which you cannot really influence) and you don’t have any control over how you arrive at your ultimate destination. If the driver goes too slow, you’re late. You’ll still get there, but the consequences fall solely on your shoulders, not the driver’s.

    The reality is that right now cloud computing is perched on the cusp of a second generation of offerings: offerings with services that will, one hopes, put control over data in flight back into the hands of the people who are ultimately responsible and held accountable not only for the data arriving at the right destination, but for it doing so without compromising security or performance.

    In order for a larger percentage of in-house application outsourcing to come to fruition, the majority of infrastructure capabilities and functions upon which organizations rely must be available in the cloud – as services. This means careful attention to in-flight data handling from its origination (who, from what device, and from where) to its processing by the application (ensuring it is free of malicious code or malware) to its final destination. The infrastructure services necessary to prioritize data (traffic) and to cleanse and secure that data must be in place if IT is to outsource more fully to the cloud. Such services by and large do not exist today, but they will need to in order for providers to push past the plateau we appear to be reaching.


    Elasticity On a Schedule

    August 9th, 2012

    #cloud The benefits of auto-scaling when applied to enterprise applications are more about scheduled elasticity than about immediate demand

    When we talk about elasticity of applications we usually evoke images of application demand spiking up and down in rapid, unpredictable waves. Indeed, this is the scenario for which cloud and virtualization are touted as the most effective solution. But elasticity of enterprise-class applications is more like an eclipse than a supernova – it’s slow, gradual and fairly predictable.

    Let’s face it – sudden demand for most mission-critical (internal facing) applications doesn’t generally happen unless it’s the 8am rush to log in at the beginning of the day (or any well-known shift-start time). Demand rises, stays consistent throughout the day, and then drops off suddenly when everyone logs off for the day. Demand on the weekend, for most apps, is almost non-existent, with the most obvious exception being call centers operating 24×7.

    So for most enterprise applications, the lure of cloud is most certainly not going to be focused on elasticity. Or is it?

    Our inference is often that the term elasticity not only describes the scaling out and back of applications, but that such scaling is rapid and frequent. We assume that elasticity is for applications that experience constant fluctuations in demand that can best be met through the use of virtualization and cloud computing.

    But nowhere in the definition of elasticity is there a requirement that the implied fluctuations happen within a very short period of time. Indeed, the notion of elasticity is simply the ability to scale out and back, on demand. That demand may be frequent or infrequent, predictable or unpredictable. In the case of predictably infrequent elasticity, enterprises may find that cloud and virtualization models can indeed reduce costs.

    SHARING the LOAD

    Within the enterprise there are myriad applications and processes that occur on specific schedules. In the past, this has been a function of processing availability – particularly within organizations making heavy use of mainframe technology (yes, even today). The excessive load placed on shared systems – whether on mainframes or from extensive querying of master databases (think ETL and BI-related processing) – required that such processing occur after hours, when systems were either not in use or more lightly used, and thus the additional load would have a relatively minimal impact.

    These considerations do not evaporate when cloud and virtualization are introduced into the mix. ETL and BI-related processing still stresses a database to the point that applications requiring the data may be negatively impacted, which in turn reduces productivity and results in degrading business performance. These are undesirable results that must be considered, even more so in a broadly shared infrastructure model. Thus, the continuation of predictable, schedule-based processing for many applications and processes will continue in the enterprise, regardless of the operating model adopted.

    This provides an opportunity to architect systems such that scheduled elasticity is the norm. Customer service-related applications are scaled up in the morning (optimally before the rush to log in for the day) and scaled back in the evening. Resources freed at this time can then be allocated to heavy-lifting workloads such as ETL and BI-related processing, and then reassigned to business applications again in the morning.
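
    A minimal sketch of what such a schedule-driven policy might look like follows; the pool names, hours, and instance counts are illustrative assumptions, not recommendations.

    ```python
    # Scheduled elasticity: capacity shifts between pools by time of day
    # rather than reacting to sudden, unpredictable spikes.
    from datetime import time

    def desired_capacity(now: time) -> dict:
        """Return the target instance count per workload pool."""
        business_hours = time(7, 30) <= now < time(19, 0)
        if business_hours:
            return {"customer-apps": 20, "etl-bi": 2}   # morning scale-up
        return {"customer-apps": 4, "etl-bi": 18}       # overnight heavy lifting

    print(desired_capacity(time(9, 0)))    # business hours: apps scaled up
    print(desired_capacity(time(22, 0)))   # overnight: resources shift to ETL/BI
    ```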

    Sharing resources across these equally-important-to-the-business applications can reduce overall costs by allowing infrastructure to be shared across time rather than dedicating resources permanently. The availability of "extra" resources for highly intensive processing workloads may also alleviate those situations in which overnight "batch" processing runs into normal business hours, negatively impacting the network and system responsiveness required to maintain the acceptable business KPI metrics against which business users are measured.

    BOTTOM LINE: Organizations that believe the benefits of elasticity do not apply because their applications do not experience sudden spikes or are not public facing should re-evaluate their position. The benefits of cloud and virtualization can certainly apply to internal-facing business applications.
