Tuesday, 28 August 2012

Windows Azure Queues–The Complete Works

 

My last post concentrated on Service Bus Queues, and I get a lot of questions from customers about when to use Azure Queues vs. Service Bus Queues. In this post I try to establish the decision criteria that help one make a better choice between the two.

Digging deeper into Azure Queues: the expectation is that Windows Azure Queues will be a lower-cost alternative to Service Bus Queues. Windows Azure Queues, going ahead referred to as WAQ:

  • Are an asynchronous, reliable-delivery messaging construct.
  • Are highly available, durable and performance-efficient. The peak throughput a WAQ can sustain is an area of some research.
  • Deliver messages at least once.
  • Expose a REST-based interface.
  • Have no limit on the number of messages stored in a queue.
  • Give each message a TTL of up to 1 week, after which it is garbage collected.
  • Support metadata in the form of name-value pairs.
  • Cap the maximum message size at 64 KB.
  • Accept messages as binary; when read back over REST they come wrapped in XML.
  • Offer no guarantee on message ordering.
  • Offer no support for duplicate-message detection.
  • Parameters of WAQ include
    • MessageID: a GUID.
    • VisibilityTimeout: default is 30 seconds, maximum is 2 hours. The typical pattern is to read a message, process it, and then issue a delete.
    • PopReceipt: when a message is read from the queue, a visibility timeout is associated with it; the receiver reads the message, completes some processing, and may then decide to issue a delete. Each read message carries a PopReceipt, which is passed along with the MessageId when issuing the delete.

      PopReceipt, is

      • Property of CloudQueueMessage
      • Set every time a message is popped from the queue (GetMessage or GetMessages)
      • Used to identify the last consumer to pop the message
      • A valid pop receipt is required to delete a message
      • An exception is thrown if an invalid pop receipt is passed
        • PopReceipt is used in conjunction with the MessageId to issue a Delete of a message for which a visibility timeout is set. We have the following scenarios:

        If a Delete is issued within the visibility timeout, the message is deleted from the queue. The assumption is that the message has been read and the required processing has been done; term it the happy path.

        If a Delete is issued after the visibility timeout expires, this is assumed to be the exception flow (e.g. the receiver process has crashed), and the message is available in the queue again for re-processing. This failure-recovery path rarely happens, and it is there for your protection, but it can lead to a message being picked up more than once. Each message has a property, DequeueCount, that tells you how many times the message has been picked up for processing. In the example above, when receiver A first received the message the DequeueCount would be 1; when receiver B picked it up after A's failure, the DequeueCount would be 2. This becomes a strategy to detect a problem or poison message and route it to a log, repair and resubmit process.

      • A poison message is a message that continually fails to be processed correctly, usually because some data in its contents causes the processing code to fail. Since the processing fails, the message's visibility timeout expires and it reappears on the queue. The repair-and-resubmit process is sometimes a queue managed by system-management software. There is a need to check for, and set a threshold on, the DequeueCount of messages.
    • MessageTTL: specifies the time-to-live interval for the message, in seconds. The maximum time-to-live allowed is 7 days, which is also the default if this parameter is omitted. If a message is not deleted from a queue within its time-to-live, it will be garbage collected and deleted by the storage system.

Notes: It is important to note that all queue names must be lower case. The CreateIfNotExist() method checks whether the queue already exists in Windows Azure and, if it doesn't, creates it for you.
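To make the visibility timeout, PopReceipt and DequeueCount semantics above concrete, here is a toy in-memory simulation in Python. This is an illustrative sketch only; the class and method names are invented and this is not the Azure storage client.

```python
import uuid

class InMemoryQueue:
    """Toy queue mimicking WAQ visibility-timeout semantics (not the Azure SDK)."""

    def __init__(self):
        self._messages = []

    def put(self, body):
        self._messages.append({
            "id": str(uuid.uuid4()),
            "body": body,
            "invisible_until": 0.0,   # visible immediately
            "pop_receipt": None,
            "dequeue_count": 0,
        })

    def get(self, visibility_timeout=30.0, now=0.0):
        """Return (id, pop_receipt, body, dequeue_count) or None if nothing visible."""
        for m in self._messages:
            if m["invisible_until"] <= now:
                m["invisible_until"] = now + visibility_timeout
                m["pop_receipt"] = str(uuid.uuid4())  # fresh receipt on every pop
                m["dequeue_count"] += 1
                return m["id"], m["pop_receipt"], m["body"], m["dequeue_count"]
        return None

    def delete(self, message_id, pop_receipt):
        """Delete requires the current (valid) pop receipt, as with WAQ."""
        for m in self._messages:
            if m["id"] == message_id:
                if m["pop_receipt"] != pop_receipt:
                    raise ValueError("invalid pop receipt")
                self._messages.remove(m)
                return
        raise KeyError(message_id)

q = InMemoryQueue()
q.put("order-42")

# Receiver A pops the message but crashes before deleting it.
mid, receipt_a, body, count = q.get(visibility_timeout=30.0, now=0.0)
assert count == 1

# After the visibility timeout expires, receiver B sees the same message again.
mid2, receipt_b, body2, count2 = q.get(visibility_timeout=30.0, now=31.0)
assert mid2 == mid and count2 == 2

# B finishes processing and deletes with its (current) pop receipt: happy path.
q.delete(mid2, receipt_b)
```

A's stale receipt would be rejected at this point, which is exactly why the delete must carry the receipt from the most recent pop.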

 

Comparison of Azure Queues with Service Bus Queues

A good post covering this comparison can be found here: http://preps2.wordpress.com/2011/09/17/comparison-of-windows-azure-storage-queues-and-service-bus-queues/

 

Design Considerations for Azure Queues

Messages are pushed into the queue; the receiver reads each message, processes it and deletes it. The general technique for reading messages from a queue is polling. A classic queue listener with a polling mechanism may not be the optimal choice with Windows Azure queues, because the Windows Azure pricing model measures storage transactions in terms of application requests performed against the queue, regardless of whether the queue is empty or not. If the number of messages in the queue increases, "load leveling" kicks in and more receiver roles spin up. These receivers then continue to run and accrue cost.

The cost of a single queue listener using a polling mechanism

Assume a hypothetical situation in which a single queue listener constantly polls for messages in the queue, and business transaction data arrives at regular intervals. Further assume:

  • The solution is busy processing workload just 25% of the time during a standard 8-hour business day.
  • That results in 6 hours (8 hours * 75%) of “idle time” when there may not be any transactions coming through the system.
  • Furthermore, the solution will not receive any data at all during the 16 non-business hours every day.

Total idle time = 22 hours. During that time there is still dequeue work, i.e. GetMessage() being called from the polling loop, which amounts to:

22 hrs × 60 min × 60 transactions/min (polling once per second) = 79,200 transactions/day

Cost of 100,000 transactions = $0.01

The storage transactions generated by a single dequeue thread in the above scenario will add approximately 79,200 / 100,000 × $0.01 × 30 days = $0.2376/month for one queue listener in polling mode.

Architects will not plan for a single queue listener for the entire application; chances are the number of queue listeners will be high, and there will be different queues for different requirements. Assume a total of 200 queues used in an application with polling:

200 queues × $0.2376 ≈ $47.52 per month. This is the cost incurred when the solution is not performing any computation at all, just checking the queues to see if any work items are available.
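The figures above can be recomputed as plain arithmetic (the $0.01 per 100,000 transactions price is the 2012 figure quoted in this post):

```python
# Idle-polling cost of queue listeners, recomputed from first principles.
IDLE_HOURS_PER_DAY = 22      # 6 idle business hours + 16 non-business hours
POLLS_PER_SECOND = 1         # one GetMessage() per second
PRICE_PER_100K_TX = 0.01     # USD, 2012 storage-transaction pricing
DAYS_PER_MONTH = 30
QUEUE_COUNT = 200

tx_per_day = IDLE_HOURS_PER_DAY * 3600 * POLLS_PER_SECOND
assert tx_per_day == 79_200

cost_per_listener_month = tx_per_day * DAYS_PER_MONTH / 100_000 * PRICE_PER_100K_TX
print(round(cost_per_listener_month, 4))   # 0.2376 dollars/month per listener

cost_200_queues = QUEUE_COUNT * cost_per_listener_month
print(round(cost_200_queues, 2))           # 47.52 dollars/month doing no real work
```

Small in absolute terms, but it is pure waste, which is what motivates the back-off and push-based approaches below.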

 

Addressing The Polling Hell

To address the polling hell, the following techniques can be used:

  • Back-off polling: a method to lessen the number of transactions against your queue and therefore reduce the bandwidth used. A good implementation can be found here: http://www.wadewegner.com/2012/04/simple-capped-exponential-back-off-for-queues/
  • Triggering (push-based model): a listener subscribes to an event that is triggered (either by the publisher itself or by a queue service manager) whenever a message arrives on a queue. The listener in turn initiates message processing, and thus does not have to poll the queue to determine whether any new work is available. Implementing a push-based model is made easier by the introduction of internal endpoints for roles. An internal endpoint in a Windows Azure role is essentially the internal IP address automatically assigned to a role instance by the Windows Azure fabric. This IP address, along with a dynamically allocated port, creates an endpoint that is only accessible from within the hosting datacenter, with some further visibility restrictions. Once registered in the service configuration, the internal endpoint can be used for spinning up a WCF service host in order to make a communication contract accessible to the other role instances. A publish/subscribe implementation based on this is straightforward. The limitations of this approach are:
      • Internal endpoints must be defined ahead of time; they are registered in the service definition and locked down at design time. If dynamic endpoints were needed, a small registry could be implemented for the same purpose.
      • The discoverability of internal endpoints is limited to a given deployment – the role environment doesn’t have explicit knowledge of all other internal endpoints exposed by other Azure hosted services;
      • Internal endpoints are not reachable across hosted service deployments – this could render itself as a limiting factor when developing a cloud application that needs to exchange data with other cloud services deployed in a separate hosted service environment even if it’s affinitized to the same datacenter;
      • Internal endpoints are only visible within the same datacenter environment – a complex cloud solution that takes advantage of a true geo-distributed deployment model cannot rely on internal endpoints for cross-datacenter communication;
      • The event relay via internal endpoints cannot scale as the number of participants grows; internal endpoints are only useful when the number of participating role instances is limited, and with the underlying messaging pattern still being a point-to-point connection, the role instances cannot take advantage of multicast messaging via internal endpoints.

 Note: for an application that is not large-scale and spread across geo-locations, the pub/sub model can still be implemented using the above approach. The limitations hit hard in large-scale, geo-distributed applications; for those, the idea would be to go for Service Bus.

  • Look at Service Bus Queues as an alternative, after a complete cost analysis, as the pub/sub implementation on Service Bus works out of the box.
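The capped exponential back-off idea from the first bullet can be sketched in a few lines. This is an illustrative Python sketch of the policy only (the function name and defaults are invented; it is not the implementation linked above):

```python
def next_delay(current, base=1.0, factor=2.0, cap=60.0, got_message=False):
    """Capped exponential back-off for queue polling.

    On an empty poll the delay grows by `factor` up to `cap` seconds;
    as soon as a message is found, the delay snaps back to `base`.
    """
    return base if got_message else min(current * factor, cap)

# A listener that keeps finding the queue empty backs off 1s, 2s, 4s, ... to 60s.
d = 1.0
empties = []
for _ in range(8):
    empties.append(d)
    d = next_delay(d)
assert empties == [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]

# One successful GetMessage() resets the cadence to tight polling.
assert next_delay(60.0, got_message=True) == 1.0
```

With a 60-second cap, an idle listener issues about 1,440 transactions/day instead of 79,200, while still reacting within a minute when traffic resumes.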

Dynamic Scaling

Dynamic scaling is the technical capability of a given solution to adapt to fluctuating workloads by increasing and reducing working capacity and processing power at runtime. The Windows Azure platform natively supports dynamic scaling through the provisioning of a distributed computing infrastructure on which compute hours can be purchased as needed.

It is important to differentiate between the following 2 types of dynamic scaling on the Windows Azure platform:

  • Role instance scaling refers to adding and removing web or worker role instances to handle the point-in-time workload. This usually means changing the instance count in the service configuration: increasing the count causes the Windows Azure runtime to start new instances, whereas decreasing it causes running instances to shut down. Adding a new instance can take on the order of 10 minutes.
  • Process (thread) scaling refers to maintaining sufficient capacity in terms of processing threads in a given role instance by tuning the number of threads up and down depending on the current workload.

Dynamic scaling in a queue-based messaging solution calls for a combination of the following general recommendations:

  • Monitor key performance indicators including CPU utilization, queue depth, response times and message processing latency.
  • Dynamically increase or decrease the number of role instances to cope with the spikes in workload, either predictable or unpredictable.
  • Programmatically expand and trim down the number of processing threads to adapt to variable load conditions handled by a given role instance.
  • Partition and process fine-grained workloads concurrently using the Task Parallel Library in the .NET Framework 4.
  • Maintain a viable capacity in solutions with highly volatile workload in anticipation of sudden spikes to be able to handle them without the overhead of setting up additional instances.
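The first two recommendations can be combined into a simple sizing rule: pick an instance count that can drain the current backlog within a target window, bounded by a floor and a ceiling. The sketch below is a deliberately naive heuristic for illustration; all names and numbers are invented and this is not the Autoscaling Application Block's API.

```python
import math

def target_instances(queue_depth, msgs_per_instance_per_min,
                     min_instances=1, max_instances=8, drain_minutes=5):
    """Choose a role-instance count able to drain the backlog in `drain_minutes`.

    Naive illustrative heuristic: capacity is assumed linear per instance,
    and the result is clamped between a floor and a ceiling.
    """
    capacity_per_instance = msgs_per_instance_per_min * drain_minutes
    needed = math.ceil(queue_depth / capacity_per_instance) if queue_depth else min_instances
    return max(min_instances, min(needed, max_instances))

assert target_instances(0, 100) == 1          # idle: stay at the floor
assert target_instances(900, 100) == 2        # 900 msgs / 500 per instance -> 2
assert target_instances(10_000, 100) == 8     # spike: capped at the ceiling
```

In practice you would feed `queue_depth` from the monitored KPIs above and dampen the output (e.g. only scale after N consecutive readings) to avoid thrashing, given the roughly 10-minute instance start-up time.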

 

Note: To implement a dynamic scaling capability, consider the use of the Microsoft Enterprise Library Autoscaling Application Block that enables automatic scaling behavior in the solutions running on Windows Azure. The Autoscaling Application Block provides all of the functionality needed to define and monitor autoscaling in a Windows Azure application. It covers the latency impact, storage transaction costs and dynamic scale requirements.

 

Additional Consideration for Queues

HTTP 503 Server Busy on Queue Operations

At present, the scalability target for a single Windows Azure queue is constrained to 500 transactions/sec. If an application attempts to exceed this target, for example by performing queue operations from multiple role instances running hundreds of dequeue threads, it may receive an HTTP 503 "Server Busy" response from the storage service. I have found the Transient Fault Handling Application Block pretty handy as a retry mechanism: http://msdn.microsoft.com/en-us/library/hh680905(v=pandp.50).aspx
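The essence of what the Transient Fault Handling Application Block does for you can be sketched as a retry wrapper. This is a minimal, hedged sketch in Python with invented names; it is not the block's actual API:

```python
import time

class ServerBusyError(Exception):
    """Stand-in for an HTTP 503 'Server Busy' from the storage service."""

def with_retries(operation, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Retry a queue operation on throttling, backing off exponentially.

    Transient 503s are retried with growing delays; on the final attempt
    the exception is propagated to the caller.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ServerBusyError:
            if attempt == max_attempts:
                raise
            sleep(base_delay * (2 ** (attempt - 1)))

# Simulate a dequeue that is throttled twice before succeeding.
calls = {"n": 0}
def flaky_dequeue():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ServerBusyError()
    return "message-1"

assert with_retries(flaky_dequeue, sleep=lambda s: None) == "message-1"
assert calls["n"] == 3
```

The key design point, which the application block encodes as pluggable retry strategies, is distinguishing transient errors (retry) from permanent ones (fail fast).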

 


Saturday, 25 August 2012

Windows Azure Service Bus- Messaging Features

 

The Service Bus is the single most important component, be it in an enterprise integration scenario or in the cloud (which, by the way, happens to be mass-scale integration of a massive number of applications). The expectations from a Service Bus in the cloud are very many; by comparison, in an enterprise scenario the Enterprise Service Bus caters to at least the following features:

  • Messaging Services
  • Management Services
  • Security Services
  • Metadata Services
  • Mediation Services
  • Interface Service

An ESB is a messaging expert; I will not get into the history of traditional EAI, EAI brokers and MOM architectures here. Messaging is a feature that has seen significant improvement in ESBs over the past two decades. In this post I concentrate specifically on the messaging capabilities of the Windows Azure Service Bus and compare them with a standard ESB implementation. Before delving into the details of Azure Service Bus messaging, let's set the context with the messaging features of a standard ESB.

 

ESB – Messaging features – What to expect

 

The Message: a message is typically composed of three basic parts: the header, the properties and the message payload. The header is used by the messaging system and the application developer to provide information such as the destination, the reply-to destination, the message type and the message expiration time. The properties section is generally a set of name-value pairs. These properties are essentially parts of the message payload or body that get promoted to a special section of the message, so that consumers or specialized routers can apply filtering to it. The format of the message payload can vary across messaging implementations, for example plain text, binary or XML.

An ESB is a messaging expert, in that it can manage whatever type of messaging you throw at it. The types of messages potentially exchanged between the different business and support applications of a mid-sized organization can be very many, and cloud scale can be a very different playground. There is nevertheless a standard set of messaging patterns supported by an ESB, listed below:

  • Point-to-point messaging: P2P messages can also be marked as persistent or non-persistent.
  • Point-to-point request/response: the request/response messaging pattern in most ESBs can be synchronous or asynchronous in nature. In fire-and-forget mode, an application can go about its business once a message is asynchronously delivered. A variant of this is the reply-forward pattern, whereby the response to the message is sent to another destination.
  • Broadcast message
  • Broadcast request/response
  • Publish/subscribe: pub/sub is self-explanatory. A common misconception is that pub/sub is lightweight compared to point-to-point; in fact a pub/sub message can be delivered just as reliably as a point-to-point message. A message delivered on a point-to-point queue can be delivered with little additional overhead if it is not marked persistent, while a reliable pub/sub message is delivered using a combination of persistent messages and durable subscriptions. When an application registers to receive messages on a specific topic, it can specify that the subscription is durable. A durable subscription survives the failure of the subscribing client: if the intended receiver of a message becomes unavailable for any reason, the message server continues to store messages on behalf of the receiver until the receiver becomes available again.

image

  • Store and forward: an ESB provides message queuing and guaranteed-delivery semantics which ensure that "unavailable" applications get their data queued and delivered at a later time. The delivery semantics can cover a range of options, from exactly-once to at-least-once to at-most-once delivery. Messages marked as persistent utilize the store-and-forward mechanism.

image

In an ESB, store and forward should be capable of being repeated across multiple servers chained together. In this scenario each message server uses store and forward plus message acknowledgements to get the message to the next server in the chain, and each server-to-server handoff maintains the minimum reliability and QoS specified by the sender. It would be interesting to understand how Azure manages this internally; MSFT has not given out the details. This is where the idea of dynamic routing comes into play.

Transacted messages are an important aspect of messaging; in simpler words, "transactional messaging". An ESB is predominantly built around loosely coupled architecture, and the idea of producers and consumers of a message participating in one global transaction defeats the purpose of loose coupling. What is effective in the ESB scenario is the local transaction: a transaction in the context of an individual sender or an individual receiver in which multiple operations are grouped as a single unit, for example grouping multiple messages together in an all-or-nothing fashion. The transaction follows the convention of separating send and receive operations. From a sender's perspective, the messages are held by the message server until a commit command is issued, at which point the messages are sent to the receiver; in case of a rollback the messages are discarded.

image

There are specific situations where the sending or receiving side of a local transaction must be combined with an update of another transactional resource, such as a database, or with the transactional completion of workflow code. This typically involves an underlying transaction manager that coordinates the prepare, commit and rollback operations across each resource participating in the transaction. ESBs in general provide interfaces for accomplishing this, allowing a message producer or consumer to participate in a transaction with any other resource compliant with the X/Open XA two-phase commit protocol. This effectively becomes a distributed transaction.

Having covered enough on standard ESB messaging, what Azure Service Bus has to offer is next.

Azure Service Bus Messaging

Azure Service Bus consists of at least the following features. The focus of this post is Service Bus messaging.

image

 

On July 16, 2012 Microsoft released the beta of Microsoft Service Bus 1.0 for Windows Server. This release was kept tightly under wraps for several months, and my team was fortunate enough to have the opportunity to evaluate the early bits and help shape it. A separate blog post on it will be out soon.

 

The Service Bus server component mentioned above is a clear replacement for MSMQ.

 

Azure Service Bus supports the following messaging patterns. Without getting too overwhelmed by the earlier discussion of messaging types, note that there is a direct comparison of the two towards the end of this post.

At a high level, Azure Service Bus supports the following messaging patterns:

  • Relayed messaging: the Message Session Relay Protocol (MSRP), from the computer-networking world, is a protocol for transmitting a series of related messages in the context of a communications session; MSRP messages can also be transmitted via intermediaries. The relayed messaging pattern is similar to MSRP in many ways. Service Bus in Windows Azure provides a highly load-balanced relay service that supports a variety of transport protocols and WS standards, including SOAP, WS-* and even REST. The relay service supports the following messaging types:

 

  • One way messaging
  • Request/ Response
  • Point to Point
  • Publish / Subscribe scenarios
  • Bidirectional socket communication for increased point to point efficiency.

 

In the relayed messaging pattern, an on-premises service connects to the relay service through an outbound port and creates a bidirectional socket for communication, tied to a particular rendezvous address. The client can then communicate with the on-premises service by sending messages to the relay service targeting the rendezvous address; the relay service relays them to the on-premises service through the bidirectional socket already in place. The client does not need a direct connection to the on-premises service, nor is it required to know where the service resides, and the on-premises service does not need any inbound ports open on the firewall. At the code level, WCF in the .NET Framework supports relay bindings. The relay service requires the server and client components to be online at the same time, so persistent, durable messaging is not something the relay can support in its vanilla form. Looking at the Jan 2012 release of Azure, it supported only relayed messaging, which in my personal opinion was "half-baked ESB messaging". HTTP-style communications in which requests are not typically long-lived, and clients that connect only occasionally, such as browsers and mobile applications, don't fit the bill for relayed messaging.

Whether relayed messaging supports only synchronous behaviour is something that needs more discussion.

In July 2012 MSFT decided to correct the mistake with the introduction of brokered messaging.

 

Brokered Messaging

Brokered messaging is the asynchronous, temporally decoupled option for messaging. Producers (senders) and consumers (receivers) do not have to be online at the same time; the messaging infrastructure reliably stores messages until the consuming party is ready to receive them. This allows the components of a distributed application to be disconnected, and to connect whenever desired to download their messages. The core components of the Service Bus brokered infrastructure are Queues, Topics and Subscriptions. These components enable new asynchronous messaging scenarios such as:

  • Temporal decoupling
  • Publish/Subscribe.

Brokered Messaging essentially filled the gap for persistent durable messaging.

Service Bus Queues

Service Bus Queues are a decoupled messaging construct. In the Service Bus they have the following characteristics:

FIFO: delivery of messages to one or more consumers in sequenced order.

Load leveling is a standard benefit of using a queue. Since senders and receivers are decoupled, the sending and consumption strategies can be many: offline receive, or fanning out to more receivers when there are too many messages in the queue.

Note: rolling out more receiver instances on Windows Azure can take on the order of 10 minutes.

At a feature level Queue has the following functionality

ReceiveAndDelete: the simplest mode, in which Service Bus marks the message as consumed as soon as it is handed to the receiver.

PeekLock: the receive operation becomes two-stage, which makes it possible to support applications that cannot tolerate missing messages. When Service Bus receives the request, it finds the next message to be consumed, locks it to prevent other consumers from receiving it, and then returns it to the application. After the application finishes processing the message, it completes the second stage of the receive process by calling Complete on the received message, which marks the message as consumed. If the application is unable to process the message, it can call Abandon; Service Bus will then unlock the message and make it available to be received again. A timeout is associated with the PeekLock, beyond which Service Bus unlocks the message.

If a message is read and no Complete is issued, Service Bus treats it as an Abandon. Going back to the standard store-and-forward implementation, this supports at-least-once delivery.

If the scenario cannot tolerate duplicate processing, additional logic is required in the application to detect duplicates. This can be achieved using the MessageId property of the message, which remains constant across delivery attempts, and is known as exactly-once processing.
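The application-side deduplication just described can be sketched as follows. An illustrative Python sketch with invented names; a real implementation would persist the set of seen ids rather than keep it in memory:

```python
def process_exactly_once(deliveries, handler, seen=None):
    """Suppress redelivered messages by MessageId.

    `deliveries` is a stream of (message_id, body) pairs, where a pair may
    appear more than once because of at-least-once delivery; `handler` is
    invoked exactly once per distinct MessageId.
    """
    seen = set() if seen is None else seen
    for message_id, body in deliveries:
        if message_id in seen:
            continue  # redelivery after a lost Complete: skip it
        handler(body)
        seen.add(message_id)

processed = []
# "m1" is delivered twice (e.g. the first Complete never reached the broker).
process_exactly_once(
    [("m1", "create order"), ("m1", "create order"), ("m2", "ship order")],
    processed.append,
)
assert processed == ["create order", "ship order"]
```

The same idea underlies the broker-side duplicate detection feature listed later in this post; doing it in the application simply moves the id bookkeeping to your side.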

 

Topics & Subscription

Topics and subscriptions were introduced as a deliberate move to support publish/subscribe in a more structured manner. In normal queue-based communication there is a single sender and a single receiver; topics and subscriptions provide one-to-many communication in a pure pub/sub manner. They are useful for scaling to very large numbers of recipients: each published message is made available to every subscription registered with the topic. Messages are sent to a topic and delivered to one or more associated subscriptions, depending on filter rules that can be set on a per-subscription basis; a subscription can use additional filters to restrict the messages it wants to receive. Messages are sent to a topic in the same manner as to a queue, but are received from a subscription.

image

 

While the topic receives all the messages, each subscription picks up a subset of them, based on the subscription's needs. There is still a requirement to filter the messages coming down to a subscription: message volumes can be large, so filters give you another chance to apply a where clause and have more targeted messaging. The filter expression is a where clause over the message properties, based on the SQL-92 standard. An example is given below.

namespaceManager.CreateSubscription("Dashboard", new SqlFilter("StoreName = 'Store1'"));

 

image

 

Important notes


  • Filters are SQL-92 expressions; correlation filters and tagging filters are also supported.
  • Support for 2000 rules per subscription.
  • Each matched rule yields a message copy
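The "each matched rule yields a message copy" behaviour can be modelled with a toy filter evaluator. A hedged Python sketch: each rule is a plain predicate over the message's name-value properties, standing in for a SqlFilter expression, and all names are invented:

```python
def deliver_to_subscriptions(message_props, subscriptions):
    """Return subscription names receiving a copy of the message, one per matched rule.

    `subscriptions` maps a subscription name to a list of rules, where each
    rule is a predicate over the message's property dictionary.
    """
    copies = []
    for name, rules in subscriptions.items():
        for rule in rules:
            if rule(message_props):
                copies.append(name)  # each matched rule yields a message copy
    return copies

subs = {
    "Dashboard": [lambda p: p.get("StoreName") == "Store1"],
    "Audit":     [lambda p: True],  # catch-all subscription
}
assert deliver_to_subscriptions({"StoreName": "Store1"}, subs) == ["Dashboard", "Audit"]
assert deliver_to_subscriptions({"StoreName": "Store9"}, subs) == ["Audit"]
```

This also makes the compute-overhead question below concrete: every published message is evaluated against every rule of every subscription on the topic.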

What the additional overhead of subscriptions and filters is, from a compute standpoint, is something one needs to understand.



Partitioning is one more targeted-messaging construct; it adds a rule to the filter by which the incoming messages can be logically subdivided.


Example below


image



Composite Patterns of Messaging on Service Bus

CQRS: I have written about this in an earlier post. In relation to messaging it makes perfect sense; in the messaging world it is called "update/read separation".


  • Reads on partitioned stores
  • All writes through messages
  • Distribution via fan-out
  • Trades timeliness and instant feedback for robustness and scale


Diagnostics and Statistics


In the cloud world, diagnostics and statistics are pretty much at a reset, with new tools and new challenges. If one were to use messaging in Service Bus, diagnostics deserve a special mention, as messaging can be of some assistance.


The strategy for this could be to have



  • Flow diagnostic events from backend services to the diagnostic queues.
  • Vary the TTL by severity: verbose errors short-lived, fatal error reports long-lived.
  • Filter by severity or by the needs of different audiences.

image



Correlation Pattern


Sometimes there is a need to set up reply paths between a sender and a receiver: the sender needs to receive a response back on a different queue. The sender sends a correlation id along with the name of the queue (the response queue) on which it wishes to receive the response. The message arriving on the receiver's queue is picked up by an application which, after processing, sends a response to the sender's correlated response queue.


3 correlation models supported in Service Bus are



  • Message Correlation
  • Subscription Correlation
  • Session Correlation

image



N-to-1 correlation: a scenario where multiple senders send in the same correlation id or queue. This ideally means multiple senders whose responses need to go back to a single response queue.


N-to-M correlation: a scenario where multiple senders send in different correlation ids or queues. This ideally means multiple senders whose responses need to go back to multiple response queues.


Correlation in Service Bus


Message Correlation (Queues)



  • Originator sets the MessageId or CorrelationId; the Receiver copies it to the reply
  • Reply sent to an Originator-owned queue indicated by ReplyTo
  • Originator receives and dispatches on CorrelationId
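The three steps of message correlation over queues can be simulated end to end. An illustrative Python sketch with invented helper names (this is not the Service Bus API; only the CorrelationId/ReplyTo property names come from the pattern above):

```python
from collections import defaultdict

def send_request(requests, correlation_id, body, reply_to):
    """Originator stamps CorrelationId and ReplyTo on the outbound message."""
    requests.append({"CorrelationId": correlation_id,
                     "ReplyTo": reply_to,
                     "Body": body})

def service_loop(requests, queues):
    """Receiver copies CorrelationId to the reply and sends it to ReplyTo."""
    for msg in requests:
        reply = {"CorrelationId": msg["CorrelationId"],
                 "Body": msg["Body"].upper()}   # stand-in for real processing
        queues[msg["ReplyTo"]].append(reply)

queues = defaultdict(list)
requests = []

# Two originators share the receiver but each gets its own reply queue.
send_request(requests, "corr-1", "ping", "replies-A")
send_request(requests, "corr-2", "pong", "replies-B")
service_loop(requests, queues)

# Each originator dispatches on CorrelationId from its own ReplyTo queue.
assert queues["replies-A"] == [{"CorrelationId": "corr-1", "Body": "PING"}]
assert queues["replies-B"] == [{"CorrelationId": "corr-2", "Body": "PONG"}]
```

Subscription correlation follows the same shape, except the replies land on one shared topic and each originator's subscription rule matches only its own id.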

Subscription Correlation (Topics)



  • Originator sets the MessageId or CorrelationId; the Receiver copies it to the reply
  • Originator has a subscription on a shared reply topic, with a rule covering the id
  • Originator receives and dispatches on CorrelationId

Session Correlation



  • Originator sets some SessionId on the outbound session
  • Receiver reuses the SessionId for the reply session
  • Originator filters on the known SessionId using a session receiver


Additional features



  • Local Transaction support exists in Service Bus Messaging
  • Message Scheduling
  • Dead Lettering
  • Duplicate Detection
  • Prefetching


Summary


In principle, Service Bus messaging supports pretty much all messaging types via relayed or brokered messaging, and offers a lot more besides. Next post: comparing Azure Queues to Azure Service Bus Queues.


The codebase for all supported messaging on Service Bus can be found on my GitHub (still a work in progress): https://github.com/ajayso/Azure-Service-Bus---Messaging-Samples.git

Monday, 23 July 2012

What my 6-year-old using an iPad got me thinking about…….

 

As I watched my 6-year-old learn to use the new iPad in a matter of hours, I noticed how comfortable she was: she managed to download most of the applications of her liking from iTunes, and the percentage of operations she performed using a browser was down to 10%. More interestingly, the virtual keyboard was used even less. This got me thinking, specifically in the larger context of information technology and what is most likely to happen in the coming years.

1. Smart and rich applications find their way back into the mainstream: most new devices, be they phones or tablets, provide a much richer interface, and the browser will soon be out-fashioned by the RIA applications installed on them. Interesting advancements in GPUs, and glass taking center stage for display, open up the potential for rich application development; I see a lot more new frameworks coming up in this area. The new user interface is not limited to the browser, and interactive user interface design is really the need of the hour. Take a closer look at a Windows 8 tablet: most of your daily-use applications are replaced by live tiles. User interface design plays a big role on the iPad too; "intuitive, interactive and essentially easy to use" is the reason a major percentage of iPad users in the US are senior citizens.

2. Live data: with rich applications comes the need to have a lot more sensible data on the display. Data is alive; essentially it is pushed. I don't see the devices being in a position either to process or to cache that kind of data, while at the same time responsiveness is expected to be state of the art. This is where the cloud comes into play, and the MVC frameworks will have to be revisited here to a large extent. The views on the devices are data-rich and bank on a lot of business intelligence and analytics; this part is essentially cloud.

3. No keyboard, please: applications running on the device will be intelligent enough to guess the user's next request well ahead of time. Touch plays a major role in graphical and non-graphical views, and the data fetches behind these operations will be very many. The chances of the user going to the keyboard will be very limited.

4. Majority of applications in the cloud: Cloud is the way to go. The bank or the mall next door, all of it will be on the cloud in no time; I see the majority of applications moving to the cloud by 2020. Cloud geographies have to evolve; at present we see none. This may sound slightly odd, but I see cloud geographies being defined based on security needs and on business needs for data segregation.

5. Internet of things: Most of the devices likely to come into the mainstream in the coming years will have the capability of connecting to the internet (ANT, ANT+). These devices will transmit data to applications on the cloud, and can get to the point of taking instructions back from them. These instructions could be basic, or more advanced, such as running a diagnostic check and sending the data back to the applications on the cloud, which may then decide whether a technician needs to be sent on premise or a set of instructions sent to solve the problem. With the number of devices estimated to be close to 30 billion by 2020, the cloud for devices will have to be really huge... and intelligent.

6. Connected all the time, do you really need a phone? It may feel a bit like a sci-fi movie: if an individual with multiple devices is connected all the time, why would you need a mobile phone? The mobile phone becomes a miniaturized, intelligent device attached to the ear which tells you a person wants to chat or talk to you. Do you really need an expensive mobile to do that job?

 

I know a lot more is likely to change in our daily lives and in the way we do things. The above are a few of the things I think are likely to change; there can be many more. This is just light reading.

 

 

Monday, 16 July 2012

Cloud–Application Migration Experiences

Just finished a public-facing capital markets portal migration to the cloud. A quick background: the application provides advisory services in capital markets to about 100k users and is architecturally built on the Microsoft stack of ASP.NET MVC 3, Entity Framework, WCF and SQL Server. It involved the complexity of a service bus built on the ESB Toolkit for integrating with third-party applications for pure data pull, third-party components for BI and dashboarding, and a lot of third-party controls besides. Source code repository and build management was Team Foundation Server. The application evolved over a period of 3 years; from a deployment standpoint there was a DC and a fully functional DR. The DC had roughly 4 web front-end servers (inclusive of reporting), 2 application servers and 2 database servers clustered in an active/passive model.
Agile was followed, and the initial overall estimate was 18 weeks with 4 releases. The actual completion took 22 weeks, with 3 major releases and 3 minor releases.
Cloud is another disruptive technology; my heartfelt sympathies for developers, and I too am part of the same brigade. Cloud is indeed the biggest disruptive technology to have come in, and developers are at yet another unlearn-and-learn curve. This one is strangely different from the last one, the Web, where the developer had to move out of the client/server era into a 3-tier one, which got developers to learn HTML, client-side and server programming, and a whole variety of incremental education. Cloud presents a new challenge; what I'm penning down is what the journey has been, the pain areas and the blind curves.
The application we had written had been functioning for about 3 years and was n-tiered. Such applications run in the company's data centers controlled by the infrastructure folks; now the applications are hosted in an unknown location, and depending on which cloud deployment model we target, the re-architecting can vary from a complete rewrite to a simple migration. The complete application had to be re-architected piecemeal to suit the customer's needs based on feature priority.
The first choice ideally would be to move the application to PaaS (e.g. Azure or Amazon); some key considerations which need to be taken into account are:
    • Decomposing the architecture into web and worker roles: The web and worker roles are stateless and loosely coupled, which means rewriting the communication between web and worker; a rudimentary example is queues.
    • State management is an area which needs special attention. Thumb rule: everything in the cloud is stateless. There are multiple strategies for managing state in the cloud, such as a common cache.
    • Third-party controls are not welcome; we have addressed workarounds for this. Some applications which really required third-party controls (business-specific, not web) were hosted on a VM, i.e. the Persistent VM offering from Azure, which allowed us to install whatever was required and keep the state. These VMs were easily accessible to the other web and worker role VMs.
    • MVC revamp: If one is using an MVC framework, which by the way is the de facto standard for web-based applications today, the wiring between the controller and the view may need to be looked at again. The model, which is the existential surface for objects, has to be stateless. The database is no longer necessarily relational; one can still end up using a relational store, but the motto is "go NoSQL". The data store can be anything from blobs, tables (a non-relational store) or Hadoop to a relational store. This means a new object-to-cloud-store mapper framework. I have written such a framework and will soon be sharing it as open source; it works across multiple types of storage provided by Amazon, Azure and on-premise database servers. The CQRS pattern had been implemented for faster reads, and I think it helped.
    • Services layer: If the application has a well-defined services layer which serves the integration needs, there is good news: all PaaS offerings have a good enough service bus.
    • Identity management: Most PaaS platforms have well-defined authentication and authorization policies with federated choices, like authenticating against an on-premise Active Directory, LDAP, Facebook, Google and so on. The application needs only a small effort to support cloud identity management.
    • Storage: The storage options in the cloud are many, ranging from non-relational blobs, tables and Hadoop to relational SQL Azure.
    • Data migration: From on-premise SQL Server to SQL Azure was straightforward.
    • Resource accountability: A close watch on the code written is needed in terms of how much compute, memory, network and storage (CMNS) it consumes; static code analysis can assist in code cleanup. Every useless instruction which gets executed is compute, and the costing model on the cloud is based on CMNS.
    • User interface: This is a big area. Although not cloud-specific, a lot has changed here in the last 2 years: with the introduction of various device types, the tablet and the smartphone, and browser innovations from HTML 5 to Metro-style UI, the list is endless. The new mantra in UI is interactive design, and this can change the programming model considerably. The user interface was redesigned with a more interactive design to suit multiple form factors; UI needed special consideration.
    • Microsoft application blocks for Azure: the Auto-Scaling and Transient Fault Handling blocks.
    • Source code repository: We used Git, and it works exceptionally well and does the job; there was no need to go in for an expensive tool like Team Foundation Server. The Git server was tweaked to support some IDE experience; the source code for this can be shared.
    • Test and staging environments: To cut risk, both these environments were set up on Azure, and Git had to be tweaked to work with them. Performance testing for the application was kind of crazy and I still haven't got a handle on it; I'm assuming everything will work OK since it's the cloud after all, infinite compute and storage. Still working on it and will update when we close this.
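The web/worker decomposition in the first bullet above can be sketched with a simple queue. Here Python's in-process `queue.Queue` stands in for a cloud queue, and all names (`web_role_enqueue`, `worker_role_drain`, the message fields) are illustrative inventions, not part of any Azure SDK:

```python
import queue

# Stand-in for a cloud queue (e.g. Windows Azure Queue storage).
task_queue = queue.Queue()

def web_role_enqueue(order_id):
    """Web role: accept the request, enqueue the work, return immediately."""
    task_queue.put({"type": "process-order", "order_id": order_id})

def worker_role_drain():
    """Worker role: pull messages and process them, with no knowledge of
    which web role instance produced them."""
    processed = []
    while not task_queue.empty():
        msg = task_queue.get()
        processed.append(msg["order_id"])   # real processing would happen here
        task_queue.task_done()              # analogous to deleting the message
    return processed

web_role_enqueue("A-100")
web_role_enqueue("A-101")
print(worker_role_drain())  # ['A-100', 'A-101']
```

Because the only contract between the tiers is the message format, either side can be scaled or recycled independently, which is exactly what the stateless web/worker split buys you.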
I'm trying to see if I can share the artifacts and the source code for the framework as open source. Let's hope.

Thursday, 21 June 2012

Cloud Strategy and Planning Framework

Cloud is moving fast, and so are the IT companies. The cloud platform companies are churning out new features, for example Azure 2.0 and Amazon's new releases; the new functionality is tuned to industry needs.

Gartner estimates that over the next five years enterprises will spend $112 billion cumulatively on cloud services; 91% of IT professionals anticipate that in 5 years cloud will overtake on-premises computing, and over 80% of IT leaders indicate that their staff will need to develop new skills.

In the last 3 decades, billions have been invested in IT, and a good percentage of these IT applications will most likely move into the cloud. The IT services and consulting companies have by now moved past the stage of POCs and are building capacity for cloud development by way of training or by buying out smaller cloud development companies. There is a need for most IT services and consulting companies to start thinking of a good framework and proven practices for addressing cloud projects. Cloud is essentially a disruptive technology and is a part of the overall IT architecture, and I believe Enterprise Architecture should drive cloud strategy and planning.

I read a couple of articles from companies like

Aditi Technologies http://adititechnologiesblog.blogspot.in/2012/06/5-important-considerations-for.html - I found most of the articles from Aditi to be about tactical technological benefits, with no insight into the maturity of having a strategy framework for cloud.

Sapient - CLOUD COMPUTING FOR THE FINANCIAL SERVICES ... - a very half-baked strategy.

Booz Allen Hamilton - http://www.boozallen.com/consulting/transform-technology/technology-innovation/cloud-computing - They seem to have some kind of a framework, but not much data on it.

Open Group on Cloud Computing - http://www3.opengroup.org/subjectareas/cloudcomputing - Some good stuff, but not open to the public.

I'm still in the process of evaluating more companies. I see the need to highlight the importance of having a Cloud Strategy and Planning framework. TOGAF in some ways can be morphed to run large cloud engagements. The write-up below is work in progress; I'm reading, collating data and putting my own perspective on it.

I have studied the Sapient cloud framework, the CSC cloud adoption strategy and the Accenture cloud strategy framework. One thing which is categorically missed is that cloud, although a disruptive technology, is still a part of the overall IT architecture. Hence Enterprise Architecture should ideally drive the cloud strategy and planning.

The Need for Cloud Strategy & Planning Framework

A robust cloud strategy and planning (CSP) framework is needed. The CSP will help align the cloud strategy with its implementation: the corporate strategy defining the business benefits, business and IT capability analysis, and transformation plans and roadmaps, with the transformative strategy driving the execution (initiatives and projects).

The Methodology Used

One can use any Enterprise Architecture framework which has a methodology, like the TOGAF ADM. The TOGAF ADM can be taken and adapted to the cloud.

Applying TOGAF to CSP


Strategy Rationalization

The business and IT strategies are walked through to identify cloud-ready capabilities which align with the overall business goals and have good value and low risk. This is the starting process, in which an experienced consultant studies the organization's enterprise architecture and understands the business drivers, goals and dependencies. It's a hectic process.

Establish Scope and Approach:

  • Conduct the Cloud Envisioning Workshop
  • Provide overview of cloud computing
  • Define the enterprise business model for cloud computing
  • Establish project charter

Set Strategic Vision

  • Gather the IT and business strategic objectives
  • Identify strategic cloud computing patterns and technologies
  • Analyze customer feasibility and readiness
  • State strategic vision for cloud computing

Identify & Prioritize

  • Define evaluation criteria for key IT & business value drivers
  • Evaluate the capabilities based on these metrics
  • Identify ~5 high-priority capabilities for deeper analysis

Cloud Valuation

Based on the priorities identified, we evaluate the technology readiness and the risk associated with each capability.

Profile Capabilities

  • Determine current state of capability maturity leveraging IO Maturity Tools
  • Execute Risk Analysis Method with corresponding assessments and remediation steps.
  • Profile the capability asset portfolios of information, technology, and processes and analyze by architectural fit, risk and readiness

Recommend Deployment Patterns

  • Research capability proven practices and market direction
  • Define target cloud capability requirements
  • Determine optimal cloud service and deployment patterns for the capabilities based on fit, value, and risk

Business Transformation and Planning

Define the Execution Roadmap.

Define & Prioritize Opportunities

  • Completely define opportunities to include an overview, benefits, risks, assessment results, technology impacts, and project plan
  • Prioritize opportunities for detailed architecture and execution

Define Business Transformation Roadmap

  • Assess implementation risks and dependencies
  • Develop and deliver a business transformation roadmap
  • Validate with the customer and edit accordingly

 

I'm currently engaged in a cloud assignment which is helping me build this framework. I will be publishing the complete framework and more structured documentation.

 

I have taken a lot of the inputs from Mike Walker's blogs and found them very useful.

Saturday, 16 June 2012

Windows Azure–Session–Starter Series

Considering a lot of folks have asked me to host the sessions on Windows Azure, here I start with the first session. The volume is slightly low; it works fine on headphones.
It is broken down into Windows Azure Part 1 and Part 2 for size considerations. This session was done last year, in October 2011. Windows Azure 2.0 is already out and this is an older session, but it is good for building an understanding from a beginner's point of view. The subsequent sessions will be in line with Azure 2.0.
Part 1

Part 2

Wednesday, 13 June 2012

Forrester–Future of IT is not Cloud

The complete article on Future of IT is not Cloud can be found here. Below are my responses marked in blue.

Cloud computing is not the future of IT and Commoditization is, says analyst house Forrester, although the two support each other.

--> Amazon and MSFT Azure pretty much have their entire business model packaged around selling cloud services as granularly as possible. The commoditization of cloud is currently at the platform-service level; it remains to be seen how the economics play out in changing the commoditization story around larger offerings. Larger offerings like Salesforce, MS CRM and Office 365 are already in the market, and a lot of organizations have begun to see value both in low-level commoditization and in using larger offerings such as Office 365. The direction is already set for most organizations; what remains to be seen is how much more commoditization can happen. I refer to an old article on this, Cloud Computing Commoditization, which is dated 2009; unless you are living under a rock you will see how much progress has been made in this space.

Forrester says that as a result of commoditization and modernization, IT portfolios will evolve over time so that many applications will become suitable for cloud deployment, but many will not.

---> This is not a new statement; it is a line pulled out of the old reports. We all know that in the client/server to 3-tier transition era, not all applications moved to 3-tier, as there was no business justification.

James Staten, an analyst at Forrester, says in a report: "Not everything will move to the cloud as there are many business processes, data sets and workflows that require specific hardware or proprietary solutions that can't take advantage of cloud economics. For this reason we'll likely still have mainframes 20 years from now."

---> Cloud is broadly classified into 2 major areas, cloud applications and cloud platforms. It's a bold statement to make that all IT applications will move to the cloud by FY 2022. We know for sure this is a business decision, and some very large investments have been made in technology in the past 3 decades involving IP, proprietary solutions and standards; moving to the cloud should first be seen through the ROI lens, both short and long term. So I would tend to agree with James, but Forrester needs to go down into the trenches of each industry vertical and come out with hard data on IT applications and the potential percentage of conversion to cloud. That would require a lot of hard work; I'm sure there are plenty of consulting companies already equipped with that data, as this is what is going to drive their sales in the coming years.

The "Make the Cloud Enterprise Ready" report, which is part of Forrester's "Playbook on Cloud Computing", urges CIOs to "leverage cloud services today and reap the early education from doing so" to gain competitive advantage in the future.

---> I think 40-60% of CIOs are past the education phase on cloud, and a lot are experimenting to understand the ROI and long-term benefits. So Forrester is a little late here. A good gauge would be to dig into Amazon's enterprise customer data; I'm sure some interesting results would be there.

The Playbook on Cloud Computing is a framework for adopting the cloud, going into detail on the benefits and disadvantages of public and private clouds, cloud economics, and addresses "cloud washing" - the efforts by a number of vendors in branding their "business-as-usual IT services" and virtualization products as "cloud" offerings.

---> Playbooks should go beyond the virtualization story and peg the cloud platform services story; that's where the real meat is. Most organizations today use virtualization. What they really need at this point is a consulting company that walks the talk on cloud application migration, of course with hard data in terms of ROI.

The Make the Cloud Enterprise Ready report says: "Long term, enterprises will have a hybrid portfolio of cloud and non-cloud workload deployments that uses these options to optimize resource and agility requirements."

It adds: "In this future state the majority of system workloads will be cloud-resident while your own systems of record will evolve to cloud at a slower but deliberate pace. The end result will be a mixed environment managed through a decision tree and a series of workload automation systems that ensure governance and regulatory compliance across this portfolio."

---> The hybrid cloud story is quite a confusing one; find my research here: Hybrid Cloud.

The report also warns that those companies that have chosen a private cloud architecture as their main cloud strategy will not realize the savings that can be made through public cloud services or through an architecture that combines both public and private cloud IT capabilities - usually known as a hybrid cloud architecture.

---> Interesting, but I don't agree: private cloud is the initial strategy for most organizations moving into the cloud, from which they eventually mature into public cloud. My research on private cloud can be found here: Architecting the Private Cloud.

My parting words to Antony Sawas, author of the Forrester article: we need factual data to support the text.

Saturday, 5 May 2012

PaaS (Platform as a Service)–The Choice for New Applications on Cloud

PaaS, or Platform as a Service, as a concept has been well received; however, one really needs to understand when it is likely to hit the mainstream. In this post I will start with the basics of PaaS and IaaS, dig deeper into PaaS, add notes on the Windows Azure PaaS programming model, and lastly look at what the roadmap of Windows Azure really looks like. As usual, a disclaimer: this post is my personal view; I don't write for MSFT. A humble request to the readers: I would really love some feedback. Happy reading...

Cloud platform technologies are broadly divided into 2 categories, PaaS and IaaS. Amazon Web Services (AWS) Elastic Compute Cloud (EC2) first hit the market in the IaaS segment. PaaS, we are given to believe, is expected to hit the mainstream soon; the question is when, and with the new developments I see the timeline just stretching.

The key point is that IaaS is dominant in the market, with about 10 times the market share of PaaS (courtesy: Gartner Inc.). It sounds a little disruptive, but Azure is adding true IaaS support by the end of this year.

If we look into Windows Azure today, which is purely PaaS, what exists as of now in the Azure platform is:

  • Web/worker roles
  • Persistent VM roles, expected to hit the market later this year (VM roles already exist today and are not very useful; the Persistent VM role is true IaaS functionality)
  • Web Sites, expected to hit the market later this year

 

Getting Definitions right….

Understanding IaaS: let's understand IaaS through a scenario.


As an example of IaaS, a developer running a multi-tier application who has to deploy it on the cloud would go through the following steps:

  • Choose a pre-installed VM which includes the OS and the database
  • Choose a pre-installed VM which includes the OS and application support such as IIS
  • Provision the database, create the tables and add data
  • Install the application
  • Configure the load balancer
  • From time to time, manage the VMs and DBMS from a patch-management point of view

 

Understanding PaaS

If one needs to deploy the same application on a PaaS platform, it would look somewhat like the below. The PaaS platform comes pre-installed with the database, application runtime and load balancer.


The steps involved in deploying the application are only 2

    • Provision database and create the tables and add data
    • Deploy the application

From the above scenarios, PaaS seems much simpler, and this simplicity will drive the usage of PaaS in the future.

Benefits of PaaS 

  • PaaS is faster
    • Reason: there is less work for developers to do
    • Benefit: applications can go from idea to availability more quickly
  • PaaS is cheaper
    • Reason: there's less administrative work to do
    • Benefit: organizations spend less supporting applications
  • PaaS is lower risk
    • Reason: the platform provides so much predefined that the window for error is reduced
    • Benefit: creating and running applications gets more reliable

* With all these benefits, IaaS is still 10 times more popular; the question is how come? The answer is fairly complex, and I will explain it in the remainder of this post.

Drawbacks of PaaS

  • Unfamiliar for developers
    • It is harder to adopt because they must learn the PaaS platform
  • Developers have less control
    • They must work within the constraints of the PaaS technology. Each PaaS technology is different from the others; Azure and AWS, for instance, are quite different. There is no standardization, so moving across PaaS platforms can become very difficult
  • PaaS isn't identical to an existing on-premise environment
    • This can raise fears of vendor lock-in; for example, Salesforce.com's PaaS is completely different, and building an application on it can mean being married to it for life
    • Moving an existing on-premise application to PaaS can be hard; there can be a considerable amount of rewriting involved
  • PaaS supports fewer useful scenarios than IaaS. IaaS in its current form is much more flexible in allowing on-premise applications to move to the cloud. Let's take a quick scenario-level comparison between PaaS and IaaS.

 

Scenario                                             IaaS   PaaS
Running new cloud-native applications                Yes    Yes
High-performance computing and big data              Yes    Probably
Running a standard database                          Yes    No
VMs for a dev/test lab                               Yes    No
Running existing web apps/sites                      Yes    Maybe
Running standard packaged apps                       Yes    No
Virtual data center (VMs for on-demand use)          Yes    No
Disaster recovery similar to the on-premises world   Yes    No
  • Running new cloud-native applications works fine on PaaS, as long as one does not have issues with vendor lock-in.
  • HPC and big data are very apparent in the IaaS world; PaaS is still getting there, and again, moving an existing on-premise HPC workload to PaaS may not be possible.
  • Running a standard database such as SQL Server or Oracle is not supported by PaaS at present.
  • VMs for a dev/test lab are not possible on PaaS.
  • Running existing web apps/sites on PaaS is not possible as of today.
  • Running standard packaged applications such as SAP or SharePoint on PaaS is not possible.
  • A virtual data center, a fantastic offering from IaaS, is not possible on PaaS.
  • Disaster recovery: IaaS can be a good foundation which replicates the on-premise world in the cloud; PaaS, however, cannot do that.

IaaS addresses a lot more scenarios than PaaS. On the other hand, there is still an argument to be had: cost of operation vs. abstraction.

Cost of Operation Vs. Abstraction

From a cost-of-operation standpoint, physical machines are the most costly and the lowest level of abstraction. Then came virtual machines, which brought the cost down and raised the level of abstraction. Subsequent to this came IaaS, which reduced the cost of operation further and increased the abstraction further, and finally PaaS, which reduces cost and raises abstraction further still. How long it will take enterprises to move to PaaS has no definite answer.

 

Benefits of PaaS – A Closer Look

Delving into the benefits of PaaS, the platform on which the conclusions are drawn is Windows Azure. We look at the following key parameters:

  • Application Design
  • Application Development
  • Application Test
  • Application Deployment
  • Storage
  • Administration & Management

 

Application Design

  • The starting point for application design on PaaS is at a much higher level; from a design point of view there are fewer things to do.
  • Virtualized images, which are important in IaaS, are not a concern in PaaS, so in a way one need not look at security in as much depth when designing for PaaS.
  • Designing for redundancy at the VM level is not required, as PaaS manages it internally.

 

Application Development

  • PaaS provides a lot more services than IaaS, so a developer needs to write less code.
  • PaaS hides most of the configuration-related work, so the developer has very little to do. In scenarios where teams work globally, integration problems stemming from diverse environments are reduced, as there is very little to configure and the environment is one (Azure).

 

Application Testing

  • As there is less code to write, there is correspondingly less code to test
  • Azure provides single environment to test
    • Teams don’t need their own test platform
    • Test teams don’t need to understand and track configuration changes

Application Deployment

  • One key thing in PaaS is that the developer gives the tested code to the PaaS platform (assuming role-level segregation) and PaaS is responsible for deploying it, so the deployment timeline comes down; in contrast, IaaS deployment is the same as on-premise deployment.
  • Another important feature of PaaS is in-place update without downtime: updated applications can be deployed in place without any downtime. Again, this is a platform feature.
  • Caching and storage are built-in features of PaaS; developers can use them in their code without really bothering about setup or configuration details.

Storage

  • Considering one is using cloud storage, there is zero administration.
  • HA comes automatically.
  • Data is replicated automatically, so doing backups solely for failure recovery becomes less necessary.

 

Administration and Management

  • No need for administrators
  • No need for a management team

 

* In my next post I will be publishing a comparison of actual data on the timeline for building an on-premise application vs. one on a PaaS platform, and the complexities associated with each.

 

 Getting Into Windows Azure Programming Model

Why is there a need to create a new programming model?

The PaaS platform comes with a lot of pre-canned features, and in order to use it effectively one has to follow a certain discipline, which eventually amounts to a new programming model.

PaaS sets some ground rules; they are:

  • Role segregation: PaaS ideally segregates applications into roles, e.g. web role and worker role
    • Web role: accepts requests from users (synonymous with IIS)
    • Worker role: runs background code
  • Multiple instances: A PaaS application runs multiple instances of each role. Azure's availability SLA assumes at least 2 instances per role to manage HA; it is not mandatory to have 2 instances of each role, but then one functions without HA.
  • Application behavior: If one of the role instances hosting the application fails, the application should still behave correctly; applications have to survive the failure of any instance. This is a hard rule. What does it mean?
    • Storage must be external to the web/worker role instance. An instance shouldn't store data locally; it should use SQL Azure, tables or blobs to store state. Most of us may think of this in terms of components being stateless, though "stateless" is a confusing term.
    • Interaction between roles should be generic: in other words, a web/worker role should not care which instance of another role it interacts with. For example, a web role instance might open a TCP/IP connection to a specific worker role instance and hope that it continues to live; instead, the basic premise is that communication across roles also needs to be loosely coupled, and expecting the web role to connect to the same worker role next time is not appropriate, because that worker role may have been recycled and all its state lost. Go with the basic assumption that any role can fail at any time; that's the way the PaaS platform wants you to build.
    • No sticky sessions in PaaS: a client shouldn't assume that all of its requests will be handled by the same web role instance.
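The "storage external to the instance" and "no sticky sessions" rules above can be illustrated with a minimal sketch. The dictionary stands in for an external store such as SQL Azure or table storage, and all names (`WebRoleInstance`, `add_to_cart`) are invented for illustration, not a real Azure API:

```python
# Stand-in for an external store (SQL Azure, tables, or a shared cache).
external_state = {}

class WebRoleInstance:
    """A role instance that keeps no state of its own between requests."""
    def __init__(self, name):
        self.name = name

    def add_to_cart(self, session_id, item):
        # All session state lives in the external store, keyed by session id,
        # so the instance itself can fail or be recycled at any time.
        cart = external_state.setdefault(session_id, [])
        cart.append(item)
        return len(cart)

# No sticky sessions: consecutive requests land on different instances,
# yet the session survives because its state lives outside any instance.
instance_0 = WebRoleInstance("IN_0")
instance_1 = WebRoleInstance("IN_1")
instance_0.add_to_cart("sess-42", "book")
print(instance_1.add_to_cart("sess-42", "pen"))  # 2
```

If the cart had been stored in a field on `instance_0`, the second request would have seen an empty cart; externalizing the state is what makes any-instance routing safe.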

There are constraints on how you build an application which needs to run on PaaS, and there is a rationale for why these constraints exist.

Fabric Controller – A Background

Most PaaS implementations have a component called the Fabric Controller, which owns all the machines in a particular data center.

  • It creates and monitors role instances on those machines.
  • It starts new instances when a new application is deployed, when a running application instance fails, or when it needs to update the system software in an instance's virtual or physical machine.
  • The FC is smart enough not to place multiple instances of the same role of an application on the same physical machine.

Fabric Controller 101….

Lets say we have a set of computers to be exact 40 of them each with 4 cores. We have a total of 160 cores at our disposal. There is a need to run a variety of applications on these cores. So architecturally speaking I would need a central software which we call is a Controller and I would need Agents installed on all the computers.

1. An application run request comes to the Controller. The Controller keeps a complete inventory of which computer, and which core, is assigned to which application.

2. The Controller finds an appropriate computer and passes the application binaries to its Agent; the computer in turn runs virtual instances of Windows Server 2008.

3. The Agent picks one of the virtual instances and hands it the binaries.

4. The binaries are scanned for the type of role; if it is a Web role, they are copied to c:\inetpub\wwwroot\, a virtual directory and application are created, and the endpoint is sent back to the Agent.

5. The Agent in turn sends the physical endpoint to the Controller.

6. The Controller registers the endpoint in some kind of registry; the logical endpoint is what is given to the end user.

7. The FC can kill any running instance at any point in time.
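The bookkeeping in the steps above can be sketched in a few lines. Everything here — the class and method names, the endpoint format — is invented for illustration and is not the real Fabric Controller API:

```python
# Toy sketch of the Controller's inventory and placement logic (steps 1-6).
# All names here are hypothetical, not Azure's actual implementation.

class Controller:
    def __init__(self, machines, cores_per_machine):
        # inventory: machine -> number of free cores (step 1)
        self.free = {m: cores_per_machine for m in machines}
        self.assignments = {}   # app -> (machine, cores)
        self.endpoints = {}     # app -> physical endpoint registry (step 6)

    def deploy(self, app, cores_needed):
        # find a machine with enough spare cores (step 2)
        for machine, free in self.free.items():
            if free >= cores_needed:
                self.free[machine] -= cores_needed
                self.assignments[app] = (machine, cores_needed)
                # the Agent would copy binaries to a VM, start the role,
                # and report back a physical endpoint (steps 3-5)
                self.endpoints[app] = f"{machine}:80"
                return self.endpoints[app]
        raise RuntimeError("no capacity")

controller = Controller([f"node{i}" for i in range(40)], cores_per_machine=4)
print(controller.deploy("webrole-A", cores_needed=2))  # prints node0:80
```

The point of the sketch is that placement is entirely the Controller's decision, which is why applications cannot assume anything about which machine they land on.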

Somewhere in this description one will also realize there is a service bus, initially called the Internet Service Bus.

 

Microsoft’s Fabric Controller

Microsoft's data centers store all the data of Windows Azure Storage and run all Windows Azure applications. The Windows Azure Fabric Controller manages the set of machines dedicated to Windows Azure and the software that runs in the Microsoft data center. It is itself a distributed application, replicated across a group of machines, and it owns the resources in its environment: computers, load balancers, switches, and so on. The Fabric Controller communicates with a fabric agent on each machine and keeps track of every Windows Azure application in the fabric.

[Figure: the Fabric Controller and the machines it manages]

This lets the Windows Azure Fabric Controller perform useful activities: monitoring all running applications, deciding where a new application will run, and selecting the physical server so that hardware is utilized optimally. The FC achieves this using the configuration information uploaded with each Windows Azure application — an XML file that describes the application's roles, including the number of virtual machines to be created for each.
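As an illustration, the per-application configuration the FC consumes looks roughly like this (a Windows Azure service configuration file, .cscfg; the service and role names here are placeholders):

```xml
<ServiceConfiguration serviceName="MyCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- the FC reads the instance counts to decide how many VMs to create -->
  <Role name="WebRole1">
    <Instances count="3" />
  </Role>
  <Role name="WorkerRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```

Scaling out is then a configuration change — bump the `count` — rather than provisioning hardware.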

OpenStack's compute fabric controller, for comparison, is called Nova.

Windows Azure runs on the order of a million cores, and I'm assuming the FC itself runs as server farms, both local and distributed.

 

Interacting with the Operating System

In PaaS your code never interacts with the operating system directly; the FC owns the OS and updates each instance's OS when necessary. Any change you make, including configuration changes, must therefore be reapplied each time an instance starts. So what does one do when there is a requirement for software that isn't already there at the platform level? This can be done in more than one way. Say you need Telerik support on your Web role: you need to install it every time the role starts up. If there are too many things to install, the time to get started will be too long and this approach may not be feasible.
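One common way to do such per-start installs, sketched here under the assumption of the Web/Worker role model: declare a startup task in the service definition (.csdef), so the extra software is installed each time the role instance starts. The file and command names are placeholders:

```xml
<WebRole name="WebRole1">
  <Startup>
    <!-- install.cmd, shipped inside the package, would silently install
         the extra software (e.g. the Telerik assemblies) on each start -->
    <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WebRole>
```

Because the task runs on every (re)start, it must be idempotent and quick; a long install directly inflates the "time to get started" mentioned above.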

For this scenario we can use the current VM role provided by Azure, where the developer supplies the image. But any changes to the VM are lost at every restart, which is a problem; hence MSFT will be introducing a persistent VM role where the state is stored in a blob.

Summarizing – the PaaS Programming Model

  • Applications are more available and cheaper to run on PaaS
  • What it offers
    • Protection against hardware failures
    • Protection against software failures
    • No-downtime application updates
      • With a single-step update called the VIP swap
      • With a rolling update using update domains
    • No-downtime system software updates
    • No administrative effort
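The rolling-update idea above can be shown with a toy sketch; the two-domain layout, instance names, and function signature are invented for illustration and are not Azure internals:

```python
# Toy sketch of a rolling update across update domains: only one
# domain is taken offline at a time, so the others keep serving.

def rolling_update(domains, upgrade):
    """Apply `upgrade` to every instance, one update domain at a time."""
    for domain, instances in sorted(domains.items()):
        # every instance outside `domain` stays online during this pass
        online = [i for d, lst in domains.items() if d != domain for i in lst]
        assert online, "at least one other domain must stay online"
        for inst in instances:
            upgrade(inst)  # take down, update, bring back

domains = {0: ["web_0", "web_1"], 1: ["web_2", "web_3"]}
updated = []
rolling_update(domains, updated.append)
print(updated)  # ['web_0', 'web_1', 'web_2', 'web_3']
```

This is why a role needs at least two instances to get the availability guarantees: with a single instance there is no "other domain" to keep serving during an update.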

Moving Applications to Windows Azure PaaS

  • An ASP.NET application with multiple load-balanced instances that share state stored in SQL Server
    • An easy move
    • A perfect fit for a PaaS platform
  • An ASP.NET application that runs multiple instances that maintain per-instance state and rely on sticky sessions
    • Requires some work
  • A client accessing WCF services running in a middle tier
    • If the services don't maintain per-client state between calls, an easy move
    • Otherwise some redesign effort is required
  • An application with a single instance running on Windows Server that maintains state on its own machine
    • Some redesign needed
    • This application might run well in a persistent VM role.

 

Innovative business idea: writing a migration tool for on-premises Windows applications to Azure….

Introducing Web Sites in Azure

PaaS & IaaS are cloud platform technologies. Cloud computing and hosting, which were two different worlds a few years ago, are no longer separate. Customers can now buy a wide range of platform offerings from various service providers, including IaaS and PaaS, or buy hosting servers by the month.

Cloud categorization has predominantly been understood as IaaS, PaaS & SaaS; a cleaner way to look at it is SaaS, IaaS, cloud platforms like Azure & AWS, and private cloud.

Hosting - Common Technology options today

[Figure: common technology options for hosting today]

Difference Between Hosting to Cloud Computing

[Figure: differences between hosting and cloud computing]

Hosting & Cloud Computing- Categorizing options

[Figure: categorizing hosting and cloud computing options]

 

Azure is likely to offer monthly pricing for the persistent VM role & Web Sites, which are actually part of Windows Azure. Microsoft will also make the Web Sites software available to service providers.

Windows Azure Web Sites provides shared hosting for web sites; applications can also access other Windows Azure roles.

Web Sites differ from Web/Worker Roles on the following accounts:

  • Web Sites provides a standard IIS web environment and supports sticky sessions; Web Roles are stateless, low-admin applications.
  • Web Sites will help run existing web applications unchanged on Azure, whereas a Web Role mandates changes depending on how the application is written.
  • Web Sites are shared on the same virtual instance; in contrast, a Web Role is dedicated to a virtual instance.
  • Web Sites are best suited for new and existing small-to-medium web sites/apps; Web Roles, on the contrary, are meant for large cloud apps.
  • Application deployment for Web Sites is like creating a new site on an existing VM; for a Web Role it's a new VM.
  • Web Sites allow deployment of updates without downtime, same as Web Roles.

 

Web Sites in Azure will help bring customers onto Azure with less effort; this is a well-thought-out strategy for the long term.

Windows Azure provides multiple choices for the customer to move to the cloud, and it is worth the effort in terms of reduced costs.

 

Tuesday, 17 April 2012

Amazon–Microsoft SharePoint on AWS–Reference Architecture

Amazon – AWS seems to have caught the imagination of a lot of folks lately with its "increasing love for Microsoft products". AWS provides a complete set of services and tools for deploying Windows workloads, including Microsoft SharePoint. A satirical comment? Apparently not: just new out of the Amazon yard is "Microsoft SharePoint on AWS – Reference Architecture".

AWS & MSFT have partnered to enable customers to deploy enterprise-class workloads involving Windows Server and Microsoft SQL Server on a pay-as-you-go, on-demand elastic infrastructure, thereby eliminating the capital cost for server hardware and greatly reducing the provisioning time required to create or extend a SharePoint Server farm. The on-demand elastic infrastructure on Amazon is somewhat of an icing on the cake, and one needs to really understand how it can help. The pay-as-you-go model is interesting; how does it differ from Microsoft SharePoint Online?

Amazon seems to have used SharePoint for its own corporate intranet; it is really interesting to see that the "eat your own dog food before you sell it" mantra really may work.