Sunday, 14 July 2013

Big Data – Back to the Basics

 

The Big Data landscape is forever changing. While studying what is in the market and trying to get a real handle on Big Data, I found that every two weeks a new name appears in the landscape; on the flip side, there are names that vanish just as quickly. So it is worth getting a basic understanding of Big Data. In this post I don't talk about a specific technology; I'm just trying to get the understanding right. Big Data is broadly classified around three areas:

  • Batch
  • Interactive
  • Real-time (stream) processing

One real challenge is understanding how the Big Data space is classified. This is what the landscape looked like as of early January 2013; it has already changed since then.

image

Broadly, this gives an idea of all the different things in the Big Data landscape. The only way to really understand the space is through the following high-level concepts.

Batch Processing

Large amounts of data need to be processed fairly quickly. This is most typically seen in the world of Hadoop, or HDInsight on Windows Azure. Essentially, what does it entail?

  • Data is spread over n disks on each of the nodes, with a distributed cluster to process that data.
  • The volume of data generally ranges from terabytes to petabytes.
  • The primary programming model used is MapReduce: map operations are farmed out to each machine, and the results are then aggregated using the reduce function.

The map and reduce functions typically have to be written by a developer. The original project was Google's MapReduce; currently there are two major open source projects, Hadoop and Spark. Spark provides primitives for in-memory cluster computing: your job can load data into memory and query it repeatedly, much more quickly than with disk-based systems like Hadoop MapReduce.
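To make the model concrete, here is a minimal word-count sketch in C#. It uses LINQ on a single machine as a stand-in for the distributed runtime; only the map/shuffle/reduce shape is the point, and all names are illustrative.

using System;
using System.Collections.Generic;
using System.Linq;

class WordCount
{
    // Map: emit a (word, 1) pair for every word in a line.
    static IEnumerable<KeyValuePair<string, int>> Map(string line)
    {
        return line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                   .Select(w => new KeyValuePair<string, int>(w.ToLowerInvariant(), 1));
    }

    // Reduce: aggregate all the counts emitted for one word.
    static int Reduce(IEnumerable<int> counts)
    {
        return counts.Sum();
    }

    static void Main()
    {
        string[] lines = { "the quick brown fox", "the lazy dog" };

        var result = lines
            .SelectMany(Map)                    // map phase, parallelisable per line
            .GroupBy(p => p.Key, p => p.Value)  // shuffle: group emitted values by key
            .Select(g => new { Word = g.Key, Count = Reduce(g) });

        foreach (var r in result)
            Console.WriteLine("{0}: {1}", r.Word, r.Count);
    }
}

In Hadoop the same two functions would run on different machines, with the framework handling the shuffle between them.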

Interactive Analysis

A large set of data needs to be analysed interactively. Two techniques are generally employed here. The first is column-based databases with sequentially indexed data, which are capable of doing table scans very quickly; the second is keeping as much data as possible in an in-memory cache. These are pure-play interactive platforms that can analyse large sets of data at very low latency. One good example is Palantir's Project Horizon, built mainly for interactive analysis of big data for the US government. Below is a video which explains it in more depth. What seems to be important in these kinds of implementations:

  • Data is never duplicated.
  • Compact in-memory representation: the in-memory footprint needs to be really small, on the order of 16 GB. Compression must be lightweight; dictionary and prefix-based compression and localized block-based schemes are effective.
  • The analysis functionality is not business specific; it is more like "analyse any kind of data".
  • Partitioning of processing: a shared-nothing/sharded architecture.
  • Partition IDs for objects are a good idea, but sending half a billion partition IDs from the client to the server is not. Another way is to group object IDs into subsets using the same hash and use the hash for the query (see the sketch after this list).
  • Other options for interactive analysis are Drill, Shark, Impala and HBase; the space originally started with Google Dremel.
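A rough sketch of that hashing idea in C#, assuming nothing about any particular engine: client and server share the hash, so the client ships a handful of partition numbers instead of half a billion IDs.

using System;
using System.Collections.Generic;
using System.Linq;

class PartitionRouter
{
    private readonly int partitionCount;

    public PartitionRouter(int partitionCount)
    {
        this.partitionCount = partitionCount;
    }

    // The same hash on client and server: an object ID always lands in the same partition.
    public int PartitionFor(long objectId)
    {
        return (int)((uint)objectId.GetHashCode() % (uint)partitionCount);
    }

    // Ship only the distinct partitions the requested IDs hash into.
    public IEnumerable<int> PartitionsFor(IEnumerable<long> objectIds)
    {
        return objectIds.Select(PartitionFor).Distinct();
    }

    static void Main()
    {
        var router = new PartitionRouter(64);
        foreach (int p in router.PartitionsFor(new long[] { 17, 42, 1000003 }))
            Console.WriteLine("query partition {0}", p);   // the server scans only these
    }
}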

<Video on Palantir’s Interactive Analysis Platform – Project Horizon>

Stream Processing

Hadoop's batch-oriented processing is sufficient for many use cases, especially where the frequency of data reporting doesn't need to be up-to-the-minute. However, batch processing isn't always adequate, particularly when serving online needs such as mobile and web clients, or markets with real-time changing conditions such as finance and advertising.

The real-time use case is an obvious one. If you need to respond or be warned in real time or near real time, for example to security breaches or a service-impacting event on a VoIP or video call, the high initial latency of batch-oriented data stores such as Hadoop is not sufficient.

Moreover, the data is not valuable without analysis. In a typical real-time scenario, data is fed in from multiple sources, and analysing this data on the fly is a business requirement in multiple industry segments.

Streaming Big Data analytics needs to address two areas. First, the obvious use case: monitoring across all input data streams for business exceptions in real time. This is a given. But perhaps more importantly, much of the data held in Big Data repositories is of little or no business value, and will never end up in a management report. Sensor networks, IP telecommunications networks, even data centre log file processing are all examples where a vast amount of 'business as usual' data is generated. It is therefore important to understand what is being stored, and only persist what is important (which admittedly, in some cases, may be everything). For many applications, streaming data can be filtered and aggregated prior to storing, significantly reducing the Big Data burden and significantly enhancing the business value of the stored data; a small sketch of this follows below.
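Here is a minimal filter-and-aggregate sketch in C#, assuming a simple sensor feed; the type and threshold are illustrative and not tied to any streaming product.

using System;
using System.Collections.Generic;

class SensorReading
{
    public string SensorId;
    public double Value;
}

class StreamAggregator
{
    private readonly Dictionary<string, List<double>> window =
        new Dictionary<string, List<double>>();

    // Filter: drop business-as-usual readings; keep only what is worth storing.
    public void OnReading(SensorReading r, double threshold)
    {
        if (r.Value < threshold) return;

        List<double> values;
        if (!window.TryGetValue(r.SensorId, out values))
            window[r.SensorId] = values = new List<double>();
        values.Add(r.Value);
    }

    // Aggregate: persist one average per sensor per window, not every reading.
    public void Flush(Action<string, double> persist)
    {
        foreach (var kv in window)
        {
            double sum = 0;
            foreach (var v in kv.Value) sum += v;
            persist(kv.Key, sum / kv.Value.Count);
        }
        window.Clear();
    }
}

Called on a timer, Flush turns thousands of raw readings into a few aggregate rows before anything hits storage.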

Stream processing was popularized by a project called Storm at Twitter; the need there was, first, to store data like images and feeds, and second, to do aggregation very quickly.

Storm and Kafka (both in the Apache ecosystem) are some of the stream processing platforms.

Quick Comparison Sheet

image

The No SQL Paradigm

The relational database, built on Codd's model, came into existence in the 70s, championed by IBM, with multiple query language variants. After that:

In the 80s we saw a lot of applications being built, and speed became important. Systems such as Ingres came up with the idea of saving part of the big database in a separate small area, which they termed an index, and which improved performance.

Then along came the web, which changed the dynamics of the database; this is where the concept of NoSQL came from. NoSQL means Not Only SQL.

The focus of relational database design is on "how can we store" the data without duplication; very little focus is given to how we use it. These days the focus is more on "how do I use this data", obviously with performance and scalability at the centre of the conversation. NoSQL is focused on "how do I use this data": for example, is the data going to be used for a job queue, a shopping cart, a CMS or some other usage scenario? Below is the JSON of a shopping cart, which keeps the id, user id, line items and shipping address all in one place, because that is how it actually gets used.

{
  id: 3,
  user_id: 25,
  line_items: [
    { sku: '123', price: 1000, name: 'Nunemaker Autograph' },
    { sku: '124', price: 1000, name: 'Banker Autograph' }
  ],
  shipping_address: {
    street: '123 Some St.',
    city: 'South Bend',
    state: 'IN',
    zip: '11216'
  },
  subtotal: 2000,
  tax: 140,
  total: 2140
}

It is best to keep all the related data in one place, because that is the way it is going to be used. In the relational world the emphasis is on how to store and separate the data, whereas in the NoSQL world it is kept together and usage becomes far simpler; the sketch below shows the whole cart rehydrated in a single read.
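A small C# sketch of that usage pattern, assuming the cart JSON above sits in a file and using Json.NET for deserialization; the class shapes are illustrative.

using System;
using System.Collections.Generic;
using System.IO;
using Newtonsoft.Json;

class LineItem { public string sku; public int price; public string name; }
class Address  { public string street, city, state, zip; }

class Cart
{
    public int id;
    public int user_id;
    public List<LineItem> line_items;
    public Address shipping_address;
    public int subtotal, tax, total;
}

class Program
{
    static void Main()
    {
        // One read brings back the whole aggregate; no joins across tables.
        string json = File.ReadAllText("cart.json");
        Cart cart = JsonConvert.DeserializeObject<Cart>(json);
        Console.WriteLine("{0} items, total {1}", cart.line_items.Count, cart.total);
    }
}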

NoSQL is pro-data: it is about how you use the data. The debate of SQL vs. NoSQL is never really about scalability as such, but when you do get to the point of scaling, it is far easier to scale in the NoSQL world.

NoSQL is analogous to OLTP.

In the coming sections I will concentrate on some of the important interactive analysis and streaming platforms.

HBase - Interactive Platform

HBase is used by Facebook: all the messages one sends on Facebook actually go through HBase. It is all about high-volume, super-fast inserts, and it is also good at volatile reads. HBase was designed to be a highly transactional system, owing to the fact that the data is pretty much in memory.

HBase is an in-memory, column-store database. This has to be understood properly: a column store is not the same as a column in an RDBMS. It is a very efficient insert/write engine, and the definition of 'database' for HBase is also different.

image

HBase relies on Hadoop for its persistent storage, which pretty much explains the rest of the figure.

ZooKeeper is a distributed coordination service; it keeps track of all the HRegionServers and makes sure whatever is written into memory is also written to HDFS. By design, HDFS keeps three replicas, so if you lose a node you already have copies of the data.

Cassandra - Interactive Platform

Cassandra is very similar to HBase in terms of functionality. It has a SQL-like query language called CQL. Both HBase and Cassandra are very good and very fast at writes, and both are memory-oriented.

Facebook initially started with Cassandra and then moved on to HBase; the real reason for the move is not very clear.

Drill

Drill is an Apache incubation project inspired by Dremel, designed to scale to 10,000 servers and query petabytes in minutes. A traditional petabyte MapReduce job takes hours today; Drill apparently takes minutes, and in some cases seconds. It is an open source reimplementation of Dremel.

A little background on Dremel

Some Google terminology associated with Drill: big data and BigQuery. Big data in the industry is, by one definition, about 500 million rows and above; BigQuery has a limitation on column size of 64 KB.

Big data at Google: what does it mean in numbers?

  • 60 hours of YouTube video uploaded per minute
  • 100 million gigabytes of search index (2010 analysis)
  • 425 million Gmail users.

Looking at the size of the data, a relational database is an obvious no. That leaves the option of doing full table scans, which are expensive in the relational world; that is where Dremel was born.

BigQuery is the externalization of this technology.

What does a BigQuery query really look like? For example, finding the top installed market apps:

SELECT TOP(appId, 20) AS app, COUNT(*) AS count FROM installing.2012

ORDER BY count DESC

Results come back in less than 20 seconds.

Where can we use Big Query?

  • Game and social media analytics
  • Infrastructure monitoring
  • Advertising campaign optimization
  • Sensor data analysis.

Apache Drill is an Apache incubation project around interactive analysis of large datasets (inspired by Google's Dremel). MapReduce is a batch-mode tool and there is latency associated with it. There are cases where one would like data faster, in real time or near real time; some of the scenarios are:

  • Ad-hoc analysis with interactive tools
  • Real-time dashboards
  • Event/trend detection and analysis
    • Network intrusion
    • Fraud
    • Failures

The key point of Dremel is that it uses a nested data model.

Apache Drill is likewise a system designed to support nested data:

  • Flat records are the simplest case of nested data, i.e. root only.
  • It supports schema-based (Protocol Buffers, Apache Avro) and schema-less (JSON, BSON) formats.

What nested query languages does Drill support?

  • DRQL
    • SQL like query language for nested data
    • Compatible with Google BigQuery/Dremel
    • Designed to support efficient column based processing
  • Mongo Query Language
  • Other languages/programming models can plug in.

image

 

The data model is a nested model for a Document, split across basic document data and URL entries. The query is very SQL-like.

How does Data Flow work with Drill?

Data is loaded into the Hadoop cluster by one of many mechanisms, i.e. Hive, the HDFS command line, MapReduce or an NFS interface. The data in Hadoop is stored in row fashion; the Drill loader is responsible for converting the row-based data into columnar form.

Alternatively, a row-based query is allowed the first time, which in turn helps create a columnar copy of the data.

image

 

What does the Query Execution of Drill look like at high level?

image

At a very high level, query execution involves the following steps:

  • The driver submits the query (text) to the parser. The parser parses it, builds the abstract syntax tree and hands it over to the compiler.
  • The compiler does the optimization and builds the execution plan.
  • The execution engine is responsible for scheduling the plan against storage, which can be on any server.

Which typical SQL query components are supported?

Query Components

  • SELECT
  • FROM
  • WHERE
  • GROUP BY
  • HAVING
  • (JOIN)

Key logical operators

  • Scan
  • Filter
  • Aggregate
  • (Join)

What is so unique about the SCAN logical operator?

One of the architectural goals for Drill has been to support multiple formats; that is achieved by having a SCAN operator for each format. So, for example, if the query reads JSON data, the scan would involve

SELECT Json(data URI)

Fields and predicates are pushed down into the scan operator, so filtering happens next to the data; a conceptual sketch follows below.
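A conceptual C# sketch of one scan operator per format with pushdown; the interface and names are illustrative, not Drill's actual internals.

using System;
using System.Collections.Generic;
using System.Linq;

// One scan operator per format; fields and predicate are pushed into it.
interface IScanOperator
{
    IEnumerable<Dictionary<string, object>> Scan(
        string[] fields, Func<Dictionary<string, object>, bool> predicate);
}

class JsonScanOperator : IScanOperator
{
    private readonly List<Dictionary<string, object>> records;

    // The record list stands in for parsing JSON at a data URI.
    public JsonScanOperator(List<Dictionary<string, object>> records)
    {
        this.records = records;
    }

    public IEnumerable<Dictionary<string, object>> Scan(
        string[] fields, Func<Dictionary<string, object>, bool> predicate)
    {
        return records
            .Where(predicate)                                      // predicate pushdown
            .Select(r => fields.ToDictionary(f => f, f => r[f]));  // field pruning
    }
}

class Program
{
    static void Main()
    {
        var op = new JsonScanOperator(new List<Dictionary<string, object>>
        {
            new Dictionary<string, object> { { "name", "a" }, { "size", 1 } },
            new Dictionary<string, object> { { "name", "b" }, { "size", 9 } }
        });
        foreach (var row in op.Scan(new[] { "name" }, r => (int)r["size"] > 5))
            Console.WriteLine(row["name"]);   // only "b" survives the pushdown
    }
}

Because the predicate runs inside the reader, rows that fail it are never materialized upstream, which is the whole point of pushdown.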

What’s actually involved in the Execution Engine?

The Drill execution engine has two layers:

image

  • The operator layer, which is serialization-aware: this is where individual records are processed. For example, for a count on a table, a local table scan is done, followed by local aggregation, and a sum is done at the global aggregation.
  • The execution layer, which is not serialization-aware: all it does is transfer blobs across nodes and the cluster. This layer is responsible for communication between nodes, dependencies (what has to finish before what) and fault tolerance.

A Complete Video on Drill can be found here

Introduction to Apache Drill

 

Impala - Interactive Analysis

Impala solves much the same problem as Drill, i.e. moving beyond MapReduce batch processing to low latency. It has very similar columnar storage and much the same works as Drill, so for the sake of simplicity I will just get to the points of differentiation here.

However, it would be unfair of me to compare the two in detail in terms of maturity and functionality at the present time. As of October 29th, 2012, the Drill source code repository at [1] has code for a query parser and a plan parser which includes a reference plan evaluator which can perform scans against JSON-formatted data in flat files. Impala's tree at [2] includes a distributed query execution engine with support for cancellation, failure-detection, data modification via INSERT, integration with HDFS and HBase, JIT-compiled execution fragments via LLVM and a bunch of other stuff.

Impala is completely dependent on Hadoop and utilizes HiveQL. Impala is progressing towards an MPP (Massively Parallel Processing) architecture.

The query processing is similar to Drill.

image

The strong drive from MSFT towards Impala is pretty evident: they want a good interactive tool in this arena, and without one they would be in trouble.

 

Storm- Stream Analysis

Storm is a free and open source distributed real-time computation system. Storm makes it easy to reliably process unbounded streams of data, doing for real-time processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use!

Storm has many use cases: real-time analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate.

Storm integrates with the queuing and database technologies you already use. A Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation however needed. Read more in the tutorial.
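As a rough conceptual sketch of a topology (Storm's real API is Java, with spouts and bolts wired into a graph; the C# below only mirrors that shape):

using System;
using System.Collections.Generic;

class TopologySketch
{
    // Spout: an unbounded source of tuples (a finite stand-in here).
    static IEnumerable<string> Spout()
    {
        yield return "user:42 click";
        yield return "user:17 click";
        yield return "user:42 click";
    }

    // Bolt 1: parse each raw event into a user id.
    static IEnumerable<string> ParseBolt(IEnumerable<string> input)
    {
        foreach (var line in input)
            yield return line.Split(' ')[0];
    }

    // Bolt 2: rolling count per user, emitted downstream as it changes.
    static IEnumerable<KeyValuePair<string, int>> CountBolt(IEnumerable<string> users)
    {
        var counts = new Dictionary<string, int>();
        foreach (var u in users)
        {
            int c;
            counts.TryGetValue(u, out c);
            counts[u] = c + 1;
            yield return new KeyValuePair<string, int>(u, counts[u]);
        }
    }

    static void Main()
    {
        // Wire the stages like a topology graph: spout -> parse -> count.
        foreach (var kv in CountBolt(ParseBolt(Spout())))
            Console.WriteLine("{0} -> {1}", kv.Key, kv.Value);
    }
}

In real Storm each stage runs on many workers in parallel, with the framework repartitioning the stream between stages.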

Closing Notes

There is a lot happening in this space. Interactive analysis and stream processing are a must for any big data implementation.

Google has pretty much been the innovator here, with MapReduce way back in 2003 and then Dremel, and they continue to lead this space, with the open source world really quick to capitalize on those ideas and bring them to market. On the other hand, the rest of the field (Microsoft, Oracle, IBM…) has a long sprint to keep up. The easiest way for them is to accept an open source implementation and roll it out quickly, HDInsight being an example.

It is very clear that MSFT is pushing Impala in the interactive space alongside HDInsight.


 

Sunday, 23 June 2013

IaaS Inherent Part of the PaaS Architecture

 

We wish for a perfect world; honestly, that exists only in utopian terms, and an architect soon realizes while architecting a solution for cloud PaaS that there is no perfect architecture. There is only an architecture that fits the bill for the moment, given the shortcomings.

Normally, while migrating an on-premise application to cloud PaaS, we run into multiple scenarios where PaaS is not a perfect fit, and one soon starts to ponder the alternatives. The easiest is to have that application or component built and deployed on IaaS, as this is very similar to what we already have on premise.

So IaaS is a solution, at least in the short term. With Windows Azure correcting their mistake by bringing persistent IaaS in 2012, and providing better basic features like clustering and support early this year, IaaS does become an attractive choice.

So what do persistent VMs in Azure really have to offer?

  • Storage: persistent storage; new storage is easy to add.
  • Deployment: build the VHD in the cloud, or build it on premise and deploy.
  • Networking: internal endpoints are open by default, with access control via firewall or guest OS. Input endpoints are controlled through the management portal, services and API.
  • Primary use: applications that require persistent storage can easily run on Windows Azure.

What OS images does Azure IaaS come with?

  • Windows Server 2008 R2
  • Windows Server 2008 R2 with Sql Server 2012 Evaluation
  • Windows Server 2008 R2 with BizTalk Server 2012.
  • Windows Server 2012
  • Open SUSE 12.1
  • CentOS 6.2
  • Ubuntu 12.04
  • Suse Linux Enterprise Server SP2.

Which key Server Applications does Azure IaaS support?

  • SQL Server 2008, 2008 R2, 2012. Note: SQL Azure is a very stripped-down version of SQL Server, so if one is planning on anything beyond transactional workloads (SSAS, SSRS, SSIS…), one has to look at SQL Server on IaaS.
  • SharePoint 2010, 2013 (assuming). Note: given SharePoint Online is a stripped-down version of SharePoint, one will have to look at SharePoint Server on IaaS for more functionality.
  • BizTalk Server 2010. The BizTalk PaaS is in its infancy, with very limited EDI/EAI integration features. A complete reference can be found here.
  • Windows Server 2008 R2, 2012

* The biggest workload on the cloud for any enterprise application is SQL Server on IaaS. This list is going to grow over time, and there is also customer support for the list above.

What is the difference between Virtual Machines and Cloud Services?

The virtual machines one creates are implicitly inside cloud services. Cloud services may appear to be segregated from VMs, but they are not.

To explain things better, let's take an example.

Assume we have a cloud service with a web role (3 instances) and a worker role (3 instances). The cloud service acts like a container holding the web and worker roles:

  • It is a management container: when one deletes/updates the cloud service, that applies to all the entities in it.
  • It is also a security boundary, i.e. roles in the same cloud service can interact with one another, which cannot be done across cloud services unless they explicitly allow it.
  • It is a network boundary: the roles are visible to each other on the network.

image

 

When one creates a virtual machine (a role with exactly one instance), it lives in an implicit cloud service.

image

When one creates a VM, it appears in the VM section of the management portal, not under cloud services. The implicit cloud service is the DNS name assigned to the virtual machine. For example, if one has created a first virtual machine named mymachinedemo and then creates a second virtual machine, choosing the option to connect with an existing virtual machine, the portal shows a list of existing virtual machines to join. So essentially cloud services act as containers for virtual machines.

image

When one creates further virtual machines via the 'connect to an existing virtual machine' option, the new virtual machine is placed under the same cloud service, and the DNS name then starts showing in the list of cloud services.

image

The hiding of the cloud service only happens in the portal.

Images and Disks: what are these?

Images are the base operating system images provided by the 'create from gallery' functionality, where one has a bunch of pre-existing OS images. From an image you get an OS disk, which is your specific operating system disk, with data disks associated with it; these are writable disks for virtual machines. The VM sizes currently supported by MSFT (and subject to change) are shown below; additionally there are 28 GB and 56 GB RAM sizes as well.

image

Data disks can go up to 1 TB in size, and one can attach multiple data disks to one VM.

Images and disks are stored as Windows Azure Storage blobs; data is triplicated, i.e. 3 copies. Disk caching is supported in read and read/write modes.

OS disk size is about 127 GB.

What is the availability story around virtual machines?

The service level agreement is 99.95% for multiple role instances (web and worker), which works out to 4.38 hours of downtime/year (0.05% of 8,760 hours). Multiple role instances effectively means at least 2 VMs in the same role, so the idea is to have a minimum of 2 VMs per role. What's included in the 99.95%:

  • Compute hardware failure (disk, CPU, memory).
  • Data center failure: network and power failure.
  • Hardware upgrades and software maintenance: host OS updates.

What is not included? VM container crashes and guest OS updates.

What does this SLA mean for a VM?

It means that if one deploys 2 instances of the same virtual machine in the same cloud service (DNS name) and the same availability set, one gets the 99.95% SLA.

image

What is the concept of availability set?

By default, for every role that has 2 instances, Windows Azure spreads those instances across 2 fault domains and 2 update domains (that is, if fault and update domains are defined). A fault domain is defined on the basis of a single point of failure; in this case it is the top-of-rack router.

A fault domain represents a group of resources anticipated to fail together, i.e. the same rack or the same server. The fabric spreads instances across at least 2 fault domains.

An update domain represents a group of resources that can be updated together.

The availability set carries the same fault and update domain concepts. So, for example, if you have 2 instances of the same VM defined in an availability set, those instances get placed in separate fault and update domains as a bare minimum.

 image

The story would be incomplete without the networking capabilities of Azure IaaS. What are the options?

So what has MSFT done for Azure IaaS networking? Some of the features include:

  • Full control over machine names.
  • Windows Azure-provided DNS: resolves VMs by name within the same cloud service. Machine names are modelled and explicitly published in the DNS service.
  • The option to use an on-premise DNS server.

Note: in PaaS, web and worker role communication happens via messaging; in the VM world it is DNS lookup.

Protocols Supported

  • UDP traffic is supported: incoming traffic is load balanced and outbound traffic is allowed.
  • All IP-based protocols are supported for VM-to-VM communication: instance-to-instance TCP, UDP and ICMP.
  • Port forwarding: direct communication to multiple VMs in the same cloud service.
  • Custom load balancer health probes: health checks with probe timeouts and HTTP-based probing, allowing granular control of health checks.

Load Balanced Sets for IaaS

Similar to availability sets are load-balanced sets, which allow a set of VMs within the same cloud service to be load balanced.

image

Load Balance with Custom Probes

In IaaS there are no agents installed on the VM, so there is a requirement to define an endpoint that can be probed, for example /health.aspx as the probe path; if the load balancer gets an HTTP 200, it assumes everything is healthy. A minimal sketch of such a page follows below.
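A minimal sketch of a probe page, assuming ASP.NET Web Forms; the page name and the dependency check are illustrative.

using System;
using System.Web.UI;

public partial class Health : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Decide what "healthy" means for this app: DB ping, queue depth, etc.
        bool healthy = CheckDependencies();

        // 200 keeps this instance in the load balancer rotation;
        // anything else takes it out after the probe timeouts.
        Response.StatusCode = healthy ? 200 : 503;
        Response.Write(healthy ? "OK" : "UNHEALTHY");
    }

    private bool CheckDependencies()
    {
        return true; // stand-in for real dependency checks
    }
}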

image

Cross Premise Connectivity

For connecting with on-premise Active Directory, or to the on-premise network in general, Windows Azure Connect has been around but is not very well accepted. Windows Azure Connect uses IPsec tunnelling, with an agent hosted on both machines that need to communicate. If one is doing a domain join with an Azure VM, the problem has been having to install the agent on the domain controller, which has not gone down well with many.

Alternatively, site-to-site connectivity via Windows Azure Virtual Network comes into play. It provides a virtual network and a gateway, the gateway connecting to a standard VPN device.

image

What does one need to take care of when migrating applications to Azure IaaS?

  • SQL Server installed on IaaS needs to be clustered, so having 2 instances of SQL Server in the same cloud service will help. One may also need to size up the data disks required.
  • There is built-in load balancing support, so if you deploy a web application on IaaS you are relieved of the LB concern.
  • Integrated management and monitoring is provided by Azure itself for VMs.
  • Fault and upgrade domains for all VMs are a must.
  • Windows Azure Virtual Network can be used to connect with an on-premise application or for domain join.
  • Hourly billing support: in addition to making it easier and faster to get started, the SQL Server and BizTalk Server images also enable an hourly billing model, which means you don't have to pay for an upfront license of these server products. Instead you can deploy the images and pay an additional hourly rate above the standard OS rate for the hours you run the software. This provides a very flexible way to get started with no upfront costs; you pay only for what you use.
  • Workloads to be shifted to the cloud need to be looked at from a compute, storage and networking standpoint.

image

The philosophy for the cloud world is to lift and shift workloads.

As of 2013 we still need to move a lot of applications to the cloud as-is, and IaaS is an inherent part of the overall PaaS architecture. Maybe in years to come it will be a complete PaaS architecture.

 

Wednesday, 22 May 2013

Mobile Devices Application Architecture

Mobile platforms are increasingly finding their rightful place in the enterprise. They vary from small non-UI devices to full-fledged tablets and smart TVs. Compute and storage on mobile devices are a fast-moving segment, and building applications for these platforms can be a daunting task for developers. Most mobile devices have connectivity in some form or other, and the applications built for them connect with services hosted on premise or in the cloud. Concentrating on the mobile device architecture, it would be good to have some guidance for developers on what constitutes a typical mobile architecture.

clip_image002

High Level Device Architecture

Since most current mobile devices are UI intensive, the starting point is the UI architecture.

The high-level mobile/device application architecture includes the following components; it is typically a layered architecture.

clip_image003

Presentation Layer

The UI architecture is heavily driven by certain patterns:

  • MVC: the Model-View-Controller is an architectural pattern that divides an interactive application into three components. The model contains the core functionality and data; the view displays information to the user; the controller handles user input. View and controller together comprise the user interface. A change propagation mechanism ensures consistency between the user interface and the model. This pattern first appeared with Smalltalk-80; since then a large number of UI frameworks have been built around MVC and it has become the de facto standard for UI development. For more on MVC refer here.
  • Delegation: most user interfaces on mobile devices are rich in functionality and delegate, i.e. transfer information, data and processing to another object, typically referred to as a background or helper object. The delegation pattern is a design pattern in object-oriented programming where an object, instead of performing one of its stated tasks, delegates that task to an associated helper object. There is an inversion of responsibility in which the helper object, known as the delegate, is given the responsibility to execute the task for the delegator. Delegation is one of the fundamental abstraction patterns underlying other software patterns such as composition (also referred to as aggregation), mixins and aspects. A small sketch follows after this list; for more on delegation refer here.
  • Target-action: the user interface is divided into views and controllers which dynamically establish relationships by telling each other which object they should target and what action or message to send to that target when an event occurs. This is especially useful when implementing graphical user interfaces, which are by nature event-driven: based on user input the desired event is raised and the associated event handler executes.
  • Block objects: most device-based applications interact with services or other applications for data services and more. Asynchronous callbacks help save compute. Services have to be modelled differently for devices; that is a whole different topic which will be covered separately.
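Here is a minimal delegation sketch in C# (the names are illustrative): the view controller never downloads anything itself; it hands the task to a helper and gets called back when the work is done.

using System;

interface IDownloadDelegate
{
    void OnDownloadCompleted(string content);
}

class Downloader
{
    // The helper performs the work, then notifies its delegate.
    public void Download(string url, IDownloadDelegate del)
    {
        string content = "payload from " + url;   // stand-in for real network I/O
        del.OnDownloadCompleted(content);
    }
}

class FeedViewController : IDownloadDelegate
{
    public void Refresh()
    {
        new Downloader().Download("https://example.com/feed", this);
    }

    // Called back on completion; the controller only deals with UI concerns.
    public void OnDownloadCompleted(string content)
    {
        Console.WriteLine("Rendering: " + content);
    }
}

class Program
{
    static void Main() { new FeedViewController().Refresh(); }
}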

Management Layer

Management is a crucial piece on devices, ranging from memory management and state management all the way to device management. Below are some of the key components.

Memory Management

Devices have less usable memory and storage than a desktop computer; all applications need to be very aggressive about deleting unneeded objects and lazy about creating them.

Foreground and Background Application Management

Applications on a device have to be managed differently in the foreground and the background. The operating system limits what your application can do in the background in order to improve battery life and the user experience of the foreground application. The OS notifies your application when it moves between background and foreground, which requires special handling for data loading and UI refresh. So, for the transitions between these states, what does one really have to take care of?

  • Moving to Foreground: Respond appropriately to the state transitions that occur. Not handling these transitions properly can lead to data loss and a bad user experience.
  • Moving to background: make sure your app adjusts its behaviour appropriately. Devices which support multitasking keep the app alive; in other cases the application is terminated. The elementary steps are taking a snapshot image of the current user interface, saving user data and state, and freeing up as much memory as possible. A background application continues to stay in memory until a low-memory situation occurs and the OS decides to kill it. Practically speaking, your app should remove strong references to objects as soon as they are no longer needed; this lets the memory manager release the objects right away so the corresponding memory can be reclaimed. However, if you want to cache some objects to improve performance, you can wait until the app transitions to the background before removing references to them.

Examples of objects that you should remove strong references to as soon as possible include:

  • Image objects
  • Large media or data files that you can load again from disk
  • Any other objects that your app does not need and can recreate easily later

Handling Interrupts

Devices are more complicated than one might think. The classic scenario is handling interrupts such as an incoming phone call: when an alert-based interruption occurs, the app moves temporarily to the inactive state so that the system can prompt the user about how to proceed. The app remains in this state until the user dismisses the alert, at which point it either returns to the active state or moves to the background state. On most devices the application does not receive notifications and other types of events while in this state, so there needs to be some form of application state management.

State Management

Irrespective of whether the application is in the foreground, background or suspended, the application's data has to be stored and restored. Even if your app supports background execution, it cannot run forever; at some point the system might need to terminate it to free up memory for the current foreground app. However, the user should never have to care whether an app is already running or was terminated. From the user's perspective, quitting an app should just seem like a temporary interruption: when the user returns, the app should always return them to the last point of use, so that they can continue with whatever task was in progress. Preserving and restoring the view controllers and views has to be implemented per application. State preservation needs to be considered for the following:

  • The application delegate object, which manages the top-level state.
  • The view controller objects, which manage the overall state of the app's user interface.
  • Custom views and custom data.

State preservation and restoration is an opt-in feature and requires help from your app to work. When thinking about state preservation and restoration, it helps to separate the two processes first. State preservation occurs when your app moves to the background. The restoration process uses the preserved data to reconstitute your interface. The creation of actual objects is handled by your state management.

Core Data Management

The model in the presentation layer's MVC holds references to business data which may be displayed in the views. Models can end up holding a lot of data, which becomes a major performance issue; they should ideally load only the data most relevant to the scenario, with support for lazy loading. The responsibility of loading core business data into the business entities is handled by core data management. Core Data is generally a schema-driven object graph management and persistence framework; fundamentally, it helps save model objects and retrieve them from the business layer.

  • Core Data provides an infrastructure for managing all the changes to your model objects. This gives you automatic support for undo and redo, and for maintaining reciprocal relationships between objects.
  • It allows you to keep just a subset of your model objects in memory at any given time.
  • It uses a schema to describe the model objects.
  • It allows you to maintain disjoint sets of edits of your objects. This is useful if you want to, for example, allow the user to make edits in one view that may be discarded without affecting data displayed in another view.
  • It has an infrastructure for data store versioning and migration. This lets you easily upgrade an old version of the user’s file to the current version.
  • It interacts with Services or Business Level API to perform CRUD on the business entities.

Core data management runs on background thread(s).

Application Resource Management

Aside from images and media files, most devices have many more capabilities: accelerometer, camera, Bluetooth, GPS, gyroscope, location services, telemetry, magnetometer, microphone, telephony, Wi-Fi and so on. Most of these have platform-based APIs for access. In certain cases there may be a requirement to manage these resources, for example a video telephony module; the APIs provided by the platform give direct access mechanisms to these resources.

Services Helper Layer

The services helper layer on the device provides caching, API, logging and notification services. It functions as an assistant to the device, hiding the complexity of the inner workings, thus simplifying the programming model so that the developer does not have to worry about low-level functions.

Caching Services

Applications running on devices have limited memory and compute. Many UI elements, pages and pieces of data may need to be in memory even while not displayed, so an efficient disk- and memory-based caching strategy needs to be implemented for better performance; a small sketch follows below.
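A minimal two-level cache sketch in C# (illustrative, not a platform API): look in memory first, fall back to disk, and write through to both.

using System;
using System.Collections.Generic;
using System.IO;

class TwoLevelCache
{
    private readonly Dictionary<string, string> memory =
        new Dictionary<string, string>();
    private readonly string diskDir;

    public TwoLevelCache(string diskDir)
    {
        this.diskDir = diskDir;
        Directory.CreateDirectory(diskDir);
    }

    public void Put(string key, string value)
    {
        memory[key] = value;                                   // L1: memory
        File.WriteAllText(Path.Combine(diskDir, key), value);  // L2: disk
    }

    public string Get(string key)
    {
        string value;
        if (memory.TryGetValue(key, out value))
            return value;                        // fast path, no I/O

        string path = Path.Combine(diskDir, key);
        if (File.Exists(path))
        {
            value = File.ReadAllText(path);      // slower path
            memory[key] = value;                 // promote back into memory
            return value;
        }
        return null;                             // miss: caller fetches from the service
    }

    // Under memory pressure, drop the in-memory level; the disk copy survives.
    public void TrimMemory() { memory.Clear(); }
}

TrimMemory is the hook the memory management section above calls for: the app can shed its in-memory cache when moving to the background without losing the disk copy.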

API Services

In the cloud or on-premise world, devices are likely to talk via services. These services are in fact well-defined APIs of the business. 'API-fying' services is a core concept in the cloud which will be covered later. A standard API is:

clip_image005

The API services interact with the services API and manage the lifecycle of the service call asynchronously, handling aspects such as format conversion, lazy loading and much more.

Logging Services

This is a generic service providing application-level and business-level logging. The data collected is sent back to the services and serves multiple purposes.

Notification Services

This is an asynchronous module which manages application-based notifications and events from the services and other applications.

Friday, 3 May 2013

Azure SDK 2.0 - Quick Snapshot

 

Azure SDK 2.0 was released two days ago with pretty good features: a lot of work in the diagnostics area (long overdue), high-memory VMs, and the welcome message pump in Service Bus.

Here is a quick snapshot of the important features - Blog to Azure 2.0 Features

  • Streaming diagnostic logging is a good feature, though it looks limited to Web Sites for now; the logging capabilities may change.
  • Cloud Services support for high-memory VM instances: 4-core x 28 GB RAM (A6) and 8-core x 56 GB RAM (A7) VM sizes.
  • Faster deployment with the simultaneous update option: if your cloud package consists of multiple web and worker roles, this does a parallel update as opposed to the earlier sequential one.
  • Cloud Services also get a separate diagnostics tab. The custom plan is pretty rich and enables fine-grained control over error levels, performance counters, infrastructure logs, collection intervals and more.
  • View diagnostics on a live service; the interesting part here is dynamically turning on detailed diagnostic capture without having to redeploy.
  • Storage 2.0 looks a little improved, with the ability to create and delete Windows Azure tables from the Visual Studio explorer.
  • Service Bus enhancements: message browse support enables viewing the messages in a queue without locking a message or performing an explicit receive operation on it.

New message pump programming model: similar to an event-driven or push-based processing model, it supports concurrent message processing and enables processing messages at a variable rate. It replaces the polling-based mechanism with a pure event-driven approach.

// Example code for the message pump programming model
var eventDrivenMessagingOptions = new OnMessageOptions();

// AutoComplete = true removes the message from the queue once the callback completes.
eventDrivenMessagingOptions.AutoComplete = true;

// Cleaner exception management via a dedicated handler.
eventDrivenMessagingOptions.ExceptionReceived += OnExceptionReceived;

// Allow multiple concurrent readers.
eventDrivenMessagingOptions.MaxConcurrentCalls = 5;

// Subscribe for messages: OnMessageArrived is the handler invoked when a message arrives.
var queueClient = QueueClient.Create("customers");
queueClient.OnMessage(OnMessageArrived, eventDrivenMessagingOptions);

Find example for the same here Message Pump Programming Sample

  • PowerShell advancements


Sunday, 7 April 2013

Large-scale Implementation on Azure Platform

 

I have been closely following the Azure CAT (Customer Advisory Team), which helps a lot of customers deliver large, complex projects. The Azure CAT site, which can be found here, has guidance on how to use different architectural artefacts in Azure. After some digging, this is what I have come to understand about some of the largest implementations on the Azure platform. It involves some amount of reverse engineering and research; I hope it is helpful to readers.

So what has Azure handled so far, from a numbers standpoint?

There are various architectures which Azure addresses, but in a nutshell it is enterprise-ready. From a numbers standpoint, this is what Azure handles, where each line item below is an individual application:

  • Largest sharded SQL database: 20 TB. SQL Azure has a maximum database size of 150 GB, so multiple databases were used to resolve the size issue; of course, the querying strategy around this has to be well defined.
  • Largest number of databases: 11,000.
  • Most worker instances: 24,000. This is an on-demand application which spins up 24k worker role instances to perform some complex calculation and then shuts down; it is more like HPC for complex algorithms.
  • Largest customer application: 50 PB.

So what are the largest case studies in Azure?

Florida Presidential Election 2012:

This is not the largest but it was highly mission critical. Find the complete write up here.

It was mission critical for a couple of days, with some very high volume numbers to manage.

The Florida Presidential Election 2012 application had the following metrics:

  • Max peak of 40k page hits/second.
  • 6 million hits at peak in 1 hour.
  • Caching in front of the DB was a big architectural success. The application being short-lived by nature, it made sense to push as much data as possible into the cache and have a separate update strategy:
    • 3-minute TTL.
    • A separate worker role to refresh the cache.

What does the architecture look like?

 

image

Enight Florida Presidential Election: below is the layered architectural decomposition.

Presentation Layer:

Main URL: https://enlight.elections.myflorida.com, using Azure Traffic Manager with availability/performance failover between the primary site (US East Coast) and US North Central.

The primary site of the application was hosted on the US East Coast.

Enight Public Site: a set of web roles hosting voter turnout, state-wide Florida election results and other real-time election-related data. The application had a lot of real-time data expected to come in from various data sources. The web role reads the data from the cache if valid; if not, it gets the data from the database.

Enight SOE: at a very high level, the Enight SOE acts as a data synchronizer between the primary and the secondary. Additionally, it reads the data from the blobs and also pushes it to the secondary.

Caching Layer

Azure dedicated cache roles were used. Given that the frequency of reads and writes was very high over a short period, it was advisable to go with an in-memory store, i.e. a cache. Entities stay valid in the cache for a 3-minute TTL, with a separate worker role to refresh the cache; a sketch of this pattern follows below. The SQL Azure database reads and writes probably ended up using a CQRS-based pattern to manage them, but this is a guess.
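A sketch of the TTL-plus-refresh pattern in C#, using .NET's MemoryCache purely for illustration (the real system used the Azure dedicated cache, and all names here are made up):

using System;
using System.Runtime.Caching;

class ResultsCache
{
    private static readonly MemoryCache cache = MemoryCache.Default;

    // Web role path: serve from cache; fall through to the DB only on a miss.
    public static string GetCountyResults(string county, Func<string, string> loadFromDb)
    {
        var cached = cache.Get(county) as string;
        if (cached != null) return cached;

        string fresh = loadFromDb(county);
        Put(county, fresh);
        return fresh;
    }

    // Worker role path: refresh entries proactively so reads stay warm.
    public static void Put(string county, string results)
    {
        var policy = new CacheItemPolicy
        {
            // Each entry expires 3 minutes after being written.
            AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(3)
        };
        cache.Set(county, results, policy);
    }
}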

Database Layer

The storage layer comprises SQL Azure and blob storage (for the data coming in from the various counties).

Identity & Access Management

The IDM used here is the standard one provided by the Azure Access Control Service. The out-of-the-box features of Azure ACS, like integration with Facebook and other social media, were typically used.

Azure Auto Scaling Application

Given that peak traffic could get to 40k page hits/second, a good auto-scaling application must have been required to get the elastic advantage.

* The monitoring tool most probably used was Cerebrata.

Bing Games

MSFT has finally got into a culture of eating its own dog food; Bing Games is one such example, where the score tracking and ranking system ran entirely on Azure. Find the articles here.

For some reason the case study has been removed from the MSFT site; thanks to Google, I managed to get the cached page here.

Below are the high-level metrics:

  • 1,900 instances of app servers (various roles)
  • 398 SQL Azure databases (scale-out)
  • 30 million unique users/month
  • 200k concurrent users
  • 3 months, 7 developers
  • 99.975% uptime in the past 12 months

Why did they pick SQL Azure Database?

Given the database access patterns, it was better to pick SQL Azure over table storage. This is not a de facto standard, however; it depends on the requirement. The choice of SQL Azure was based on the fact that they already had an existing application on SQL, plus testability, i.e. it was faster and easier to pre-populate with millions of records.

Partition Strategy

All data for a given user remains in the same database, so scale-out based on users is easier and faster. The partitioning strategy is static by nature; a routing sketch follows below.
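A sketch of static, user-keyed shard routing in C# (the connection strings and names are illustrative): the same user always hashes to the same database, so a user's queries never fan out across shards.

using System;

class ShardMap
{
    private readonly string[] connectionStrings;

    public ShardMap(string[] connectionStrings)
    {
        this.connectionStrings = connectionStrings;
    }

    // Deterministic: a given user ID always maps to the same shard.
    public string ConnectionFor(long userId)
    {
        int shard = (int)((ulong)userId % (ulong)connectionStrings.Length);
        return connectionStrings[shard];
    }
}

class Program
{
    static void Main()
    {
        var map = new ShardMap(new[]
        {
            "Server=shard0;Database=games0",
            "Server=shard1;Database=games1"
        });
        Console.WriteLine(map.ConnectionFor(123456));  // always the same shard
    }
}

The static nature is the trade-off: adding shards later means rehashing or a lookup table, which is why the access pattern had to be understood up front.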

Production Statistics

  • 1,200 Azure database requests/second spread across all partitions during peak loads.
  • 18k connections in the connection pool, which could grow with traffic.

Database

  • 90:10 reads vs. writes.

Glassboard.com

Glassboard.com is similar to Facebook but is a private social network for groups. It is a mobile-only application.

The Initial Architecture

image

The initial architecture was pretty straightforward: a REST-based API over table, queue or blob storage, with additional components such as social collaboration and analytics.

My personal opinion is that this architecture is not quite right; in the section below I explain why, and the Glassboard folks found and corrected the same issue.

The Wrong Architecture for Devices (which otherwise would be correct in all cases)

What started as REST API-based programming turned out to be a disaster because the underlying storage was table storage. It is not that table storage itself was a disaster; it is just that when we design in the very academic way of having an API for every call, we soon realize the table cost for each read starts hitting the pocket. A little Fiddler exercise makes the problem visible:

<excerpt from Glassboard site>

To demonstrate the problem I have set up my Glassboard API instance to do no caching whatsoever. I then ran a unit test which simulates a user getting his Newsfeed, then posting a status. I highlighted each set of repeated calls in a different color.

Repeated Storage Requests

</excerpt from Glassboard site> Find the complete commentary here

Not to be alarmed: the architectural change of going the feed way, i.e. one call which returns pretty much all the data at startup, plus incorporating caching, helped.

Especially when getting into device-based programming, we need to bring in some of the client-server concepts; this helps a ton.

The solution is here.

Samsung – Worldwide TV Management

Samsung SMART TV started as a concept and is now a reality. The device has base software which needs to be updated from time to time, and the cloud was the perfect solution, assuming these TVs are connected to the internet. Azure or AWS became the obvious choices; not knowing which way sales would go, betting the update and management solution on Platform as a Service was a good decision for Samsung. Some key features:

  • Frequent updates with new applications and software changes for better support and compatibility.
  • Due to high sales, a scalable and elastic system was required.
  • 20 large-size web roles with ASP.NET were utilized.
  • A good Web API layer provides the same functionality; find the developer documentation here.
  • Caching seems to be the one thing every Azure application needs to come to its rescue.

Solution Architecture

image

The architecture is fairly straightforward.

  • Firmware download website: a set of web roles which connects with the Smart TV via a set of REST-based APIs; post-authentication it checks for updates and pushes them to the device.
  • Administration firmware upload website: a set of web roles which provide basic administration and reporting functionality.
  • A worker role which does task automation: firmware encryption and batch updates. It additionally pushes the logs to the SQL Azure database. The updates go to blob storage and use the Azure CDN functionality to push them to edge servers.

MYOB

MYOB (Mind Your Own Business) is a large Australian ISV developing accounting software for small business. The new release, AccountRight Live, lets users run their accounts on a PC or in the cloud, or both at once, depending on their preference. The hybrid arrangement was not, however, a cunning innovation designed to catapult the company and its customers into a bold and cloudy future; MYOB decided on rolling out a hybrid strategy based on surveys run with customers. The user count is about 150k. Find the link to the site here.

Based on the multi-tenancy requirement, each customer has a separate database. Caching is used as a standard feature, with reserved CPU per database.

MYOB Solution Architecture

  • Each user installs the client software via a boxed offering.
  • There is a choice to run the business and data tiers either on Azure or on premise.
  • The application is developed in C#/.NET using LINQ to SQL and Entity Framework, which is problematic: LINQ query execution is a fairly single-threaded process which works very well on an Intel processor with a high clock rate. On the AMD processors with lower clock rates used by web/worker roles, performance on Azure will be slower; a workaround is to run the LINQ work on a small, single-core instance.
  • Databases on premise and on Azure are kept in sync via the Sync Framework.
  • Each customer has their own Azure SQL Database per business entity.
  • DBs are backed up nightly using the DAC Import/Export service, keeping 2 days of rolling backup files in blob storage.

image

The identity management used is Azure ACS (with its inbuilt STS), most probably with a custom identity provider plus the option of Windows Live. With some guesswork, I would say MYOB identity management uses ADFS to integrate with corporate Active Directory as well.

A simple layered architecture.

  • User interfaces can be a browser or the thick desktop client.
  • Services layer: this includes the collaboration services, authentication service, customer file service and Huxley services (transactional services), all exposed as REST APIs. The desktop client authenticates via Azure ACS, receives the ACS token and then connects to the collaboration service. The collaboration service validates with the billing system and after that connects to the correct user database, depicted as User DB1…DBn.
  • Storage services: basically a set of SQL Azure databases.

What I love about this architecture is the simplicity. Keeping tenancy at the database level is perhaps not the most economical solution, but it is simple.

Caching has been used at every layer. Judging from the speeds, they seem to have a set of dedicated caching servers; how many is guesswork again.

The MYOB implementation had some key lessons; what are they?

  • Cloud platforms
    • Enable massive scalability
    • HA at lower costs
    • Expose rich cloud-based APIs
  • Identity foundation
    • Well integrated with WCF and highly customisable
  • Scaling databases
    • Sharding is the foundation
Issues with WCF throttling have been handled with different architectural solutions; some of the solutions are here.

Key Takeaways 

A couple of key areas to watch out for in an Azure application:

  • If mobile is part of the overall architecture, special considerations are required; I will have a separate post on that.
  • Study the storage choice properly (tables vs. SQL Azure); there is no straightforward answer, it varies. Every SQL Azure database is allocated a maximum of 180 concurrent threads.
  • Use caching wherever possible. This is an architectural decision, not a developer's.
  • Code right: profile the code as much as possible.
  • SQL Azure is a relational database as a service: SQL Server is the core engine and SQL Azure is a logical abstraction over it, exposing a subset of SQL Server features. It provides tremendous scale-out capability.