Monday, June 16, 2014

01- What Is AppFabric Cache

Windows Server AppFabric provides Distributed Caching features.

 

Windows Server AppFabric Caching features use a cluster of servers that communicate with each other to form a single, unified application cache system.

As a distributed cache system, all cache operations are abstracted to a single point of reference, referred to as the Cache Cluster.

In other words, your client applications can work with a single logical unit of cache in the cluster regardless of how many computers make up the cache cluster.

 

The primary components of the physical architecture consist of:

 

1. Cache Server
2. Cache Host Windows Service
3. Cache Cluster (Cluster of Cache Servers)
4. Windows PowerShell-Based Cache Administration Tool
5. Cluster Configuration Storage Location
6. Cache Client

 

The following diagram shows how all of these elements relate.

 

 

 

Cache Client

 

Any application server that runs a cache-enabled application may be loosely referred to as a cache client.

For an application to be cache-enabled, it must use the AppFabric caching assemblies and specify the appropriate configuration settings, either programmatically or in an XML-based application configuration file.
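For illustration, a minimal XML-based client configuration might look like the sketch below; the host name is a placeholder, and 22233 is the default cache port:

```xml
<configuration>
  <configSections>
    <!-- Registers the AppFabric cache client configuration section -->
    <section name="dataCacheClient"
             type="Microsoft.ApplicationServer.Caching.DataCacheClientSection,
                   Microsoft.ApplicationServer.Caching.Core"
             allowLocation="true"
             allowDefinition="Everywhere" />
  </configSections>
  <dataCacheClient>
    <hosts>
      <!-- One entry per cache host the client should know about -->
      <host name="CacheServer1" cachePort="22233" />
    </hosts>
  </dataCacheClient>
</configuration>
```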

 

Cache Cluster

 

The cache cluster is a collection of one or more instances of the Caching Service working together, in the form of a ring, to store and distribute data.

Data is stored in memory to minimize response times for data requests. The operations of the cache cluster are managed by a role named the cluster management role.

 

Cache Hosts

 

The AppFabric Caching Service is a Windows service that runs on one or more servers; each running instance of the service is referred to as a cache host.

 

Cache Server

 

Each server that runs the Caching Service is referred to as a cache server. For each cache server, only one instance of the Caching Service can be installed.

 

Windows PowerShell-Based Cache Administration Tool

 

Windows PowerShell is the exclusive management tool for the Caching Service. The Windows PowerShell cache administration cmdlets can be installed on any domain computer or on the cache servers themselves, by installing the Cache Administration feature of AppFabric. You must have administrator privileges on all of the cache servers for the tool to function properly.
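As a quick sketch, a typical administration session might use cmdlets like these (names as provided by the AppFabric caching PowerShell module; run Get-Command *cache* to confirm what is available on your install):

```powershell
Use-CacheCluster                      # point the session at the cluster's configuration store
Start-CacheCluster                    # start the Caching Service on all cache hosts
Get-CacheHost                         # list cache hosts and their service status
Get-Cache                             # list the named caches in the cluster
New-Cache -CacheName "productcatalog" # create a named cache (example name)
Stop-CacheCluster                     # stop the Caching Service on all cache hosts
```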

 

Cluster Configuration Storage Location

 

Each time the cluster starts, it must retrieve configuration information from the cluster configuration storage location. The generic term "storage location" is used because the location is determined by how you choose to store the cluster configuration settings.

 

 

Architecture Overview of AppFabric

 

The diagram below provides a high-level overview of the AppFabric system.

 

 

 

·      Apps are deployed into a farm of AppFabric servers that share Workflow Instance Stores and Monitoring Stores (databases).

·      The Distributed Cache provides a unified data-cache view to apps while distributing and replicating data across multiple machines.

 

The Windows Server AppFabric cache stores .NET objects, scales seamlessly, and manages data location and redundancy by distributing the cache across a cluster of machines.

The developer can simply put data in the cache and retrieve it when needed; AppFabric handles all the underlying complexity for you.
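As a sketch of how simple the developer experience is, the snippet below puts an object into the default cache and reads it back. The host name "CacheServer1" is a placeholder, and the code assumes references to the two AppFabric caching assemblies (Core and Client):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

class PutGetSample
{
    static void Main()
    {
        // Point the client at one cache host (placeholder name, default port).
        var servers = new List<DataCacheServerEndpoint>
        {
            new DataCacheServerEndpoint("CacheServer1", 22233)
        };
        var config = new DataCacheFactoryConfiguration { Servers = servers };

        using (var factory = new DataCacheFactory(config))
        {
            DataCache cache = factory.GetCache("default");

            cache.Put("greeting", "Hello from AppFabric");  // add or overwrite
            var value = (string)cache.Get("greeting");      // null if the key is absent
            Console.WriteLine(value);
        }
    }
}
```

AppFabric decides which cache host actually stores the item; the client code never needs to know.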

 

Functional View

 

The diagram below illustrates key components of the AppFabric system and their functions from a single-machine view (one AppFabric Server view) in a typical setup.

 

 

 

 

Thus Windows Server AppFabric adds improved hosting capabilities to Windows Server to efficiently host all your WCF services, including WF Services.

Additionally, AppFabric provides common services that help you as a developer build scalable Web and composite applications.

 

AppFabric ultimately addresses many non-functional requirements (NFRs). Some of them are:

 

·      Reliability – Overall (the system's maturity, fault-tolerance level, recoverability, and product relevance duration)

·      Reliability – Availability (the fraction of time the system is operative and accessible to all its users)

·      Reliability – Integrity (the probability that a system will not be compromised so as to perform incorrectly or return incorrect results)

·      Reliability – Security (the probability that a system will not be compromised via read/write access by unauthorized users or other software)

·      Monitorability (the ability to monitor runtime sessions and audit transactions)

·      Stability (a facet of performance, availability, and manageability; devoid of memory leaks, dangling references, and other creeping issues)

·      Availability – HA (with recoverability as a sub-requirement; also MTBF, the mean time expected between two runtime failures, and MTTR, the mean time required to fix a defect after it is noticed at runtime; software failover, hardware failover, standby hardware, and so on; strict ACID transactions; tracing and auditing; business continuity against disasters; recovery from malicious attacks)

 

Note:

 

AppFabric Caching requires less management; the "fabric" takes care of everything. 

Adding new machines to the cache is handled in the AppFabric Configuration Wizard. 

The cache takes a central configuration approach, so all the servers in the cache cluster know about each other. 

You can choose to store this configuration in SQL Server or on a file share.

 

 

Distributed Cache

 

AppFabric's Distributed Cache provides a distributed, in-memory application cache for developing scalable, available, high-performance applications.

 

The Distributed Cache Service runs across multiple machines and forms a tight cluster, offering data replication and data consistency across those machines.

It can run on the same machines as the application code, or in a remote, dedicated farm of machines.

 

 

How to Set Up AppFabric Cache

 

Please check the link below.

 

http://www.hanselman.com/blog/InstallingConfiguringAndUsingWindowsServerAppFabricAndTheVelocityMemoryCacheIn10Minutes.aspx

 

 

 

Thanks & Regards,

Arun Manglick,

Project Manager - Forecasting
Mobile
: 9158041782| 9850901262| Desk Phone: +912040265348 | http://www.synechron.com

SYNECHRON -
- 4,000+ professionals globally.
- USA | Canada | UK | The Netherlands | UAE | India | Singapore | Hong Kong | Japan

 

02- How to Set Up AppFabric Cache


 

First, download the AppFabric installer and run it, selecting the AppFabric Cache feature. The configuration tool will pop up and walk you through a small wizard:

 

 

1.    Accept the default settings on the Customer Experience page, as shown in the image below.

2.    On the Feature Selection page, select all components for installation.

3.    On the Confirmation page, click 'Install'.

4.    Click 'Finish' on the Results page.

 

5.    This will launch the Configuration Wizard as shown below.

 

 

6.    Select 'Next' on the Welcome screen above to get to the next screen.

7.    Select 'Next' on the Hosting screen to get to the following screen, and select all the options as shown in the screen below.

 

 

 

8.    Then press the Configure button and the following window appears.

Select 'Register AppFabric Caching Service configuration database' and 'Create AppFabric Caching Service configuration database'.

 

 

9.    Press OK.

10. Then select 'Next' on the Configure Caching Service wizard page.

11. Press 'Yes' in the next dialog box.

12. Then press 'Next' on the Cache Node wizard page (with the default settings).

 

 

13. Select 'Yes' in the next dialog box.

14. On the last page, click 'Finish' to complete the cache configuration.

 

 

Reference:

Please check the link below.

http://www.hanselman.com/blog/InstallingConfiguringAndUsingWindowsServerAppFabricAndTheVelocityMemoryCacheIn10Minutes.aspx

 

 

Regards,

Arun Manglick

 

03- How to Start & Administer AppFabric Cache

Start and Administer your Memory Cluster from PowerShell

 

After installing and setting up the cache, do the following to start using it.

 

1.       Go to the Start Menu and type in Caching. You'll have an item called "Caching Administration Windows PowerShell."

2.       Run it as Administrator.

 

 

3.    Type "Get-Command *cache*" to see all the different commands available for cache management.

4.       To start the cluster and confirm everything is up and running, type: Start-CacheCluster

 

HostName : CachePort      Service Name            Service Status Version Info
--------------------      ------------            -------------- ------------
HANSELMAN-W500:22233      AppFabricCachingService UP             1 [1,1][1,1]

 

 

5.       Also note that installing the cache adds the two assemblies below, which you can reference from Visual Studio:

·         Microsoft.ApplicationServer.Caching.Core

·         Microsoft.ApplicationServer.Caching.Client

 

6.       Also note: whenever the RAM on any server running the AppFabric cache is upgraded or downgraded, the cache size should by default be set to half of the installed physical memory. For example, if the RAM is increased from 2 GB to 4 GB, the following command should be run:

Set-CacheHostConfig -HostName <HOSTNAME> -CachePort 22233 -CacheSize 2048

 

7.       To restart the cache cluster, use the command Restart-CacheCluster.

 

8.       After the complete installation of the AppFabric cache, you can change the startup type of the 'AppFabric Caching Service' from 'Manual' to 'Automatic' in Services.msc. This ensures that the caching service restarts automatically after a server reboot.
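The same change can be scripted; as a sketch, using the service name shown in the Start-CacheCluster output:

```powershell
# Make the caching service start automatically with Windows
Set-Service -Name "AppFabricCachingService" -StartupType Automatic
```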

 

 

Chopping Cache Memory:

Part of AppFabric's appeal lies in its "memory chopping" capability: memory caches can be chopped into logical buckets (partitions), and a memory cluster can serve more than one application if you want.

This can be done in web.config or from code, however you like.

Here's a code-example helper method where the sample does this manually. The data could come from wherever you like; you just need to tell it a machine to talk to and the port number. Caches can also be partitioned into named regions, as the code below shows.

 

using Microsoft.ApplicationServer.Caching;
using System.Collections.Generic;

public class CacheUtil
{
    private static DataCacheFactory _factory = null;
    private static DataCache _cache = null;

    public static DataCache GetCache()
    {
        if (_cache != null)
            return _cache;

        //Define array for 1 cache host
        List<DataCacheServerEndpoint> servers = new List<DataCacheServerEndpoint>(1);

        //Specify cache host details
        //  Parameter 1 = host name
        //  Parameter 2 = cache port number
        servers.Add(new DataCacheServerEndpoint("mymachine", 22233));

        //Create cache configuration
        DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration();

        //Set the cache host(s)
        configuration.Servers = servers;

        //Set default properties for local cache (local cache disabled)
        configuration.LocalCacheProperties = new DataCacheLocalCacheProperties();

        //Disable tracing to avoid informational/verbose messages on the web page
        DataCacheClientLogManager.ChangeLogLevel(System.Diagnostics.TraceLevel.Off);

        //Pass configuration settings to the DataCacheFactory constructor
        _factory = new DataCacheFactory(configuration);

        //Get a reference to the named cache called "default"
        _cache = _factory.GetCache("default");

        //Create regions (logical partitions within the cache)
        string region1 = "shoppingcart";
        string region2 = "productcatalog";

        _cache.CreateRegion(region1);
        _cache.CreateRegion(region2);

        return _cache;
    }
}
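Once the regions exist, items can be stored in and fetched from a specific region by passing the region name to the region overloads of the DataCache API. The keys and values below are illustrative only:

```csharp
// Hypothetical usage of the helper above; key names and values are made up.
DataCache cache = CacheUtil.GetCache();

// The region overloads co-locate related items on a single cache host.
cache.Put("cart-42", new List<string> { "itemA", "itemB" }, "shoppingcart");
var cart = (List<string>)cache.Get("cart-42", "shoppingcart");

// Removing a region also removes every object stored in it.
cache.RemoveRegion("shoppingcart");
```

Note that region-scoped data lives on a single cache host, so regions trade distribution for the ability to manage related items together.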

 

 

 

 

 

Reference:

Please check the link below.

http://www.hanselman.com/blog/InstallingConfiguringAndUsingWindowsServerAppFabricAndTheVelocityMemoryCacheIn10Minutes.aspx

 

 

Regards,

Arun Manglick

 

Wednesday, June 11, 2014

Deployment Performance Patterns - Scaling (Up/Out)

The following deployment patterns are commonly used:

 

1.       Performance Patterns

a.       Application Farms

b.      Web Farms

c.       Load Balancing Cluster

 

Consider the use of Web farms or load-balancing clusters when designing a scale-out strategy.

 

2.       Reliability Patterns

a.       Fail-Over Cluster

 

3.       Security Patterns

a.       Impersonation/Delegation

b.      Trusted Subsystem

c.       Multiple Trusted Service Identities

 

1). Performance Patterns:

 

A). Web Farms & Affinity:

 

A Web farm is a load-balanced collection of servers, each of which replicates and runs the same application.

A Web farm with local business logic is a common deployment pattern that places all application components—user interface components (ASP.NET pages), user process components (if used), business components, and data access components—on each server in the Web farm.

 

A number of technologies can be used to implement the load-balancing mechanism, including:

·         Hardware solutions, such as those offered by Cisco and Nortel switches and routers, and

·         Software solutions, such as Network Load Balancing.

 

Requests from clients are distributed across the servers in the farm so that each carries approximately the same load.

Depending on the routing technology used, it may detect failed servers and remove them from the routing list to minimize the impact of a failure.

In simple scenarios, routing may be done on a round-robin basis, where a DNS server hands out the addresses of individual servers in rotation.

 

This pattern provides the highest performance, because all component calls are local, and only the databases are accessed remotely.

 

Figure 3 illustrates a simple Web farm where each server hosts all of the layers of the application except for the data store.

 

In Non-distributed Architecture - Scaling out Web servers in a Web farm

 

 

Affinity and User Sessions

 

Web applications often rely on the maintenance of session state between requests from the same user.

A Web farm can be configured to route all requests from the same user to the same server – a process known as affinity – in order to maintain state where it is stored in memory on the Web server.

However, for maximum performance and reliability, you should use a separate session state store (SQL Server or the ASP.NET State Server) with a Web farm, to remove the requirement for affinity.

 

 

B). Application Farms

 

In a distributed deployment, where the business and data layers run on different physical tiers from the presentation layer, an application farm is used to scale out the business and data layers.

Application farms are conceptually similar to Web farms, but they are used to load-balance requests for business components across multiple application servers.

Like a Web farm, an application farm is a collection of servers, each of which replicates and runs the same application.

Requests from clients (the presentation tier) are distributed across the servers in the farm so that each carries approximately the same load.

 

 

 

C). Clustering - Load Balancing Cluster

 

When an application receives many requests, we need a mechanism that can spread those requests across multiple machines.

The solution can come from a number of mechanisms, but they are all grouped under the term "load balancing."

 

Application/Service can be installed onto multiple servers that are configured to share the workload, as shown below.

This type of configuration is a Load-Balanced Cluster.

 

Two Approaches:

 

·         SLB - Software Load Balancer

·         Clustering

 

A Software Load Balancer performs the following functions:

·         Intercepts network-based traffic (such as web traffic) destined for a site.

·         Splits the traffic into individual requests and decides which servers receive individual requests.

·         Maintains a watch on the available servers, ensuring that they are responding to traffic; if they are not, they are taken out of rotation.

·         Provides redundancy by employing more than one unit in a fail-over scenario.

·         Offers content-aware distribution, by doing things such as reading URLs, intercepting cookies, and XML parsing.

 

 

Clustering:

 

Clustering offers a solution to the same problems that SLB addresses, namely high availability and scalability, but clustering goes about it differently.

Clustering is a highly involved software protocol (proprietary to each vendor) running on several servers that concentrate on taking and sharing the load.

 

Clustering involves a group of servers that accept traffic and divide tasks amongst themselves.

This requires a fairly tight integration of server software, and is often called load balancing.

 

Load-balanced Servers also serve a failover function by redistributing load to the remaining servers when a server in the load-balanced cluster fails.

 

With clustering, there is a fairly tight integration between the servers in the cluster, with software deciding which servers handle which tasks and algorithms determining the work load and which server does which task, etc.

 

 

 

Load balancing scales the performance of server-based programs, such as a Web server, by distributing client requests across multiple servers.

Load-balancing technologies, commonly referred to as load balancers, receive incoming requests and redirect them to a specific host if necessary.

The load-balanced hosts concurrently respond to different client requests, even multiple requests from the same client.

 

For example, a Web browser may obtain the multiple images within a single Web page from different hosts in the cluster.

This distributes the load, speeds up processing, and shortens the response time to clients.

 

 

2). Reliability Patterns

 

Reliability deployment patterns represent proven design solutions to common reliability problems. The most common approach to improving the reliability of your deployment is to use a fail-over cluster to ensure the availability of your application even if a server fails.

 

Fail-Over Cluster

 

A Failover Cluster is a set of servers that are configured so that if one server becomes unavailable, another server automatically takes over for the failed server and continues processing.

 

Failover is a key technology behind clustering for achieving fault tolerance: when the original node fails, processing continues on another node chosen from the cluster.

Failing over to another node can be coded explicitly, or performed automatically by the underlying platform, which transparently reroutes communication to another server.

 

 

 

Application/Service can be installed onto multiple servers that are configured to take over for one another when a failure occurs.

The process of one server taking over for a failed server is commonly known as Failover. Each server in the cluster has at least one other server in the cluster identified as its standby server.

 

 

3). Security Patterns

 

Security patterns represent proven design solutions to common security problems.

 

Impersonation and Delegation approach is a good solution when you must flow the context of the original caller to downstream layers or components in your application.
Trusted Subsystem approach is a good solution when you want to handle authentication and authorization in upstream components and access a downstream resource with a single trusted identity.

 

Impersonation/Delegation

In the impersonation/delegation authorization model, resources (such as tables and procedures in SQL Server) and the types of operations (such as read, write, and delete) permitted on each resource are secured using Windows Access Control Lists (ACLs) or the equivalent security features of the targeted resource.

Users access the resources using their original identity through impersonation, as below.

 

The impersonation/delegation authorization model

 

 

Trusted Subsystem

In the trusted subsystem (or trusted server) model, users are partitioned into application-defined, logical roles.

Members of a particular role share the same privileges within the application.

Access to operations is authorized based on the role membership of the caller.

 

With this role-based (or operations-based) approach to security, access to operations (not back-end resources) is authorized based on the role membership of the caller.

Roles, analyzed and defined at application design time, are used as logical containers that group together users who share the same security privileges or capabilities within the application.

The middle tier service uses a fixed identity to access downstream services and resources, as illustrated in Figure 7.

 

 

 

Multiple Trusted Service Identities

In some situations, you may require more than one trusted identity. For example, you may have two groups of users, one who should be authorized to perform read/write operations and the other read-only operations. The use of multiple trusted service identities provides the ability to exert more granular control over resource access and auditing, without having a large impact on scalability.

 

 

 

Reference: MS Patterns & Practices - Web Application Architecture Guide

 

Hope this helps.

 

Regards,

Arun Manglick

 

Monday, June 9, 2014

Sticky Session

ASP.NET Session State is used to keep data across a sequence of user requests to the server.

This session state is usually maintained from the first user request until a specific period after the user's last request, typically around 20 minutes.

If your application runs on only one server, you can use the standard ASP.NET Session State without any problems.

 

However, if you want your application to run in a server-cluster/server-farm (I.e. More than one server), then you need to make sure that either:

 

A.      Session State is available from all the servers in the farm (State Server / SQL Server) - Out-of-Proc Session State

B.      Use "Sticky Sessions" feature of your Load-Balancer - In-Process Session State

 

Each of these approaches has its own scalability problems.

Both force you into a single-point-of-failure scenario, and neither is very scalable, because all the load of accessing the session state is shifted to one server.
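For option A, moving to an out-of-process store is a web.config change. A sketch, with placeholder connection details:

```xml
<system.web>
  <!-- Option A: a session state store shared by all servers in the farm -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=DBSERVER;Integrated Security=SSPI;"
                timeout="20" />
  <!-- Alternatively, mode="StateServer" with
       stateConnectionString="tcpip=stateserver:42424" -->
</system.web>
```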

 

 

Option B):

 

Read This - Link

 

In the in-proc session mode, the session has "affinity" with the server; i.e., the session ID exists only in the memory of the one server that the first request hits.

That means that if www.mydomain.com is the first server the user ever hits, the user will get a session ID that exists only in the memory of that one server.

 

Guaranteeing that the user gets back to his/her session on the next HTTP request (when you have more than one server) requires using the "sticky connections" or node-affinity feature of your load balancer.

That is, if you want your application to run in a server cluster or server farm, you need to make sure that you use the "sticky sessions" feature of your load balancer.

 

When you load-balance your ASP.NET application (or any web application), sticky sessions ensure that all subsequent requests from a client are sent to the server that handled that client's first request.

 

The problem with sticky sessions is that they limit your application's scalability, because the load balancer is unable to truly balance the load each time it receives a request from a client.

With sticky sessions, the load balancer is forced to send all requests to the original server where the session state was created, even though that server might be heavily loaded while another, less-loaded server is available to take on the request.

 

Better Solution:

 

The answer to all of this is to have Session State Truly Clustered. This way, it does not matter on which server the session was actually created because the session lives in the entire cluster.

The user request can then be forwarded by the load-balancer to the most appropriate server. This approach is also highly scalable and there is no single point of failure.

Sticky sessions work with the load balancer to improve the efficiency of persistent sessions in a clustered configuration, where the load balancer sends requests to multiple backend servers (Resin servers, in this example). Each session has an owning server and a backup server, and the load balancer sends a session's request to the owning server, or to the backup if the owning server is not available.

The association of a session with a backend server is called "Sticky Sessions".

 

Because the load balancing occurs before any interpretation of the virtual host or web application, this is configured (in Resin) as a <server> configuration variable, with the <session-cookie> tag.

 

Reference: Link

 

Regards,

Arun Manglick