wesnoor
ASP.NET has become a very popular technology for developing web applications because of the rapid development it enables through its rich set of tools. As a result, more and more teams are deploying ASP.NET in high-traffic situations. Web farms of 10-20 servers are now common, and many companies run 100+ servers in a load-balanced ASP.NET web farm.
An ASP.NET application that is lightning fast with 10 concurrent users is no good if it grinds to a halt with 1,000, 10,000, or 100,000 concurrent users. Although the ASP.NET architecture itself scales nicely by letting you deploy onto a web farm, ASP.NET (like any web application) makes frequent and expensive database trips. These trips slow ASP.NET down as the number of concurrent users or requests/sec grows, and slow response times can cost you customers who have no patience for them.
ASP.NET performance slows down because the database cannot handle the large number of requests that a high-traffic ASP.NET application generates. ASP.NET relies on the database for storing and accessing application data, and in a web farm it also typically stores ASP.NET session state in the database.
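To make the session-state point concrete, here is the standard web.config setting that classic ASP.NET uses to keep session state in SQL Server so that all servers in the farm can reach it (the connection-string values are placeholders, not from the original post):

```xml
<configuration>
  <system.web>
    <!-- SQLServer mode moves session state out of worker-process memory
         and into the database, which every farm server shares. -->
    <sessionState mode="SQLServer"
                  sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI;"
                  cookieless="false"
                  timeout="20" />
  </system.web>
</configuration>
```

This setup is exactly what makes the database a shared dependency: every request that touches session state becomes a database trip.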
So, although the ASP.NET application scales nicely in a web farm, the database server becomes the bottleneck for ASP.NET performance.
Now that we know what causes the ASP.NET performance bottleneck, the question is what to do to fix it and keep performance up even under peak loads. The answer is simple: remove the bottleneck by using an in-memory distributed cache to store the data ASP.NET uses.

Unlike a database server, a distributed cache can scale out to tens or even hundreds of cache servers clustered together. Just as a RAID disk divides and replicates data across multiple inexpensive hard drives, a distributed cache scales by dividing the entire cache into partitions and distributing them across multiple inexpensive cache servers. It then adds reliability by replicating each partition to at least one other server in the cluster. This way, even if a cache server goes down, no data is lost.
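The caching approach described above is usually implemented with the cache-aside pattern: check the cache first and go to the database only on a miss. A minimal sketch in C#, using ASP.NET Core's `IDistributedCache` abstraction (the `Product` type, key format, and `LoadFromDatabaseAsync` stub are illustrative, not from the original post):

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;

// Illustrative cache-aside pattern: check the distributed cache first,
// and fall through to the database only on a miss.
public class ProductService
{
    private readonly IDistributedCache _cache;

    public ProductService(IDistributedCache cache) => _cache = cache;

    public async Task<Product?> GetProductAsync(int id)
    {
        string key = $"product:{id}";

        // 1. Try the cache -- no database round trip on a hit.
        string? cached = await _cache.GetStringAsync(key);
        if (cached != null)
            return JsonSerializer.Deserialize<Product>(cached);

        // 2. Cache miss: load from the database (stubbed here).
        Product? product = await LoadFromDatabaseAsync(id);
        if (product == null) return null;

        // 3. Store in the cache with an expiration so stale data ages out.
        await _cache.SetStringAsync(
            key,
            JsonSerializer.Serialize(product),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
            });

        return product;
    }

    // Placeholder for the real data-access call.
    private Task<Product?> LoadFromDatabaseAsync(int id) =>
        Task.FromResult<Product?>(new Product { Id = id, Name = "Sample" });
}

public class Product
{
    public int Id { get; set; }
    public string? Name { get; set; }
}
```

Because `IDistributedCache` is an abstraction, the same code works whether the backing store is a single cache server or a partitioned, replicated cluster like the one described above.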