Sunday, October 14, 2012

Introduction to JBoss clustering and load balancing


A single JBoss server can handle several hundred concurrent requests, but if your
application has to scale to thousands of concurrent requests or millions of requests
a day, a single application server probably won't do the trick. JBoss lets you run
your application on multiple application servers simultaneously. Requests to your
application can then be balanced across these servers, and your application can
withstand individual server failures. This deployment architecture gives you maximum
scalability with minimal downtime, and clients never need to know that their requests
may be handled by different servers.
Java EE doesn't specify any standard for how clustering services should work;
every application server implements clustering differently and offers a different
set of clustering capabilities and services. Red Hat has set out to make cluster
setup a simple task. As you'll learn in this chapter, clusters are easy to create
in JBoss and require minimal configuration. Adding nodes requires no administrative
intervention because nodes detect each other automatically over the network. JBoss
also provides a sophisticated distributed cache that lets stateful components
replicate their state across the nodes in a cluster, so you can build fault-tolerant
applications with very little code.
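
To give a feel for how automatic node detection works, here is a small sketch using
the JGroups API, the group-communication library that JBoss clustering builds on.
This is not JBoss configuration itself, just an illustration of the underlying idea:
nodes that connect to the same group name on the same network discover each other
with no central registry. The group name "demo-partition" and the default UDP
multicast stack are assumptions made for this example.

    import org.jgroups.JChannel;
    import org.jgroups.ReceiverAdapter;
    import org.jgroups.View;

    public class ClusterDiscoveryDemo {
        public static void main(String[] args) throws Exception {
            // Create a channel using the default UDP multicast protocol stack.
            JChannel channel = new JChannel();

            // Print the current membership whenever a node joins or leaves.
            channel.setReceiver(new ReceiverAdapter() {
                @Override
                public void viewAccepted(View view) {
                    System.out.println("Cluster members: " + view.getMembers());
                }
            });

            // Joining a named group is all it takes: every process that connects
            // to "demo-partition" on the same network becomes a cluster member.
            channel.connect("demo-partition");

            Thread.sleep(60 * 1000);  // keep this node alive for a minute
            channel.close();
        }
    }

Run the same program on two machines (or twice on one machine) and each instance
prints a view containing both members; JBoss nodes find each other in essentially
the same way.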
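
As one concrete illustration of replicated stateful components, a stateful session
bean can be marked for clustering with JBoss's proprietary @Clustered annotation
(from org.jboss.ejb3.annotation); the container then replicates the bean's
conversational state to the other nodes, so a request that fails over still finds
its data. The ShoppingCartBean below and its item list are invented for this sketch.

    import java.util.ArrayList;
    import java.util.List;

    import javax.ejb.Stateful;

    import org.jboss.ejb3.annotation.Clustered;

    // Hypothetical shopping-cart bean: its conversational state (the item list)
    // is replicated across the cluster, so losing one node does not lose the cart.
    @Stateful
    @Clustered
    public class ShoppingCartBean {

        private final List<String> items = new ArrayList<String>();

        public void addItem(String item) {
            items.add(item);
        }

        public List<String> getItems() {
            return new ArrayList<String>(items);
        }
    }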

