
Added by Daniel Lichtenberger, last edited by Markus Plesser on Nov 28, 2011

Clustering [fleXive]

The [fleXive] framework is built from the ground up to support clustered environments. All shared data and all caches are based on JBoss Cache, a distributed cache with transaction support.

On this page we will describe a basic 2-node setup with JBoss, using Apache 2.2 with mod_proxy for load balancing.

Prerequisites

Install [fleXive]

Install [fleXive] on (at least) 2 computers as described in the installation guide, with the following deviations:

  • install into the all server profile instead of default, since clustering support requires the all profile
  • install the external cache configuration file, as described in Refining your configuration. While this is not strictly necessary, it greatly simplifies setup since you do not have to repackage flexive.ear just to change the cache configuration (see the sketch after this list).
  • install the native Tomcat connectors for your operating system. At least in our cluster setups, these connectors turned out to be more reliable than the pure Java connectors. The DLLs/shared libraries should be put in the bin/native directory of your JBoss installation.
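
To give an idea of what the external cache configuration controls, here is a minimal sketch of a clustered JBoss Cache (1.x) MBean descriptor. The service name and attribute values are assumptions for illustration only; the actual file and settings are described in Refining your configuration.

<!-- Illustrative only: a clustered JBoss Cache service descriptor;
     the MBean name and attribute values below are assumptions, not fleXive defaults -->
<server>
  <mbean code="org.jboss.cache.TreeCache" name="jboss.cache:service=fleXiveCache">
    <!-- All nodes sharing this ClusterName form one replicated cache -->
    <attribute name="ClusterName">fleXive-Cache</attribute>
    <!-- Replicate changes to the other nodes asynchronously -->
    <attribute name="CacheMode">REPL_ASYNC</attribute>
    <!-- Hook into the JBoss transaction manager -->
    <attribute name="TransactionManagerLookupClass">org.jboss.cache.JBossTransactionManagerLookup</attribute>
  </mbean>
</server>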

Install Apache 2.2+

Install the Apache HTTP Server version 2.2 or greater. Be sure that the mod_proxy and mod_proxy_ajp modules are included and enabled.
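
On Debian-based systems, for example, the installation can look like this (package and command names vary across distributions):

# Debian/Ubuntu example; package names differ on other systems
sudo apt-get install apache2
apache2ctl -v    # should report version 2.2 or greater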

Cluster Setup

[fleXive] configuration

Apart from the installation instructions above, you can deploy the exact same version of [flexive.ear] on each cluster node. You might consider farming deployment for easier updates, although it is not required.
Just remember that all cluster members must use the same database host, which consequently needs to be reachable across the network (test this before starting [fleXive], since an unreachable database is a very common issue).
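
For illustration, a shared datasource descriptor might look like the following sketch (assuming MySQL; the file name flexive-ds.xml, JNDI name and credentials are placeholders):

<!-- Sketch of a shared datasource, e.g. server/all/deploy/flexive-ds.xml -->
<datasources>
  <local-tx-datasource>
    <jndi-name>jdbc/flexiveDS</jndi-name>
    <!-- Every cluster node must point at the same database host -->
    <connection-url>jdbc:mysql://192.168.1.10:3306/flexive</connection-url>
    <driver-class>com.mysql.jdbc.Driver</driver-class>
    <user-name>flexive</user-name>
    <password>secret</password>
  </local-tx-datasource>
</datasources>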

Tomcat configuration

We will set up a load-balancing cluster without session replication. This means that HTTP sessions will not be shared between cluster members, reducing overhead. Once a client has established an HTTP session with a cluster member, all future requests must be serviced by this member. This behaviour is called session stickiness.

To enable session stickiness, the Servlet container has to append a unique node ID to the JSESSIONID variable. On every JBoss installation, open ${jboss.home}/server/all/deploy/jboss-web.deployer/server.xml and set the node ID in the jvmRoute attribute of the Engine tag, e.g.

<Engine name="jboss.web" defaultHost="localhost" jvmRoute="node1">

When a request is served by the cluster, this node ID is appended to the session ID (e.g. "JSESSIONID=ECB99E48A4A0129127085A3711465D22.node1").
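
You can verify this directly against a single node, e.g. with curl (assuming the default Tomcat HTTP connector on port 8080):

# The Set-Cookie header should carry the node ID as a suffix
curl -I http://192.168.1.40:8080/flexive/
# ... Set-Cookie: JSESSIONID=ECB99E48A4A0129127085A3711465D22.node1; Path=/flexive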

mod_proxy configuration

Now that we have a cluster of JBoss nodes, we need a load balancer to distribute requests between the cluster members. A basic software solution is Apache's mod_proxy. It can connect to Tomcat using the binary AJP 1.3 protocol, which should be a good deal faster than HTTP. You need to enable the following modules (on Unix, create symlinks from /etc/apache2/mods-available to /etc/apache2/mods-enabled, or use the a2enmod shortcut shown after this list):

  • proxy.load
  • proxy_ajp.load
  • proxy_balancer.load
  • status.load
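
On Debian-based systems, a2enmod creates these symlinks for you:

# Equivalent to creating the symlinks manually
sudo a2enmod proxy
sudo a2enmod proxy_ajp
sudo a2enmod proxy_balancer
sudo a2enmod status
sudo /etc/init.d/apache2 restart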

Following the mod_proxy documentation, we define two nodes:

  • node1 on 192.168.1.40 and
  • node2 on 192.168.1.41.

An example configuration that clusters the /flexive and /products applications (add the code to your apache2.conf):

ProxyRequests Off

<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>

ProxyPreserveHost On
ProxyPass /products balancer://my_cluster/products stickysession=JSESSIONID nofailover=On
ProxyPass /flexive balancer://my_cluster/flexive stickysession=JSESSIONID nofailover=On

<Proxy balancer://my_cluster>
    BalancerMember ajp://192.168.1.40:8009 route=node1 loadfactor=1
    BalancerMember ajp://192.168.1.41:8009 route=node2 loadfactor=1
</Proxy>

<Location /balancer>
    SetHandler balancer-manager

    Order Deny,Allow
    Deny from all
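    # Caution: with Order Deny,Allow the following Allow wins, so the
    # balancer manager is open to everyone; restrict it in production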
    Allow from all
</Location>

ProxyStatus On
<Location /status>
    SetHandler server-status

    Order Deny,Allow
    Deny from all
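    # Caution: as above, Allow from all exposes the status page to everyone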
    Allow from all
</Location>

The most interesting part is the balancer setup:

<Proxy balancer://my_cluster>
    BalancerMember ajp://192.168.1.40:8009 route=node1 loadfactor=1
    BalancerMember ajp://192.168.1.41:8009 route=node2 loadfactor=1
</Proxy>

The node name specified in route must match the identifier set in the jvmRoute attribute of that node's Tomcat instance (see Tomcat configuration above). The loadfactor determines how requests are distributed: in this case, both nodes are assumed to be equal and capable of handling the same number of requests. If you used loadfactor=2 on node1, two requests would be routed to node1 for every request routed to node2.
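
For example, a 2:1 weighting of the two nodes would look like this (a hypothetical variation of the setup above):

<Proxy balancer://my_cluster>
    # node1 receives two requests for every one routed to node2
    BalancerMember ajp://192.168.1.40:8009 route=node1 loadfactor=2
    BalancerMember ajp://192.168.1.41:8009 route=node2 loadfactor=1
</Proxy>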

After restarting Apache, you should be able to access the clustered applications on your Apache host under /products/ and /flexive/. The cluster status pages are available at /status/ and /balancer/.
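
A quick smoke test from a shell on the Apache host (paths as configured above):

# Both requests should be answered by one of the cluster nodes
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/flexive/
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/products/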

Cluster operation

Starting cluster nodes

When starting a JBoss instance, bind it to the external IP address and specify the cluster partition name and multicast parameters that identify the members of your cluster. For example, to bind the JBoss instance on 192.168.1.40 to its external address and set the cluster membership parameters:

run.sh -c all -b 192.168.1.40 -Djboss.partition.name=flexiveCluster 
       -Djboss.partition.udpGroup=228.2.3.8 -Djboss.hapartition.mcast_port=38241
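
The second node is started the same way with its own bind address; the partition name, multicast group and port must be identical on all members:

run.sh -c all -b 192.168.1.41 -Djboss.partition.name=flexiveCluster 
       -Djboss.partition.udpGroup=228.2.3.8 -Djboss.hapartition.mcast_port=38241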

Stopping cluster nodes

In this setup, we did not enable JBoss session replication. Since all JSF applications are bound to HTTP sessions, stopping a cluster node destroys all sessions on that node. However, the node will rejoin the cluster automatically when it is started again.
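
To take a node out of the cluster gracefully, you can use the JBoss shutdown script (1099 is the default JNP naming port):

# Shut down the node running on 192.168.1.40
${jboss.home}/bin/shutdown.sh -S -s jnp://192.168.1.40:1099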

Resources

  • JBoss Server Clustering Guide
  • Apache mod_proxy documentation
