On Sep 17, 2013, at 11:10 AM, Nicholas Violi <nvi...@globalgiving.org> wrote:
> Hi Daniel,

Please don't top post. Either reply at the bottom or reply inline. That is
the convention we try to follow on this list.

> Thanks for the response. It seems that the ports (you're correct, 4000 and
> 4001) aren't open; telnet reports Connection refused and nmap lists the
> ports as closed.

Can you run netstat and see if anything is listening on those ports?
"netstat -tln" should work on Linux, or "netstat -an | grep LISTEN" on Mac.
Sorry, I'm not sure about the command on Windows.

You should see something which lists the ports. Example from my Mac:

tcp4       0      0  192.168.0.6.4001       *.*    LISTEN
tcp4       0      0  192.168.0.6.4000       *.*    LISTEN
tcp46      0      0  *.8080                 *.*    LISTEN
tcp46      0      0  *.8081                 *.*    LISTEN
...

(There's also a quick Java connection-check sketch further down, right after
the quoted setup description.)

> Shouldn't tomcat be opening them?

Yes, it should, and the logs indicate that it appears to be doing so. Output
from netstat should confirm.

Dan

> I'm not running a firewall or anything.
>
> I'll come back to your questions about my apache config if we get stuck,
> but I suspect that's not the issue.
>
> Thanks,
> Nick
>
>
> On Tue, Sep 17, 2013 at 10:52 AM, Daniel Mikusa <dmik...@gopivotal.com> wrote:
>
>> On Sep 17, 2013, at 9:59 AM, Nicholas Violi <nvi...@globalgiving.org>
>> wrote:
>>
>>> Hello,
>>> I'm setting up clustering/replication on Tomcat 7 on my local machine, to
>>> evaluate it for use with my environment/codebase, and sessions don't appear
>>> to be replicating. Hopefully I've provided enough information below, but
>>> please let me know if you have any more questions.
>>>
>>> ___Setup___
>>>
>>> I have two identical tomcat servers in sibling directories running on
>>> different ports.
>>
>> Good. Out of curiosity, are they listening on HTTP or AJP?
>>
>>> I have httpd listening on two other ports and connecting
>>> to the two tomcat instances as VirtualHosts.
>>
>> This sounds a little weird, can you explain further?
>>
>> - Why are you listening on two ports? Is one HTTP and one HTTPS?
>>
>> - Where and why are you using VirtualHosts? That's unnecessary for a
>>   simple clustering setup and is probably just complicating things.
>>
>> - How are you connecting to your Tomcat instances? mod_proxy or mod_jk?
>>   Can you include the config?
>>
>>> I can access and interact with
>>> both environments on the configured ports; everything is working as
>>> expected.
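Here's a quick way to double-check from the Java side whether anything
accepts TCP connections on the receiver ports. This is only a rough,
untested sketch; the host and ports are the ones from the config and logs
quoted in this thread, and the class name is made up:

  // PortCheck.java - try to open a TCP connection to each Tribes receiver port.
  import java.net.InetSocketAddress;
  import java.net.Socket;

  public class PortCheck {
      public static void main(String[] args) {
          String host = "10.0.0.100";    // receiver address from the logs
          int[] ports = { 4000, 4001 };  // receiver ports of the two nodes
          for (int port : ports) {
              try (Socket s = new Socket()) {
                  s.connect(new InetSocketAddress(host, port), 2000); // 2s timeout
                  System.out.println(host + ":" + port + " accepted the connection");
              } catch (Exception e) {
                  System.out.println(host + ":" + port + " is not reachable: " + e);
              }
          }
      }
  }

If netstat shows the ports in LISTEN state but this still can't connect,
then something in between (a firewall, or the wrong bind address) is the
likely culprit.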
>>>
>>> The tomcat servers have clustering enabled like this, in server.xml:
>>>
>>> <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
>>>          channelSendOptions="8">
>>>
>>>   <Manager className="org.apache.catalina.ha.session.DeltaManager"
>>>            expireSessionsOnShutdown="false"
>>>            notifyListenersOnReplication="true"/>
>>>
>>>   <Channel className="org.apache.catalina.tribes.group.GroupChannel">
>>>     <Membership className="org.apache.catalina.tribes.membership.McastService"
>>>                 address="228.0.0.4"
>>>                 port="45564"
>>>                 frequency="500"
>>>                 dropTime="3000"/>
>>>     <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
>>>               address="auto"
>>>               port="4001"
>>>               autoBind="100"
>>>               selectorTimeout="5000"
>>>               maxThreads="6"/>
>>>
>>>     <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
>>>       <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
>>>     </Sender>
>>>     <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
>>>     <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
>>>     <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
>>>   </Channel>
>>>
>>>   <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
>>>          filter=""/>
>>>   <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
>>>
>>>   <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
>>> </Cluster>
>>
>> Are you trying to set up sticky sessions? If so, what are you setting for
>> "jvmRoute"?
>>
>>>
>>> and I added the distributable tag to the very beginning of web.xml:
>>>
>>> <web-app xmlns="http://java.sun.com/xml/ns/javaee"
>>>          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>          xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
>>>                              http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
>>>          version="3.0">
>>>   <distributable />
>>>
>>>   (lots more...)
>>>
>>> </web-app>
>>>
>>> ___What's working___
>>>
>>> When the servers start, they log
>>>
>>> Sep 16, 2013 1:44:23 PM org.apache.catalina.ha.tcp.SimpleTcpCluster startInternal
>>> INFO: Cluster is about to start
>>> Sep 16, 2013 1:44:23 PM org.apache.catalina.tribes.transport.ReceiverBase getBind
>>> FINE: Starting replication listener on address:10.0.0.100
>>> Sep 16, 2013 1:44:23 PM org.apache.catalina.tribes.transport.ReceiverBase bind
>>> INFO: Receiver Server Socket bound to:/10.0.0.100:4001
>>> Sep 16, 2013 1:44:23 PM org.apache.catalina.tribes.membership.McastServiceImpl setupSocket
>>> INFO: Setting cluster mcast soTimeout to 500
>>> Sep 16, 2013 1:44:23 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>>> INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:4
>>> Sep 16, 2013 1:44:24 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>>> INFO: Done sleeping, membership established, start level:4
>>> Sep 16, 2013 1:44:24 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>>> INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:8
>>> Sep 16, 2013 1:44:25 PM org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
>>> INFO: Done sleeping, membership established, start level:8
>>>
>>> When the second server starts up, the first one logs
>>>
>>> Sep 16, 2013 2:17:30 PM org.apache.catalina.tribes.group.interceptors.TcpFailureDetector messageReceived
>>> FINE: Received a failure detector packet:ClusterData[src=org.apache.catalina.tribes.membership.MemberImpl[tcp://{10, 0, 0, 100}:4000,{10, 0, 0, 100},4000, alive=112208, securePort=-1, UDP Port=-1, id={118 6 107 -67 88 98 72 95 -73 41 4 -108 58 -5 -127 -41 }, payload={}, command={}, domain={}, ]; id={25 110 120 -2 -25 6 78 -97 -84 -34 2 -11 49 -62 -8 -56 }; sent=2013-09-16 14:17:30.139]
>>> Sep 16, 2013 2:17:30 PM org.apache.catalina.tribes.transport.nio.NioReplicationTask remoteEof
>>> FINE: Channel closed on the remote end, disconnecting
>>> Sep 16, 2013 2:17:30 PM org.apache.catalina.tribes.membership.McastServiceImpl memberDataReceived
>>> FINE: Mcast add member org.apache.catalina.tribes.membership.MemberImpl[tcp://{10, 0, 0, 100}:4001,{10, 0, 0, 100},4001, alive=1010, securePort=-1, UDP Port=-1, id={82 -45 -109 -56 -110 -5 78 -10 -103 61 -40 -59 -36 -79 104 120 }, payload={}, command={}, domain={}, ]
>>> Sep 16, 2013 2:17:30 PM org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
>>> INFO: Replication member added:org.apache.catalina.tribes.membership.MemberImpl[tcp://{10, 0, 0, 100}:4001,{10, 0, 0, 100},4001, alive=1011, securePort=-1, UDP Port=-1, id={82 -45 -109 -56 -110 -5 78 -10 -103 61 -40 -59 -36 -79 104 120 }, payload={}, command={}, domain={}, ]
>>>
>>> and when one is shut down, the other one logs
>>>
>>> Sep 16, 2013 2:28:05 PM org.apache.catalina.tribes.membership.McastServiceImpl memberDataReceived
>>> FINE: Member has shutdown:org.apache.catalina.tribes.membership.MemberImpl[tcp://{10, 0, 0, 100}:4001,{10, 0, 0, 100},4001, alive=422279, securePort=-1, UDP Port=-1, id={54 43 17 -9 13 -11 72 -63 -107 -78 -8 65 -21 -77 115 88 }, payload={}, command={66 65 66 89 45 65 76 69 88 ...(9)}, domain={}, ]
>>> Sep 16, 2013 2:28:05 PM org.apache.catalina.tribes.group.interceptors.TcpFailureDetector memberDisappeared
>>> INFO: Verification
>>> complete. Member disappeared[org.apache.catalina.tribes.membership.MemberImpl[tcp://{10, 0, 0, 100}:4001,{10, 0, 0, 100},4001, alive=422279, securePort=-1, UDP Port=-1, id={54 43 17 -9 13 -11 72 -63 -107 -78 -8 65 -21 -77 115 88 }, payload={}, command={66 65 66 89 45 65 76 69 88 ...(9)}, domain={}, ]]
>>> Sep 16, 2013 2:28:05 PM org.apache.catalina.ha.tcp.SimpleTcpCluster memberDisappeared
>>> INFO: Received member disappeared:org.apache.catalina.tribes.membership.MemberImpl[tcp://{10, 0, 0, 100}:4001,{10, 0, 0, 100},4001, alive=422279, securePort=-1, UDP Port=-1, id={54 43 17 -9 13 -11 72 -63 -107 -78 -8 65 -21 -77 115 88 }, payload={}, command={66 65 66 89 45 65 76 69 88 ...(9)}, domain={}, ]
>>>
>>> so I know they're aware of each other.
>>
>> Good. This would seem to indicate that your multicast setup is working
>> properly. That's half the battle.
>>
>> The second half is making sure that session data can be passed back and
>> forth via TCP. From the output above, it looks like the servers are
>> listening on ports 4000 and 4001. Are these ports accessible? Can you
>> connect to them?
>>
>>>
>>> Finally, when I use the Cluster/Operations MBean in jconsole to try to set
>>> property "foo" to "bar", jconsole reports "method successfully invoked",
>>> and the server logs
>>>
>>> Sep 16, 2013 2:30:18 PM org.apache.catalina.ha.tcp.SimpleTcpCluster setProperty
>>> WARNING: Dynamic setProperty(foo,value) has been disabled, please use
>>> explicit properties for the element you are trying to identify
>>>
>>> I'm not too worried about that error; I mostly included it to demonstrate
>>> that setProperty creates a log statement.
>>>
>>> ___What's not working___
>>>
>>> As far as I can tell, no session information is being replicated in my app.
>>>
>>> The tomcat manager only lists sessions started on the server it's
>>> monitoring, and not the other one in the cluster.
>>>
>>> I'm under the impression that whenever the app calls
>>> HttpSession.setAttribute, that attribute should be replicated to the other
>>> cluster nodes, and I would expect that some record of that would be logged.
>>
>> I don't think you'll get logging by default. You'd need to increase the
>> logging levels to see something. (See the logging sketch at the bottom of
>> this mail.)
>>
>>> My app includes this line:
>>>
>>>   public static void saveBillingInfo(IPageContext pageContext, BillingInfo billingInfo)
>>>   {
>>>       pageContext.getSession().setAttribute("billingInfo", billingInfo);
>>>       //etc...
>>>   }
>>>
>>> where BillingInfo is a Serializable class containing only one field, a
>>> HashMap of information about the billing info.
>>
>> Try a simpler app. My favorite app for testing is a simple session-backed
>> counter. You can implement it in a single JSP and there's no doubt when it
>> is or isn't working. (See the counter sketch at the bottom of this mail.)
>>
>> Dan
>>
>>>
>>> No log statements are written when this or any other line processes, and I
>>> don't see any evidence that session information is actually being shared.
>>>
>>> Any suggestions or further questions are welcome. Thanks in advance!
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
>> For additional commands, e-mail: users-h...@tomcat.apache.org
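About turning up the logging levels mentioned above: something along these
lines in conf/logging.properties should make the cluster/tribes logging
visible (just a sketch; the handlers you use may also need their level set
to FINE or lower):

  org.apache.catalina.tribes.level = FINE
  org.apache.catalina.ha.level = FINE

With that in place you should start seeing messages in the logs when session
data is actually sent between the nodes.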
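And a minimal version of the session-backed counter mentioned above might
look something like this (rough sketch; drop it into the clustered webapp
as, say, counter.jsp, the file name is arbitrary):

  <%-- Dead-simple session-backed counter for testing replication. --%>
  <%
      Integer count = (Integer) session.getAttribute("count");
      if (count == null) {
          count = Integer.valueOf(0);
      }
      count = Integer.valueOf(count.intValue() + 1);
      // Integer is Serializable, so this attribute is eligible for replication.
      session.setAttribute("count", count);
  %>
  <html>
    <body>
      Session id: <%= session.getId() %><br/>
      Count: <%= count %>
    </body>
  </html>

Hit one node a few times, then send the same JSESSIONID to the other node;
if replication is working, the count keeps incrementing instead of starting
over.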