Jonathan,

On 5/12/2020 8:20 AM, Jonathan Yom-Tov wrote:
> The problem is that my application is running on AWS, which apparently
> doesn't support multicasting, so I can't use Tomcat's DeltaManager. I
> thought of using one of the Store implementations for PersistentManager,
> but that has the issues which I mentioned earlier. My aim is to get to
> the point where I can add or take away servers from the cluster without
> impacting user experience. Ideally all state would be stored in a
> central location (e.g. Redis). But, since this is difficult because of
> the way the application is built, I thought of using one server and
> only persisting the sessions when the server goes down. But I still
> have to solve the issues I mentioned.
>
> On Tue, May 12, 2020 at 6:06 PM Christopher Schultz
> <ch...@christopherschultz.net> wrote:
>> Jonathan,
>>
>> On 5/12/20 05:51, Jonathan Yom-Tov wrote:
>>> I have an application which changes the state of user sessions in
>>> lots of places in the code. Is it possible to do a seamless switch
>>> of Tomcat servers, preserving all sessions?
>>>
>>> I know I can use PersistentManager to persist sessions and load
>>> them. I can think of two strategies:
>>>
>>> 1. Persist sessions periodically. This is more robust, as I might
>>>    not have control of when the server shuts down.
>>> 2. Persist sessions on server shutdown.
>>>
>>> The problem with the first approach is that I might lose the latest
>>> changes when the new server comes up. The problem with the second
>>> is that I'll have to lock access to the session until the old
>>> server is done saving it, which may make response times very slow.
>>>
>>> Is there a good solution to this that I might have overlooked?
>>
>> If you want to solve these problems:
>>
>> 1. Seamless (uninterrupted) restarts
>> 2. Always up-to-date (well, as much as possible)
>> 3. No downtime
>>
>> then you really need a cluster where the sessions are being
>> replicated around the cluster.
>>
>> This will solve some other problems as well:
>>
>> 4. Expected downtime (e.g. OS/Tomcat/application upgrade)
>> 5. Unexpected downtime (network outage, hardware fault)
>> 6. Scaling out (either manually or automatically)
>>
>> You can do it with as little as two Tomcat instances. If you only
>> care about being able to restart your application (and not the whole
>> server, for example), then you can even run them side by side on the
>> same server. You won't get protection against OS upgrades and
>> unexpected downtime in that case, but you can get familiar with the
>> setup without a whole lot of infrastructure.
>>
>> -chris
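For what it's worth, the lack of multicast on AWS doesn't necessarily rule out the cluster Chris describes: Tomcat's Tribes channel can be given a static member list instead of relying on multicast discovery. A rough, untested server.xml sketch of that idea follows; the hosts, ports, and uniqueId are placeholders you'd fill in per node:

```xml
<!-- Sketch only: DeltaManager clustering with static membership
     (no multicast). Hosts, ports, and uniqueId are placeholders. -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto" port="4000"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
      <!-- One Member entry for each *other* node in the cluster -->
      <Member className="org.apache.catalina.tribes.membership.StaticMember"
              host="10.0.0.2" port="4000"
              uniqueId="{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}"/>
    </Interceptor>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
```

The trade-off is that static membership fights against elastic scaling: every node needs to know its peers up front, which is exactly what an auto-scaled group makes hard.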
Could you use the RedissonSessionManager and an AWS-hosted, distributed
Redis server? You could put all of your Tomcat servers in an elastic
group and let AWS manage that.

The real problem with this approach is deployment. How do you deploy
across an elastic group of Tomcat servers when you may not know the IP
addresses of the servers or how many you have? I can think of some
really kludgy ways to do this with S3 and AWS events, but I've not
worked out the details.

Another way to approach this is to run Docker on AWS (along with
Redis), and then deploy a new version by rolling out a new Docker
image. If your session interface changes a lot, that could create
issues. That's one of the advantages of using versioned deployment
(app.war##nnn) with a cluster: old app versions stay around until
their sessions expire, while new sessions get the new version.

Maybe -- just thinking out loud -- you could use an elastic group, AWS
events, Redis (RedissonSessionManager), and numbered WAR files to
simulate a Tomcat cluster.

Another question: is the database-backed session manager provided with
Tomcat slow? You could use that instead of the third-party
RedissonSessionManager.

You should be able to test everything but the deployment locally. Just
run a Docker implementation on your development machine, and then test
either the RedissonSessionManager or the JDBC-backed session store.
Docker can be set up to mimic AWS elastic group behavior (expansion /
contraction of containers), so the only open question will be updates.

Use something like JMeter to generate sessions and hammer your Docker
cluster. By default, Docker routes every request to a new container in
a multi-container group, so you'll know really quickly if distributed
sessions aren't working.

I need to get back to this for $work, but I've been getting yanked
around a bit. Hopefully, I'll be able to start testing all of these
ideas in the next month or so.

. . . just my two cents
/mde/
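To make the Redisson idea concrete: redisson-tomcat plugs in as a per-context Manager. A minimal sketch for context.xml (the config file path is an assumption; the redisson JARs must be in Tomcat's lib directory):

```xml
<!-- Sketch: Redis-backed sessions via redisson-tomcat.
     configPath below is a placeholder for your Redisson config file. -->
<Manager className="org.redisson.tomcat.RedissonSessionManager"
         configPath="${catalina.base}/conf/redisson.conf"
         readMode="REDIS"
         updateMode="DEFAULT"/>
```

With readMode="REDIS", session attributes are always read from Redis, so any container in the elastic group can serve any request without sticky sessions.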
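And for comparison, the Tomcat-provided database-backed option mentioned above is PersistentManager with a JDBCStore. A hedged context.xml sketch; the DataSource name, table, and column names are placeholders you'd match to your own schema:

```xml
<!-- Sketch: Tomcat's built-in JDBC session persistence.
     DataSource, table, and column names are placeholders. -->
<Manager className="org.apache.catalina.session.PersistentManager"
         maxIdleBackup="1">
  <Store className="org.apache.catalina.session.JDBCStore"
         dataSourceName="jdbc/SessionDS"
         sessionTable="tomcat_sessions"
         sessionIdCol="session_id"
         sessionAppCol="app_name"
         sessionDataCol="session_data"
         sessionValidCol="valid_session"
         sessionMaxInactiveCol="max_inactive"
         sessionLastAccessedCol="last_access"/>
</Manager>
```

Note that a low maxIdleBackup narrows, but does not close, the window in which the latest session changes could be lost -- the same caveat raised at the start of this thread about periodic persistence.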