Are you validating the XML just to make sure there isn't a syntax error in it? I do get messages onto disk with a configuration like this (and a producer sending 1 MB messages):
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry queue=">" memoryLimit="50mb"/>
      <policyEntry topic=">" memoryLimit="50mb">
      ...

<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="20 mb"/>
    </memoryUsage>
    ...

Thanks,
    Aaron

On Tue, Jul 15, 2008 at 10:50 AM, aliu <[EMAIL PROTECTED]> wrote:
>
> Hi,
> Thanks for the quick reply. If I don't want to keep everything in
> memory, and I want to set how much should be stored in memory so that
> anything beyond that gets spilled out to disk, what configuration
> settings can I use?
>
> I actually tried setting the destination limits high and the system
> limit low, with the following config:
> <policyEntry queue=">" memoryLimit="1 gb"/>
> ...
> <memoryUsage>
>   <memoryUsage limit="30 mb"/>
> </memoryUsage>
>
> When I try to send 50 MB worth of data, my producer blocks after
> sending about 30 MB. I also tried with the destination limits low and
> the system limit high; the producer again blocked after sending the
> lower of the two limits.
>
> Can anyone give me some advice on what to set to control how much is
> allowed in memory?
>
> Thanks in advance.
> Audrey
>
> ammulder wrote:
>>
>> I am by no means an authority on this, but I'm not sure the store
>> usage or temp usage settings are actually used for anything.
>>
>> I believe you want your memory usage setting to be high enough to
>> hold all the traffic you expect to be in memory at once. I think
>> this is, e.g., non-persistent messages that have not yet been
>> delivered (regular in-flight messages, queues that are backed up,
>> topics with durable subscribers who are away, etc.). There are
>> separate settings that can be applied to the memory limit for each
>> destination (or wildcard groups of queues/topics, etc.).
>>
>> If you set the destination limits high and the system limit low, you
>> can cause ActiveMQ to start using disk space (as "swap", I guess),
>> but I hear that's prone to deadlocks and the like -- plus it doesn't
>> seem bound by the store/temp usage settings.
>>
>> If you are up against memory limits (due to slow/disconnected
>> consumers or whatever), there are various additional considerations
>> and configuration options you may want to use.
>>
>> Thanks,
>>     Aaron
>>
>> On Mon, Jul 14, 2008 at 6:47 PM, aliu <[EMAIL PROTECTED]> wrote:
>>>
>>> Hi, I am a new user of ActiveMQ and I was trying to figure out how
>>> to tweak the different memory settings.
>>> I read on the forum that storeUsage controls the maximum size of
>>> the AMQMessageStore, and memoryUsage is the maximum amount of
>>> memory the broker will use.
>>>
>>> However, I ran a test sending 50,000 messages of 1 KB each with the
>>> following config:
>>> <systemUsage>
>>>   <systemUsage>
>>>     <memoryUsage>
>>>       <memoryUsage limit="30 mb"/>
>>>     </memoryUsage>
>>>     <storeUsage>
>>>       <storeUsage limit="1 gb" name="foo"/>
>>>     </storeUsage>
>>>     <tempUsage>
>>>       <tempUsage limit="1 mb"/>
>>>     </tempUsage>
>>>   </systemUsage>
>>> </systemUsage>
>>>
>>> My producer blocks after sending about 30 MB worth of data, even
>>> though I specified a storeUsage big enough to hold all my messages.
>>> I thought it would keep 30 MB worth of data in memory and spill the
>>> rest over to disk once the limit was exceeded.
>>>
>>> Can someone please explain in detail how these settings work
>>> together? I am at a loss here.
>>>
>>> I am using 5.1, persistent messaging with transactions.
>>>
>>> Thanks in advance.
>>> Audrey
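
For anyone hitting the same blocking behavior: what stalls the producer
is ActiveMQ's producer flow control, which kicks in once a destination
or system memory limit is reached. Below is a minimal sketch of a
broker configuration that disables flow control per destination so that
excess messages are paged to disk instead of blocking the sender. The
brokerName and all limits are illustrative values, not recommendations,
and this assumes the ActiveMQ 5.x XML schema (check that your version
supports producerFlowControl on policyEntry):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="example">

  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- With producerFlowControl="false", producers are not stalled
             when memoryLimit fills; excess messages are paged out to
             disk (the persistent store, or the temp store for
             non-persistent messages) instead. -->
        <policyEntry queue=">" producerFlowControl="false" memoryLimit="30 mb"/>
        <policyEntry topic=">" producerFlowControl="false" memoryLimit="30 mb"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>

  <systemUsage>
    <systemUsage>
      <memoryUsage>
        <!-- total memory the broker may use for in-flight messages -->
        <memoryUsage limit="30 mb"/>
      </memoryUsage>
      <storeUsage>
        <!-- cap on the persistent message store on disk -->
        <storeUsage limit="1 gb"/>
      </storeUsage>
      <tempUsage>
        <!-- cap on disk spooling of non-persistent messages -->
        <tempUsage limit="500 mb"/>
      </tempUsage>
    </systemUsage>
  </systemUsage>

</broker>

Note that with flow control off, a fast producer fills the store or
temp store instead; once those limits are reached the producer will
block again, so size storeUsage and tempUsage for the backlog you
expect to absorb.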