Re: Need help getting DBI to work under mod_perl
Give it a try and add the hostname you are connecting to into the DSN.

Tom

Boysenberry Payne wrote:
> I'm able to run DBI->connect() fine when I run my .pl and .pm scripts as mod_cgi.
> Once I set up my httpd.conf and startup.pl files it no longer works. I get the following:
>
>     Can't connect to MySQL server on 'localhost' (49)
>
> I can't find what (49) means anywhere...

Tom
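For illustration, a minimal sketch of what putting the host into the DSN looks like with DBD::mysql; the database name, host, port, and credentials here are placeholders, not taken from the thread:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical values; the point is only the host=/port= parts of the DSN.
    my ($user, $pass) = ('someuser', 'somepass');
    my $dsn = 'DBI:mysql:database=test;host=127.0.0.1;port=3306';
    my $dbh = DBI->connect($dsn, $user, $pass)
        or die "connect failed: $DBI::errstr";
    print "connected\n";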
Re: Need help getting DBI to work under mod_perl
On Feb 6, 2005, at 1:09 AM, Boysenberry Payne wrote:
> I'm running Mac OS X (Darwin), and everything shows up correctly in %ENV.
>
>     my $dsn = "DBI:mysql:database=test";
>     my $test_db = eval {
>         DBI->connect( $dsn, "...", "...",
>             { RaiseError => 1, PrintError => 1, AutoCommit => 1 } )
>     };

First off, I'd double check that you can connect to the db "test" with the values for user and pass that you substituted '...' with above.

Forever, I've used the same simple chunk of code, which has worked fine under OS X + mod_perl. It gives me a db object that holds all of the db handles I need (so I can create separate handles for a write master or read slaves), and I just access everything through there per project. It generally looks like this:

===
    package my::DB;

    use DBI;

    my $DB     = 'dbi:mysql:test';
    my $DBuser = 'test';
    my $DBpass = 'test';

    sub new {
        my $proto = shift;
        my $class = ref($proto) || $proto;
        my $this  = bless( {}, $class );
        $this->dbconnect;
        return $this;
    }

    sub dbconnect {
        my $this = $_[0];
        $this->{'DBH'} = DBI->connect( $DB, $DBuser, $DBpass );
        if ( !defined $this->{'DBH'} ) {
            print STDERR "Cannot connect to sql server!";
        }
        print STDERR "CONNECTING VIA $DB DB\n";
    }

    1;
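As a rough sketch of that first suggestion, a one-liner can verify the credentials work outside Apache entirely; the user/pass values are placeholders:

    perl -MDBI -e 'DBI->connect("DBI:mysql:database=test", "user", "pass", { RaiseError => 1 }) && print "connected ok\n"'

If this succeeds from a shell but the same connect fails under mod_perl, the problem is in the server environment rather than the credentials.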
Re: Logging user's movements
ben syverson wrote:
> That's not how it works. The entire cache IS invalidated when a new node is added.

What I'm saying is that you only invalidate the entire cache right now because you have no way of telling which nodes are affected by the change. If you had a full-text index, you could efficiently determine which nodes are affected by a change and only invalidate them.

> But when you request one of the nodes, it checks to see what the new nodes are. It then searches the node text for those new node names. If there are no matches, it revalidates the cache file (without regenerating it), and serves it. Otherwise, it regenerates the node.

Yes, I understood all of that. That's what I meant by "regenerates." I'm suggesting an approach that lets you skip revalidating, since the cache would only be invalidated on documents that actually contained matches.

> But if you have 1,000,000 documents (or even 10,000), do you really want to search through every single document every time a node is added?

Have you ever used an inverted word index? This is what full-text search is usually based on. Searching a million documents efficiently should be no big deal. You also only have to do this as part of the job of creating a new node. You don't need to do it when serving files.

> Furthermore, do you really want every document loaded into the MySQL database?

I suggested MySQL as an easy starting point, since it allows incremental updates to the text index. There are many things you could use, and some will have more compact storage than others.

> My thinking is that if you have many documents, odds are only a small subset are being actively viewed, so it doesn't make sense to keep those unpopular documents constantly up-to-date...

You can use this approach for invalidation and still wait until the pages are requested to regenerate them. If the system is running fast enough and not having scalability problems, there's no reason for you to get into making changes like what I'm describing. I thought you were concerned about the time wasted by revalidating unchanged documents, and this approach would eliminate that.

- Perrin
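As a rough sketch of the invalidation step being suggested, assuming a MySQL FULLTEXT index on a hypothetical node_text(node_id, body) table and one cache file per node (table, columns, and file layout are all made up for illustration):

    use strict;
    use warnings;
    use DBI;

    # Called once when a node named $new_node is created; only documents
    # that actually mention the new name get their cache files removed.
    sub invalidate_for_new_node {
        my ($dbh, $new_node, $cache_dir) = @_;

        # The full-text index does the heavy lifting; this does not scan
        # every document.
        my $sth = $dbh->prepare(
            'SELECT node_id FROM node_text WHERE MATCH(body) AGAINST(?)'
        );
        $sth->execute($new_node);

        while ( my ($node_id) = $sth->fetchrow_array ) {
            unlink "$cache_dir/$node_id.html";   # regenerate on next request
        }
    }

Note that a natural-language MATCH only finds whole-word occurrences, which is the limitation ben raises later in this thread.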
Restricting maximum parallel connections
Hi Everyone!

Currently, I am implementing a mechanism to restrict the number of parallel connections to the server from a single client/user. The mechanism is something like:

- Use Cache::FastMmap to share the data between multiple processes.
- A PerlAccessHandler gets the key for the cache to look up the current count of live connections. The key could be some URL parameter, a cookie, or the IP. It increments the count by 1 and stores the result back into the cache. The connection is denied if the count exceeds the maximum limit.
- A PerlCleanupHandler decrements the count.

I am interested in knowing if I am missing some obvious point here. This mechanism is working nicely. Are there any better alternatives available without the overhead of caching/locking etc.?

Thanks,
Pratik
--
http://pratik.syslock.org
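For illustration, a minimal sketch of the mechanism described above, assuming mod_perl 2 and keying on the client IP; the package name, cache file path, and limit are invented:

    package My::Throttle;

    use strict;
    use warnings;
    use Apache2::RequestRec ();
    use Apache2::Connection ();
    use Apache2::Const -compile => qw(OK FORBIDDEN);
    use Cache::FastMmap;

    my $MAX   = 5;                                # max parallel requests per client
    my $cache = Cache::FastMmap->new(
        share_file => '/tmp/parallel-conn-count', # hypothetical path
        init_file  => 0,
    );

    # PerlAccessHandler: atomically bump the per-IP counter, deny if over limit.
    sub access {
        my $r     = shift;
        my $ip    = $r->connection->remote_ip;
        my $count = $cache->get_and_set( $ip, sub { ( $_[1] || 0 ) + 1 } );
        return $count > $MAX ? Apache2::Const::FORBIDDEN : Apache2::Const::OK;
    }

    # PerlCleanupHandler: release the slot. Cleanup runs even for denied
    # requests, which keeps the count balanced with the unconditional
    # increment above.
    sub cleanup {
        my $r  = shift;
        my $ip = $r->connection->remote_ip;
        $cache->get_and_set( $ip, sub { ( $_[1] || 1 ) - 1 } );
        return Apache2::Const::OK;
    }

    1;

    # httpd.conf:
    #   PerlAccessHandler   My::Throttle::access
    #   PerlCleanupHandler  My::Throttle::cleanup

get_and_set() locks the page across the read and write, so the increment and decrement are atomic across Apache children without any explicit flocking.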
Re: Need help getting DBI to work under mod_perl
It's trying to connect to port 49. By default MySQL listens on 3306. Somewhere it was changed.

Jay Scherrer

On Sunday 06 February 2005 06:24 am, Tom Schindl wrote:
> Give it a try and add the hostname you are connecting to into the DSN.
>
> Tom
>
> Boysenberry Payne wrote:
> > I'm able to run DBI->connect() fine when I run my .pl and .pm scripts as mod_cgi.
> > Once I set up my httpd.conf and startup.pl files it no longer works. I get the following:
> >
> > Can't connect to MySQL server on 'localhost' (49)
> >
> > I can't find what (49) means anywhere...
> Tom
Re: Restricting maximum parallel connections
Pratik wrote:
> I am interested in knowing if I am missing some obvious point here. This mechanism is working nicely. Are there any better alternatives available without the overhead of caching/locking etc.?

What you're doing sounds fine, but you might be interested in this approach that Randal demonstrates:

http://www.stonehenge.com/merlyn/LinuxMag/col17.html

He is using it to throttle by CPU, but it's easy to make it use number of connections within a time window instead.

- Perrin
Re: mod_perl.c:61: `my_perl' undeclared under Cygwin
> Nick *** wrote:
> > Stas Bekman wrote:
> > > Nick *** wrote:
> > > > [...]
> > > [...]
> > > It's fixed in r149218, was a simple typo in lib/ModPerl/BuildMM.pm
> >
> > What is 'r149218'? I used the latest svn snapshot from today and it wasn't working.
>
> r149218 is subversion revision 149218 of the asf repository.
>
> You can see it here:
>
> http://svn.apache.org/viewcvs?view=rev&rev=149218

Thanks Philippe, it works fine now.

I have one request. I don't know who I have to ask for this. It's about the 'dllexport' issue with Cygwin. I'd like to supply a patch, but for this patch to work I need the compiler flag '-DCYGWIN' to be set for all .c files in the WrapXS dir when compiling (or src/modules/perl when building with MP_STATIC_EXTS=1).

When I build as a DSO, -DCYGWIN is set, but when I build a static MP2 with MP_STATIC_EXTS=1 the flag isn't there (I haven't tested static mp2 without MP_STATIC_EXTS=1). Can somebody "fix" this, please? I need -DCYGWIN to be set everywhere, not only when building a DSO. Sorry for not doing this myself, but someone more familiar with mp2 can do it much faster and better.

> Philippe M. Chiasson m/gozer\@(apache|cpan|ectoplasm)\.org/ GPG KeyID : 88C3A5A5
> http://gozer.ectoplasm.org/ F9BF E0C2 480E 7680 1AE5 3631 CB32 A107 88C3A5A5
Re: Need help getting DBI to work under mod_perl
It's localhost, and I did. I'm developing in a local environment.

On Feb 6, 2005, at 8:24 AM, Tom Schindl wrote:
> Give it a try and add the hostname you are connecting to into the DSN.
>
> Tom
>
> Boysenberry Payne wrote:
> > I'm able to run DBI->connect() fine when I run my .pl and .pm scripts as mod_cgi.
> > Once I set up my httpd.conf and startup.pl files it no longer works. I get the following:
> >
> > Can't connect to MySQL server on 'localhost' (49)
> >
> > I can't find what (49) means anywhere...
> >
> > Tom
Re: Need help getting DBI to work under mod_perl
On 6-Feb-05, at 12:46 PM, Jay Scherrer wrote:
> Boysenberry Payne wrote:
> > I'm able to run DBI->connect() fine when I run my .pl and .pm scripts as mod_cgi.
> > Once I set up my httpd.conf and startup.pl files it no longer works. I get the following:
> >
> > Can't connect to MySQL server on 'localhost' (49)
> >
> > I can't find what (49) means anywhere...

It's a system error number. On FreeBSD, at least, it means this:

    49 EADDRNOTAVAIL Cannot assign requested address.
       Normally results from an attempt to create a socket with an
       address not on this machine.

but YMMV
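A quick way to decode a bare system error number like (49) on whatever platform you're on is to let perl stringify errno itself:

    perl -e '$! = 49; print "$!\n"'
    # on Mac OS X and FreeBSD this prints: Can't assign requested address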
Re: Logging user's movements
On Feb 6, 2005, at 11:04 AM, Perrin Harkins wrote:
> Have you ever used an inverted word index? This is what full-text search is usually based on. Searching a million documents efficiently should be no big deal. You also only have to do this as part of the job of creating a new node. You don't need to do it when serving files.

Yes, an unrelated part of the app relies on an inverted word index. This is definitely how I would approach the "Wiki" aspect of the app, if I were only matching whole words. However, this implementation needs to be able to match the "perl" in "mod_perl," or the "net" in "cybernetic."

- ben
Re: mod_perl.c:61: `my_perl' undeclared under Cygwin
Oops, there are more issues with the MP_STATIC_EXTS=1 option. The initial bug is fixed and the httpd executable builds fine, but after that I get this:

    make[3]: Entering directory `/usr/src/modperl/WrapXS/APR/Base64'
    cp Base64.pm ../../../blib/lib/APR/Base64.pm
    make[3]: *** No rule to make target `../../../blib/arch/auto/APR/Base64/Base64.dll', needed by `dynamic'.  Stop.
    make[3]: Leaving directory `/usr/src/modperl/WrapXS/APR/Base64'
    make[2]: *** [subdirs] Error 2
    make[2]: Leaving directory `/usr/src/modperl/WrapXS/APR'
    make[1]: *** [subdirs] Error 2
    make[1]: Leaving directory `/usr/src/modperl/WrapXS'
    make: *** [subdirs] Error 2

It shouldn't need to do anything with Base64.dll, since that file shouldn't exist in a static build.
Re: Restricting maximum parallel connections
[ Please keep conversation on the list. ]

Pratik wrote:
> > He is using it to throttle by CPU, but it's easy to make it use number of connections within a time window instead.
>
> But I am more interested in restricting parallel connections rather than request/rate.

You can store simple +1/-1 values if you want to. The interesting part is that it packs the data into a small fixed-size chunk which makes appends atomic without flocking. However, Cache::FastMmap is very fast, so I'm not sure it would be worthwhile for you to change things.

- Perrin
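As a rough sketch of the kind of append being described here - a fixed-size packed record written with O_APPEND, so each record lands intact without any flock - with a record layout invented purely for illustration:

    use strict;
    use warnings;
    use Fcntl  qw(O_WRONLY O_APPEND O_CREAT);
    use Socket qw(inet_aton);

    my $RECORD = 'a4 N l';   # packed IPv4 address, timestamp, +1/-1 delta (12 bytes)

    sub log_event {
        my ($file, $ip, $delta) = @_;
        sysopen( my $fh, $file, O_WRONLY | O_APPEND | O_CREAT, 0644 )
            or die "open $file: $!";
        # One syswrite of a fixed-size record; O_APPEND places the write at
        # the current end of file, so concurrent children don't interleave
        # partial records and no flock is needed.
        syswrite( $fh, pack( $RECORD, inet_aton($ip), time(), $delta ) )
            or die "write $file: $!";
        close $fh;
    }

Counting live connections for an IP then means reading the file and summing the deltas for that address.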
Re: Restricting maximum parallel connections
On Sun, 06 Feb 2005 16:40:57 -0500, Perrin Harkins <[EMAIL PROTECTED]> wrote:
> [ Please keep conversation on the list. ]

Oops... sorry for the goof-up. Can't help it :D

> You can store simple +1/-1 values if you want to. The interesting part is that it packs the data into a small fixed-size chunk which makes appends atomic without flocking. However, Cache::FastMmap is very fast, so I'm not sure it would be worthwhile for you to change things.

I'll look into it.

Thanks,
Pratik
--
http://pratik.syslock.org
Re: Need help getting DBI to work under mod_perl
I think I figured out what's wrong, but I don't know how to correct it. After reading a web-blog that said to rename any conflicting directories created by Fink, namely /sw/lib/mysql or /sw/lib/perl5, I tried it. Nothing worked afterwards. So, I'm guessing I have conflicting modules being used. Try as I might to reinstall mod_perl, I can't get it right. The perl Makefile.PL asks me to find the Apache source directory, which on Mac OS X 10.3 is /usr/include/httpd/ I believe. When I try it, the subdirectories where it looks for the Apache headers are all wrong. I tried ln-ing my /Library/MySQL/lib/mysql/ to the /sw/lib/mysql/ directory, to no avail. I get the following:

    dyld: /usr/sbin/httpd can't open library: /sw/lib/mysql/libmysqlclient.14.dylib (No such file or directory, errno = 2)
    [Sun Feb 6 16:08:18 2005] [notice] child pid 779 exit signal Trace/BPT trap (5)

Anyone know what my next step is?

On Feb 6, 2005, at 11:46 AM, Jay Scherrer wrote:
> It's trying to connect to port 49. By default MySQL listens on 3306. Somewhere it was changed.
>
> Jay Scherrer
>
> On Sunday 06 February 2005 06:24 am, Tom Schindl wrote:
> > Give it a try and add the hostname you are connecting to into the DSN.
> >
> > Tom
> >
> > Boysenberry Payne wrote:
> > > I'm able to run DBI->connect() fine when I run my .pl and .pm scripts as mod_cgi.
> > > Once I set up my httpd.conf and startup.pl files it no longer works. I get the following:
> > >
> > > Can't connect to MySQL server on 'localhost' (49)
> > >
> > > I can't find what (49) means anywhere...
> > Tom
[OSCon 2005] rfc Inherited Method Handlers for mod_perl
I'm planning on submitting the following talk proposal for a 45-minute presentation session:

Inherited method handlers for mod_perl

1. What an inherited handler is, and its benefits (5 min)
   a) A single handler for all packages.
   b) Speeds code development.
   c) Removes repeated code.
2. A simple example (10 min)
   a) How to inherit a handler into multiple packages.
3. Treating handlers completely like objects, a complex example (10 min)
   a) A single generalized handler.
   b) Overriding configuration methods.
   c) Putting it all together.
4. Multiply inherited handlers (10 min)
   a) Customizing the handler for a specific application.
5. Who is doing this now (5 min)
   a) Projects that implement this idea.

* All examples will utilize mod_perl 2.0, though this idea can be used identically in mod_perl 1.x.

The only uncertainty I have right now is about the last section of the talk. I don't know of any project currently using this method, though I have not looked very far. If you know of a project that does use this approach, please let me know.

--
Nicholas Studt <[EMAIL PROTECTED]>
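For anyone unfamiliar with the idea, a minimal sketch of what an inherited method handler looks like under mod_perl 2.0; the package names and methods are illustrative, not taken from the proposal:

    package My::BaseHandler;

    use strict;
    use warnings;
    use Apache2::RequestRec ();
    use Apache2::RequestIO  ();
    use Apache2::Const -compile => qw(OK);

    # The ":method" attribute makes mod_perl call this as a class method,
    # so $class is whichever subclass was named in the configuration.
    sub handler : method {
        my ($class, $r) = @_;
        $r->content_type( $class->content_type );
        $r->print( $class->body($r) );
        return Apache2::Const::OK;
    }

    # Defaults that subclasses are expected to override.
    sub content_type { 'text/plain' }
    sub body         { my ($class, $r) = @_; return "hello from $class\n" }

    package My::AppHandler;
    our @ISA = ('My::BaseHandler');

    sub content_type { 'text/html' }
    sub body         { my ($class, $r) = @_; return "<p>hello from $class</p>\n" }

    1;

    # httpd.conf -- the Class->method form lets the handler be found via @ISA:
    #   <Location /app>
    #       SetHandler          perl-script
    #       PerlResponseHandler My::AppHandler->handler
    #   </Location>

The subclass supplies no handler of its own; it only overrides the configuration methods, which is the pattern the talk outline describes.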
Re: Restricting maximum parallel connections
About Stonehenge::Throttle:

- It does all the important operations inside a PerlLogHandler.
- For my requirements - where the request is serving large files for download - I would need to move those operations into higher-level handlers, like a PerlAccessHandler or PerlInitHandler.
- In which case, I'd be doing "stat" many times before the request is served.

So, I guess using Cache::FastMmap would be a better choice than doing "stat" multiple times for every request. Consider a case where 50 requests to download different chunks come in at the same time. In that case I believe that Cache::FastMmap should be much faster.

Any inputs?

Thanks,
Pratik
--
http://pratik.syslock.org