On Tue, 04 Mar 2008 08:18:08 -0500, Phil wrote:
> Just inheritance from an old design that has passed its limits.
Just checking :)
I was talking to someone about redundancy in a table and he was like
"that's good though, because there are multiple (blah, blah, blah)...but
it does screw up som
Just inheritance from an old design that has passed its limits.
I actually have a development version which does just that, but it's a lot of
work to convert the many PHP scripts and SQL queries to include the new column.
It's some way away from live though, so the problem I outlined still exists.
Phil
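For illustration, the consolidated design Phil seems to be hinting at might look something like the sketch below. The thread never shows the real schema, so every table and column name here is invented; the point is just the extra column that replaces the per-table prefix.

-- Hypothetical consolidated schema: one table with a 'source' column
-- instead of 50-plus A_USER/B_USER/C_USER variants.
CREATE TABLE user_data (
  source   CHAR(1)      NOT NULL,  -- was the A_/B_/C_ table prefix
  user_id  INT UNSIGNED NOT NULL,
  payload  VARCHAR(255) NOT NULL,
  PRIMARY KEY (source, user_id)
) ENGINE=MyISAM;

-- A query that used to hit A_USER directly becomes:
SELECT payload FROM user_data WHERE source = 'A' AND user_id = 42;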
On Thu, 28 Feb 2008 11:19:40 -0500, Phil wrote:
> I have 50-plus tables, let's call them A_USER, B_USER, C_USER, etc., which I
> refresh daily with updated (and sometimes new) data.
>
> I insert the data into a temporary table using LOAD DATA INFILE. This
> works great and is very fast.
May I ask wh
Just a little more info on this.
I tried setting all of this up on a home server with, as far as I can see,
more or less identical specs, the exception being that it's a 64-bit Linux
build rather than 32-bit.
The same insert-on-duplicate-update takes 3 minutes.
I spent all day yesterday trying to fig
I'm trying to figure out which limits I'm hitting on some inserts.
I have 50-plus tables, let's call them A_USER, B_USER, C_USER, etc., which I
refresh daily with updated (and sometimes new) data.
I insert the data into a temporary table using LOAD DATA INFILE. This works
great and is very fast.
Then
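As a rough sketch of the load pattern described above (the file name, table names, and columns are placeholders, since the real ones never appear in the thread):

-- Stage the daily feed in a temporary table, then merge it into the
-- live table in a single statement.
CREATE TEMPORARY TABLE tmp_user LIKE A_USER;

LOAD DATA INFILE '/tmp/a_user.csv'
INTO TABLE tmp_user
FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- New rows are inserted; rows matching an existing key are updated in place.
INSERT INTO A_USER (user_id, name, score)
SELECT user_id, name, score FROM tmp_user
ON DUPLICATE KEY UPDATE
  name  = VALUES(name),
  score = VALUES(score);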
>
> Search speeds and CPU usage with MyISAM are quite good. I tried InnoDB and
> insert speed was far too slow because of its row locking versus MyISAM's
> table locking. Some people have been able to fine-tune InnoDB, but it requires
> even more RAM because InnoDB works best when the entire table fits in memory.
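For anyone tuning along these lines, a quick way to compare the InnoDB buffer pool against the table it is supposed to hold (the table name is illustrative):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW TABLE STATUS LIKE 'A_USER';  -- compare Data_length + Index_length to the pool

-- The pool is sized in my.cnf and takes effect after a server restart, e.g.:
--   [mysqld]
--   innodb_buffer_pool_size = 2048M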
At 12:18 PM 2/5/2007, kalin mintchev wrote:
> Put as much memory in the machine as possible. Building indexes for a table
> of that size will consume a lot of memory, and if you don't have enough
> memory, building the index will be done on the hard disk, where it is 100x
> slower. I've had 100M row tables without too much problem. However
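One standard way to keep a big MyISAM index build in the fast in-memory path is to load with the indexes disabled and let ENABLE KEYS rebuild them in one sorted pass; the table name and buffer size below are illustrative, not recommendations:

SET SESSION myisam_sort_buffer_size = 256*1024*1024;

ALTER TABLE big_table DISABLE KEYS;  -- skip non-unique index maintenance during the load
LOAD DATA INFILE '/tmp/big.csv' INTO TABLE big_table;
ALTER TABLE big_table ENABLE KEYS;   -- rebuild the indexes by sorting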
At 09:44 PM 2/4/2007, kalin mintchev wrote:
hi all...
i just wanted to ask here if somebody has experience in pushing the mysql
limits... i might have a job that needs to have a table (or a few tables)
holding about 100 million records. that's a lot of records. is there
any limitati
kalin mintchev" <[EMAIL PROTECTED]>
To: "ViSolve DB Team" <[EMAIL PROTECTED]>
Cc:
Sent: Monday, February 05, 2007 4:07 PM
Subject: Re: mysql limits
thanks... my question was more like IF mysql can handle that amount of
records - about 100 million... and if it's jus
columns.
>
> Thanks
> ViSolve DB Team.
> - Original Message -
> From: "kalin mintchev" <[EMAIL PROTECTED]>
> To:
> Sent: Monday, February 05, 2007 9:14 AM
> Subject: mysql limits
>
>
>> hi all...
>>
>> i just wanted to ask here if someb
value can be used by
a single column itself or depends on the size of the columns.
Thanks
ViSolve DB Team.
- Original Message -
From: "kalin mintchev" <[EMAIL PROTECTED]>
To:
Sent: Monday, February 05, 2007 9:14 AM
Subject: mysql limits
hi all...
i just wanted to ask
hi all...
i just wanted to ask here if somebody has experience in pushing the mysql
limits... i might have a job that needs to have a table (or a few tables)
holding about 100 million records. that's a lot of records. is there
any limitation of some kind that wouldn't allow mysql to handle
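One concrete limit worth checking at that scale: older MyISAM tables default to a data-file pointer that caps the table around 4GB. The cap is visible in SHOW TABLE STATUS and can be raised with table options (the table name is made up):

SHOW TABLE STATUS LIKE 'events';  -- look at the Max_data_length column

-- MySQL sizes the internal row pointer from these two hints, lifting the
-- file-size ceiling comfortably past 100M rows.
ALTER TABLE events MAX_ROWS = 200000000, AVG_ROW_LENGTH = 100;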
: "RV Tec" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, May 18, 2004 9:28 AM
Subject: MySQL limits.
> Folks,
>
> I have a couple of questions that I could not find the answer
> at the MySQL docs or list archives. Hope you guys can help me.
>
>
Let's see if I can give you some ideas.
> -Original Message-
> From: RV Tec [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 18, 2004 8:28 AM
> To: [EMAIL PROTECTED]
> Subject: MySQL limits.
>
> We have a database with approximately 135 tables (MyISAM).
>
Folks, Tim,
Oops! Forgot to mention that... we are running MySQL 4.0.18.
Thanks a lot!
Best regards,
RV Tec
On Tue, 18 May 2004, Tim Cutts wrote:
>
> On 18 May 2004, at 2:28 pm, RV Tec wrote:
>
> >
> > Is MySQL able to handle such a load with no problems/turbulence
> > at all? If so, what
On 18 May 2004, at 2:28 pm, RV Tec wrote:
Is MySQL able to handle such a load with no problems/turbulence
at all? If so, what would be the best hardware/OS
configuration?
What is the largest DB known to the MySQL community?
We regularly run databases with around 200 GB of data per instance,
Folks,
I have a couple of questions that I could not find the answer
at the MySQL docs or list archives. Hope you guys can help me.
We have a database with approximately 135 tables (MyISAM).
Most of them are small, but we have 5 tables with 8,000,000
records. And that number is to increase
Hi,
I need to set a variable limit on the MySQL file size (average row
length * number of rows).
When we insert data into the table using JDBC, I should get a
unique JDBC exception (so that I trigger an archive).
Is this possible in MySQL?
I notice that during creation of a table I can give such options
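The table options being alluded to are presumably MAX_ROWS and AVG_ROW_LENGTH; on MyISAM their product roughly caps the data file, and inserts past the cap fail with error 1114, which the JDBC driver raises as an SQLException. A minimal sketch, with invented names and numbers:

CREATE TABLE measurements (
  id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  val DOUBLE NOT NULL
) ENGINE=MyISAM
  MAX_ROWS = 1000000
  AVG_ROW_LENGTH = 50;

-- Once the data file hits the derived ceiling, further inserts fail with
-- error 1114 ("The table 'measurements' is full"); in JDBC that surfaces
-- as an SQLException whose getErrorCode() is 1114, a usable trigger for
-- the archive job.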
Hi,
How can I check the performance of my MySQL db?
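A few standard starting points, all plain SQL (the slow query log itself is enabled in my.cnf):

SHOW STATUS LIKE 'Questions';     -- statements served since startup
SHOW STATUS LIKE 'Slow_queries';  -- queries that exceeded long_query_time
SHOW STATUS LIKE 'Key_read%';     -- MyISAM key-cache hits vs. disk reads
SHOW PROCESSLIST;                 -- what the server is doing right now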
Would a MySQL DB work well as an "offline" machine to store a
huge amount of data (up to 2,000,000 measurements) to generate
time-controlled output...
I am a "newbie" in using MySQL with that big an amount of data, so
it would be nice if someone of