Hi, Sergey!
On Feb 12, Sergey Vojtovich wrote:
> > This is used when you insert data into an empty table. Or when you add
> > an index to an existing table. Or when you enable indexes. That is:
> >
> > 1. LOAD DATA INFILE 'everything, not in chunks'
> > (a convenient way would be to load f
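For concreteness, a rough sketch of those three fast paths in SQL (table name t, column k, and the file path are invented for illustration):

    -- 1. Bulk load into an empty table: keys are built at the end by sorting.
    LOAD DATA INFILE '/tmp/everything.csv' INTO TABLE t;

    -- 2. Add an index to an existing table.
    ALTER TABLE t ADD INDEX idx_k (k);

    -- 3. Disable non-unique keys, bulk-insert, then rebuild them in one pass.
    ALTER TABLE t DISABLE KEYS;
    -- ... INSERT / LOAD DATA here ...
    ALTER TABLE t ENABLE KEYS;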
Hi Sergei, Pierre,
On Wed, Feb 12, 2014 at 03:45:06PM +0100, Sergei Golubchik wrote:
> Hi, Pierre!
>
> Okay, there were quite a few replies already here.
> I'll chime in, but only to confirm that there's no easy solution,
> unfortunately :(
>
> On Feb 10, Pierre wrote:
> > Hello,
> >
> > Mariad
> It's also worth noting that MariaDB-10.0 rebuilds both UNIQUE
> and non-UNIQUE indexes the fast way when you do ALTER TABLE ... ADD KEY.
Well, this is good news! I'm going to give it a try.
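If it helps anyone else, this is roughly what I plan to test (invented table/column names; a sketch, not my exact schema):

    -- both forms should now take the fast (sort-based) rebuild in 10.0:
    ALTER TABLE t ADD KEY idx_k (k);
    ALTER TABLE t ADD UNIQUE KEY uk_k (k);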
On 12/02/2014 15:45, Sergei Golubchik wrote:
Hi, Pierre!
Okay, there were quite a few repli
Hi guys,
one doubt: a B-tree lookup is O(log N); what about using TokuDB?
2014-02-12 12:45 GMT-02:00 Sergei Golubchik :
> Hi, Pierre!
>
> Okay, there were quite a few replies already here.
> I'll chime in, but only to confirm that there's no easy solution,
> unfortunately :(
>
> On Feb 10, Pier
Hi, Pierre!
Okay, there were quite a few replies already here.
I'll chime in, but only to confirm that there's no easy solution,
unfortunately :(
On Feb 10, Pierre wrote:
> Hello,
>
> > MariaDB is getting slower and slower when I'm inserting a massive
> > amount of data. I'm trying to insert 166 507
Yep,
myisam_sort_buffer_size and key_buffer_size were both set to 6G, and SHOW
VARIABLES confirmed it.
Moreover, when I created a non-unique index both buffers were used (mysqld was using 12G of RAM);
however, when I created a UNIQUE index only the key_buffer was used (mysqld was using 6G of RAM).
I'l
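For reference, this is roughly how the settings were applied and checked (a sketch, not the exact session):

    SET GLOBAL key_buffer_size = 6 * 1024 * 1024 * 1024;
    SET GLOBAL myisam_sort_buffer_size = 6 * 1024 * 1024 * 1024;
    SHOW GLOBAL VARIABLES LIKE '%buffer_size';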
Hi Pierre,
there are quite a few MySQL bug reports in MySQL bug database that can
affect your use case. Like:
http://bugs.mysql.com/bug.php?id=5731
http://bugs.mysql.com/bug.php?id=29446
http://bugs.mysql.com/bug.php?id=59925
http://bugs.mysql.com/bug.php?id=62570
http://bugs.mysql.com/bug.php?id=
Yes, that's what I'm saying: you can't create a UNIQUE index on a table whose keys don't fit
in RAM/key_buffer (if it's a basic non-unique INDEX, it's not a problem and this doesn't apply).
If it's not a bug but a feature, then it should be documented.
On 12/02/2014 11:15, Reindl Haral
your keys *must* fit in RAM, period
your key-buffers must be large enough
On 12.02.2014 11:10, Pierre wrote:
> You don't understand, I already did this. Read the thread from the beginning:
>
> https://lists.launchpad.net/maria-discuss/msg01338.html <= load data in empty
> table with index
You don't understand, I already did this. Read the thread from the beginning:
https://lists.launchpad.net/maria-discuss/msg01338.html <= load data in empty
table with index
https://lists.launchpad.net/maria-discuss/msg01361.html <= Load the data in empty table THEN add the
index (what you are
What data type are the constraints over? A unique index over a raw md5 is much
better than one on a varchar(255) or even varchar(32)
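Something like this, say (table and column names made up): 16 fixed bytes per key instead of up to 255, so far more of the index fits in key_buffer:

    CREATE TABLE hashes (
      h BINARY(16) NOT NULL,
      UNIQUE KEY uk_h (h)
    ) ENGINE=MyISAM;

    -- MD5() yields 32 hex chars; UNHEX() packs them into 16 raw bytes.
    INSERT INTO hashes (h) VALUES (UNHEX(MD5('some value')));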
Sent from my iPhone
> On Feb 12, 2014, at 12:49 AM, Pierre wrote:
>
> OK, I understand, but this doesn't resolve the initialization problem; I'm
> sure there is
Load the data THEN add the index. This will do the unique check once instead
of on every insertion.
On a bloom filter miss, select from the table and insert if it isn't there. If
this is multithreaded, use InnoDB or TokuDB and SELECT ... FOR UPDATE to
prevent a race.
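Roughly this (made-up names; InnoDB at REPEATABLE READ): the FOR UPDATE locks the gap for a missing key, so another thread can't insert the same value between the check and the insert:

    START TRANSACTION;
    SELECT 1 FROM t WHERE k = 'candidate' FOR UPDATE;
    -- no row returned => key is new, safe to insert:
    INSERT INTO t (k) VALUES ('candidate');
    COMMIT;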
Sent from my iPhone
> On F
OK, I understand, but this doesn't resolve the initialization problem; I'm sure there is a bug, or
something which can be improved a lot. I can't use the UNIQUE constraint when I have too many keys
to fit in RAM.
Because even if I have a memcache/bloom filter in front, I still need to create
Hi,
This is not a bug, but how B-tree indexes work. For them to be efficient
they must fit in RAM. There are buffering mechanisms that can be used for
secondary indexes in some cases, because the write can be done without a
read, but ONLY when the index is not unique. If it is unique, then the
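One concrete example of such buffering is InnoDB's change buffer (the table in this thread is MyISAM, so this is only illustrative; the setting below is an InnoDB option):

    -- allow buffering of non-unique secondary-index changes (the default):
    SET GLOBAL innodb_change_buffering = 'all';
    SHOW GLOBAL VARIABLES LIKE 'innodb_change_buffering';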
Using this technique I have the same issue. It's now been running for several hours, I'm at 40%, and
looking at SHOW FULL PROCESSLIST, it's getting slower and slower. It will never finish.
I think there is a bug here.
Firstly, regarding the memory usage, it doesn't correctly use the buffers I did set
On 10.02.2014 13:45, Pierre wrote:
> MariaDB is getting slower and slower when I'm inserting a massive amount of
> data. I'm trying to insert 166 507 066
> rows (12 GB) using LOAD DATA INFILE '' into an empty table. I split my file
> into 13 parts of the same size because
> it was too long to i
Hello,
MariaDB is getting slower and slower when I'm inserting a massive amount of data. I'm trying to
insert 166 507 066 rows (12 GB) using LOAD DATA INFILE '' into an empty table. I split my file into
13 parts of the same size because it took too long to insert in one shot. When I inserted more
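The load itself looked roughly like this (illustrative file and table names; the real path is elided above):

    LOAD DATA INFILE '/data/part01.csv' INTO TABLE t;
    LOAD DATA INFILE '/data/part02.csv' INTO TABLE t;
    -- ... through part13.csv; each chunk ran slower than the one before.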