On Friday, January 29, 2010 08:50 AM, Ross Walker wrote:
> On Jan 28, 2010, at 7:27 PM, Christopher Chan wrote:
>
>> On Thursday, January 28, 2010 10:48 PM, Ross Walker wrote:
>>>
>>> On Jan 27, 2010, at 7:50 PM, Christopher Chan wrote:
>>>
> Sorry to be the bearer of bad news
On Jan 28, 2010, at 7:27 PM, Christopher Chan wrote:
> On Thursday, January 28, 2010 10:48 PM, Ross Walker wrote:
>>
>> On Jan 27, 2010, at 7:50 PM, Christopher Chan wrote:
>>
>>>
Sorry to be the bearer of bad news, but on top of LVM on CentOS/RHEL
the best assurance you're g
On Jan 28, 2010, at 7:25 PM, Christopher Chan wrote:
>
>> There are concerns that everyone's currently fast performing LVM file
>> systems will suddenly become doggish once barrier support is included
>> and in some cases it will be true. Using a separate SSD device as a
>> journal can help in so
On Thursday, January 28, 2010 10:48 PM, Ross Walker wrote:
>
> On Jan 27, 2010, at 7:50 PM, Christopher Chan wrote:
>
>>
>>> Sorry to be the bearer of bad news, but on top of LVM on CentOS/RHEL
>>> the best assurance you're going to get is fsync(), meaning the data is
>>> out of the kernel, but
> There are concerns that everyone's currently fast performing LVM file
> systems will suddenly become doggish once barrier support is included
> and in some cases it will be true. Using a separate SSD device as a
> journal can help in some cases.
>
That's only if you are using ext3/ext4 and data
On Jan 28, 2010, at 6:58 PM, "nate" wrote:
> Ross Walker wrote:
>
>> Even directio by itself won't do the trick; the OS needs to make sure
>> the disk drive empties its write cache and currently barriers are
>> the only way to make sure of that.
>
> Well I guess by the same token nobody in thei
Ross Walker wrote:
> Even directio by itself won't do the trick; the OS needs to make sure
> the disk drive empties its write cache and currently barriers are
> the only way to make sure of that.
Well I guess by the same token nobody in their right mind
would run an Oracle DB without a battery
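An aside on the MySQL side of this (the variables below are standard
InnoDB/MySQL settings, not something quoted in the thread): they only
control how often the server asks for a flush; whether the write actually
leaves the drive's cache still comes down to barriers or a battery-backed
controller, as discussed above.

  -- Ask InnoDB to fsync its log at every transaction commit (the strictest setting).
  SET GLOBAL innodb_flush_log_at_trx_commit = 1;
  -- fsync the binary log after every write as well.
  SET GLOBAL sync_binlog = 1;
  -- Check what the server is currently doing.
  SHOW VARIABLES LIKE 'innodb_flush%';
  SHOW VARIABLES LIKE 'sync_binlog';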
On Jan 28, 2010, at 11:37 AM, "nate" wrote:
> Les Mikesell wrote:
>
>> I wonder if the generally horrible handling that Linux has always done
>> for fsync() is the real reason Oracle spun off their own distro? Do
>> they get it better?
>
> Anyone in their right mind with Oracle would be usi
Les Mikesell wrote:
> I wonder if the generally horrible handling that Linux has always done
> for fsync() is the real reason Oracle spun off their own distro? Do
> they get it better?
Anyone in their right mind with Oracle would be using ASM and direct
I/O, so I don't think it was related.
http
On 1/28/2010 8:48 AM, Ross Walker wrote:
>
>>> Sorry to be the bearer of bad news, but on top of LVM on CentOS/RHEL
>>> the best assurance you're going to get is fsync(), meaning the data is
>>> out of the kernel, but probably still on disk write cache. Make sure
>>> you have a good UPS setup, so the
On Jan 27, 2010, at 7:50 PM, Christopher Chan wrote:
>
>> Sorry to be the bearer of bad news, but on top of LVM on CentOS/RHEL
>> the best assurance you're going to get is fsync(), meaning the data is
>> out of the kernel, but probably still on disk write cache. Make sure
>> you have a good UPS se
On Wed, 2010-01-27 at 10:10 -0600, Les Mikesell wrote:
>
> I've seen mysql do some really stupid things, like a full 3-table join
> into a (huge) disk temporary table when the select had a 'limit 10' and
> was ordered by one of the fields that had an index.
Very true. You can use logic ope
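For illustration only (table, column and index names here are invented),
the pattern described above and the usual workaround look roughly like this:

  -- ORDER BY an indexed column plus LIMIT, yet the optimizer may still
  -- materialise the whole three-table join in a temporary table.
  EXPLAIN
  SELECT o.id, o.created_at, c.name, p.sku
  FROM   orders o
  JOIN   customers c ON c.id = o.customer_id
  JOIN   products  p ON p.id = o.product_id
  ORDER BY o.created_at DESC
  LIMIT  10;

  -- If EXPLAIN shows "Using temporary; Using filesort", forcing the index
  -- on the ORDER BY column often lets MySQL read ten result rows and stop.
  SELECT o.id, o.created_at, c.name, p.sku
  FROM   orders o FORCE INDEX (idx_created_at)
  JOIN   customers c ON c.id = o.customer_id
  JOIN   products  p ON p.id = o.product_id
  ORDER BY o.created_at DESC
  LIMIT  10;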
On Wednesday, January 27, 2010 09:26 PM, Les Mikesell wrote:
> Chan Chung Hang Christopher wrote:
>>>> Ah, well #1 on his list then is to figure out what he is running!
>>> LOL, I know it sounds quite noobish, coming across like I've no idea
>>> what DBMS it is running on. The system currently runs
> Sorry to be the bearer of bad news, but on top of LVM on CentOS/RHEL
> the best assurance you're going to get is fsync(), meaning the data is
> out of the kernel, but probably still on disk write cache. Make sure
> you have a good UPS setup, so the disks can flush after main power loss.
Or turn o
On Jan 27, 2010, at 10:20 AM, Noob Centos Admin wrote:
> Hi,
>
> On 1/27/10, Ross Walker wrote:
>>
>> But if you're doing MySQL on top of LVM you're basically doing the same,
>> because LVM (other than in current kernels) doesn't support barriers.
>>
>> Still if you have a battery backed write-caching
On 1/27/2010 8:30 AM, Ross Walker wrote:
>
>> This is part of what I was planning to do; there is a lot of stuff I
>> am planning to split out into tables of their own with a reference key. The
>> problem is I'm unsure whether the added overhead of joins would
>> negate the I/O benefits, hence trying to
Hi,
On 1/27/10, Ross Walker wrote:
>
> But if you're doing MySQL on top of LVM you're basically doing the same,
> because LVM (other than in current kernels) doesn't support barriers.
>
> Still if you have a battery backed write-caching controller that
> negates the fsync risk, LVM or not, mysql or postgr
On Jan 27, 2010, at 7:30 AM, Chan Chung Hang Christopher wrote:
>
> mysql's isam tables have a reputation for surviving just about
> anything
> and great builtin replication support...
>
> postgresql less so (I suspect due to fake fsync/fdatasync in the days
> before barriers) but maybe things
On Jan 27, 2010, at 4:07 AM, Noob Centos Admin wrote:
> Hi,
>
>> Split the TEXT/BLOB data out of the primary table into tables of their
>> own, indexed to the primary table by its key column.
>
> This is part of what I was planning to do; there is a lot of stuff I
> am planning to split o
Chan Chung Hang Christopher wrote:
>>> Ah, well #1 on his list then is to figure out what he is running!
>> LOL, I know it sounds quite noobish, coming across like I've no idea
>> what DBMS it is running on. The system currently runs on MySQL but
>> part of my update requirement was to decouple the
MySQL's acquisition was one of the factors; the client wants to keep
everything on the open-source side as far as possible.
On the technical side, all tables are using the InnoDB engine because
MyISAM doesn't support either. Also, previously during development, it
was discovered that on some particul
>> Ah, well #1 on his list then is to figure out what he is running!
>
> LOL, I know it sounds quite noobish, coming across like I've no idea
> what DBMS it is running on. The system currently runs on MySQL but
> part of my update requirement was to decouple the DBMS so that we can
> make an even
Hi,
>>>
>>> I believe the OP said he was running postgresql.
>>>
>>
>> Quoted from the OP's previous mail, he's not sure lol
>>
>> """The web application is written in PHP and runs off MySQL and/or
>> Postgresql."""
>
> Ah, well #1 on his list then is to figure out what he is running!
LOL, I know it
Hi,
> Split the TEXT/BLOB data out of the primary table into tables of their
> own, indexed to the primary table by its key column.
This is part of what I was planning to do; there is a lot of stuff I
am planning to split out into tables of their own with a reference key. The
problem is I'm unsure wh
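A minimal sketch of the split being discussed, with made-up table and
column names:

  -- Narrow "hot" table that most queries touch.
  CREATE TABLE article (
      article_id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
      title       VARCHAR(200) NOT NULL,
      author_id   INT UNSIGNED NOT NULL,
      created_at  DATETIME     NOT NULL,
      INDEX idx_author (author_id)
  ) ENGINE=InnoDB;

  -- Bulky TEXT/BLOB data moved out, keyed 1:1 to the primary table.
  CREATE TABLE article_body (
      article_id  INT UNSIGNED NOT NULL PRIMARY KEY,
      body        MEDIUMTEXT   NOT NULL,
      FOREIGN KEY (article_id) REFERENCES article (article_id)
  ) ENGINE=InnoDB;

  -- Listings read only the narrow table; the join is paid only when a
  -- single row's body is actually needed.
  SELECT a.title, b.body
  FROM   article a
  JOIN   article_body b USING (article_id)
  WHERE  a.article_id = 42;

Whether that extra join costs more than the I/O it saves is exactly the
trade-off in question, so it is worth benchmarking both layouts against
the real data.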
On Tue, 2010-01-26 at 09:48 -0500, Ross Walker wrote:
> > Great things started to happen with mysql @ version 5. Now it's just
> > probably going to wither away. Who really knows?
>
> Some really nice things are happening with postgresql as well, you
> should check it out.
>
> -Ross
On 1/26/2010 9:46 AM, Kwan Lowe wrote:
> On Tue, Jan 26, 2010 at 9:48 AM, Ross Walker wrote:
>
>> Some really nice things are happening with postgresql as well, you
>> should check it out.
>>
>
> This was a great thread. For one, it's interesting to see the
> approaches you can take to solve an is
On Tue, Jan 26, 2010 at 9:48 AM, Ross Walker wrote:
> Some really nice things are happening with postgresql as well, you
> should check it out.
>
This was a great thread. For one, it's interesting to see the
approaches you can take to solve an issue. I.e., we can tune the OS to
quite a degree, b
On Jan 26, 2010, at 2:30 AM, JohnS wrote:
>
> On Tue, 2010-01-26 at 13:41 +0800, Christopher Chan wrote:
>> JohnS wrote:
>>> On Tue, 2010-01-26 at 08:19 +0800, Christopher Chan wrote:
>>>> Are complicated relationships being stored in postgresql and not in
>>>> mysql? I do not know how things are
On Tue, 2010-01-26 at 13:41 +0800, Christopher Chan wrote:
> JohnS wrote:
> > On Tue, 2010-01-26 at 08:19 +0800, Christopher Chan wrote:
> >> Are complicated relationships being stored in postgresql and not in
> >> mysql? I do not know how things are now but mysql has a history of only
> >> bein
JohnS wrote:
> On Tue, 2010-01-26 at 08:19 +0800, Christopher Chan wrote:
>> Are complicated relationships being stored in postgresql and not in
>> mysql? I do not know how things are now but mysql has a history of only
>> being good for simple selects.
>
> Selects can get very uppity for mysql a
On Tue, 2010-01-26 at 08:19 +0800, Christopher Chan wrote:
> Are complicated relationships being stored in postgresql and not in
> mysql? I do not know how things are now but mysql has a history of only
> being good for simple selects.
Selects can get very uppity for mysql, as in "VIEWS". They c
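The message is cut off, but one concrete way views can hurt MySQL
selects: a view is either merged into the outer query or materialised
into a temporary table first, and the temptable path (forced by
aggregates, DISTINCT or GROUP BY in the view) is where things get slow.
A small sketch with invented names:

  -- MERGE folds the view into the outer query; TEMPTABLE materialises it first.
  CREATE ALGORITHM = MERGE VIEW open_orders AS
      SELECT order_id, customer_id, created_at
      FROM   orders
      WHERE  status = 'open';

  -- Check which plan the outer select actually gets.
  EXPLAIN SELECT * FROM open_orders WHERE customer_id = 42;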
Noob Centos Admin wrote:
> The web application is written in PHP and runs off MySQL and/or
> Postgresql. So I don't think I can access the raw disk data directly,
> nor do I think it would be safe since that bypasses the DBMS's checks.
This is what I use for MySQL (among other things)
log-querie
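The config line above is cut off in the archive. As a rough equivalent,
on MySQL 5.1 or later the same query logging can be switched on at
runtime (older 5.0 servers need the matching my.cnf options and a
restart):

  SET GLOBAL slow_query_log = 'ON';
  SET GLOBAL long_query_time = 1;                   -- log anything slower than 1 second
  SET GLOBAL log_queries_not_using_indexes = 'ON';
  SHOW VARIABLES LIKE 'slow_query%';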
Noob Centos Admin wrote:
> Hi,
>
>> If you want a fast database, forget about file system caching,
>> use Direct I/O and put your memory to better use - application
>> level caching.
>
> The web application is written in PHP and runs off MySQL and/or
> Postgresql. So I don't think I can access the
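To be clear, the Direct I/O suggested above is done by the database
engine itself, not by the PHP application. For InnoDB it is a server
option; a rough sketch (innodb_flush_method is not dynamic, so it lives
in my.cnf and needs a restart, and the 2G buffer pool is only an example
value):

  -- Is InnoDB already bypassing the OS page cache?
  SHOW VARIABLES LIKE 'innodb_flush_method';
  SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
  -- my.cnf equivalent (restart required):
  --   innodb_flush_method     = O_DIRECT
  --   innodb_buffer_pool_size = 2G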
On Jan 25, 2010, at 7:02 PM, JohnS wrote:
>
> On Mon, 2010-01-25 at 18:51 -0500, Ross Walker wrote:
>
>>> Instead look at the way your PHP code is
>>> encoding the BLOB data, and if you really need the speed, since it's
>>> now a MySQL DB, make your own custom C API for mysql to encode the BLOB.
On Mon, 2010-01-25 at 18:51 -0500, Ross Walker wrote:
> > Instead look at the way your PHP code is
> > encoding the BLOB data, and if you really need the speed, since it's now a
> > MySQL DB, make your own custom C API for mysql to encode the BLOB. The
> > DB can do this like that much faster
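On the encoding overhead, a hedged aside (the article_body table is
reused from the sketch earlier in the thread and is an assumption): a
parameterised insert lets the BLOB travel as a bound value instead of a
giant escaped string literal built in PHP. From PHP this is done with
mysqli or PDO bound parameters; the server-side SQL below just shows the
shape:

  -- The BLOB arrives as a parameter, not as escaped text pasted into the SQL.
  PREPARE ins FROM 'INSERT INTO article_body (article_id, body) VALUES (?, ?)';
  SET @id = 42, @body = REPEAT('x', 1024);   -- stand-in for real BLOB data
  EXECUTE ins USING @id, @body;
  DEALLOCATE PREPARE ins;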
On Jan 25, 2010, at 6:22 PM, JohnS wrote:
>
> On Mon, 2010-01-25 at 09:45 -0500, Ross Walker wrote:
>> On Jan 25, 2010, at 6:41 AM, Noob Centos Admin wrote:
>>
>>> Hi,
>>>
20 fields or columns is really nothing. BUT that's dependent on the
type of data being inserted.
>>>
>>>
On Mon, 2010-01-25 at 09:45 -0500, Ross Walker wrote:
> On Jan 25, 2010, at 6:41 AM, Noob Centos Admin wrote:
>
> > Hi,
> >
> >> 20 fields or columns is really nothing. BUT that's dependent on the
> >> type of data being inserted.
> >
> > 20 was an arbitrary number :)
> >
> >> Ok so br
On Jan 25, 2010, at 6:41 AM, Noob Centos Admin wrote:
> Hi,
>
>> 20 fields or columns is really nothing. BUT that's dependent on the
>> type of data being inserted.
>
> 20 was an arbitrary number :)
>
>> OK, so break the one table down, create 2 or more, then you will have
>> "Joins" & cluste
Hi,
> 20 fields or columns is really nothing. BUT that's dependent on the type
> of data being inserted.
20 was an arbitrary number :)
> OK, so break the one table down, create 2 or more, then you will have
> "Joins" & clustered indexes, thus possibly slowing you down more. That
> is greatly depend
Hi,
> If you want a fast database, forget about file system caching,
> use Direct I/O and put your memory to better use - application
> level caching.
The web application is written in PHP and runs off MySQL and/or
Postgresql. So I don't think I can access the raw disk data directly,
nor do I thin
nate wrote:
> Noob Centos Admin wrote:
>> I'm trying to optimize some database app running on a CentOS server
>> and wanted to confirm some things about the disk/file caching
>> mechanism.
>
> If you want a fast database, forget about file system caching,
> use Direct I/O and put your memory to bet
On Mon, 2010-01-25 at 01:09 +0800, Noob Centos Admin wrote:
> e.g. the table may currently have rows with 20 fields and total
> 1KB/row, but very often say only 5/20 fields are used in actual
> processing. Reading x rows from this table may access x inodes which
> would not fit into the cache/mem
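A related option, sketched with invented names: a covering index that
carries the hot columns lets those queries be answered from the index
pages without ever reading the full 1KB rows.

  -- Index that carries every column the hot query needs, in an order
  -- that also satisfies the ORDER BY.
  ALTER TABLE orders
      ADD INDEX idx_hot (status, created_at, customer_id);

  -- EXPLAIN should now report "Using index" in the Extra column,
  -- i.e. the wide base rows are never touched for this query.
  EXPLAIN
  SELECT status, created_at, customer_id
  FROM   orders
  WHERE  status = 'open'
  ORDER BY created_at;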
Noob Centos Admin wrote:
> I'm trying to optimize some database app running on a CentOS server
> and wanted to confirm some things about the disk/file caching
> mechanism.
If you want a fast database, forget about file system caching,
use Direct I/O and put your memory to better use - application
l
I'm trying to optimize some database app running on a CentOS server
and wanted to confirm some things about the disk/file caching
mechanism.
From what I've read, Linux has a Virtual Filesystem layer that sits
between the physical file system and everything else. So no matter
what FS is used, appl