On Mon, May 17, 2010 at 12:04 AM, Jayadevan M
wrote:
> Hello all,
> I was testing how much time a pg_dump backup would take to get restored.
> Initially, I tried it with psql (on a backup taken with pg_dumpall). It took
> me about one hour. I felt that I should target for a recovery time of 15
> minutes.
On Tue, Mar 25, 2008 at 3:35 AM, sathiya psql <[EMAIL PROTECTED]> wrote:
> Dear Friends,
> I have a table with 32 lakh records in it. The table size is nearly 700 MB,
> and my machine has 1 GB + 256 MB of RAM. I created the tablespace in
> RAM, and then created this table there.
>
> S
> Dell acquired EqualLogic last November/December.
>
> I noticed your Dell meeting was a Dell/EMC meeting. Have you talked to them
> or anyone else about EqualLogic?
Now that you mention it, I do recall a bit about EqualLogic in the Dell
pitch. It didn't really stand out in my mind and a lot of t
> It seems to me as such a database gets larger, it will become much harder to
> manage with the 2 systems. I am talking mostly about music. So each song
> should not get too large.
I was just talking about points to consider in general. Getting to
your specific situation...
As far as BLOBs
> > I am going to embark on building a music library using apache,
> > postgresql and php. What is the best way to store the music files?
>
> Your options are either to use a BLOB within the database or to store
> paths to normal files in the file system in the database. I suspect
> using norm
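The two options described above can be sketched as table definitions. This is a hypothetical schema (table and column names are illustrative, not from the original thread): option (a) stores the file itself as a bytea BLOB, option (b) stores only a path to a file on disk.

```sql
-- Option (a): the song data lives inside the database as a BLOB.
CREATE TABLE songs_blob (
    id    serial PRIMARY KEY,
    title text NOT NULL,
    data  bytea NOT NULL   -- file contents stored in the table
);

-- Option (b): the database stores only a path; the file system holds the data.
CREATE TABLE songs_path (
    id    serial PRIMARY KEY,
    title text NOT NULL,
    path  text NOT NULL    -- e.g. '/srv/music/artist/song.mp3' (hypothetical)
);
```

With option (b) the database stays small and backups are fast, but nothing keeps the paths and the files in sync; with option (a) everything is transactional, at the cost of a much larger database.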
Hi all,
I had a few meetings with SAN vendors and I thought I'd give you some
follow-up on points of potential interest.
- Dell/EMC
The representative was like the Dell dude grown up. The sales pitch
mentioned "price point" about twenty times (to the point where it was
annoying), and the pitch ul
This might be a weird question...is there any way to disable a
particular index without dropping it?
There are a few queries I run where I'd like to test out the effects
of having (and not having) different indexes on particular query plans
and performance. I'd really prefer not to have to drop an
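One common workaround for the question above (a sketch, not an official "disable" switch): because DDL is transactional in PostgreSQL, you can drop the index inside a transaction, test the plan, and roll back. The index name and query here are hypothetical placeholders.

```sql
BEGIN;
DROP INDEX my_test_idx;        -- hypothetical index name
EXPLAIN ANALYZE
    SELECT ...;                -- the query under test
ROLLBACK;                      -- the index comes back untouched
```

Note that DROP INDEX takes an exclusive lock on the table for the duration of the transaction, so this is only suitable on a test box or during a quiet period.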
> That's true about SANs in general. You don't buy a SAN because it'll
> cost less than just buying the disks and a controller. You buy a SAN
> because it'll let you make managing it easier. The break-even point has
> more to do with how many servers you're able to put on the SAN and how
> often yo
Thanks for all your input, it is very helpful. A SAN for our postgres
deployment is probably sufficient in terms of performance, because we
just don't have that much data. I'm a little concerned about needs for
user and research databases, but if a project needs a big, fast
database, it might be wi
Hi all,
We're considering setting up a SAN where I work. Is there anyone using
a SAN, for postgres or other purposes? If so I have a few questions
for you.
- Are there any vendors to avoid or ones that are particularly good?
- What performance or reliability implications exist when using SANs?
> I have serious performance problems with the following type of queries:
>
> Doesn't look too bad to me, but I'm not that deep into SQL query
> optimization. However, this type of query is used in a function to
> access a normalized, partitioned database, so better performance in these
> queries w
e client's database the biggest table has 237 GB+ (only 1
> > table!) and PostgreSQL runs the database without problem using
> > partitioning, triggers and rules (using PostgreSQL 8.2.5).
> >
> > Pablo
> >
> > Peter Koczan wrote:
> >> Hi all,
> >>
> >>
Hi all,
I have a user who is looking to store 500+ GB of data in a database
(and when all the indexes and metadata are factored in, it's going to
be more like 3-4 TB). He is wondering how well PostgreSQL scales with
TB-sized databases and what can be done to help optimize them (mostly
hardware and
I recently tweaked some configs for performance, so I'll let you in on
what I changed.
For memory usage, you'll want to look at shared_buffers, work_mem, and
maintenance_work_mem. Postgres defaults to very low values for these,
and to get good performance and not a lot of disk paging, you'll want
to
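As a sketch of the settings mentioned above, a postgresql.conf fragment might look like this. These numbers are hypothetical starting points for a machine with a few GB of RAM, not drop-in values; tune them to your workload.

```
shared_buffers = 256MB        # default is far lower (e.g. 32MB on old releases)
work_mem = 16MB               # per sort/hash operation, per backend -- raise carefully
maintenance_work_mem = 128MB  # speeds up VACUUM and CREATE INDEX
```

Remember that work_mem can be allocated several times per query per connection, so the worst-case total is much larger than the single setting suggests.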
> *light bulb* Ahhh, that's it. So, I guess the solution is either
> to cast the column or wait for 8.3 (which isn't a problem since the
> port won't be done until 8.3 is released anyway).
Just a quick bit of follow-up:
This query works and is equivalent to what I was trying to do (minus
the
> > Hmm - why is it doing that?
>
> I'm betting that the OP's people.uid column is not an integer. Existing
> PG releases can't use hashed subplans for cross-data-type comparisons
> (8.3 will be a bit smarter).
*light bulb* Ahhh, that's it. So, I guess the solution is either
to cast the column or wait for 8.3 (which isn't a problem since the
port won't be done until 8.3 is released anyway).
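The cast mentioned above might look like the following sketch. The schema is hypothetical (a `people.uid` column stored as text, compared against an integer subquery); casting makes both sides the same type so the planner can use a hashed subplan instead of a repeated scan.

```sql
-- Before: text vs integer comparison, no hashed subplan on pre-8.3 releases.
-- After: cast people.uid so the IN comparison is integer vs integer.
SELECT p.uid
FROM people p
WHERE p.uid::integer IN (SELECT uid FROM used_uids);  -- used_uids is hypothetical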
Hello,
I have a weird performance issue with a query I'm testing. Basically,
I'm trying to port a function that generates user uids, and since
postgres offers a sequence generator function, I figure I'd take
advantage of that. Basically, I generate our uid range, filter out
those which are in use,
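The approach described above (generate the uid range, then filter out the ones already taken) can be sketched with generate_series and EXCEPT. The range bounds and table names are hypothetical, not from the original post.

```sql
-- Candidate uids minus the uids already in use; lowest free uid first.
SELECT candidate
FROM generate_series(1000, 29999) AS candidate
EXCEPT
SELECT uid FROM people
ORDER BY candidate
LIMIT 1;
```

EXCEPT keeps the query set-based, which is usually far faster than looping over candidates in a PL/pgSQL function.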
> Anyway... One detail I don't understand --- why do you claim that
> "You can't take advantage of the shared file system because you can't
> share tablespaces among clusters or servers" ???
I say that because you can't set up two servers to point to the same
tablespace (i.e. you can't have serve
On 9/19/07, Carlos Moreno <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Anyone has tried a setup combining tablespaces with NFS-mounted partitions?
>
> I'm considering the idea as a performance-booster --- our problem is
> that we are
> renting our dedicated server from a hoster that does not offer much
> f