So we've got a table called "books" and we want to build records of how
often each book is accessed and when. How would you store such
information so that it wouldn't become a huge, unmanageable table?
Before I go out trying to plan something like this, I figured I'd ask
and see if anyone had any suggestions.
Eci, the usual way is:

  create table books (id_book serial, author text, title text, ...);
  create table access (id_access serial, id_book int4, timeofaccess timestamp, ...);

then for every access you write one record to access.
A rough estimate: a book may be lent out once every hour, so that is
about 24 * 365 = 8,760 records a year.
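(For concreteness, a minimal sketch of that pattern, using the two
tables above; the literal values are made up:)

  -- write one record per access
  insert into access (id_book, timeofaccess) values (42, now());

  -- how often was each book accessed, and when was it last touched?
  select b.title, count(*) as accesses, max(a.timeofaccess) as last_access
  from books b join access a on a.id_book = b.id_book
  group by b.title
  order by accesses desc;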
What if instead of book checkouts we were looking at how often a book
was referenced? In which case we're talking multiple times an hour, and
we could easily have each book requiring hundreds of thousands of rows.
Multiply that by hundreds of thousands of books and the table seems to
become unmanageable.
Eci, I could not google them up quickly, but there are people dealing
with tables with millions of records in PostgreSQL. Per the technical
data, the number of rows in a table is unlimited in PostgreSQL:
http://www.postgresql.org/about/
There may be performance reasons to split up a table of that size, but
the row count by itself is not the limit.
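If you do get to the point of splitting it, one common approach in
current PostgreSQL (8.1) is inheritance-based partitioning with
constraint exclusion; a rough sketch, assuming the access table from
earlier in the thread and monthly children (the names are made up):

  create table access_2006_07 (
      check (timeofaccess >= '2006-07-01' and timeofaccess < '2006-08-01')
  ) inherits (access);

  create table access_2006_08 (
      check (timeofaccess >= '2006-08-01' and timeofaccess < '2006-09-01')
  ) inherits (access);

  -- queries against the parent see all children; with constraint
  -- exclusion on, the planner skips partitions outside the range
  set constraint_exclusion = on;
  select count(*) from access
  where timeofaccess >= '2006-07-01' and timeofaccess < '2006-08-01';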
On 7/16/06, Eci Souji <[EMAIL PROTECTED]> wrote:
> So we've got a table called "books" and we want to build records of how
> often each book is accessed and when. How would you store such
> information so that it wouldn't become a huge, unmanageable table?
> Before I go out trying to plan something like this
Hi All,
I have a question about rights to browse a database and a schema's
structure. Is it normal that a user who doesn't have any permission
(such as SELECT) on a database or schema can still explore it and see
its tables (table fields, etc.)?
How can I prohibit that? Thanks
Regards
Qnick
Thank you very much.
Much appreciated.
NK
- Original Message -
From: Bruno Wolff III <[EMAIL PROTECTED]>
Date: Friday, July 14, 2006 2:50 pm
Subject: Re: Dynamic table with variable number of columns
> On Wed, Jul 12, 2006 at 13:38:34 -0700,
> [EMAIL PROTECTED] wrote:
> > Hi,
> > Thanks
Hi,
On Sun, 16 Jul 2006, Eci Souji wrote:
> What if instead of book checkouts we were looking at how often a book was
> referenced? In which case we're talking multiple times an hour, and we could
> easily have each book requiring hundreds of thousands of rows. Multiply that
> by hundreds of thousands of books and the table seems to become unmanageable.
On Sat, Jul 15, 2006 at 04:04:50AM -0700, [EMAIL PROTECTED] wrote:
> I have a question about rights to browse a database and a schema's
> structure. Is it normal that a user who doesn't have any permission
> (such as SELECT) on a database or schema can still explore it and see
> its tables (table fields, etc.)?
Users can always read the system catalogs, so anyone who can connect to
the database can see table and column definitions; there is no simple
way to hide a schema's structure from connected users.
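What you can lock down is use of your own schemas; a minimal sketch,
assuming an application schema called "app" and a role called
"trusted_role" (both names made up):

  revoke all on schema app from public;
  grant usage on schema app to trusted_role;
  -- note: table and column definitions stay visible in pg_catalog
  -- even so; only access to the objects themselves is restricted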
vagner mendes wrote:
> How can I install PostgreSQL on my Mac? What steps do I have to follow?
>
> Thank you for your attention.
(best to send these requests for help to the mailing list)
There are several options for OS X; there is an Apple article here:
http://developer.apple.com/intern
Eci Souji wrote:
> What if instead of book checkouts we were looking at how often a book
> was referenced? In which case we're talking multiple times an hour, and
> we could easily have each book requiring hundreds of thousands of rows.
> Multiply that by hundreds of thousands of books and the table seems to
> become unmanageable.
Not an advocacy post. If I want advocacy, I know where to find it.
I have an application that uses PostgreSQL - nothing too fancy, some
plpgsql, a couple of custom types, lots of text and no varchar.
For business reasons I need to also support Oracle. On the app side
this is not a big problem.
pg_dump by default dumps to STDOUT, which you should use in a pipeline
to perform any modifications. To me this seems pretty tricky, but it
should be doable. Modifying pg_dump itself really strikes me as the
wrong way to go about it. Pipelines operate in memory and should be
very fast, depending on how you process the stream.
8.1 improved locking for foreign key references, but this had an
unexpected consequence for our application: no more parallel loads.
The application takes an EXCLUSIVE lock on 'addresses'. It then gets
all of the keys from 'addresses' it needs, and adds new ones
encountered in this load.
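(For anyone following along, the pattern being described is roughly
this; a sketch, with made-up column names and values:)

  begin;
  lock table addresses in exclusive mode;
  -- look up the keys this load needs, adding any new addresses
  select id_address from addresses where addr = '1 Main St';
  insert into addresses (addr) values ('2 Side St');
  commit;  -- lock released; EXCLUSIVE mode blocks concurrent writers
           -- (and other EXCLUSIVE lockers) but still allows plain SELECTs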
There must be something simple that I am missing, but here is my
problem. I am setting up a standard pg install as a backend to a small
webapp. I want to create a user "webuser" with only enough privileges
to query all of the tables in my database. It has not been working for
me. What is the simplest way to set this up?
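(For reference, a sketch of one common approach - there is no single
"grant select on every table" statement, so the GRANTs are generated
from the catalog; the schema "public" and the password are assumptions,
and the query's output gets fed back into psql:)

  create user webuser with password 'secret';
  grant usage on schema public to webuser;  -- needed to resolve table names

  -- generate one GRANT per table, then run the resulting statements
  select 'grant select on ' || schemaname || '.' || tablename
         || ' to webuser;'
  from pg_tables
  where schemaname = 'public';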
I think "books" may have thrown everyone for a loop. These are not
physical books, but rather complete scanned collections that would be
available for search and reference online. One of the most important
features required would be keeping track of how often each book was
referenced and when
On Sun, Jul 16, 2006 at 05:46:16PM -0500, Wes wrote:
> Previously (pgsql 7.4.5), multiple loads would run simultaneously - and
> occasionally got 'deadlock detected' with the foreign key locks even though
> they were referenced in sorted order. When loading tables other than
> 'addresses', foreign
IOW, files. No problem.
The # of files is known. That's a start. Is there any existing
metric as to how often they are accessed? That's what you need to
know before deciding on a design.
This simple design might be perfectly feasible:

  CREATE TABLE access (
      id_book  int4      NOT NULL,
      accessed timestamp NOT NULL DEFAULT now()
  );
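If the raw volume is the concern, a variation (a sketch, assuming the
access table just above) is to roll the detail up periodically and keep
only aggregates long-term:

  create table access_daily (
      id_book  int4 not null,
      day      date not null,
      accesses int4 not null,
      primary key (id_book, day)
  );

  -- run nightly, then purge the detail rows that were summarized
  insert into access_daily (id_book, day, accesses)
  select id_book, accessed::date, count(*)
  from access
  where accessed < current_date
  group by id_book, accessed::date;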
On 7/15/06, Ed L. <[EMAIL PROTECTED]> wrote:
> We'd like to attempt some log replay to simulate real loads, but
> in 8.1.2, it appears the formal parameters ($1, $2, ...) are logged
> instead of the actual values for prepared queries, e.g.:
> EXECUTE [PREPARE: UPDATE sessions SET a_session = $1
> WHERE id = $2]
> This makes the log hard to replay as-is.
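(For anyone trying to reproduce this, the pattern being logged is the
ordinary prepared-statement flow; a sketch, where the statement name
and values are made up:)

  prepare upd_session (text, int4) as
      update sessions set a_session = $1 where id = $2;
  -- the server log records the $1/$2 placeholders,
  -- not the actual values passed here
  execute upd_session ('serialized session data', 42);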