case statement. If you can't do
that, then I think it would be easiest to code this up in a procedure,
but before anyone jumps on that you might want to let us know if you
are free to add columns to the schema?
--
Peter Hunsberger
With our cache management in place we don't need this, but yeah, this
could have simplified things in many ways. But what's the mechanism /
transport for the notification? MQ-type stuff?
--
Peter Hunsberger
ined) FK relationships. That's
probably ok for this kind of stuff, there are ways to get the
equivalent of strong typing back either on the DB side or at run
time. You essentially end up hacking a relational database to
support network database type operations, so to the extent that people
need that you've got something useful...
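For what it's worth, a minimal sketch of the DB-side version of that
(table, column and type names here are my own invention, not anything
from this thread): tag each generic value with a type and let a CHECK
constraint reject values that don't parse as their declared type.

  create table attr_value (
      id        serial primary key,
      attr_name text not null,
      attr_type text not null check (attr_type in ('int', 'date', 'text')),
      val       text not null,
      -- the "strong typing": a value must match its declared type
      check (
             (attr_type = 'int'  and val ~ '^-?[0-9]+$')
          or (attr_type = 'date' and val ~ '^[0-9]{4}-[0-9]{2}-[0-9]{2}$')
          or (attr_type = 'text')
      )
  );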
--
Peter Hunsberger
can't really
scale the results very far. However, if you've got some specific
goals in mind I might be able to provide some hints. If you're an
IEEE member I can point you at a presentation I did on the basics (I
think)...
--
Peter Hunsberger
On Thu, Jul 22, 2010 at 4:09 PM, Howard Rogers wrote:
>
> On Fri, Jul 23, 2010 at 12:35 AM, Peter Hunsberger
> wrote:
> > On Wed, Jul 21, 2010 at 11:13 PM, Howard Rogers wrote:
> >>
> there's a room-full of users who can look
> at code '4097'
broken. A database
should have content fields that map to the needs of the application.
As you describe your application requirements, that is a bit string
and not an integer. Use bit strings and your application logic is
transparent, obvious and easy to maintain. Use integers and you have
to
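A minimal sketch of the bit string version (table and column names,
and the width, are made up for illustration):

  create table docs (
      id    serial primary key,
      flags bit(8) not null default B'00000000'
  );

  -- "are bits 3 and 6 both set?" reads straight off the mask;
  -- no decoding of magic integers like 4097 required
  select id
  from docs
  where flags & B'00100100' = B'00100100';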
like that from Pentaho (Kettle / Spoon) might
be in order? One step to handle the escape chars and one to load the
actual CSV...
--
Peter Hunsberger
> 00m+ rows, on commodity hardware (4
> SATA disks in raid 10), and inserts to the indexes on those tables remain
> quite acceptable from a performance standpoint.
>
Can you define acceptable? IIRC the OP is looking for 20,000+ inserts / sec.
--
Peter Hunsberger
its own unique set of
states. The only ones you need worry about are the ones a _user_ is
actually interacting with at any given point.
>
> Looks like we are going to cut off a few options of the game.
> ps: do i top post or bottom post here?
>
Bottom post.
--
Peter Hunsberger
the front end goes on its merry way...
--
Peter Hunsberger
> the file edition of the Postgres documentation is in...pdf
> format. Huh?
>
I suppose the next thing you'll be suggesting is that, because
Postgres is a database, the documentation should be stored as some
form of searchable table within the database itself?
--
Peter Hunsberger
the function to create the index, something like:
create index gunk on tbl1 ( hidefunc( a, b, c ) );
The data created in this index might be viewable in a database dump by
simple inspection with something like an editor. How meaningful it
might be is another question...
--
Peter Hunsberger
have a small key size), so that some portion
of this is encrypted. However, if you're doing that, you might as
well just encrypt the data directly...
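For example, a sketch using pgcrypto (table name and passphrase
handling are placeholders, not a recommendation):

  create extension if not exists pgcrypto;

  create table secrets (
      id      serial primary key,
      payload bytea not null  -- pgp_sym_encrypt output: opaque on disk and in dumps
  );

  insert into secrets (payload)
  values (pgp_sym_encrypt('the critical data', 'app-managed-passphrase'));

  select pgp_sym_decrypt(payload, 'app-managed-passphrase')
  from secrets;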
--
Peter Hunsberger
an index, so although you could never see the
result directly (except in a dump), queries to get at it might perform
half reasonably.
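One concrete way to do that (a sketch only; the person table is
hypothetical and digest() comes from pgcrypto): index a hash of the
value, so equality lookups can use the index while the index itself
holds nothing but opaque digests.

  -- requires the pgcrypto extension for digest()
  create index person_name_digest_idx
      on person ( digest(name, 'sha256') );

  -- the probe uses the same expression, so the planner can use the index
  select *
  from person
  where digest(name, 'sha256') = digest('Alice', 'sha256');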
--
Peter Hunsberger
On Fri, May 21, 2010 at 6:40 AM, Sam Mason wrote:
> On Thu, May 20, 2010 at 09:33:23PM -0500, Peter Hunsberger wrote:
>> On Thu, May 20, 2010 at 8:03 PM, Richard Walker
>> wrote:
>> > If the hacker gets root access so they can read
>> > the raw database files
you, but I'm not
sure I'd want to deal with the performance implications...
However, I will point out that if you can't read the data you may be
able to tell who created a given row, but so what? All the variations
on your scenario that I can think of at the moment all se
On Thu, May 20, 2010 at 8:03 PM, Richard Walker wrote:
> Peter Hunsberger wrote:
>>
>> If you really need security of some form at the physical database
>> level then don't screw around with convoluted hacks. Encrypt the
>> critical data in the database and be
Don't screw around with convoluted hacks. Encrypt the
critical data in the database and be done with it.
--
Peter Hunsberger
this, but you can also figure it out if you spend a bit of time with
Google. Basically, every node in the tree is a table row with two
columns, say left and right. All children are contained within the
left and right of the parent. Pre-order tree traversal gives the
algorithm for assigning left and right values.
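In table form it looks something like this (names are illustrative
only):

  create table tree (
      id   serial primary key,
      name text not null,
      lft  integer not null,  -- pre-order visit number going down
      rgt  integer not null   -- visit number coming back up
  );

  -- every descendant's (lft, rgt) interval nests inside its parent's
  select c.name
  from tree p
  join tree c on c.lft > p.lft and c.rgt < p.rgt
  where p.name = 'some parent';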
will continue to allow use of the view; though it is still marked as
needing to be rebuilt, from a user perspective any calls to it will
succeed if there is any possible way for the view to still be valid.
If there is no possible way for any use of the view to succeed, the
calls fail.
On Fri, Apr 30, 2010 at 11:29 AM, Tom Lane wrote:
> Peter Hunsberger writes:
>> If all subscriptions are roughly equal in popularity then any single
>> select should give ~ 10% of the data. That would seem to be selective
>> enough that you'd really want an index?
>
you'd really want an index? If so, any answers to the
OP's main question: what would be the most efficient way to handle
this type of thing?
--
Peter Hunsberger
use this type of solution to manage trees up to
about 100,000 nodes in size with good performance. Other
non-recursive solutions include Vadim Tropashko's (now with Oracle)
Nested Interval Tree Encoding methods, which map directly to the
dotted path (1.1.3) type tree notations in the examples in
Once you have the default, set it to null (which may be what
you thought you were doing?)
--
Peter Hunsberger
worded the
question I suspect you are not?) then that's a different question; in
that case I prefer to generate the data from the operational side of
the house so you're no longer paying an operational cost to maintain
the denormalized data (you've got a whole different set of costs
instead).
--
Peter Hunsberger
type. Personally, the only reason I'd
see for ever using an array type is when you have many very closely
related values that would cause some huge number of rows if the data
is stored normalized.
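E.g., a sketch of the kind of case I mean (the sensor data here is
purely hypothetical):

  -- normalized: one row per reading, rows pile up fast
  create table readings_normalized (
      sensor_id integer not null,
      reading   real    not null
  );

  -- array column: one row per sensor, closely related values together
  create table readings_packed (
      sensor_id integer primary key,
      readings  real[]  not null
  );

  insert into readings_packed values (1, array[0.1, 0.2, 0.4]);

  select readings[2]  -- 0.2; Postgres arrays are 1-based
  from readings_packed
  where sensor_id = 1;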
--
Peter Hunsberger
(including entire genomes) into
Postgres. If anyone else is doing this I would welcome the
opportunity to discuss the issues off list...
--
Peter Hunsberger
I've always had to use the integrated Postgres / Ruby packages to have
Ruby work...
--
Peter Hunsberger
application (and database) design and not rely on
the database: history tables, etc...
--
Peter Hunsberger
Trappist type ales, bocks and doppelbocks available in the US. The
whole category of high-test beer is a fav of mine, and I'd agree you
shouldn't need more than a couple of fingers to know when you've had
enough. Personally, I'd say it's more like:
select limit from beer
> "inline" the
> SQL statement and I'll have the passwords in pw_res.
>
Do the simplest thing possible. Get it working, then see if you have
any new problems you need to solve. Every issue you've described so
far is database design 101 and should present no real problem. I
think you're agonizing over nothing...
--
Peter Hunsberger
> you
> can get good query performance, but it's quite hard to do both.
Not much argument there! However, it's a three-way trade-off: such a
schema can get good performance for a small number of users and / or a
small amount of data (we're about 300 users and 45GB total data).
poorly, but often, at least in the research world, the middle ground
is good enough. You are, after all, talking about people who spit out
MySQL databases at the drop of a hat.
--
Peter Hunsberger
It's a beta product and still fairly buggy, but it works.
--
Peter Hunsberger
On Fri, Oct 16, 2009 at 3:31 PM, marcin mank wrote:
> On Fri, Oct 16, 2009 at 9:19 PM, Peter Hunsberger
> wrote:
>
>> The basic problem I have is that I have some tables that are
>> potentially very long (100,000's to millions of rows) and very skinny,
>
>> an
with the idea, how far would you go in
embracing it? In particular, would you add code to Postgres to hide
the fact that you are joining via table name? Would you go as far as
to do it at the system level or would you stick to just wrapping it in
some functions (in which case the create table pri
and will
make the rest of the process close to trivial...
--
Peter Hunsberger
On Wed, Oct 14, 2009 at 4:50 PM, Tim Landscheidt
wrote:
> Peter Hunsberger wrote:
>
> After some tests with a data set of 7983 rows (and 1638 ranges):
> Don't! :-) The recursive solution seems to be more
> than double as slow as the iterative. I'll take it to -per-
On Tue, Oct 13, 2009 at 5:12 PM, Tim Landscheidt
wrote:
> Peter Hunsberger wrote:
>
> You can either use a PL/pgSQL function ("SETOF TEXT" just
> for the convenience of the example):
That works well, takes about 20 seconds to do the 6M+ rows
>
> or a recursive que
I'm guessing there has to be
something more efficient with windowing or possibly grouping on min
and max (though I can't see how to make sure they are part of a
contiguous set). Anyone have any ideas?
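Thinking out loud, maybe the standard row_number() trick is the
windowing answer (an untested sketch; vals / val are stand-in names):
val minus its row number is constant within each contiguous run, so
grouping on that difference yields every range's min and max in one
pass.

  select min(val) as range_start,
         max(val) as range_end
  from (
      select val,
             val - row_number() over (order by val) as grp
      from vals
  ) t
  group by grp
  order by range_start;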
--
Peter Hunsberger
lications. Is this an area where optimizations are considered hard in
Postgres or, hopefully, just something on the todo list that no one
has gotten around to yet? Similarly, are accurate table summary stats
possible someday or are they considered close to impossible in order
to elimina
I can't tell you the number of people who told us exactly the same
thing when we started on it. That was 8 years ago. Not only can such
systems be built, they can be made to scale reasonably well. You do
need to understand what you are doing and why: the costs can be high,
but when it comes to researc
On Thu, Sep 17, 2009 at 12:44 PM, hubert depesz lubaczewski
wrote:
> On Thu, Sep 17, 2009 at 12:31:14PM -0500, Peter Hunsberger wrote:
>> On Thu, Sep 17, 2009 at 11:40 AM, hubert depesz lubaczewski
>> wrote:
>> >
>> > So, since (as we know) foreign keys ar
foreign key, then just don't define a foreign key. Load the data,
clean it up, then create the foreign key.
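I.e., something like this (table and column names made up):

  -- bulk load with no FK in the way
  copy child_table from '/tmp/data.csv' with csv;

  -- clean up: drop rows whose parent doesn't exist
  delete from child_table c
  where not exists (
      select 1 from parent_table p where p.id = c.parent_id
  );

  -- now create the foreign key
  alter table child_table
      add constraint child_parent_fk
      foreign key (parent_id) references parent_table (id);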
--
Peter Hunsberger
> roOLAP?
>
> thanx for help
--
Peter Hunsberger
uit fairly quickly. If the problem domain is non-specific then I
don't think you'll be able to meet these requirements
without throwing a lot of money at the problem (The Cyc guys come to
mind ;-)...
--
Peter Hunsberger
n can be Postgres-specific but something
approaching ANSI SQL would also be helpful (if possible).
--
Peter Hunsberger
On Fri, Jul 10, 2009 at 4:02 PM, Steve Crawford
wrote:
>
> Peter Hunsberger wrote:
>>
>> We're looking at potentially using Postgres to store a variety of molecular
>> and genetic data. At this point I have a bunch of general questions...
>
> I don't kno
talk about what it would take to add columnar indexes to
Postgres? Where would be the best place for discussion about what this
would take?
I can go into more details here if appropriate and if that will help...
--
Peter Hunsberger