This is with jdbc3-415. None of the (jdbc, or pg) change logs since then
have mentioned this problem. When run as a prepared statement the first
statement will execute and return results, while the next two seem to
execute, but return no results. When run by hand, not prepared, each
statement runs
lue.starttime as aggregationvalue$starttime from
aggregationvalue where date_trunc('day', aggregationvalue.stoptime)
between '2008-12-18' and '2008-12-18' and
aggregationvalue.aggregatetype = 'HOURLY' and
split_part(aggregationvalue.value,':',1) =
version, which doesn't server-prepare
statements.
thanks again, if only for the moral support.
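For anyone else chasing this: the prepare can be reproduced by hand with explicit parameter types, which rules out the query itself. This is only a sketch assuming the query shape visible in the fragment above; the parameter positions and the sample value for split_part are guesses, not taken from the original post.

```sql
-- Sketch: declare the parameter types explicitly so the two date bounds
-- are treated as timestamps rather than text (positions are assumed).
PREPARE agg_q (timestamp, timestamp, text) AS
  SELECT aggregationvalue.starttime AS "aggregationvalue$starttime"
    FROM aggregationvalue
   WHERE date_trunc('day', aggregationvalue.stoptime) BETWEEN $1 AND $2
     AND aggregationvalue.aggregatetype = 'HOURLY'
     AND split_part(aggregationvalue.value, ':', 1) = $3;

EXECUTE agg_q ('2008-12-18', '2008-12-18', '12');
DEALLOCATE agg_q;
```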
On Thu, 2008-12-18 at 14:52 -0500, Tom Lane wrote:
> Jeremiah Jahn writes:
> > This is with jdbc3-415. None of the (jdbc, or pg) change logs since then
> > have mentioned this
doh! my second prepared statement is getting prepared as all text, when
the second and third parameters should be timestamps.
argh! Is there some sort of logging that says how prepared statements
are getting prepared?
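For the archives, since it bit me: the server can show exactly how each statement gets prepared. A sketch only, assuming an 8.x-era setup where you can edit postgresql.conf and reload; the loglevel URL parameter is the old pgjdbc driver's own logging switch:

```
# postgresql.conf, followed by a pg_ctl reload:
log_statement = 'all'   # logs each parse/bind/execute, with parameter
                        # values shown in the DETAIL lines

# or, client side, turn on the old pgjdbc driver's logging:
#   jdbc:postgresql://host/copa?loglevel=2
```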
On Thu, 2008-12-18 at 13:13 -0600, Jeremiah Jahn wrote:
> moving on:
>
Just wanted to say thank you for version 8.3.
The ordered indexing has dropped some of my search times from over 30
seconds to 3. I've been beating my head against this issue for over 8
years. I will drink to you tonight.
thanx again,
-jj-
--
When you're dining out and you suspect something's
On Fri, 2009-01-09 at 08:17 +0100, Reg Me Please wrote:
> On Friday 09 January 2009 00:10:53 Jeremiah Jahn wrote:
> > Just wanted to say thank you for version 8.3.
> >
> > The ordered indexing has dropped some of my search times from over 30
> > seconds to 3. I'
Will an alter table that removes an oid column also remove all of the
associated large objects? I've been using blobs but have converted to
byte arrays, and now I need to get rid of all of the blobs. Will this be
enough? Followed by a vacuum, of course.
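For the record, my understanding (a sketch only; the table and column names below are placeholders, not from my schema): dropping the oid column does not delete anything from pg_largeobject, so the blobs have to be unlinked separately, either row by row with lo_unlink or afterwards with contrib's vacuumlo.

```sql
-- Unlink each blob while the referencing column still exists:
SELECT lo_unlink(blob_col) FROM mytable WHERE blob_col IS NOT NULL;
ALTER TABLE mytable DROP COLUMN blob_col;

-- Alternatively, drop the column first and sweep up the orphans with
-- contrib's vacuumlo:  vacuumlo -U copa copa

-- followed by a vacuum, as noted above:
VACUUM pg_largeobject;
```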
--
Jeremiah Jahn <[EMAIL P
base "copa" as user "copa"
pg_restore: creating table for large object cross-references
pg_restore: restored 5575606 large objects
pg_dump -F c -v -b -o -U copa copa > judici.pgsql
pg_restore -C -d template1 -F c -v -U copa < europa/judici.pgsql
thanx,
-jj-
tgresql-* to learn about how the packages
> are laid out.
>
>
> ---(end of broadcast)---
> TIP 5: Have you checked our extensive FAQ?
>
> http://www.postgresql.org/docs/faqs/FAQ.html
--
Jeremiah Ja
Tom Lane wrote:
> Jeremiah Jahn <[EMAIL PROTECTED]> writes:
> > when I run the following two commands all of my OIDs for my blobs (about
> > 5.5 million of them) no longer reference anything in pg_largeobject.
> > All of the loid values change.
>
> pg_dump/pg_restore do no
dump/initdb/restore. It's just always seemed kind of misleading to me...
-jj-
On Mon, 2004-01-26 at 13:51, Martín Marqués wrote:
> Mensaje citado por Jeremiah Jahn <[EMAIL PROTECTED]>:
>
> > although it will be taken care of, make sure that initdb sets the local
> >
eat. I
> really don't think putting them in the database will do anything
> positive for you. :)
tend to use BLOBS or Bytea.
>
> J
>
>
> Jeremiah Jahn wrote:
>
> >There has got to be some sort of standard way to do this. We have the
> >same problem where I work. Terabytes of images, but the question is
> >still sort of around "BLOBs or Files?" O
your website?
>
> -a
>
> Jeremiah Jahn wrote:
>
> >There has got to be some sort of standard way to do this. We have the
> >same problem where I work. Terabytes of images, but the question is
> >still sort of around "BLOBs or Files?" Our final decision was to use
d work to build a current snapshot and use its pg_dump against
> older servers, if you need a solution now.
>
> regards, tom lane
>
--
Jeremiah Jahn <[EMAIL PROTECTED]>
by comments, I mean these:
--
-- TOC entry 16 (OID 166152808)
-- Name: user_credit_card; Type: TABLE; Schema: public; Owner: copa
--
Have these really gone away in a new version, and if so, which one?
On Wed, 2004-04-14 at 10:26, Tom Lane wrote:
> Jeremiah Jahn <[EMAIL PROTECTED]&g
, but the optimizer feels that it would be quicker to do
an index scan for smith% and then join using the pkey of the person to
get their role. For litigants this makes sense; for non-litigants it
doesn't.
thanx for any insight,
-jj-
--
"You can't make a program without broken
I was wondering if there is something I can do that would act like an
index over more than one table.
I have about 3 million people in my DB at the moment, they all have
roles, and many of them have more than one name.
for example, a Judge will only have one name, but a Litigant could have
Here's a quick list of my experiences with BLOB's and such.
Performance is just fine, I get about 1M hits a month and haven't had
any problems. Use a BLOB if you don't need to search through the data.
The main reason is that bytea and text types are parsed. To explain,
your entire SQL statement
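To make the parsing point concrete (a sketch; the table and column names are placeholders and the escaped bytes are elided): with bytea the data is escaped into the statement text, so the whole statement, data included, goes through the parser, while a large object insert only carries an oid:

```sql
-- bytea: the image bytes ride inside the statement and get parsed with it
INSERT INTO images (name, data)
VALUES ('photo.jpg', E'\\377\\330\\377\\340...'::bytea);

-- large object: only an oid appears in the statement; the bytes are
-- moved separately through the lo_* functions
INSERT INTO images_lo (name, blob_oid)
VALUES ('photo.jpg', lo_import('/tmp/photo.jpg'));
```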