I've been trying to sort out the answer to this question for a while
now, and I've received different answers from different places.
I'm looking for a definitive non-volatile list of timezones for use in a
web application. I can't use the OS's time zone list, as changing OSes
may cause some listed tim
Alvaro Herrera wrote:
> alvherre=# select * from pg_timezone_names ;
>       name      | abbrev | utc_offset | is_dst
> ----------------+--------+------------+--------
>  Africa/Algiers | CET    | 01:00:00   | f
> Africa/Luanda
Tom Lane wrote:
> Naz Gassiep <[EMAIL PROTECTED]> writes:
>
>> Ok, that's kinda cool. But can I trust those names to not change from
>> version to version?
>>
>
> No, you can't. The reason there is no "nonvolatile" list of t
OK so which is the "correct" way to do it?
E.g., Say I have a table with users, and a table with clubs, and a table
that links them. Each user can be in more than one club and each club
has more than one member. Standard M:M relationship. Which link table is
the "right" way to do it?
This:
CREAT
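The conventional answer is a link table whose primary key is the pair of foreign keys. A minimal sketch, with table and column names assumed since the original DDL is truncated:

```sql
-- Conventional M:M link table: the composite PK enforces one row per
-- (user, club) pair and doubles as an index for lookups by user.
CREATE TABLE user_club (
    userid integer NOT NULL REFERENCES users (userid),
    clubid integer NOT NULL REFERENCES clubs (clubid),
    PRIMARY KEY (userid, clubid)
);

-- An extra index on clubid alone serves lookups by club.
CREATE INDEX user_club_clubid ON user_club (clubid);
```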
Hi,
If I have a table with users and a table with messages, is it
possible to have a query that returns user.* as well as one extra column
with the number of messages they have posted and the date and time of
the last message? At the moment I am using a subquery to do this,
however it seems sub
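One common alternative to a per-row subquery is a single grouped join. A sketch, assuming hypothetical `messages(userid, posted_at)` columns:

```sql
-- Aggregate once over messages, then join the result to users.
SELECT u.*, m.msg_count, m.last_posted
  FROM users u
  LEFT JOIN (SELECT userid,
                    count(*)       AS msg_count,
                    max(posted_at) AS last_posted
               FROM messages
              GROUP BY userid) m ON m.userid = u.userid;
```

The LEFT JOIN keeps users with no messages; their count and timestamp come back NULL.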
I'm using PG on windows for the first time (as of about 6 minutes ago).
I was thinking that it would be great to have a system tray icon with a
running indicator, kind of like the way Apache2.x for windows has, or
even MSSQL. Perhaps the PG logo with a small white circle with a red
square or a gree
s
the same as Apache for Win32, MSSQL and many other server programs.
- Naz.
Bruce Momjian wrote:
Naz Gassiep wrote:
I'm using PG on windows for the first time (as of about 6 minutes ago).
I was thinking that it would be great to have a system tray icon with a
running indicator, k
Hey,
I'm sure that'd be greatly appreciated, most other major servers and
DBs have a similar feature, and that's what the systray is for, i.e.,
viewing major user-installed services.
- Naz.
Tony Caduto wrote:
> Bruce Momjian wrote:
>> Naz Gassiep wrote:
>>
>>
Magnus Hagander wrote:
Naz Gassiep wrote:
Hey,
I'm sure that'd be greatly appreciated, most other major servers and
DBs have a similar feature, and that's what the systray is for, i.e.,
viewing major user-installed services.
Don't forget that the tray (or tas
Is there a limit on the length of table names?
Thanks,
- Naz.
---(end of broadcast)---
TIP 6: explain analyze is your friend
Why have I been told that I need to do a re-initdb to change the char
encoding? The manual says I can just createdb foodb -E utf8, so why
would I need to dump/initdb/create/restore? Can't I just dump/create/restore?
It'd save all the messing around with changing the data dirs etc.
Tom Lane wrote:
> Are you clear on the difference between encoding and locale?
>
I confidently reply with "maybe".
> You can make new databases with whatever encoding you say, but the
> server's lc_collate and lc_ctype are frozen at initdb, and it will
> not work well to select an encoding tha
Tom Lane wrote:
>> I've successfully created a utf8 database, does that imply that
>> because I was able to create a DB with a different encoding to the ones
>> all the others use (SQL_ASCII) that my locale is set to "C" ?
>>
>
> No, that implies a lack of error checking.
Surely, then, that's
Surely such a use case could, and more to the point *should* be met
using PITR?
Regards,
- Naz.
Alvaro Herrera wrote:
A.M. wrote:
On May 24, 2007, at 14:29 , Wiebe Cazemier wrote:
On Thursday 24 May 2007 17:30, Alexander Staubo wrote:
[2] Nobody else has this, I believe, exc
I think you might need to write a custom lexer to divide the strings
into meaningful units. If there are subsections of these names that
make sense to search for, then tsearch2 can certainly handle the
mechanics of that, but I doubt that the standard rules will divide
these names into lexemes u
I'm trying to remove a schema and move all the tables to another schema.
I've manually run alter table on every table to move them, however all
the foreign keys still reference the old schema, and there are too many
to do by hand.
Is there an easy way to update one of the system catalogs to do
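For what it's worth, from 8.1 onward there is a command that moves a table between schemas, and its constraints follow automatically, since foreign keys reference tables by OID rather than by schema-qualified name. A sketch with assumed names:

```sql
-- Hypothetical example: move one table; FK constraints pointing at it
-- keep working because they track the table's OID, not its name.
ALTER TABLE oldschema.sometable SET SCHEMA public;
```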
When entering data into a timestamptz field, if no timezone is added
does it assume you've entered a UTC time, or the time at the timezone
set in the session with SET TIMEZONE, or the local system time ?
- Naz
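The documented behaviour is the middle option: a literal with no explicit zone is interpreted in the session's TimeZone setting. A quick demonstration:

```sql
SET TIME ZONE 'UTC';
SELECT '2007-06-01 12:00'::timestamptz;  -- interpreted as 12:00 UTC

SET TIME ZONE 'Australia/Melbourne';
SELECT '2007-06-01 12:00'::timestamptz;  -- interpreted as 12:00 Melbourne time
```

Internally the value is stored as UTC either way; the session zone only governs how unzoned input is read and how output is displayed.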
Hi,
Does the connection pooling feature of PHP cause the persistent
connections to keep the properties between accesses? E.g., if a user
takes a connection, sets a timezone to it using SET TIMEZONE, will the
next user who happens to take this connection get it in that same state,
or will it
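Session state such as SET TIMEZONE does survive on a reused persistent connection, so the next user inherits whatever the previous one set unless something resets it. From PostgreSQL 8.3 onward a pool can reset a connection explicitly:

```sql
-- Resets SET values, temp tables, prepared statements, etc.,
-- returning the session to its default state.
DISCARD ALL;
```

On older versions the application has to RESET each parameter it changed (e.g. `RESET timezone;`) before handing the connection back.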
As clearly stated in the documentation
http://www.postgresql.org/docs/8.2/interactive/datatype-datetime.html#DATATYPE-TIMEZONES
Perhaps I'm thick, but I don't find that particular page to be clear on
this at all.
- Naz.
Hi all,
I am still, after quite some time, wrangling over the time zone
system in my app. I have sorted out all the internal handling, however I
am still uncertain as to what the best way to get the user to select
their time zone is.
I was thinking of having users just select their time
Do the views in 8.2 pg_timezone_abbrevs and pg_timezone_names help at all?
They are where I am currently getting the authoritative list of
timezones. However this list does not seem to be quite appropriate to
expose users to directly. Read my original post, I've explained it a
little more
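A sketch of the kind of filtering involved before exposing the list to users: keep only the Area/Location style names and drop the Etc/* entries (which carry POSIX-style inverted signs):

```sql
SELECT name, utc_offset
  FROM pg_timezone_names
 WHERE name LIKE '%/%'          -- keep Area/Location style names
   AND name NOT LIKE 'Etc/%'    -- drop the POSIX-sign Etc zones
 ORDER BY name;
```

The exact predicates are an assumption about which zones an application wants to hide; adjust to taste.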
I was wondering if there is any reason that accessing the system view
pg_timezone_names is extremely slow relative to other queries. The
following query:
SELECT * FROM pg_timezone_names;
Executes in between 29ms and 32ms on my server. It takes about the same
when I put a
WHERE name =
I'm making (slow) progress in my timezone system, and I just noticed
this little behavioral nugget, which surely is a bug. In the system view
pg_timezone_names is a few timezones that use leap seconds. An example
which I tested is Asia/Riyadh87. When I attempt to SET TIME ZONE using
this timezo
of
TIME/TIMESTAMP/INTERVAL data types, which, as noted by the changelog, is
needed for support of leap seconds. So this error very well may be a
behavioral bug. Should I post this to -hackers or -bugs ?
- Naz.
Naz Gassiep wrote:
I'm making (slow) progress in my timezone system, and I
I just noticed that in the pg_timezone_names system table, the name and
abbrev of the "Etc/%" timezones appear to be inverted with their
utc_offset value.
I never noticed before, as I filter these zones out (among others) and
do not use them in my app. I was just interested as to why the sugge
I'm pretty certain that this is a bug. Can anyone confirm?
It is a bug -- in the SQL standard definition. The meaning of the sign
is inverted w.r.t. the relevant POSIX (?) standard, AFAIU.
Unsurprisingly, we're following the SQL standard here.
Wow. Seriously, wow.
Good thing I'm filter
Aside from the messy nomenclature, is anyone able to spot why
the "sum" column from the first query is not returning 7, as
the second query suggests that it should? I know that this is
probably simple, and that it's probably going to jump out at
me the minute I hit "Send", but if I don't hit sen
Hi there,
I am creating functionality where there are blocks of text that are
being stored in the DB and that need to be searched for. No like or
pattern matching, just a plain old WHERE clause. Obviously, hash indexes
would be best here, however I've been warned away from PG's hash
impleme
Why are hash indexes "obviously" best? In an ideal world with a good
implementation maybe, but postgresql b-trees are really quite good.
Because doing normal queries on a table where there are large text
blocks is unlikely to be a good idea. E.g.,:
SELECT * FROM table WHERE textcol = 'a 4k
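One workaround that keeps b-tree indexes while avoiding indexing 4 kB strings directly: index a digest of the column and compare digests first. `md5()` is built in; the table and column names here are assumed:

```sql
-- Index the digest, not the full text.
CREATE INDEX docs_body_md5 ON docs (md5(body));

-- The md5 comparison uses the index; the direct comparison guards
-- against the (unlikely) case of a hash collision.
SELECT *
  FROM docs
 WHERE md5(body) = md5('...the search text...')
   AND body = '...the search text...';
```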
A tangentially PG related question:
In a PHP project I have several functions that I use for DB operations.
I only want to allow one of them to write, all the others are for
reading only. I was thinking that a way I can enforce this would be to
check that the read only ones only have queries w
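Rather than inspecting query strings in PHP, the database itself can enforce this. One option is to flip the connection into read-only mode inside the reader functions:

```sql
SET default_transaction_read_only = on;
-- Any subsequent INSERT/UPDATE/DELETE on this connection now fails
-- with "cannot execute ... in a read-only transaction".
```

A more thorough alternative is a separate database role granted only SELECT on the relevant tables, used by the read-only functions.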
If you have an index like this:
CREATE UNIQUE INDEX foo ON tablename (f1, f2);
Is there any value in having independent indicies on f1 and f2 as well
or are they unnecessary?
Thanks
- Naz.
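The rule of thumb: the composite index already serves queries on f1 alone (its leading column), but not queries on f2 alone, so only an f2 index adds value. Sketch:

```sql
CREATE UNIQUE INDEX foo ON tablename (f1, f2);

-- Redundant: WHERE f1 = ... can already use the leading column of foo.
-- CREATE INDEX foo_f1 ON tablename (f1);

-- Useful only if you query on f2 without f1:
CREATE INDEX foo_f2 ON tablename (f2);
```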
1) What type of names do you prefer?
---
a) old notation - createdb, createuser ...
b) new one with pg_ prefix - pg_createdb, pg_createuser ...
c) new one with pg prefix - pgcreatedb, pgcreateuser ...
d) remove them - psql is the solution
e) remove them - pgadmin is th
I have just attempted to upgrade to 8.3.1 and I now get this error when
trying to create a UTF8 DB:
[EMAIL PROTECTED]:~$ createdb twerl -E utf8
createdb: database creation failed: ERROR: encoding UTF8 does not match
server's locale en_AU
DETAIL: The server's LC_CTYPE setting requires encodin
Short answer is: use en_AU.UTF-8 for your locale. If it doesn't exist
you can create it using /etc/locale.gen (assuming you're running some
kind of Linux)
I've just installed that locale on my system (Debian Sarge). However I'm
still getting the error. I only set the locale for that user,
I have just created a table using SELECT INTO however the PK was
supposed to be a serial. It is now an integer. To make it a serial I
just create the seq and set the default to be the nextval() of that
sequence right? is there anything else I need to do? It'll maintain the
transactional safety
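That is essentially it; the extra steps worth adding are tying the sequence to the column (so it is dropped with it) and advancing it past the existing data. A sketch with assumed names:

```sql
-- Hypothetical names: newtab(id integer) created via SELECT INTO.
CREATE SEQUENCE newtab_id_seq OWNED BY newtab.id;

-- Start the sequence after the highest existing id.
SELECT setval('newtab_id_seq', coalesce(max(id), 1)) FROM newtab;

ALTER TABLE newtab ALTER COLUMN id SET DEFAULT nextval('newtab_id_seq');
```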
The following query is taking a long time to execute, 500ms or so. This needs to be about
100ms or so in order to be acceptable. Can anyone spot any optimisations that I could
make to this query to bring the exec time down? Have I designed this query correctly?
Is joining to the same table every time
version.
- Naz.
Craig Ringer wrote:
Naz Gassiep wrote:
JOIN files imageid_file ON (images.imageid =
imageid_file.fileid)
JOIN files size120_file ON (images.size120 =
size120_file.fileid)
JOIN files size240_file ON (images.size240
Is there a way to change the schema that all objects are in?
Essentially I want to move everything currently in the database into
public rather than having the complex schemas that I have at the moment.
They are unnecessary and the DB complexity is trivial, so using schema
partitioning is mo
It's the folks who think that non-Windows-using developers should care
about Vista that bug me. This is open-source code, people. Scratch
your own itch.
The "scratch your own itch" line can only be pushed so far, if it is
being said by a developer who works on a project that desires to be
So what do you suggest? -core vote and "order" someone to do the work?
Postgresql.org isn't a business and doesn't employ any developer - we
only have the option of accepting patches from people/companies with
itches.
I don't suggest any change to any structures in place; it's a purely PR
p
So...
If you're not a PostgreSQL Win32 port dev, and *don't know* what they're
up to as far as Vista, why respond to the Q? Or why respond "fix it
yourself" instead of "ask this guy" or "nobody here will know yet" or
"post your query on -ports or -hackers".
Precisely. My point is not that
I was performing a bunch of INSERTs into a table, users, that has a
unique index on username. During the transaction, my internet connection
dropped. The INSERTs were being done inside a transaction.
Once I had manhandled my DSL router back online, I went back into a
console to redo the insert
I just did a vacuum analyze and I got a message I've never seen before:
conwatchlive=# vacuum analyze;
NOTICE: number of page slots needed (27056) exceeds max_fsm_pages (2)
HINT: Consider increasing the configuration parameter "max_fsm_pages"
to a value over 27056.
VACUUM
conwatchlive=#
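The fix (on releases before 8.4, where the free space map moved to disk and these settings disappeared) is a postgresql.conf change followed by a restart, choosing a value comfortably above the reported need:

```
# postgresql.conf (pre-8.4); 50000 is an assumed round number
# comfortably above the reported 27056 page slots.
max_fsm_pages = 50000
```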
You have to run ANALYZE; on your db after a drop/reload to recollect
the stats. In the test db, just run ANALYZE; and then see how fast it
is. I'd guess that this is your issue.
Regards,
- Naz.
Andrew Edson wrote:
I have a select statement, used in a Perl program, which is
supposed to find a
Using EXPLAIN ANALYZE I can get the execution time of a query. Is there
a command I can use to get the execution time without the planning
information? I just need to time lots of queries that have complex plans
and it'd be easier if I didn't have pages and pages of planning info
between tries.
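psql's \timing toggle does exactly this: it reports wall-clock elapsed time for each statement without printing a plan.

```sql
\timing on
SELECT count(*) FROM mytable;  -- hypothetical query; psql appends
                               -- a "Time: ... ms" line after the result
```

Note the reported time includes client round-trip, not just server execution, so compare runs on the same connection.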
That's not quite as fast as I would like to do it; that throws in a few
more steps which slow down the development process. However if there is
no way I will persevere with the method I have now.
Thanks,
- Naz.
Jorge Godoy wrote:
Naz Gassiep <[EMAIL PROTECTED]> writes:
Us
Is anyone able to tell me why in the last column of the returned result set,
the value calculated is always 0?
QUERY:
SELECT products.productid,
products.cost,
products.srp,
CASE WHEN products.srp > 0 THEN (products.srp - products.cost)
Indeed.
Thanks for that! I keep getting bitten by that too hehe.
- Naz.
William Garrison wrote:
My guess is that integer division is to blame: 50 divided by 1500 =
0.033, which integer division truncates to zero. You probably have to
cast them to numeric or real before doing the division.
Naz Gassiep wrote:
Is anyone able
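The fix is to force non-integer arithmetic before dividing, e.g. by casting one operand:

```sql
SELECT 50 / 1500;           -- 0: integer / integer truncates
SELECT 50::numeric / 1500;  -- 0.0333...

-- Applied to the original query, something along the lines of:
--   (products.srp - products.cost)::numeric / products.srp
```

The existing `CASE WHEN products.srp > 0` guard already protects against division by zero; only the cast is missing.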
This is possibly not a DB only problem, the solution may involve
application logic as well. But PG users
are the smartest bunch I know. Ass kissing aside, here are the details:
*** The Scenario ***
We are running a customer loyalty program whereby customers earn points
for purchasing products.
Here it is again with more sensible wrapping:
*** The Scenario ***
We are running a customer loyalty program whereby customers earn points
for purchasing products. Each product has a value of points that are
earned by purchasing it, and a value of points required to redeem it.
In order to prev
I have been looking for such a function. Having just upgraded to 8.2,
this function is a very welcome addition to my arsenal of tools.
Many thanks!
- Naz.
Reece Hart wrote:
On Sun, 2007-03-18 at 00:40 +0530, Mageshwaran wrote:
how to find the size of a particular database in postg
This problem is to do with bulk loading of data. I use the following
scripts to take data from a live DB and put it into a testing DB with
the current version of the schema:
# SCRIPT 1
pg_dump blogbog -a -D -f blogbog_data.sql
dropdb blogbogtemp
createdb blogbogtemp
psql blogbogtemp -f /www/htd
Joshua D. Drake wrote:
Example discussion with customer:
Customer: CMD, should we update to 8.2.3
CMD: Is there something in 8.2.3 that will benefit you?
Customer: We don't know
CMD: Are you having problems with 8.1? (We try to push all customers to
at least 8.1)
Customer: No, it is just that 8.
Tom Lane wrote:
Naz Gassiep <[EMAIL PROTECTED]> writes:
Joshua D. Drake wrote:
Example discussion with customer:
...
Finally, in the absence of security concerns or performance issues (and
I mean the "we can't afford to buy better hardware" type edge
This doesn't answer your question, but I thought I'd throw my opinion in
anyway.
My personal view is that in general, binary files have no place in
databases. Filesystems are for files, databases are for data. My design
choice is to store the files in a filesystem and use the database to hold
Hi,
I'm trying to do an update on a table that has a unique constraint
on the field, I need to update the table by setting field = field+1
however if this does not perform the updates on the table in a proper
order (from last to first) then the update will cause a violation of the
index *durin
Michael Glaesemann wrote:
>
> On May 2, 2007, at 23:01 , Naz Gassiep wrote:
>
>> I'm trying to do an update on a table that has a unique constraint
>> on the field, I need to update the table by setting field = field+1
>> however if this does not perform the
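PostgreSQL (at least through the 8.x releases) checks unique constraints row by row during the UPDATE rather than at statement end, so a common two-pass workaround, assuming the column values are all positive, is to move the rows out of the way first:

```sql
BEGIN;
-- Pass 1: park the affected rows at negative values no row uses,
-- already incremented. (The WHERE clause here is an assumed example.)
UPDATE t SET field = -(field + 1) WHERE field >= 5;
-- Pass 2: flip them back to positive.
UPDATE t SET field = -field WHERE field < 0;
COMMIT;
```

A row with field = 5 becomes -6 in pass 1 and 6 in pass 2, so no two rows ever collide mid-statement.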
This may be a question for -hackers, but I don't like disturbing them
unnecessarily.
I've been having a look at memcached. I would like to ask, is there any
reason that, theoretically, a similar caching system could be built
right into the db serving daemon?
I.e., the hash tables and libevent cou
Hannes Dorbath wrote:
> I think this is close to what MySQL's query cache does. The question
> is if this should be the job of the DBMS and not another layer. At
> least the pgmemcache author and I think that it's better done outside
> the DBMS. See
> http://people.FreeBSD.org/~seanc/pgmemcache/pgm
>> This is exactly what I was asking about. So my theoretical idea has
>> already been implemented. Now if only *all* my ideas were done for me by
>> the time I came up with them :)
>
> Then you wouldn't be able to eventually patent them ;)
What an un-BSD licensish thing to say :P
--
> I have always found MySQL's query cache to be utterly useless.
>
> Think about it this way :
>
> It only works for tables that seldom change.
> It does not work for big tables (like the posts table of a forum)
> because the cache would have to be huge.
>
> So, the most freque
I'm working in a project at the moment that is using MySQL, and people
keep making assertions like this one:
"*Really* big sites don't ever have referential integrity. Or if the few
spots they do (like with financial transactions) it's implemented on the
application level (via, say, optimistic loc
I am getting an error that I think I understand, but that I didn't think
should happen.
Below is the output from psql that I am getting to trigger this error.
If the violation of the constraint really is being caused WITHIN the
query, doesn't that violate the principle of atomicity? I.e., oper
Peter Eisentraut wrote:
Naz Gassiep wrote:
If the violation of the constraint really is being caused
WITHIN the query, doesn't that violate the principle of atomicity?
I.e., operations and entities should be considered a single entire
construct rather than a collecti
inconsistent at some point DURING its execution. This seems odd to me,
as queries should not trigger errors like that if the DB is only out of
consistency DURING its execution, as long as it is consistent before and
after.
Regards,
- Naz.
Joshua D. Drake wrote:
Naz Gassiep wrote:
I am getting an
:
On Aug 29, 2006, at 4:46 , Peter Eisentraut wrote:
Naz Gassiep wrote:
conwatch=# UPDATE replies SET rgt = rgt + 2 WHERE postid = 18 AND rgt
= 11;
ERROR: duplicate key violates unique constraint "replies_rgt_postid"
This is a well-known deficiency in PostgreSQL. You will have to w