Hi Andres,
> No fsyncs will be issued in the common paths, and they won't be written
> to disk by !shutdown checkpoints. But they *will* be written to disk if
> there's not enough space in shared_buffers for all the data.
With unlogged tables I still see data written to the device every 10
seconds.
Hi Vik,
> Yes. The only difference between logged and unlogged tables is the lack
> of WAL. As long as there's no crash, unlogged tables are treated the
> same as logged tables as far as flushing/fsync-ing is concerned.
Ok, that's really bad news :/
After reading the discussion about calling unlo
Hi Andreas,
> They will be lost after a crash, but after a regular shutdown / restart all
> data in the table is preserved.
Yes, the semantics are clearly stated in the documentation.
What I wonder is whether postgresql will issue flush/fsync operations
when unlogged tables are modified?
Regards, Clemens
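For reference, this is the kind of table in question - a minimal sketch, with table and column names invented for illustration:

```sql
-- Unlogged: no WAL is written for this table, but dirty pages can
-- still be flushed from shared_buffers to disk (e.g. by checkpoints
-- or buffer eviction), which is exactly the behavior asked about.
CREATE UNLOGGED TABLE sensor_log (
    id  serial PRIMARY KEY,
    ts  timestamp without time zone,
    val real
);
```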
--
Hi,
I would like to use postgresql's unlogged tables on an embedded system
to avoid frequent writes to flash memory.
While documentation clearly states that unlogged tables don't have to
go through the WAL, it doesn't mention what happens to the data when
it is written directly to the database.
W
Hi Peter,
Thanks a lot for your reply.
> What Postgres version? The WAL Writer will hibernate on Postgres 9.2+.
> walwriter.c says:
I am using Postgresql-9.1 shipped with Raspbian (debian for raspberry pi).
> /*
> * Number of do-nothing loops before lengthening the delay time, and the
> * mul
Hi,
Is there any technical reason why wal_writer_delay is limited to 10s?
I am using postgresql in an embedded system and writing every 10s
burns flash at an amazing rate (a 32GB SDHC card with ext4 survived 4 days with
only a few GB written).
I've now switched to nilfs2 which is way more friendl
Hi,
> If you really want ZFS, I would highly recommend looking into
> FreeBSD (Postgresql works great on it) or if you want to stick with Linux,
> look into mdadm with LVM or some other filesystem solution.
If you want to use ZFS because of its features, take a look at btrfs.
It provides a lot of
Hi,
Unfortunately the question is still open - is there any technical
reason why wal_writer_delay is limited to 10s?
I am using postgresql in an embedded system and writing every 10s
burns flash at an amazing rate (a 32GB SDHC card survived 4 days with
only a few GB written).
Thank you in advance
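For context, the settings involved might look like this in postgresql.conf - a sketch; 10s is the upper limit being questioned:

```ini
# postgresql.conf (sketch)
synchronous_commit = off   # commits don't wait for the WAL flush
wal_writer_delay = 10s     # hard-limited to 10s in this version
```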
Hi,
Running postgres 9.1.11 + postgresql-9.3-1100.jdbc41.jar, I experience
a strange phenomenon using indexes.
I have the following schema:
CREATE TABLE wplog (id serial NOT NULL,ts timestamp without time zone,
sensor1 real, sensor2 real, noiselevel smallint, CONSTRAINT wplog_pkey
PRIMARY KEY (id));
Hi,
Just to be curious, why is wal_writer_delay limited to 10s?
I am using postgresql in an embedded environment where every 10s
sensor values are logged and even with "synchronous_commit = off" and
wal_writer_delay=1 this burns quite a lot of NAND cycles. For me
it wouldn't hurt losing minut
Hi Kevin,
Sorry, that's the wrong way around. I should have said:
>
> Your BEFORE UPDATE trigger could leave the "synced" value in NEW
> alone if force_sync was **true**, and set "synced" to false
> otherwise. It could then set NEW.force_sync to false, to leave you
> ready for the next update.
>
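A sketch of that corrected logic as a plpgsql trigger function - the "synced" and "force_sync" column names come from the thread; the function name and everything else is assumed:

```sql
CREATE OR REPLACE FUNCTION update_synced_column()
RETURNS trigger AS
$BODY$
BEGIN
    IF NEW.force_sync IS NOT TRUE THEN
        -- not explicitly forced: mark the row as out of sync
        NEW.synced := false;
    END IF;
    NEW.force_sync := false;   -- reset, ready for the next update
    RETURN NEW;
END
$BODY$ LANGUAGE plpgsql;
```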
Hi,
> NEW reflects what the row will look like after the UPDATE. There
> is no way to tell which columns were specified in the SET clause of
> the UPDATE; a column which is omitted from that clause will look
> exactly the same as a column which is set to the value it already
> had.
>
Thanks a lot
Hi,
Sorry for this newbie question; I have been trying for quite some time
to get the following trigger function to work properly:
CREATE OR REPLACE FUNCTION update_synced_column()
RETURNS trigger AS
$BODY$
BEGIN
IF NEW.synced IS NULL THEN
NEW.synced := false;
END IF;
RETURN NEW;
END
$BODY$ LANGUAGE plpgsql;
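For completeness, the function only runs once it is attached with CREATE TRIGGER - a sketch in which the table name "mytable" is a placeholder:

```sql
CREATE TRIGGER set_synced
    BEFORE INSERT OR UPDATE ON mytable   -- "mytable" is assumed
    FOR EACH ROW
    EXECUTE PROCEDURE update_synced_column();
```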
Hi Misa
> But Example shows - that C app - actually asks all the time to get notify...
> when gets something more then 4 times - exit... until 4 times loops...
> The same you can achieve with PHP...
As far as I understood, with php I have to query the server again and
again, and pg_get_notify wil
Hi Misa
> What is the main goal?
The main goal is to perform some inter-system communication in the
case some rows in one table are updated (very seldom event).
> even using libpg - you need to call pg notify... Doc says, just using
> libpgtcl would be possible to get Notify event - without chec
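For reference, the server side of LISTEN/NOTIFY is plain SQL; a minimal sketch with an invented channel name:

```sql
LISTEN table_changed;                 -- run in the listening session
NOTIFY table_changed, 'row updated';  -- e.g. issued after an update
-- inside a plpgsql trigger one would use instead:
--   PERFORM pg_notify('table_changed', NEW.id::text);
```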
Hi Bill,
>> Is there any way to listen to NOTIFY in php without polling using a
>> callback or blocking call?
>
> Not at this time.
Too bad ... Thanks for the confirmation.
I'll try to invoke a native libpq binary which stays alive until a
NOTIFY is received; that should do the trick in case update-f
Hi,
Sorry for asking such a newbie-question, I've used a search engine -
however I haven't found what I am searching for.
Is there any way to listen to NOTIFY in php without polling using a
callback or blocking call?
I've only found pg_get_notify(), however it requires polling as far as
I underst
Hi Richard,
> Might be worth looking at PgQ - a queueing system underlying Londiste. That
> would handle tracking the changes in PostgreSQL leaving you to just handle
> the MySQL end. Timestamps will do the job as long as you are careful to
> allow enough slack to deal with clock updates.
Thanks
Hi Richard,
>> Will triggers (after
>> update specifically) cause the execution of SQL-commands to pause
>> until the trigger-function has returned (at statement execution time
>> or commit)?
>
> The trigger will block. If it didn't then it couldn't abort the transaction
> if it needed to.
Thank
Hi,
We are currently evaluating the feasibility of executing long-running
scripts written in shell-script (plsh) called by triggers (after
update/insert) to synchronize two databases. Will triggers (after
update specifically) cause the execution of SQL-commands to pause
until the trigger-function
Hi,
> I think I'm trying to learn information what is the technique for managing
> large databases
> not to philosophize what was my server.
In that case, it becomes very specific to what you are trying
to accomplish.
Transactional databases offer a lot of guarantees, which makes it hard to
Hi again,
We are still constantly getting postgresql processes killed by signal
9 from time to time, without any idea why or how.
Syslog seems completely clean.
If a postgresql process exceeded some restricted resource,
such as file descriptors, would the kernel choose to terminate it using
Hi Steve,
> Out of memory or OOM killer?? Any such messages in system logs?
That was my first thought too - but I couldn't find anything indicating
an OOM event in the logs.
Usually the server only uses ~110 MB out of the available 2 GB assigned to it.
So if this isn't a known postgres behaviour, I
Hi,
Recently single postgres processes are killed by SIGNAL 9 on our
virtual VMware-managed server without any manual interaction -
causing lost transactions.
Any ideas what could be the reason? Could the postmaster be the source of the signal?
We are running postgresql 8.4.7 on 64-bit Linux.
Thank you
Hi Tomas,
>> No, at least in the current version. The next version (9.2) will support
>> checksums, but it's meant mostly as a protection against failures at the
>> I/O level. It might catch some memory issues, but it certainly won't be
>> 100% protection.
>
>Oh, I see - it was bumped to 9.3 and
Hi,
Is there a tool for checking a postgresql database for integrity, so I
would be notified immediately if something went wrong, e.g. with memory,
and can go back to the last good backup?
Such a tool would guarantee me I am not sitting on a ticking time bomb
(corrupted database) without even noticin
Hi,
> Do you have more than one device mounted and if so are you sure you are
> working
> on the right one?
As often, the problem turned out to be a simple one.
With OpenWRT, /var is just a symlink to /tmp - which itself is located
on a ramdisk.
After moving the data-directory to /postgresql, e
Hi,
>> There are about 800mb free space on /, any idea what could be the problem
>> here?
>
> Could you be running out of file system nodes?
Don't think so - it's a standard ext4 filesystem on a flash drive,
nothing special.
Also I've added about 128mb of swap space, so that shouldn't be an issue
Hi,
I try to run postgresql on my MIPS-based OpenWrt router.
Although OpenWrt provides precompiled packages, I have trouble
initializing postgresql's file system:
# creating template1 database in /var/postgresql/data/base/1 ...
FATAL: could not write to file "pg_xlog/xlogtemp.1791": No space le
Hi,
I've defined a small trigger to increment a field each time the row is updated:
> CREATE TRIGGER inc_trigger BEFORE UPDATE ON Table FOR EACH ROW EXECUTE
> PROCEDURE inc_function();
Works quite well; however, the trigger is also fired if the table
itself is modified.
When deleting or insertin
Hi Pavel,
>> If so, are there some examples how to use "COPY FROM STDIN" with the
>> native C API?
>
> look to source code on \copy implementation in psql
>
> http://doxygen.postgresql.org/bin_2psql_2copy_8c.html
>
> http://www.postgresql.org/docs/8.1/static/libpq-copy.html
Thanks for the pointer
Hi,
We are working on a university project for network traffic accounting.
We use ulogd-2 to log netfilter packets; however, it creates a separate
transaction for each packet and doesn't use prepared statements, which
results in horrible performance.
What we are looking for is a low-overhead
Hi Robert,
> does this work for you?
> select u.id from (your unions) as u order by u.id
Unfortunately not, it converts my union results from INTEGER to RECORD.
However, it seems to be possible to order the unions directly:
> result1 UNION result2 ORDER BY u.id
Hmm, the query plan looks
Hi,
I have a query where I UNION several select statements which return
IDs of type INTEGER.
This works well if the IDs don't need to be sorted:
> SELECT id FROM table WHERE id IN ((select id FROM table WHERE ...) UNION
> (SELECT id FROM table_ WHERE ))
However I need the result the UNION
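For what it's worth, an ORDER BY placed after the last branch applies to the whole UNION without wrapping it in a subquery - a sketch with placeholder table names and predicates:

```sql
(SELECT id FROM table_a WHERE ...)   -- placeholder predicates
UNION
(SELECT id FROM table_b WHERE ...)
ORDER BY id;   -- sorts the combined result; the type stays INTEGER
```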
Hi Kevin,
> Sorry; I didn't mean to be harsh.
I also overreacted, sorry about that.
Indeed the documentation is well done, as is the software itself =)
Thanks, Clemens
> Sometimes people coming from some other products aren't used to that
> -- I was just trying to point you in the direction of
Hi,
I am working on a graphical table display widget which should be able
to display huge amounts of data, by lazy-loading database values. It
already works well if the primary-key and line-number are equal.
However I don't have any idea how to handle the case where the
primary-key contains holes,
Hello again,
> Well, if you were using 8.4 you could fool with configure's
> --with-wal-segsize option. Since you're not, look into
> src/include/pg_config_manual.h. In either case, expect to do a
> full recompile and initdb after changing it.
Good to know - I plan to upgrade to 8.4/9.0 anyway.
Hi,
I am using postgres-8.3 on an embedded ARM9 system.
Works pretty well, except for storage consumption.
The actual table data is rather small, but postgres creates two 16 MB
files in pg_xlog:
r...@mesrv:/var/lib/postgresql/8.3/main# ls -la pg_xlog/
total 32820
-rw--- 1 postgres postgres 167
Hi,
Sorry for being a bit off-topic.
Recently I've done some database front-ends, for which I used
java+swingset+netbeans-gui-builder.
Compared to plain Java, development is quite fast - however I wonder
whether you could recommend DB front-end generators like the infamous
Access.
What I've found so f
ask on the jdk-list.
Thanks for all your help and patience, Clemens
2009/7/26 Scott Marlowe :
> On Sun, Jul 26, 2009 at 11:20 AM, Clemens Eisserer
> wrote:
>> Hi Tom,
>>
>>> regression=# alter sequence foo_bar_seq start with 1000;
>>> ALTER SEQUENCE
>&
Hi Tom,
> regression=# alter sequence foo_bar_seq start with 1000;
> ALTER SEQUENCE
Completely forgot about that possibility, thanks a lot :)
What still puzzles me is how to get the sequence value applied.
MySQL's auto_increment simply ignores whatever value is supplied to
it, however postgres see
Hello,
Is it possible to use an implicit sequence with a start value?
Something like: CREATE TABLE foo (key SERIAL START 1000 PRIMARY KEY NOT NULL);
Thank you in advance, Clemens
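SERIAL itself has no START clause, but the implicit sequence (named <table>_<column>_seq) can be adjusted right after creation - a sketch:

```sql
CREATE TABLE foo (key SERIAL PRIMARY KEY NOT NULL);
-- the implicit sequence is foo_key_seq
ALTER SEQUENCE foo_key_seq RESTART WITH 1000;
```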
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)