it. It was the psycopg adapter. My bad!!
Thanks Adrian / Tom.
Rich
On 8 Apr 2013, at 14:58, Adrian Klaver wrote:
> On 04/08/2013 06:49 AM, Richard Harley wrote:
>> It's
>>
>> Adrian Klaver wrote:
> On 04/08/2013 06:45 AM, Richard Harley wrote:
>> I am running the query straight through PSQL so there are no other programs
>> or adapters.
>>
>> The field definition is just 'timestamp'.
>
> From psql what do you get if yo
I am running the query straight through PSQL so there are no other programs or
adapters.
The field definition is just 'timestamp'.
I did try that as well - no luck :)
Rich
On 8 Apr 2013, at 14:36, Adrian Klaver wrote:
> On 04/08/2013 06:27 AM, Richard Harley w
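(A quick way to confirm what "just 'timestamp'" actually resolves to - timestamp with or without time zone - is to ask psql directly; table and column names below are the ones from this thread:)

\d attendance

-- or, without psql meta-commands:
select column_name, data_type
from information_schema.columns
where table_name = 'attendance' and column_name = 'timestamp';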
Sure
Timestamp
2013/04/08 12:42:40 GMT+1
2013/04/08 12:42:33 GMT+1
2013/04/07 20:25:11 GMT+1
2013/04/07 20:19:52 GMT+1
2013/04/07 20:19:52 GMT+1
Some are GMT, some are GMT+1 depending on when they were entered.
On 8 Apr 2013, at 14:25, Adrian Klaver wrote:
> On 04/08/2013 06:22 AM, Rich
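(A minimal sketch of checking what psql itself shows for those rows, which takes any driver or application formatting out of the picture; the LIMIT is arbitrary:)

select "timestamp"
from attendance
order by "timestamp" desc
limit 5;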
This doesn't seem to work - take a normal GMT date for example: 2012/12/14
12:02:45 GMT
select timestamp from attendance where timestamp = '2012/12/14 12:02:45'
..returns nothing
On 8 Apr 2013, at 14:17, Adrian Klaver wrote:
> On 04/08/2013 06:03 AM, Richard Harley w
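(One common reason an exact '=' on a timestamp column finds nothing is fractional seconds that the display hides; a sketch of two ways around that, using the literal from this message:)

-- ignore sub-second precision
select "timestamp" from attendance
where date_trunc('second', "timestamp") = '2012-12-14 12:02:45';

-- or use a half-open one-second window
select "timestamp" from attendance
where "timestamp" >= '2012-12-14 12:02:45'
  and "timestamp" <  '2012-12-14 12:02:46';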
Hello all
Pretty sure this should be simple - how can I select a timestamp from a
database?
The timestamp is stored in the db like this:
2013/04/08 13:54:41 GMT+1
How can I select based on that timestamp?
At the simplest level "select timestamp from attendance where timestamp =
'2013/04/08 1
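(A sketch of what usually works against a plain timestamp column - note the literal carries no 'GMT+1' suffix, since that part is only display formatting; the values are the ones from this message:)

select "timestamp"
from attendance
where "timestamp" = timestamp '2013-04-08 13:54:41';

-- or everything from that day:
select "timestamp"
from attendance
where "timestamp" >= '2013-04-08'
  and "timestamp" <  '2013-04-09';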
On 09/05/12 00:00, Tomas Vondra wrote:
On 8.5.2012 19:27, Richard Harley wrote:
I currently do nightly database dumps on a ton of small dbs that are
increasing around 2-3mb per day. Suddenly, in a recent backup file, one
db in particular jumped from 55mb to 122mb overnight.
Well, I wouldn
I currently do nightly database dumps on a ton of small dbs that are
increasing around 2-3mb per day. Suddenly, in a recent backup file, one
db in particular jumped from 55mb to 122mb overnight.
I did some investigation -
One table increased from 8mb to 31mb during a 24hr period. The table is
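(A minimal sketch of the size queries that help pin down this kind of jump, run in the affected database; nothing assumed beyond the system catalogs:)

-- overall size per database
select datname, pg_size_pretty(pg_database_size(datname))
from pg_database
order by pg_database_size(datname) desc;

-- biggest tables (including indexes and TOAST) in the current database
select relname, pg_size_pretty(pg_total_relation_size(oid)) as total_size
from pg_class
where relkind = 'r'
order by pg_total_relation_size(oid) desc
limit 10;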
your server has plenty of free resources there
won't be trouble, but I do have customers who cannot even imagine
launching a dump in normal traffic hours. How loaded is your box,
currently?
Cheerio
Bèrto
On 15 March 2012 12:15, Richard Harley <rich...@scholarpack.com> wrote:
Hello all
Very simple question - does pg_dump/dumpall hit the server in terms of
database performance? We currently do nightly backups and I want to move
to hourly backups but not at the expense of hogging all the resources
for 5 mins.
Pg_dumpall is currently producing a 1GB file - that's t
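(pg_dump is an ordinary client session - essentially one long transaction doing COPY - so while it runs you can see what it is doing from another session with a sketch like this; column names are the 9.2+ ones, older releases use procpid/current_query:)

select pid, state, query_start, query
from pg_stat_activity
where state <> 'idle'
order by query_start;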
Won't log_statement = all catch everything?
Richard
On 27/08/10 10:39, Mike Christensen wrote:
Hi all -
I've noticed my log files for Postgres are getting way too big, since
every single SQL statement being run ends up in the log. However,
nothing I change in postgresql.conf seems to make a bit
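(A sketch of the postgresql.conf knobs usually involved here - log_statement controls which statement classes are logged, and log_min_duration_statement logs only statements slower than a threshold; the values are illustrative and need a config reload to take effect:)

log_statement = 'none'                # none | ddl | mod | all
log_min_duration_statement = 500      # in ms; -1 disables duration-based logging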
That really helped me, thanks - although I wish someone had told me
about that /before/ I tried to run a nuclear reactor using MSSQL
On 27/08/10 07:30, Mike Christensen wrote:
I found this tool pretty helpful for validating my architectural decisions..
http://www.howfuckedismydatabase.com/