On Fri, Apr 08, 2016 at 09:09:22AM -0500, Merlin Moncure wrote:
> I rolled my own in bash. It wasn't that difficult. The basic tactic is to:
>
> *) separate .sql that can be re-applied (views, functions, scratch tables,
> etc) from .sql that can't be re-applied (create table, index, deployment
"Bannert Matthias" writes:
> Thanks for your reply. I do think it is a Postgres rather than an R issue;
> here's why:
> a) R simply puts an SQL string together. What Charles had posted was an
> excerpt of that string.
> Basically we have 1.7 MB of that string. Everything else is equal; just the
I'm attempting to upgrade a database from 9.2 to 9.5 using pg_upgrade. The 9.2
database has the "orafunc" extension installed, which appears to have been
renamed to "orafce". pg_upgrade complains that it can't find "orafunc" on 9.5,
which is true. Is there a standard way of handling this situation?
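One approach sometimes used in this situation (a sketch, not tested advice; audit dependent objects first) is to drop the old extension from the 9.2 cluster before running pg_upgrade, then install the renamed packaging on 9.5 afterwards:

```sql
-- Sketch only: assumes no application objects depend on orafunc's functions.
-- Run in the old 9.2 cluster before pg_upgrade:
DROP EXTENSION orafunc;    -- add CASCADE only after auditing what depends on it

-- Run in the new 9.5 cluster after pg_upgrade completes:
CREATE EXTENSION orafce;
```

Anything that was built on top of the extension's objects goes away with the DROP and would need to be recreated once the renamed extension is installed.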
Thanks for your reply. I do think it is a Postgres rather than an R issue;
here's why:
a) R simply puts an SQL string together. What Charles had posted was an excerpt
of that string.
Basically we have 1.7 MB of that string. Everything else is equal; just the
hstore contains 40K key-value pairs.
Thanks for the reply, Tom. template1 is definitely empty and does not contain
any hstore objects. I did a little debugging and placed the SQL below before
and after the hstore creation in the file produced by pg_dump, and
determined that these operator objects only become present immediately a
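For reference, a catalog query along these lines (a sketch; it assumes only the standard pg_operator and pg_type catalogs) can be run at various points during the restore to check when the hstore operators appear:

```sql
-- List operators that take hstore on either side.
SELECT o.oprname,
       lt.typname AS left_type,
       rt.typname AS right_type
FROM pg_operator o
LEFT JOIN pg_type lt ON lt.oid = o.oprleft
LEFT JOIN pg_type rt ON rt.oid = o.oprright
WHERE lt.typname = 'hstore'
   OR rt.typname = 'hstore';
```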
It looks like json_strip_nulls() may be what I need. I'm currently on 9.3,
which doesn't have that function, but I may be in a position to upgrade to 9.5
this summer. I think the apps that would be receiving the data can deal
with any resulting 'holes' in the data set by just setting them to null.
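On 9.5 the combination could look like this (a sketch; `mytable` is a placeholder name):

```sql
-- Requires 9.5+: json_strip_nulls() removes null-valued fields
-- from the JSON that row_to_json() produces for each row.
SELECT json_strip_nulls(row_to_json(t))
FROM mytable AS t;
```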
On Fri, Apr 8, 2016 at 11:44 AM, Scott Ribe
wrote:
> Alright, check kernel version, but what else, dump & restore?
>
> ERROR: unexpected data beyond EOF in block 1 of relation base/16388/35954
> HINT: This has been seen to occur with buggy kernels; consider updating
> your system.
>
> --
> Scot
On 4/8/2016 7:20 AM, Scott Mead wrote:
I'm not sure that link exists. The general rule is: if it's
POSIX, it'll work. You'll find that most PostgreSQL-ers have strong
opinions and preferences regarding filesystems. Personally, I
know that XFS will work; it's not *my* preference,
On 04/08/2016 08:04 AM, Karl O. Pinc wrote:
Hi Tim,
As arranged, I am cc-ing the pgsql-general list in the hope
they will assist. Your posts to the list may be delayed for
moderation; I can't say.
It could be helpful if you subscribed to the list, but it
is relatively high traffic and I know you have extremely limited
and expensive bandwidth.
On 04/08/2016 08:31 AM, Michael Nolan wrote:
I'm looking at the possibility of using JSON as a data exchange format
with some apps running on both PCs and Macs.
The table I would be exporting has a lot of NULL values in it. Is
there any way to skip the NULL values in the row_to_json function
On Fri, Apr 8, 2016 at 8:53 AM, Raymond O'Donnell wrote:
> On 08/04/2016 16:31, Michael Nolan wrote:
> > I'm looking at the possibility of using JSON as a data exchange format
> > with some apps running on both PCs and Macs.
> >
> > The table I would be exporting has a lot of NULL values in it.
On 08/04/2016 16:31, Michael Nolan wrote:
> I'm looking at the possibility of using JSON as a data exchange format
> with some apps running on both PCs and Macs.
>
> The table I would be exporting has a lot of NULL values in it. Is
> there any way to skip the NULL values in the row_to_json function
Alright, check kernel version, but what else, dump & restore?
ERROR: unexpected data beyond EOF in block 1 of relation base/16388/35954
HINT: This has been seen to occur with buggy kernels; consider updating your
system.
--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
I'm looking at the possibility of using JSON as a data exchange format
with some apps running on both PCs and Macs.
The table I would be exporting has a lot of NULL values in it. Is
there any way to skip the NULL values in the row_to_json function and
include only the fields that are non-null?
Hi Tim,
As arranged, I am cc-ing the pgsql-general list in the hope
they will assist. Your posts to the list may be delayed for
moderation; I can't say.
It could be helpful if you subscribed to the list, but it
is relatively high traffic and I know you have extremely limited
and expensive bandwidth.
"Charles Clavadetscher" writes:
> When R processes the daily time series we get a stack size exceeded
> error, followed by the hint to increase max_stack_depth.
Postgres doesn't generally allocate large values on the stack, and I doubt
that R does either. Almost certainly, what is causing this
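If raising the limit does turn out to be the right fix, it is an ordinary configuration parameter; a session-level sketch (the value must stay safely below the kernel's stack limit, `ulimit -s`):

```sql
SHOW max_stack_depth;          -- default is typically 2MB
SET max_stack_depth = '6MB';   -- session only; keep below ulimit -s minus a margin
```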
On Fri, Apr 8, 2016 at 9:16 AM, Marllius wrote:
> Thank you, but I need a link in the official PostgreSQL documentation.
>
I'm not sure that link exists. The general rule is: if it's POSIX,
it'll work. You'll find that most PostgreSQL-ers have strong opinions and
preferences regarding filesystems.
On Wed, Apr 6, 2016 at 5:55 AM, Alexey Bashtanov wrote:
> Hi all,
>
> I am searching for a proper database schema version management system.
>
> My criteria are the following:
> 0) Open-source, supports postgresql
> 1) Uses psql to execute changesets (to have no problems with COPY,
> transaction m
Thank you, but I need a link in the official PostgreSQL documentation.
OCFS2 = Oracle Cluster File System 2
2016-04-08 10:00 GMT-03:00 Bob Lunney :
> XFS absolutely does. It's well supported on Red Hat and CentOS 6.x and
> 7.x. Highly recommended.
>
> Don’t know about OCFS2.
>
> Bob Lunney
> Lead Data Architect
XFS absolutely does. It's well supported on Red Hat and CentOS 6.x and 7.x.
Highly recommended.
Don’t know about OCFS2.
Bob Lunney
Lead Data Architect
MeetMe, Inc.
> On Apr 8, 2016, at 8:56 AM, Marllius wrote:
>
> Hi guys!
>
> Are OCFS2 and XFS compatible with PostgreSQL 9.3.4?
>
>
Hi guys!
Are OCFS2 and XFS compatible with PostgreSQL 9.3.4?
I was looking at the documentation but did not find it.
Charles Clavadetscher wrote:
> We have a process in R which reads statistical raw data from a table and
> computes time series values
> from them.
> The time series values are in an hstore field, with the date as the key and
> the computed value as the value.
> The process writes the computed value into a t
On 08/04/2016 11:50, M Tarkeshwar Rao wrote:
> Hi all,
>
> Please let me know the latest PostgreSQL version available on Solaris 11.
>
> Which PostgreSQL version will be supported on Solaris 11.x, and when will
> it be available?
http://www.postgresql.org/download/sol
Hi all,
Please let me know the latest PostgreSQL version available on Solaris 11.
Which PostgreSQL version will be supported on Solaris 11.x, and when will it
be available?
Regards
Tarkeshwar
Hello
We have a process in R which reads statistical raw data from a table and
computes time series values from them.
The time series values are in an hstore field, with the date as the key and
the computed value as the value.
The process writes the computed value into a temporary table and locks the
corr
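A minimal sketch of the layout described above (table and column names are made up for illustration, and the hstore extension is assumed to be installed):

```sql
-- Assumes: CREATE EXTENSION hstore;
CREATE TEMP TABLE ts_values (
    series_id    integer PRIMARY KEY,
    observations hstore        -- date text as key, computed value as value
);

INSERT INTO ts_values
VALUES (1, hstore(ARRAY['2016-04-01', '2016-04-02'],
                  ARRAY['1.25', '1.31']));

-- Fetch one date's value:
SELECT observations -> '2016-04-01' FROM ts_values WHERE series_id = 1;
```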
I'm looking to extend my PostgreSQL 9.4 master with a few slaves in hot
standby read-only for load balancing.
The idea would be to update the slaves only at defined times (once every
24/48 hours) to avoid migration issues with the application server code
and also because the "freshness" of the
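Two 9.4-era mechanisms may be relevant here. WAL replay can be paused and resumed on each standby on a schedule (a sketch):

```sql
-- On the standby, freeze the visible snapshot between refresh windows:
SELECT pg_xlog_replay_pause();
-- ... the standby keeps receiving WAL but stops applying it ...
SELECT pg_xlog_replay_resume();   -- catch up again
```

Alternatively, 9.4's recovery_min_apply_delay setting in recovery.conf keeps a standby a fixed interval behind the master, though that gives a rolling delay rather than refreshes at defined times. Either way, WAL accumulates on the standby while it is not being applied.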
Rakesh Kumar wrote:
>> Every row has two system columns associated with it: xmin and xmax
>>
>> xmin is the transaction ID that created the row, while xmax is
>> the transaction ID that removed the row.
>>
>> So when an update takes place, xmax of the original row and xmin
>> of the new row are set
Jeff Janes wrote:
>> I am curious because of "while xmax is the transaction ID that
>> *removed* the row".
>
> "marked for removal" would be more accurate. If the row were actually
> physically removed, it would no longer have an xmax to set.
Yes, thanks for the clarification.
I was thinking "log
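The system columns can be inspected directly, which makes the "marked for removal" behaviour easy to see (a sketch):

```sql
CREATE TEMP TABLE demo (id int, val text);
INSERT INTO demo VALUES (1, 'a');

-- xmin is the inserting transaction's ID; xmax is 0 while no
-- transaction has deleted or updated (i.e. marked) this row version.
SELECT xmin, xmax, id, val FROM demo;

-- After an UPDATE, the *old* version's xmax is set to the updating
-- transaction's ID; the dead version stays on disk until VACUUM.
```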