On Tue, Apr 22, 2014 at 3:45 PM, Florian Weimer wrote:
> On 01/03/2014 06:06 PM, Claudio Freire wrote:
>
>> Per-query expectations could be such a thing. And it can even work with
>> PQexec:
>>
>> PQexec(con, "SELECT nextval('a_id_seq') FROM generate_series(1,10);");
>> --read--
PQexec(con, "SELECT nextval('b_id_seq') FROM generate_series(1,10);");
On 04/22/2014 07:03 PM, Claudio Freire wrote:
On Tue, Apr 22, 2014 at 8:19 AM, Florian Weimer wrote:
Feedback in this thread was, "we want something like this in libpq, but not
the thing you proposed". But there have been no concrete counter-proposals,
and some of the responses did not take in
On 01/03/2014 06:06 PM, Claudio Freire wrote:
Per-query expectations could be such a thing. And it can even work with PQexec:
PQexec(con, "SELECT nextval('a_id_seq') FROM generate_series(1,10);");
--read--
PQexec(con, "SELECT nextval('b_id_seq') FROM generate_series(1,10);");
--read--
PQexec(co
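For contrast, here is a minimal sketch of how the statements above have to be issued today with libpq's existing asynchronous calls: each PQsendQuery must be fully drained with PQgetResult before the next statement can be sent, so every statement still costs a round trip. The empty connection string (parameters taken from the environment) and the sequence names are assumptions carried over from the example, not part of any proposal.

#include <stdio.h>
#include <libpq-fe.h>

/* Send one statement and drain all of its results before returning. */
static void run(PGconn *conn, const char *sql)
{
    PGresult *res;

    if (!PQsendQuery(conn, sql))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        return;
    }
    while ((res = PQgetResult(conn)) != NULL)   /* must finish before the next send */
    {
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("%d rows\n", PQntuples(res));
        PQclear(res);
    }
}

int main(void)
{
    PGconn *conn = PQconnectdb("");             /* connection parameters from the environment */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }
    run(conn, "SELECT nextval('a_id_seq') FROM generate_series(1,10);");
    run(conn, "SELECT nextval('b_id_seq') FROM generate_series(1,10);");   /* second round trip */
    PQfinish(conn);
    return 0;
}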
On Tue, Apr 22, 2014 at 8:19 AM, Florian Weimer wrote:
> Feedback in this thread was, "we want something like this in libpq, but not
> the thing you proposed". But there have been no concrete counter-proposals,
> and some of the responses did not take into account the inherent
> complexities of r
On 01/05/2014 01:56 PM, Craig Ringer wrote:
JDBC also has a statement batching interface. Right now PgJDBC just
unwraps the batch and runs each query individually. Any async-support
improvements server-side should probably consider the need of executing
a batch. The batch might be one PreparedStatement
On Fri, Jan 03, 2014 at 03:06:11PM -0200, Claudio Freire wrote:
> On Fri, Jan 3, 2014 at 12:20 PM, Tom Lane wrote:
> > Claudio Freire writes:
> >> On Fri, Jan 3, 2014 at 10:22 AM, Florian Weimer wrote:
> >>> Loading data into the database isn't such an uncommon task. Not
> >>> everything
> >>>
On 01/04/2014 04:39 PM, Martijn van Oosterhout wrote:
Why switch between COPY commands, why could you not do it in one? For
example:
COPY table1(col1, col2, ...),
table2(col1, col2, ...)
FROM STDIN WITH (tableoids);
tableoid1	col1	col2	...
tableoid2	...
...
\.
My original idea was to avoid
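The tableoids form above is only a proposal; with today's protocol a bulk load is confined to one table per COPY statement. For reference, a minimal sketch of that single-table path through libpq follows; the table name, column list, and sample rows are assumptions for illustration only.

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/* Stream two text-format rows into table1 with the existing COPY protocol. */
static int copy_table1(PGconn *conn)
{
    PGresult   *res = PQexec(conn, "COPY table1(col1, col2) FROM STDIN");
    const char *rows[] = { "1\tfirst\n", "2\tsecond\n" };   /* tab-separated text rows */
    int         i, ok;

    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    for (i = 0; i < 2; i++)
        if (PQputCopyData(conn, rows[i], (int) strlen(rows[i])) != 1)
            return -1;
    if (PQputCopyEnd(conn, NULL) != 1)          /* finish the data stream */
        return -1;

    res = PQgetResult(conn);                    /* final command status */
    ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
    PQclear(res);
    while (PQgetResult(conn) != NULL)
        ;                                       /* drain until NULL as libpq expects */
    return ok;
}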
On 01/05/2014 03:11 PM, Greg Stark wrote:
On Fri, Jan 3, 2014 at 3:20 PM, Tom Lane wrote:
I think Florian has a good point there, and the reason is this: what
you are talking about will be of exactly zero use to applications that
want to see the results of one query before launching the next.
On Fri, Jan 3, 2014 at 3:20 PM, Tom Lane wrote:
> I think Florian has a good point there, and the reason is this: what
> you are talking about will be of exactly zero use to applications that
> want to see the results of one query before launching the next.
There are techniques for handling that
On 01/04/2014 01:22 AM, Merlin Moncure wrote:
> Long term, I'd rather see an optimized 'ORM flush' assemble the data
> into a structured data set (perhaps a JSON document) and pass it to
> some receiving routine that decomposed it into records.
The same is true on the input side. I'd much rather b
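The "receiving routine" above is a design idea rather than an existing API. One way to approximate it with what the server already offers (json_populate_recordset, available since 9.3) is to ship the whole flush as a single JSON parameter and decompose it server-side; the items table and its columns here are assumptions for illustration only.

#include <stdio.h>
#include <libpq-fe.h>

/* Sketch: send an ORM flush as one JSON array and let the server decompose
 * it into rows, so any number of records costs a single round trip. */
static int flush_items(PGconn *conn, const char *json_doc)
{
    const char *params[1] = { json_doc };
    PGresult   *res = PQexecParams(conn,
        "INSERT INTO items "
        "SELECT * FROM json_populate_recordset(NULL::items, $1::json)",
        1, NULL, params, NULL, NULL, 0);
    int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;

    if (ok != 0)
        fprintf(stderr, "flush failed: %s", PQerrorMessage(conn));
    PQclear(res);
    return ok;
}

/* Example call:
 * flush_items(conn, "[{\"id\":1,\"label\":\"a\"},{\"id\":2,\"label\":\"b\"}]");
 */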
On 01/04/2014 01:06 AM, Claudio Freire wrote:
> You're forgetting ORM workloads.
I'm impressed that you've come up with an area where ORMs are beneficial ;-)
JDBC also has a statement batching interface. Right now PgJDBC just
unwraps the batch and runs each query individually. Any async-support
improvements server-side should probably consider the need of executing a batch.
On Fri, Jan 03, 2014 at 04:46:23PM +0100, Florian Weimer wrote:
> On 01/03/2014 04:20 PM, Tom Lane wrote:
>
> >I think Florian has a good point there, and the reason is this: what
> >you are talking about will be of exactly zero use to applications that
> >want to see the results of one query before launching the next.
On Fri, Jan 3, 2014 at 11:06 AM, Claudio Freire wrote:
> On Fri, Jan 3, 2014 at 12:20 PM, Tom Lane wrote:
>> Claudio Freire writes:
>>> On Fri, Jan 3, 2014 at 10:22 AM, Florian Weimer wrote:
Loading data into the database isn't such an uncommon task. Not everything
is OLTP.
>>
>>> Truly, but a sustained insert stream of 10 Mbps is certainly way beyond common non-OLTP loads.
On Fri, Jan 3, 2014 at 12:20 PM, Tom Lane wrote:
> Claudio Freire writes:
>> On Fri, Jan 3, 2014 at 10:22 AM, Florian Weimer wrote:
>>> Loading data into the database isn't such an uncommon task. Not everything
>>> is OLTP.
>
>> Truly, but a sustained insert stream of 10 Mbps is certainly way
>
On Fri, Jan 3, 2014 at 9:46 AM, Florian Weimer wrote:
> On 01/03/2014 04:20 PM, Tom Lane wrote:
>
>> I think Florian has a good point there, and the reason is this: what
>> you are talking about will be of exactly zero use to applications that
>> want to see the results of one query before launching the next.
On 01/03/2014 04:20 PM, Tom Lane wrote:
I think Florian has a good point there, and the reason is this: what
you are talking about will be of exactly zero use to applications that
want to see the results of one query before launching the next. Which
eliminates a whole lot of apps. I suspect th
Claudio Freire writes:
> On Fri, Jan 3, 2014 at 10:22 AM, Florian Weimer wrote:
>> Loading data into the database isn't such an uncommon task. Not everything
>> is OLTP.
> Truly, but a sustained insert stream of 10 Mbps is certainly way
> beyond common non-OLTP loads. This is far more specific
On Fri, Jan 3, 2014 at 10:22 AM, Florian Weimer wrote:
> On 01/02/2014 07:52 PM, Claudio Freire wrote:
>
>>> No, because this doesn't scale automatically with the bandwidth-delay
>>> product. It also requires that the client buffers queries and their
>>> parameters even though the network has to do that anyway.
On 01/02/2014 07:52 PM, Claudio Freire wrote:
No, because this doesn't scale automatically with the bandwidth-delay
product. It also requires that the client buffers queries and their
parameters even though the network has to do that anyway.
Why not? I'm talking about transport-level packets,
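To put a number on the bandwidth-delay product (only the 10 Mbit/s figure comes from this thread; the 50 ms round trip and the ~100-byte statement size are assumptions for illustration): 10,000,000 bits/s / 8 * 0.05 s ≈ 62 KB must be in flight to keep such a link busy, i.e. on the order of 600 outstanding statements at ~100 bytes each. A client that waits for each result before sending the next statement cannot come close to that.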
On Wed, Dec 18, 2013 at 1:50 PM, Florian Weimer wrote:
> On 11/04/2013 02:51 AM, Claudio Freire wrote:
>>
>> On Sun, Nov 3, 2013 at 3:58 PM, Florian Weimer wrote:
>>>
>>> I would like to add truly asynchronous query processing to libpq,
>>> enabling
>>> command pipelining. The idea is to allow applications to auto-tune to
>>> the bandwidth-delay product and reduce the number of context switches
>>> when running against a local server.
On 11/04/2013 02:51 AM, Claudio Freire wrote:
On Sun, Nov 3, 2013 at 3:58 PM, Florian Weimer wrote:
I would like to add truly asynchronous query processing to libpq, enabling
command pipelining. The idea is to allow applications to auto-tune to
the bandwidth-delay product and reduce the number of context switches when
running against a local server.
On Sun, Nov 3, 2013 at 3:58 PM, Florian Weimer wrote:
> I would like to add truly asynchronous query processing to libpq, enabling
> command pipelining. The idea is to allow applications to auto-tune to
> the bandwidth-delay product and reduce the number of context switches when
> running against a local server.
I would like to add truly asynchronous query processing to libpq,
enabling command pipelining. The idea is to allow applications to
auto-tune to the bandwidth-delay product and reduce the number of
context switches when running against a local server.
Here's a sketch of what the interface