On Sat, Feb 5, 2011 at 9:37 PM, Johan Corveleyn wrote:
> On Sat, Feb 5, 2011 at 7:14 PM, Stefan Sperling wrote:
>> On Sat, Feb 05, 2011 at 06:47:35PM +0100, Branko Čibej wrote:
>>> I would not worry about existing clients -- simply mark the existing
>>> APIs as deprecated, but keep them and do no
On Sat, Feb 5, 2011 at 7:14 PM, Stefan Sperling wrote:
> On Sat, Feb 05, 2011 at 06:47:35PM +0100, Branko Čibej wrote:
>> I would not worry about existing clients -- simply mark the existing
>> APIs as deprecated, but keep them and do not attempt to improve their
>> performance.
>
> Neglecting per
On Sat, Feb 5, 2011 at 8:25 PM, Mark Phippard wrote:
> On Sat, Feb 5, 2011 at 1:05 PM, Erik Huelsmann wrote:
>
>> Scenario (2) takes ~0.27 seconds to evaluate in the unmodified
>> database. Adding an index on (wc_id, local_relpath) makes the
>> execution time drop to ~0.000156 seconds!
>>
>>
>> S
On Sat, Feb 5, 2011 at 1:05 PM, Erik Huelsmann wrote:
> Scenario (2) takes ~0.27 seconds to evaluate in the unmodified
> database. Adding an index on (wc_id, local_relpath) makes the
> execution time drop to ~0.000156 seconds!
>
>
> Seems Philip was right :-) We need to carefully review the indic
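For illustration only (this is not Erik's attached test script): a minimal sqlite3 sketch of the kind of before/after timing described above. The NODES table, its columns, and the index name are assumptions about the wc.db schema under test; adjust them to whatever schema the real measurements used.

    import sqlite3, time

    conn = sqlite3.connect('wc.db')  # a copy of the working copy db

    def timed(sql, params):
        start = time.time()
        conn.execute(sql, params).fetchall()
        return time.time() - start

    query = ("SELECT * FROM NODES "
             "WHERE wc_id = ? AND local_relpath = ?")
    print('before index: %.6fs' % timed(query, (1, 'subversion/libsvn_wc')))

    # The index being discussed: (wc_id, local_relpath).
    conn.execute("CREATE INDEX IF NOT EXISTS I_WC_PATH "
                 "ON NODES (wc_id, local_relpath)")
    print('after index:  %.6fs' % timed(query, (1, 'subversion/libsvn_wc')))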
Now attached as text files (to be renamed to .py) to prevent the
mailer software from dropping them...
Bye,
Erik.
On Sat, Feb 5, 2011 at 7:05 PM, Erik Huelsmann wrote:
> Yesterday on IRC, Bert, Philip and I were chatting about our SQLite
> perf issues and how Philip's findings in the past sugge
On Sat, Feb 05, 2011 at 06:47:35PM +0100, Branko Čibej wrote:
> I would not worry about existing clients -- simply mark the existing
> APIs as deprecated, but keep them and do not attempt to improve their
> performance.
Neglecting performance of backwards compat code is an interesting idea.
It all
Yesterday on IRC, Bert, Philip and I were chatting about our SQLite
perf issues and how Philip's findings in the past suggested that
SQLite wasn't using its indices to optimize our queries.
After searching and discussing its documentation, Philip suggested the
-too obvious- "maybe we have the wron
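To make the "is SQLite using its indices" question concrete: SQLite can report its plan for a statement via EXPLAIN QUERY PLAN. The sketch below is illustrative only; the statement and schema names (NODES, wc_id, local_relpath) are stand-ins for whichever query is suspected of table-scanning.

    import sqlite3

    conn = sqlite3.connect('wc.db')
    plan = conn.execute(
        "EXPLAIN QUERY PLAN "
        "SELECT properties FROM NODES "
        "WHERE wc_id = ? AND local_relpath = ?",
        (1, 'some/path')).fetchall()
    # The detail column typically reads "SEARCH ... USING INDEX ..." when
    # an index is used, or "SCAN TABLE ..." when it falls back to a scan.
    for row in plan:
        print(row)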
On 05.02.2011 15:35, Stefan Sperling wrote:
> On Sat, Feb 05, 2011 at 03:29:23PM +0100, Stefan Sperling wrote:
>> I think we should strongly consider revving affected callbacks in the
> 1.7 API and document the restrictions they have to heed. Then we can bring
>> the r1039808 code back. We can keep the
On Sat, Feb 05, 2011 at 05:15:22PM +0100, Stefan Küng wrote:
> On 05.02.2011 16:46, Stefan Sperling wrote:
> >What if the amount of information requested simply doesn't fit
> >into memory? I'd prefer a system that cannot fail in this way.
> >I'd prefer passing the information to the client piece by
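The memory concern here is the difference between an API that accumulates the entire result before returning it and one that hands results to the caller piece by piece. A toy Python contrast of the two shapes (the real API is C, and the table/column names are assumptions):

    import sqlite3

    def collect_all(conn):
        # Builds the whole result in memory before the caller sees anything.
        return conn.execute(
            "SELECT local_relpath, properties FROM NODES").fetchall()

    def stream(conn, receiver):
        # Hands each row to the caller as soon as it is read, so memory
        # use stays flat no matter how large the working copy is.
        for relpath, props in conn.execute(
                "SELECT local_relpath, properties FROM NODES"):
            receiver(relpath, props)

    conn = sqlite3.connect('wc.db')
    stream(conn, lambda relpath, props: print(relpath))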
On 05.02.2011 16:46, Stefan Sperling wrote:
On Sat, Feb 05, 2011 at 04:22:29PM +0100, Stefan Küng wrote:
On 05.02.2011 13:56, Stefan Sperling wrote:
I think we should go in this direction.
In fact, I think we should simply change the existing APIs to use
the fastest possible way of getting a
On Sat, Feb 05, 2011 at 04:22:29PM +0100, Stefan Küng wrote:
> On 05.02.2011 13:56, Stefan Sperling wrote:
>
> >I think we should go in this direction.
> >In fact, I think we should simply change the existing APIs to use
> >the fastest possible way of getting at information.
>
> Well, currently
On 05.02.2011 13:56, Stefan Sperling wrote:
I think we should go in this direction.
In fact, I think we should simply change the existing APIs to use
the fastest possible way of getting at information.
Well, currently there is no API that does what I suggested (basically
return all results
On Sat, Feb 05, 2011 at 03:29:23PM +0100, Stefan Sperling wrote:
> I think we should strongly consider revving affected callbacks in the
> 1.7 API and document the restrictions they have to heed. Then we can bring
> the r1039808 code back. We can keep the code added in r1066541 for
> backwards-compatibi
On Sat, Feb 05, 2011 at 01:11:28PM +0100, Stefan Sperling wrote:
> I'll look at profiler output. It has much better granularity.
I took 3 profiled runs of "svn pl -v -R" on an svn-trunk working copy
with and without the patch.
The results were pretty much the same for all runs.
The numbers below a
On Sat, 2011-02-05, Stefan Sperling wrote:
> It might be more complex to generate and store multiple strings.
> But if this helps translators I think we should do it.
> We must find a nice way of splitting up help texts. Large help
> texts like this put too much burden on translators.
Looking at t
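Not the actual svn help-text code (which lives in C); just a toy illustration of the splitting idea: keep a long help text as several independently translatable chunks so each msgid stays small. The text itself is a made-up placeholder.

    from gettext import gettext as _

    # Each chunk is its own translatable string; the full help output
    # is simply the concatenation of the translated chunks.
    HELP_EXAMPLE = (
        _("usage: example SUBCOMMAND [ARGS]\n\n"),
        _("  First paragraph of the description.\n"),
        _("  Second paragraph, translated independently of the first.\n"),
    )

    print(''.join(HELP_EXAMPLE))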
On Sat, Feb 05, 2011 at 01:56:41PM +0100, Stefan Sperling wrote:
> There were two approaches discussed in that thread. I am currently
> experimenting with the "queries per-directory" approach (see r1051452
> and r1066541).
Sorry, I meant r1050650, not r1051452.
On Sat, Feb 05, 2011 at 10:28:20AM +0100, Stefan Küng wrote:
> Hi,
>
> To find all files and folders that have a specific property set I
> need to crawl the whole working copy and fetch the properties of
> each and every item, then scan the returned property list for that
> property.
> But WC-NG u
On Fri, Feb 04, 2011 at 04:10:30PM +0800, Dongsheng Song wrote:
> On Fri, Feb 4, 2011 at 15:59, Dongsheng Song wrote:
> > On Sun, Nov 14, 2010 at 01:19, Greg Hudson wrote:
> >> On Sat, 2010-11-13 at 10:31 -0500, Daniel Shahaf wrote:
> >>> Sounds reasonable.
> >>>
> >>> What changes to the source
On Fri, Feb 04, 2011 at 03:29:37AM +0200, Daniel Shahaf wrote:
> (test script attached --- hope it goes through)
>
> Stefan Sperling wrote on Wed, Feb 02, 2011 at 18:53:39 +0100:
> > I've made svn proplist issue per-directory queries in r1066541.
> > Reviews of this change are most welcome.
> > Al
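A rough Python illustration of the "queries per-directory" idea behind r1066541 (the real change is C); parent_relpath is assumed to be the column that groups a directory's immediate children, and the other names are likewise assumptions.

    import sqlite3

    def child_props(conn, wc_id, dir_relpath):
        # One query per directory: fetch the properties of every
        # immediate child instead of issuing one query per node.
        return conn.execute(
            "SELECT local_relpath, properties FROM NODES "
            "WHERE wc_id = ? AND parent_relpath = ?",
            (wc_id, dir_relpath)).fetchall()

    conn = sqlite3.connect('wc.db')
    for relpath, props in child_props(conn, 1, 'subversion/libsvn_wc'):
        print(relpath)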
Hi,
To find all files and folders that have a specific property set I need
to crawl the whole working copy and fetch the properties of each and
every item, then scan the returned property list for that property.
But WC-NG uses an SQLite db so this task should be much faster with a
lot less dis
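The use case above (find every node that has a given property set) is the kind of thing a direct query against wc.db could answer without crawling. A heavily simplified sketch follows: WC-NG stores each node's properties as one serialized blob, so the LIKE match below is only an approximation (it can return false positives), the table/column names are assumptions, and a real implementation would deserialize the property blob instead.

    import sqlite3

    def nodes_with_property(conn, wc_id, propname):
        pattern = '%' + propname + '%'
        return [relpath for (relpath,) in conn.execute(
            "SELECT local_relpath FROM NODES "
            "WHERE wc_id = ? AND CAST(properties AS TEXT) LIKE ?",
            (wc_id, pattern))]

    conn = sqlite3.connect('wc.db')
    print(nodes_with_property(conn, 1, 'svn:needs-lock'))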