Hi Andrew:

Thanks for the explanation.  I am going to assume that database neutrality 
is the main selling point of sticking with the DAL. 

I second your interest in a DAL method for genuine bulk loading, though I 
reckon that would be quite a beastly project. 
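To make the speed question from this thread concrete, here is a small sketch using the standard-library sqlite3 module (illustrative only; the table and data are made up, and sqlite3 just stands in for any DB-API driver) contrasting one-statement-per-row inserts with a single batched call:

```python
import sqlite3

# Illustrative sketch: one-statement-per-row vs. a batched call.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT)")

rows = [("row%d" % i,) for i in range(1000)]

# Row-at-a-time: roughly what DAL insert() (and, on many adapters,
# bulk_insert()) does under the hood -- one statement per row.
for r in rows[:500]:
    conn.execute("INSERT INTO person (name) VALUES (?)", r)

# Batched: hand all rows to the driver in one call; a native bulk
# loader (LOAD DATA INFILE, COPY, etc.) takes this idea much further.
conn.executemany("INSERT INTO person (name) VALUES (?)", rows[500:])

count = conn.execute("SELECT COUNT(*) FROM person").fetchone()[0]
print(count)  # -> 1000
```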

I appreciate everyone's input. This community is uniquely helpful. 
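In case it helps the discussion along, this is the kind of per-adapter hook I imagine Andrew means. It is purely hypothetical: `native_load` and these adapter classes are my own invention for illustration, not part of the real DAL, and the generated SQL is the bare minimum of each platform's syntax.

```python
# Hypothetical sketch only: a "native_load" hook that each adapter
# overrides with its platform's bulk-load syntax. None of these
# names exist in the actual DAL.

class BaseAdapter(object):
    def native_load(self, table, filename):
        # Default: this platform has no native bulk loader.
        raise NotImplementedError("no native bulk loader for this adapter")

class MySQLAdapter(BaseAdapter):
    def native_load(self, table, filename):
        # MySQL's loader (needs the FILE privilege, as Martin notes).
        return "LOAD DATA INFILE '%s' INTO TABLE %s" % (filename, table)

class PostgreSQLAdapter(BaseAdapter):
    def native_load(self, table, filename):
        # PostgreSQL's rough equivalent.
        return "COPY %s FROM '%s'" % (table, filename)

print(MySQLAdapter().native_load("person", "/tmp/people.csv"))
# -> LOAD DATA INFILE '/tmp/people.csv' INTO TABLE person
```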

On Monday, August 20, 2012 1:49:46 AM UTC-4, Andrew wrote:
>
> Hi Martin,
> It depends on the RDBMS.  Some are still one row at a time, which makes 
> insert and bulk_insert the same speed (it just makes the statement easier 
> to write).
>
> Hi Mike,
> One of the goals of the DAL is to make the API database-neutral, allowing 
> you to switch between databases without changing your code (hopefully).  
> The thing you sacrifice if you use a native bulk loader (today) is that 
> you are locking yourself into a specific database platform.  The API for 
> the DAL doesn't have any platform-specific features (I think), although 
> some features don't apply to all databases.
>
> What I was suggesting was a "Native Load" method which is defined within 
> each database adapter, as they will all be different.  Just a thought, 
> although unlike ANSI SQL, every platform probably has its own syntax for 
> its bulk loader.
>
> Reiterating: I think bulk loading is something you would do as a batch / 
> scheduler process, not something you'd use in your web end-user app.  If 
> that's the case, does it make sense for it to be part of the DAL, or 
> perhaps separate (contrib) modules targeting specific platforms?
>
>
> On Monday, August 20, 2012 1:36:14 PM UTC+12, Mike Girard wrote:
>>
>> "bulk insert is a way faster than regular insert when you have many rows"
>>
>> I think we need to clarify terms. By Massimo's own account in the web2py 
>> book, the DAL bulk insert is not faster than db.insert unless you are using 
>> the GAE. So are you talking about your db's native bulk methods or is the 
>> book wrong?
>>
>> Could someone just please answer what, if anything, is being sacrificed 
>> when you use your database's own bulk loading methods instead of using the 
>> DAL? Why the DAL religion about this?
>>
>> On Sunday, August 19, 2012 5:09:43 PM UTC-4, Martin.Mulone wrote:
>>>
>>> Bulk insert is way faster than regular insert when you have many rows. 
>>> If you are using MySQL you can use LOAD DATA INFILE; this is incredibly 
>>> fast, but you need special privileges in MySQL.
>>>
>>> 2012/8/19 Andrew <awill...@gmail.com>
>>>
>>>> Is it possible that we add a "native bulk insert" function which is 
>>>> coded up in each adapter?  Even bulk_insert is ODBC one-row-at-a-time, 
>>>> slow for big files.  I need to load huge files all the time and I am 
>>>> writing custom modules to do this with a native loader.  Should this be 
>>>> a DAL option?  Worth noting that this type of operation is a batch, 
>>>> back-end thing; I wouldn't do this for an end-user web app.
>>>>
>>>> I would expect that each DBMS needs different info to start a bulk 
>>>> load, so the interface may be tricky, or just pass a dict and let the 
>>>> adapter work it out.
>>>> What do you think?
>>>>
>>>
>>>
>>> -- 
>>>  http://www.tecnodoc.com.ar
>>>
>>>
