John Scoles wrote:
> DBD::Oracle 1.18 has been released.
>
> With this release DBD::Oracle finally implements Oracle's native
> Array Interface. You will see very dramatic increase in speed. For
> example; the time for a 2 million plus insert query dropped from well
> over an hour to less than 10 minutes when using execute_array() and
> the new code.

This new feature is really great. Thanks, John and Kristian.

The announcement and docs seem to emphasize execute_array() over
execute_for_fetch(), though, which puzzles me: I see a dramatic speed
increase using execute_for_fetch() instead of execute_array().

I first converted my code to accumulate columns into individual
"parallel arrays" to pass into execute_array(), and I was happy to see
a 56% speed increase from the bulk insert. But my program was CPU-bound,
and a lot of the CPU time was being eaten by execute_array() (which was,
to my surprise, simply converting the parallel arrays back into the
row-at-a-time format I had originally used!). So I changed the program
to just accumulate rows as arrayrefs and called execute_for_fetch()
with a simple shift() sub. Voila, another 20% faster.

Is there a reason I would want to use execute_array() with
ArrayTupleFetch instead of execute_for_fetch()? The latter is more
direct, since execute_array() just calls it in the end anyway. Would I
be missing out on some potential future optimization?

Also, is there any reason to convert old code to accumulate into
single-column arrays to be passed to execute_array(), instead of using
the subroutine reference to return tuples? I assumed that the columnar
format was due to some driver implementation detail, but I got burned
by that assumption. It's much easier and much faster to convert old
code to use the fetch sub, since all it requires is converting
$sth->execute(@params) to push(@rows, \@params).

Philip
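For the archives, the conversion described above can be sketched roughly like this (the connection string, table, and data are hypothetical placeholders; adjust for your own schema):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical connection details -- substitute your own.
my $dbh = DBI->connect( 'dbi:Oracle:orcl', 'scott', 'tiger',
                        { RaiseError => 1, AutoCommit => 0 } );

my $sth = $dbh->prepare('INSERT INTO my_table (id, name) VALUES (?, ?)');

# Accumulate rows as arrayrefs: each old $sth->execute(@params)
# call simply becomes push @rows, [ @params ].
my @rows = ( [ 1, 'foo' ], [ 2, 'bar' ], [ 3, 'baz' ] );

# execute_for_fetch() calls the sub once per tuple; the sub returns
# an arrayref for each row and undef when the rows run out, which is
# exactly what shift() on @rows does.
my @tuple_status;
my $inserted = $sth->execute_for_fetch( sub { shift @rows },
                                        \@tuple_status );

$dbh->commit;
```

The equivalent execute_array() route would instead pass the same sub as the ArrayTupleFetch attribute, which (as noted above) ends up calling execute_for_fetch() internally anyway.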