The recent benchmarking spree covering PhpLib, ADODB, PEAR DB, Metabase and native MySQL has raised more questions. Here's my opinion.
Designing Fast Architectures
When designing a class library that will be used by many people, it is important to divide your
code into core and peripheral parts. Core code is used everywhere, so keep its feature-set
lean and move the nice-to-have features into the peripheral section. Because the core
stays so simple, it is easy to tune for speed.
Some of these class libraries made the mistake of not dividing their code into must-haves (core) and nice-to-haves (peripheral), so as features were added, the core code became “polluted” with slow, non-essential code that is rarely used.
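To make the split concrete, here is a minimal sketch of the idea. The class and function names are invented for illustration; this is not ADODB's actual code.

    <?php
    // Core: connect, run a query, hand back the result. Nothing else
    // lives here, so this path stays simple and easy to tune for speed.
    class CoreConnection {
        var $_link;

        function Connect($host, $user, $pwd, $db) {
            $this->_link = mysql_connect($host, $user, $pwd);
            return mysql_select_db($db, $this->_link);
        }

        function Execute($sql) {
            return mysql_query($sql, $this->_link);
        }
    }

    // Peripheral: a nice-to-have helper layered on top of the core.
    // Because it lives outside the core, it costs nothing when unused.
    function GetAssoc(&$conn, $sql) {
        $rs = $conn->Execute($sql);
        $data = array();
        while ($row = mysql_fetch_row($rs)) {
            $data[$row[0]] = $row[1];
        }
        return $data;
    }
    ?>

The peripheral helpers can even live in a separate file that is only included when needed, so the common path never pays for them.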
Also avoid overly complex designs with lots of message passing and clever object hierarchies. They are great for impressing people, but rarely run fast in real life. PhpLib is to be admired in this sense: it's fast and not pretentious.
The next thing to ask ourselves is what the natural unit of data is when dealing with
database tables. No, it is not the field or column; it is the row, a one-dimensional array.
Databases are tuned to send whole records for this reason. A class library that tries to
operate at the field level is going against “nature” in that sense, so it comes as no
surprise that the libraries prepared to struggle uphill against “nature” are extremely
slow.
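As a rough illustration using the plain mysql_* functions (this is not any particular library's internals, and the users table is just an example):

    <?php
    $rs = mysql_query("SELECT id, name FROM users");

    // Row at a time: one fetch call per record, the database's natural unit.
    while ($row = mysql_fetch_row($rs)) {
        echo $row[0], ' ', $row[1], "\n";
    }

    // Field at a time: one mysql_result() call per field, each of which
    // must seek to the row internally. Far more overhead per datum.
    $n = mysql_num_rows($rs);
    for ($i = 0; $i < $n; $i++) {
        echo mysql_result($rs, $i, 0), ' ', mysql_result($rs, $i, 1), "\n";
    }
    ?>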
These design considerations are very useful and apply to most programming languages, not merely PHP. Here's a shameful confession: when I started coding ADODB, I had less than a week's programming experience in PHP; ADODB was actually my method of learning PHP. See http://phplens.com/lens/adodb/ for the benchmarks of database abstraction libraries.