winterspeak.com: He spoke of his work and Microsoft's plans for the future. Here are my reactions. “Their execs don't seem to understand Web standards.” [Archipelago]
Author: Vince Kimball
Can Congress Convene Online?
Can Congress Convene Online? A proposal to create an 'electronic Congress' in times of emergency is causing some wonder up on Capitol Hill. By Noah Shachtman. [Wired News]
The recent benchmarking spree of PhpLib, ADODB, PEAR DB, Metabase and Native MySQL has brought up more questions. Here's my opinion.
Designing Fast Architectures
When designing a class library that is used by many people, it is important to divide your code between core and peripheral. Core code is used everywhere, so it is best to reduce the feature-set of the core and move the nice-to-have features into the peripheral section. This makes it easy to tune the core code for speed, because the code stays so simple.
Some of the class libraries made the mistake of not dividing the code up into must-haves (core) and nice-to-haves (peripheral), so when they added features, the core code became “polluted” with slow, non-essential code that is rarely used.
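To make the core/peripheral split concrete, here is a minimal PHP sketch in the era-appropriate style (PHP 4, the old mysql_* functions); the class and function names are invented for illustration and are not ADODB's actual API. The core file carries only the hot path; the conveniences live in a separate include that a script loads only when it needs them.

<?php
// core.inc.php -- a hypothetical "core": the hot path only.
// PHP 4 style (var properties, mysql_* functions), kept deliberately small.
class DBCore {
    var $conn;

    function Connect($host, $user, $pwd, $db) {
        $this->conn = mysql_connect($host, $user, $pwd);
        return $this->conn && mysql_select_db($db, $this->conn);
    }

    function Execute($sql) {
        return mysql_query($sql, $this->conn);
    }

    function FetchRow($rs) {
        return mysql_fetch_assoc($rs);
    }
}
?>

<?php
// peripheral.inc.php -- hypothetical "nice-to-haves", kept out of the
// core so they never slow down the common case. Only scripts that want
// CSV export include this file; everyone else never pays for it.
function DBExportCSV(&$db, $sql) {
    $rs = $db->Execute($sql);
    $out = '';
    while ($row = $db->FetchRow($rs)) {
        $out .= implode(',', $row) . "\n";
    }
    return $out;
}
?>

Because the core never accumulates features, it can be profiled and tuned in isolation.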
Also avoid overly-complex designs with a lot of message passing and clever object hierarchies. They are great for impressing people but rarely run fast in real life. PhpLib is to be admired in that sense – it's fast and is not pretentious.
The next thing to ask ourselves is: what is the natural data size when dealing with database tables? No, it is not the field or column; it is the row, a 1-dimensional array. Databases are tuned for sending records for that reason. A class library that tries to operate at the field level is going against “nature” in that sense. Thus it comes as no surprise that the libraries prepared to struggle uphill against “nature” are extremely slow.
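A hedged PHP illustration of the same point (table and column names invented): the first loop works with the grain, pulling back one whole row per driver call; the second works against it, making one driver call per field.

<?php
// With the grain: one driver call per row. The row comes back as a
// 1-dimensional array, which is what the database is tuned to send.
$rs = mysql_query('SELECT id, name, price FROM products');
while ($row = mysql_fetch_assoc($rs)) {
    echo $row['name'] . ': ' . $row['price'] . "\n";
}

// Against the grain: one driver call per field. mysql_result() seeks
// and extracts a single cell, so reading every field of a 3-column,
// 1000-row result costs 3000 calls instead of 1000 -- and it shows
// in the benchmarks.
$rs = mysql_query('SELECT id, name, price FROM products');
$n = mysql_num_rows($rs);
for ($i = 0; $i < $n; $i++) {
    echo mysql_result($rs, $i, 'name') . ': '
       . mysql_result($rs, $i, 'price') . "\n";
}
?>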
These design considerations are very useful and apply to most programming languages, not merely PHP. Here's a shameful confession: when I started coding ADODB, I had less than a week's programming experience in PHP; ADODB was actually my method of learning PHP. See http://phplens.com/lens/adodb/ for the benchmarks of database abstraction libraries.
[PHP Everywhere]
Zeldman
Zeldman. Don't cut your IT budget by using a content management system. He does make a couple of good points: 1) Interwoven and Vignette will drain you of so much $moola$ it will make your head spin, and 2) you shouldn't try Zope unless you have a PhD in computer science. The obvious solution is to…
Mark Bernstein: Let such teach others, who themselves excel
Mark Bernstein: Let such teach others, who themselves excel – Jessica Mulligan describes the corruption of the software trade press. (She's talking about the game industry, but the same problem pervaded the software trade magazines while they had influence and circulation. Now, they just don't seem to matter much.) “Just as I've always thought it…
CIO: Content is Still King
CIO: Content is Still King – “…content professionals have been around for years, and they know more about content management, acquisition, filtering, taxonomy development and categorization than any tool or technology. These information professionals — also known as librarians — already know the answers to many of the content questions posed by the IT side…
John Robb
Groove does Redirection? Get a clue. John Robb pointed to my piece on iFolder and then proceeded to say how Groove and Manila do redirection. John, I love ya, but you are clueless on this matter. (Sorry, maybe “clueless” is a little abrupt. How about just a little misguided?) Neither Groove nor Manila do any…
Gary Kildall
iFolder Revolutionizes Internet Storage: I recently met Tom Rolander. I have wanted to meet Tom for a long time. Tom is a legend to me. Tom is one of the world's premier programmers who understand redirection. Tom worked with and was a close personal friend of Gary Kildall (author of CP/M and founder of Digital…
How K-Logs (knowledge management Weblogs) will evolve: Here is the result of some thinking I have been doing on how K-Logs have evolved and will eventually emerge as a core part of the desktop productivity suite:
1st Generation. K-Logs as a server-based Internet service. These basic Weblogs used a centralized services model to enable people to publish to the Web. There are a variety of vendors that provide this capability. However, most companies don't want to store vital corporate data outside the firewall. Also, there is a growing fear, given the current economic environment, that these services will suddenly stop working and vital data will be lost.
2nd Generation. Packaged K-Log server software. This solution solved some of the problems with the services model by providing corporations with packaged Weblog software that they could install on their Intranet. However, this solution has the same problems with scalability, cost, and flexibility that plague the centralized solutions we see in the Web world. Also, centralized software cannot easily take advantage of data stored in desktop applications or provide individuals with a fast-loading mobile copy of their critical data.
3rd Generation. Desktop K-Log software. This is the point we are at today. Desktop K-Log software solves the scalability and personal storage issues by decentralizing K-Log development and publishing. Core functionality in this generation of software includes: Weblog publishing, categories, RSS headline aggregation, community data aggregation (recent updates, for example), bookmark lists, directories, and file uploads. This decentralized approach provides people with a desktop archive of all information (aggregated or posted) as well as an ability to use the tool in a P2P framework.
4th Generation. Fully integrated desktop K-Log software. This is the generation where K-Logs challenge the current 1980s desktop productivity suite for dominance. This software includes the core functionality of the 3rd generation but adds: outlines, structured instant messaging, full e-mail integration, and P2P file/data transfer. Also, this software will fully integrate with corporate Webservices to allow employees to gather important information that can then be posted to his/her K-Log (for example: a SOAP service that provides sales figures at the end of each day — more on this later). This tool is the end-point that can be fully customized by corporations to fit their needs. It allows an employee to aggregate dozens of data sources, analyze that data, and post it with an annotation to a K-Log (or multiple K-Logs based on categories). It breaks down data silos and puts otherwise random data into context that has meaning and structure. That posted knowledge can be searched, sorted and used by all employees with the appropriate access to improve their ability to do their job. [John Robb's Radio Weblog]
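Robb's sales-figures example can be sketched in code. The following PHP sketch is a hedged illustration, not anything from Radio UserLand: the WSDL URL, the GetDailySales method, the K-Log's XML-RPC endpoint, and the credentials are all invented. It pulls one figure from a hypothetical corporate SOAP service and posts it, with an annotation, through the MetaWeblog API, the common weblog-posting protocol of the period (it assumes PHP's soap and xmlrpc extensions are available).

<?php
// Hedged sketch of the 4th-generation flow: SOAP in, K-Log post out.
// Every URL, method name, and credential below is hypothetical.

// 1. Aggregate: call the corporate sales Webservice via SOAP.
$sales = new SoapClient('http://intranet.example.com/sales.wsdl');
$figure = $sales->GetDailySales(date('Y-m-d')); // invented method

// 2. Annotate and publish: post to the K-Log over XML-RPC using
//    the MetaWeblog API (metaWeblog.newPost).
$post = array(
    'title'       => 'Daily sales: ' . date('Y-m-d'),
    'description' => 'Sales came in at ' . $figure .
                     '. Posted automatically from the sales Webservice.',
);
$request = xmlrpc_encode_request('metaWeblog.newPost',
    array('klog-id', 'user', 'password', $post, true));

$ctx = stream_context_create(array('http' => array(
    'method'  => 'POST',
    'header'  => 'Content-Type: text/xml',
    'content' => $request,
)));
file_get_contents('http://intranet.example.com/klog/RPC2', false, $ctx);
?>

The design point is the glue, not the code: once the Webservice call and the K-Log post are both scriptable, the same few lines can run on a schedule and turn a data silo into a searchable, annotated post.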
Super Sync
MIT Technology Review: Super Sync. Instead of ubiquitous connectivity to centralized databanks, we are building an infrastructure that's optimized for data replication. The same information is getting copied to dozens, hundreds or even thousands of places throughout the world… [Tomalak's Realm]