On openp2p.com: an interesting interview w/ Eric Bonabeau. Relevant quotes:
“And that swarm intelligence offers an alternative way of designing “intelligent” systems in which autonomy, emergence, and distributedness replace control, preprogramming, and centralization.” — The words ‘autonomy, emergence and distributedness’ are somewhat freeing from a software development standpoint. In complex systems, instead of taking x years to develop a specification document, then another y years to implement the system, and then z more years to debug it, you spend the time up front making the system adaptable to its environment, able to evolve in some sense.
“In social insects, errors and randomness are not “bugs”; rather, they contribute very strongly to their success by enabling them to discover and explore in addition to exploiting. Self-organization feeds itself upon errors to provide the colony with flexibility (the colony can adapt to a changing environment) and robustness (even when one or more individuals fail, the group can still perform its tasks).” — Bugs (the software kind) are inevitable; we can’t write perfect code. Doesn’t it make sense to use errors to our advantage, then? (One might argue that the system that takes advantage of errors will itself have errors. Do we then have to write software to take advantage of the errors in the program that takes advantage of errors?)
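To make that explore-vs.-exploit point concrete, here’s a toy sketch (the trail payoffs and the 10% “error rate” are made-up numbers, not anything from the interview). Most of the time the “colony” follows its best-known trail; the occasional random mistake is what lets it discover the better one at all:

```python
import random

# The "colony" mostly follows its best-known trail (exploiting), but a
# small error rate sends it down random trails (exploring). All numbers
# here are invented for illustration.

true_payoffs = [0.3, 0.5, 0.8]   # hidden quality of three trails
estimates = [0.0, 0.0, 0.0]      # what the colony believes so far
counts = [0, 0, 0]
ERROR_RATE = 0.1

for _ in range(10_000):
    if random.random() < ERROR_RATE:
        trail = random.randrange(3)              # a "mistake": try anything
    else:
        trail = estimates.index(max(estimates))  # follow the best-known trail
    reward = 1.0 if random.random() < true_payoffs[trail] else 0.0
    counts[trail] += 1
    estimates[trail] += (reward - estimates[trail]) / counts[trail]

# With ERROR_RATE = 0.0 the colony can lock onto trail 0 forever; with a
# little randomness, estimates converge near [0.3, 0.5, 0.8].
print(estimates)
```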
The article mentions routing and UAVs as applications of swarm intelligence. My domain of knowledge currently only wraps around web applications… how might one use swarm intelligence in web applications?

Implicit personalization might be a place to start. The pheromones that ants leave behind for others to follow aren’t that different from the ‘paths’ one leaves behind on a site that others might follow. Knowledge management comes to mind too: could the aggregate of referrers and queries sent to a site become something greater than the sum of the individual parts?

Caching is another candidate. On heavily trafficked sites, you’ll always have to make a choice about what to cache and what to get from persistent storage. Perhaps a case can be made for letting the system compute which procedures are the most expensive (disk I/O, DB transactions, web service calls, etc…) and cache those, instead of caching everything (or nothing!), thereby maximizing your use of memory and processor utilization. Rough sketches of the personalization and caching ideas follow below. Any others you can think of?
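First, the pheromone/path analogy: a minimal sketch of what trail-laying on a site might look like. Everything here is an assumption for illustration — record_visit() would be a hook your app calls on each page view, and the decay factor plays the role of pheromone evaporation:

```python
from collections import defaultdict

DECAY = 0.99                 # evaporation factor, applied on each tick
trails = defaultdict(float)  # (from_page, to_page) -> trail strength

def record_visit(from_page: str, to_page: str) -> None:
    """Deposit pheromone on the edge a visitor just walked."""
    trails[(from_page, to_page)] += 1.0

def evaporate() -> None:
    """Run periodically (say, hourly) so stale paths fade away."""
    for edge in list(trails):
        trails[edge] *= DECAY
        if trails[edge] < 0.01:
            del trails[edge]   # forget edges nobody follows anymore

def suggest_next(current_page: str, n: int = 3):
    """Recommend the pages most visitors went to from here."""
    candidates = [(strength, dest)
                  for (src, dest), strength in trails.items()
                  if src == current_page]
    return [dest for _, dest in sorted(candidates, reverse=True)[:n]]
```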
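And a sketch of the self-tuning cache idea, again just an illustration under assumed names (adaptive_cache, CACHE_SLOTS): each call is timed, and results compete for cache slots on measured cost times how often they’re requested, so the expensive, popular stuff wins:

```python
import time
from functools import wraps

CACHE_SLOTS = 100
_cache = {}   # key -> {"result": ..., "cost": float, "score": float}

def adaptive_cache(func):
    @wraps(func)
    def wrapper(*args):               # hashable positional args only
        key = (func.__name__, args)
        entry = _cache.get(key)
        if entry is not None:
            entry["score"] += entry["cost"]   # reinforce: popular + costly rises
            return entry["result"]
        start = time.perf_counter()
        result = func(*args)                  # the expensive call itself
        cost = time.perf_counter() - start
        if len(_cache) >= CACHE_SLOTS:
            weakest = min(_cache, key=lambda k: _cache[k]["score"])
            del _cache[weakest]               # evict the least-valuable entry
        _cache[key] = {"result": result, "cost": cost, "score": cost}
        return result
    return wrapper

@adaptive_cache
def load_report(report_id):   # hypothetical stand-in for a pricey DB/web call
    time.sleep(0.1)
    return f"report {report_id}"
```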
If you’re interested, Eric has a book on the subject: “Swarm Intelligence: From Natural to Artificial Systems”.