Category Archives: Books

Redesigning Web Sites: Retooling for Changing Needs of Business

I worked with Stefan Mumaw a couple of years ago at BigMan Creative. He’s now at Brainyard, apparently busy writing books. “Redesigning Web Sites: Retooling for Changing Needs of Business”, available soon from Amazon, leads with the story of the redesign of FAO Schwarz, which MINDSEYE (the company I work for) completed in August of 2001. I was the lead engineer for the ColdFusion & Spectra portion of the project.

The Goal: A Process of Ongoing Improvement

I finished reading “The Goal: A Process of Ongoing Improvement” today (a beautiful, sunny, and windy day in Boston, btw), which Joel pointed to a couple of months back. Like Joel, I’ll find it useful should I ever need to run a factory, but it also struck a chord with me on a non-programming, more business-like level. One of the takeaways from the book was the description of what the goal of a for-profit organization should be:

“… so I can say that the goal is to increase net profit, while simultaneously increasing both ROI and cash flow and that’s the equivalent of saying the goal is to make money…. They’re measurements which express the goal of making money perfectly well, but which also permit you to develop operational rules for running your plant. Their names are throughput, inventory and operational expense… Throughput is the rate at which the system generates money through sales…. Inventory is all the money that the system has invested in purchasing things which it intends to sell…. Operational expense is all the money the system spends in order to turn inventory into throughput.” (pg 59-60)

Selling physical widgets is a much different ballgame than selling services or selling software, so I’m having a hard time imagining how this might apply to a web shop like MINDSEYE, but it’s an interesting mental exercise nonetheless.

Another interesting concept (which I’m guessing is covered in the Critical Chain book) is the idea that in a multi-step synchronous process, the fluctuations in the speed of the various steps result not in an averaging out of the time the system needs to run but rather in an accumulation of the fluctuations. His example was a hike with 15 Boy Scouts: if you let one kid lead the pack and that kid averages between 1.0 and 2.0 mph, the distance between the first and the 15th scout will never average out but will gradually increase with time because “… dependency limits the opportunities for higher fluctuations.” (pg 100). In this case, the first scout doesn’t have any dependencies, but each of the next 14 does: the person in front of them. Thus, as the hike progresses, the other 14 can never go faster than the person in front of them, and so on and so forth. This actually does have some usefulness within our daily jobs as programmers. Each of us depends on someone in one way or another, whether it be for design assets or IA docs or a software function. A breakdown in one step of a project can sometimes be made up through sheer willpower and a lot of caffeine, but most likely it will result in the project being delayed. The same thinking can be applied to the software we write: slow functions, or rather the statistical fluctuations in functions (fast or slow), don’t average out so much as they accumulate, which I guess is kind of obvious, but worth noting.
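
Out of curiosity, here’s a quick back-of-the-envelope simulation of that hike (the setup and numbers are mine, not from the book): fifteen scouts, each covering somewhere between 1.0 and 2.0 miles an hour, with nobody allowed to pass the person in front of them. The gap between the first and the last scout keeps stretching instead of averaging out.

```python
import random

# Toy simulation of Goldratt's hike (my own numbers, not from the book):
# each scout covers between 1.0 and 2.0 miles per hour, but no one may pass
# the scout directly in front of them.

random.seed(1)

def hike(hours, scouts=15):
    positions = [0.0] * scouts          # positions[0] is the lead scout
    for _ in range(hours):
        for i in range(scouts):
            desired = positions[i] + random.uniform(1.0, 2.0)
            if i > 0:
                # Dependency: you can catch up to, but never pass, the scout ahead.
                desired = min(desired, positions[i - 1])
            positions[i] = desired
    return positions[0] - positions[-1]  # gap between first and last scout

for hours in (1, 5, 10, 20):
    print(f"after {hours:2d} hours the line has stretched to {hike(hours):.1f} miles")
```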

Other nuggets to chew on this morning: “The maximum deviation of a preceding operation will become the starting point of a subsequent operation.” (pg 133)

On bottlenecks within an operation: “… Make sure the bottleneck works only on good parts by weeding out the ones that are defective. If you scrap a part before it reaches the bottleneck, all you have lost is a scrapped part. But if you scrap the part after it’s passed the bottleneck, you have lost time that cannot be recovered.” (pg 156) Do you have expensive (in terms of processor cycles) operations in a computer program after which you then scrub the data for quality? Why not put the scrub before the expensive operation, saving yourself some cycles?
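
Translated into code, that just means running the cheap quality check before the costly step instead of after it. A minimal sketch, where the record shape, the validation rule, and the “expensive” step are all invented for illustration:

```python
# Minimal sketch of "scrub before the bottleneck": the record shape, the
# validation rule, and the expensive step are all invented for illustration.

def is_valid(record):
    # Cheap check: weed out defective "parts" before they reach the bottleneck.
    return bool(record.get("id")) and record.get("amount", 0) > 0

def expensive_transform(record):
    # Stand-in for the costly operation (heavy computation, remote call, etc.).
    return {**record, "processed": True}

def process(records):
    # Scrapping a bad record here costs only the cheap check;
    # scrapping it after expensive_transform wastes the expensive cycles too.
    return [expensive_transform(r) for r in records if is_valid(r)]

print(process([{"id": "a1", "amount": 9.50}, {"id": "", "amount": -1}]))
```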

On problem solving: “… people would come to me from time to time with problems in mathematics they couldn’t solve. They wanted me to check their numbers for them. But after a while I learned not to waste my time checking the numbers — because numbers were almost always right. However, if I checked the assumptions, they were almost always wrong.” (pg 157) Next time someone asks you to review some code that isn’t producing the correct results, attack their assumptions before going over the code with a fine-tooth comb.
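
One way to attack the assumptions in code is to write them down as assertions, so a wrong assumption fails loudly before you ever start stepping through the logic. A hypothetical sketch (the function and its inputs are made up):

```python
# A hypothetical example of attacking the assumptions first: write them down
# as assertions so a wrong assumption fails loudly before you start stepping
# through the logic line by line.

def average_order_value(orders):
    # The assumptions we *think* are true; if one isn't, the bug hunt ends here.
    assert len(orders) > 0, "assumed there is at least one order"
    assert all("total" in o for o in orders), "assumed every order has a total"
    assert all(o["total"] >= 0 for o in orders), "assumed totals are non-negative"
    return sum(o["total"] for o in orders) / len(orders)

print(average_order_value([{"total": 10.0}, {"total": 30.0}]))  # 20.0
```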

BTW, the book is written as a story, so if you like reading stories and want to stretch the business side of your brain but don’t enjoy business textbooks, this book is for you!

Update: Jeff Sutherland has a great review of the book from a technical perspective here.

Small Pieces Loosely Joined: A Unified Theory of the Web

I finished Small Pieces Loosely Joined: A Unified Theory of the Web [amazon], [allconsuming], [official site] tonight. I don’t know quite how to review the book; the quality wasn’t at all like Emergence or The Future of Ideas, though I guess it’s a different kind of book, not as academic. Emergence reads like a well-researched, factual essay, as evidenced by the small print and lengthy bibliography. Small Pieces Loosely Joined reads like marketing prose, commenting on philosophy, society, and even sometimes religion, using larger print and pointing mainly at websites rather than at other books. Give it a glance at a local bookstore sometime before you buy it; you might find that reading the dust jacket is all you need. Interestingly, this book was number 2 on the list of Top 100 Most Frequently Mentioned Books of 2002.

The Clustered World: How We Live, What We Buy, and What It All Means About Who We Are

Finished a couple of books in the last couple of weeks. First, this past Wednesday I read “The Clustered World: How We Live, What We Buy, and What It All Means About Who We Are”, which seems to be a popular book in the blogosphere. This book should be required reading for anyone launching a major consumer product, as it explains the makeup of the PRIZM clusters, a tool that helps marketers create detailed lifestyle pictures: the food you eat, what you drink, the magazines you read, the TV shows you watch, your leisure activities, and what worries you. (Find out which cluster you belong to at http://cluster2.claritas.com/YAWYL/Default.wjsp?System=WL.)

From the appendix, page 305, The Universal Principles of Clustering:

1. People and Birds of a Feather Flock Together…

2. The Mass Market Is Dead…

3. There’s Little Connection Between Income and Lifestyle….

4. The Super-Rich Live Differently from the Rest of Us — But Not from Each Other…

5. Culture Does Not Honor Political Boundaries….

6. Everyone Collects Something…

7. Everybody Complains About Junk Mail, But Nobody Does Anything About It…

8. No Home Is Complete Without a Trophy….

9. There’s Always an Exception That Proves the Rule….

10. Every Community Has a Historian.

The Age of Spiritual Machines

Finished “The Age of Spiritual Machines: When Computers Exceed Human Intelligence” by Ray Kurzweil a couple of weeks ago. If you want a scary view of what the future holds, read this book. I’m not a great book reviewer, so just like all of my other book reviews, here are a couple of quotes I found thought-provoking and/or notable.

“… it illustrated one of the paradoxes of human nature: We like to solve problems, but we don’t want them all solved, not too quickly anyway. We are more attached to the problems than to the solutions.” [pg 1]

On death: “… A great deal of our effort goes into avoiding it. We make extraordinary efforts to delay it, and indeed often consider its intrusion a tragic event. Yet we would find it hard to live without it. Death gives meaning to our lives. It gives importance and value to time. Time would become meaningless if there were too much of it.” [pg 2]

“The only way of discovering the limits of the possible is to venture a little way past them into the impossible.” [pg 14]

“What makes a soul? And if machines ever have souls, what will be the equivalent of psychoactive drugs? Of pain? Of the physical/emotional high I get from having a clean office?” (Esther Dyson on pg 135)

The Future of Ideas

Finished The Future of Ideas by Lawrence Lessig a couple of days ago. Reviews of this book are too numerous to count; one might say they number like the sand on the seashore, so the following notes are more for my benefit than yours.

On politicians: “The vast majority are decent and extraordinarily hardworking people who live in a system that simply doesn’t give them time to reflect. They spend more time each week raising money than they spend in a year reading about what’s new. This system produces leaders who can’t begin to lead, because they are leaders who haven’t had time to look ahead.” (preface, pg 18)

Machiavelli in The Prince: “Innovation makes enemies of all those who prospered under the old regime, and only lukewarm support is forthcoming from those who would prosper under the new. Their support is indifferent partly from fear and partly because they are generally incredulous, never really trusting new things until they have tested them by experience.”

A succinct description of the premise of the book: “The argument of this book is that always and everywhere, free resources have been crucial to innovation and creativity; that without them, creativity is crippled. Thus, and especially in the digital age, the central question becomes not whether government or the market should control a resource, but whether a resource should be controlled at all. Just because control is possible, it doesn’t follow that it is justified.” (pg 14)

The Tragedy of the Commons: “Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit — in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.”

On the end to end design of the internet: “End-to-end says to keep intelligence in a network at the ends, or in the applications, leaving the network itself to be relatively simple.” (pg 14)

…which then has three important consequences for innovation: “First, because applications run on computers at the edge of the network, innovators with new applications need only connect their computers to the network to let their applications run. No change to the computers within the network is required. If you are a developer, for example, who wants to use the Internet to make telephone calls, you need only develop that application and get users to adopt it for the Internet to be capable of making ‘telephone’ calls. You can write the application and send it to the person on the other end of the network. Both of you install it and start talking. That’s it. Second, because the design is not optimized for any particular existing application, the network is open to innovation not originally imagined. All the Internet protocol does is figure a way to package and route data; it doesn’t route or process certain kinds of data better than others. That creates a problem for some applications, but it creates an opportunity for a wide range of other applications too. It means that the network is open to adopting applications not originally foreseen by the designers. Third, because the design effects a neutral platform — neutral in the sense that the network owner can’t discriminate against some packets while favoring others — the network can’t discriminate against some new innovator’s design. If a new application threatens a dominant application, there’s nothing the network can do about that. The network will remain neutral regardless of the application.” (pg 16-17)

On open code: “… But there is a challenge with open code projects that many believe is insurmountable. This is the challenge to assure that there are sufficient incentives to build open code. Open code creates a commons; but the problem with this sort of commons is not the problem of overgrazing. (Indeed, as accidental revolutionary Eric Raymond puts it, open code creates an inverse commons.) ‘Grazing’ does not reduce the code that is available. Instead, in this ‘inverse commons, the grass grows taller when it’s grazed upon.’” (pg 68)

On when resources should be governed or controlled: “Where a resource has a clear use, then, from a social perspective, our objective is simply to assure that the resource is available for this highest and best use. We can use property systems to achieve this end. By assigning a strong property right to the owners of such resources, we can then rely upon them to maximize their own return from this resource by seeking out those who can best use the resource at issue. But if there is no clear option for using the resource — if we can’t tell up front how best to use it — then there is more reason to leave it in the commons, so that many can experiment with different uses. Not knowing how a resource will be used is a good reason for keeping it open.” (pg 89)

The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail

Todd Dickinson, Trademark Office Commissioner: “Some days I wish I was the professor and only had to think about these things and not do the work. But I got an office to run. And I’ve got 1,500 applications coming in this year and I have to figure out what to do with them. I don’t have the luxury to wait for five years for Congress to figure out whether they will change the law or not.” (pg 210)

Fast Food Nation

Finished Fast Food Nation while driving from Phoenix to Mammoth Lakes. Ironically, we had Del Taco along the way (even more ironically, I used to do work for BigMan Creative, which did the Del Taco site). Fast Food Nation was a worldview-altering book: in one paragraph you learn (if you aren’t already convinced) that in numerous instances Big Business runs this country; in a second you learn that the majority of beef comes from cows that are eating ‘other dead animals’; in another you learn that ‘… about one quarter of American children between the ages of two and five have a TV in their room.’ (pg 46) Fast Food Nation will make you think twice (and sometimes three times) about eating fast food, but more importantly, maybe you’ll look into the eyes of someone who’s working at a fast food restaurant and see a person, a real live human being, with feelings. As usual, here are a couple of quotes and interesting tidbits:

On advertising to children: “Many studies had found that young children could not tell the difference between television programming and television advertising. They also could not comprehend the real purpose of commercials and trusted that advertising claims were true.” (pg 46)

On profits: “Today McDonald’s sells more Coca-Cola than anyone else in the world. The fast food chains purchase Coca-Cola syrup for about $4.25 a gallon. A medium Coke that sells for $1.29 contains roughly $.09 worth of syrup. Buying a large Coke for $1.49 instead, as the cute girl behind the counter always suggests, will add another 3 cents worth of syrup — and another 17 cents in pure profit for McDonald’s.” (pg 54)
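
The arithmetic in that quote holds up; here’s a quick sanity check using only the figures given in the quote:

```python
# Quick sanity check on the quote's numbers (all figures come from the quote).
medium_price, large_price = 1.29, 1.49
extra_syrup_cost = 0.03

extra_revenue = large_price - medium_price
extra_profit = extra_revenue - extra_syrup_cost
print(f"the upsell adds ${extra_revenue:.2f} in revenue and ${extra_profit:.2f} in profit")
```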

On corporations sponsoring various schools with advertising campaigns: “The spiraling cost of textbooks has led thousands of American school districts to use corporate-sponsored teaching materials. A 1998 study of these teaching materials by the Consumers Union found that 80 percent were biased, providing the students with incomplete or slanted information that favored sponsors’ products or views. Procter & Gamble’s Decision Earth program taught that clear-cut logging was actually good for the environment; teaching aids distributed by Exxon Education Foundation said that fossil fuels created few environmental problems and that alternative sources of energy were too expensive…” (pg 55)

On how fast food restaurants value their employees: “The bonuses of Taco Bell restaurant managers were tied to their success at cutting labor costs. The managers had devised a number of creative ways to do so. Workers were forced to wait until things got busy at a restaurant before officially starting their shifts. They were forced to work without pay after their shifts ended. They were forced to clean restaurants on their own time.” (pg 75)

On profits (and how large corporations squeeze the life from small family owned organizations): “… Burger King’s assault on the supremacy of the McDonald’s french fry, launched in 1997 with a $70 million advertising campaign, was driven in a large part by the huge markups that are possible with french fries. The fast food companies purchase frozen fries for about 30 cents a pound, reheat them in oil, and then sell them for about $6 a pound.” (pg 117)

On what’s really in your fries: “The taste of a fast food fry is largely determined by the cooking oil. For decades, McDonald’s cooked its french fries in a mixture of about 7 percent cottonseed oil and 93 percent beef tallow. The mix gave the fries their unique flavor — and more saturated beef fat per ounce than a McDonald’s hamburger.” (pg 120)

On the power of Big Business: “Today the US government can demand the nationwide recall of defective softball bats, sneakers, stuffed animals, and foam-rubber toy cows. But it cannot order a meatpacking company to remove contaminated, potentially lethal ground beef from fast food kitchens and supermarket shelves. The unusual power of the large meatpacking firms has been sustained by their close ties and sizable donations to Republican members of Congress.” (pg 196-197)

On the amount of contamination in ground beef: “A series of tests conducted by Charles Gerba, a microbiologist at the University of Arizona, discovered far more fecal bacteria in the average American kitchen sink than on the average American toilet seat. According to Gerba, ‘You’d be better off eating a carrot stick that fell in your toilet than one that fell in your sink.’” (pg 221)

On obesity: “The United States now has the highest obesity rate of any industrialized nation in the world. More than half of all American adults and about one-quarter of all American children are now obese or overweight.” (pg 240)

Debating whether or not these large fast food monopolies are good for the world: “…Henry Teller, a Republican Senator from Colorado, dismissed the argument that lower consumer prices justified the ruthless exercise of monopoly power. ‘I do not believe,’ Teller argued, ‘that the great object in life is to make everything cheap.’” (pg 266)

Finally, you must know that In-N-Out is held up as a shining example (pg 259) of how a fast food corporation should be run… if only they could get out to the East Coast!

Bots: The Origin of New Species

Miscellaneous notes from the book “Bots: The Origin of New Species” by Andrew Leonard.

Definition of a bot: “… a bot is a supposedly intelligent software program that is autonomous, is endowed with personality, and usually, but not always, performs a service.” (pg 14)

With that definition in mind, I thought it was generally a good read if you want a glimpse of what some parts of the Net were like in 1996… chatbots, modbots, hackbots, the list of ‘bots’ goes on and on. One of the major themes of the book was its focus on how ‘bots’ were changing (or in some cases would change) the way we interface with computers, moving from the command line interface to a graphical user interface to a ‘social interface’ (pg 95), in which we interact with the computer as we would with another person. Fast forward to 2003 and we don’t have very many ‘bots’; sure, we still have robots that index the World Wide Web, but very little of the software I use on a daily basis could be considered to have bot-like qualities, and that’s disappointing in some ways. Why don’t we have autonomous bots with personalities performing services for us? That sounds like it would be fun and, more importantly, incredibly useful. Of course, if it really were useful, then I’m sure someone would have created something to make money off it, so maybe I’m just barking up a non-revenue-generating tree.

Without further delay, here is a list of quotes/links/sections that I personally found thought-provoking or worth noting:

“The service/interface aspect is what makes a bot something greater than a curiosity. Bots are the first precursors to the intelligent agents that many visionaries see as indispensable companions to humans in the not-too-distant future. Intelligent agents are software programs designed to help human beings deal with the overwhelming information overload that is the most obvious drawback to the information age.” (pg 15)

CollegeTown — “…is a text based virtual Academic Community. Its purpose is to serve as a platform for the scholarly pursuits of students and faculty from around the world. COLLEGE TOWN is a place for folks to meet, hold classes and seminars, do research, carry out class projects, and exchange ideas.” Created by Ken Schweller. (pg 37)

Eliza: “… the first computer program that could carry on a conversation with a human being…. the brainchild of MIT research scientist Joseph Weizenbaum.” (pg 42)

On the various components that make up “intelligence”: “Language is one such component. There is no such disagreement in the AI community over whether or not the ability to speak or understand a language is a marker of intelligent behavior. The capacity to communicate meaning with someone other than yourself is a prime indicator of smarts, perhaps even the single most important indicator.” (pg 51)

On Julia, written by Michael Mauldin (who later went on to found Lycos, which is derived from the Greek word for ‘wolf spider’) while at Carnegie Mellon: “Julia can answer questions without resorting to sophistic wiggle-waggling, as Eliza does. At TinyMUD, Julia’s code incorporated a constantly updated internal model of the MUD and all its component objects in the form of a graph that allowed her to instantaneously compute the shortest path between any two points. If a user asked her a question, such as ‘How do I get from the Town Square to the Liberty Desk?’ Julia knew the way. Julia also kept tabs on the current location of all MUDders, and could answer queries as to their whereabouts.” (pg 54)
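
For the curious, here’s roughly what that kind of internal model amounts to: the MUD as a graph of rooms plus a breadth-first search for the shortest route. The rooms and exits below are invented for illustration; this isn’t Julia’s actual code.

```python
from collections import deque

# Illustrative only: a tiny room graph plus breadth-first search, the kind of
# internal model that lets a bot answer "how do I get from A to B?" instantly.
# These rooms and exits are invented; this is not Julia's actual code.

ROOMS = {
    "Town Square": ["Library", "Market"],
    "Library": ["Town Square", "Liberty Desk"],
    "Market": ["Town Square"],
    "Liberty Desk": ["Library"],
}

def shortest_path(start, goal):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in ROOMS.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # the two rooms aren't connected

print(shortest_path("Town Square", "Liberty Desk"))
# ['Town Square', 'Library', 'Liberty Desk']
```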

On the properties of intelligence: “‘I don’t believe that intelligence is a property that is binary,’ said Schweller. ‘The proper word is gradient. I have no problem talking about an intelligent thermostat. Turing himself provided the clue: asking whether a machine is intelligent is a pointless question.’” (pg 59)

Markov chaining (pg 63)
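
For reference, Markov chaining is the trick behind a lot of early chatbot babble: record which words tend to follow which, then take a random walk through that table. A toy sketch with a made-up corpus:

```python
import random
from collections import defaultdict

# A toy word-level Markov chain, the technique behind a lot of early chatbot
# babble: record which words follow which, then take a random walk.
# The corpus is made up for illustration.

corpus = "the bot reads the text and then the bot writes new text".split()

follows = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    follows[word].append(next_word)

def babble(start="the", length=8):
    word, output = start, [start]
    for _ in range(length):
        choices = follows.get(word)
        if not choices:
            break
        word = random.choice(choices)
        output.append(word)
    return " ".join(output)

print(babble())
```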

On Isaac Asimov’s ‘robot ethics’ from the Handbook of Robotics, A.D. 2058 as quoted in Isaac Asimov’s I, Robot: “1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.” (pg 110)

Robot Exclusion Protocol (pg 148)

On Scooter, the bot behind AltaVista: “…Scooter is a speed demon, able, in the early summer of 1996, to traverse the entire depth and breadth of millions of documents on the Web in little more than a week.” And now we have this: “2003 Google – Searching 3,083,324,652 web pages”.

Finally… “Cohen is convinced that the emergence of artificially alive bots on the Net is inevitable. ‘Think of it this way,’ he says. ‘The Net is an environment. There is not a single environment on earth that hasn’t been invaded by life. It’s only a matter of time before this new environment gets invaded.’ The word invasion has a negative connotation, but Cohen isn’t alarmed. The prospect of bot-induced destabilization is nothing to be afraid of, he contends. ‘Ideally, the Net shouldn’t be stable,’ says Cohen. ‘It should surge back and forth. For it to be a good Net, it should be prone to incompleteness and breakdown.’” (pg 238-239)

If you’re looking to waste a bunch of time, do a search on Google for ‘intelligent agents’, which I think is the term that has replaced ‘bot’ in our tech nomenclature. You’ll find a bunch of interesting applications for ‘bots’, including current and past projects developed at MIT.

But back to the definition (“… a bot is a supposedly intelligent software program that is autonomous, is endowed with personality, and usually, but not always, performs a service”), what bots would you like to see? What bots would you use?