Four of the most egregiously unfair and misused words in this language are “You can do it.” And I’m guilty of abusing them, too.

Because in using those words to urge our children or employees or students or anyone else forward in the performance of a task they’ve not done before or at which they are performing poorly, we are often claiming ownership of information and insight that, in most cases, is simply absent.

Who really knows exactly what your brain is capable of? I certainly don’t. And how could you possibly know what my brain is capable of? You shouldn’t presume to know. And neither of us should be telling each other, or anyone else, that we can do something unless there is evidence that this might be so, and even then, there are important intermediate steps that usually get left out. We can call it The 3-Way Test of Achievability.

• Would you like to do it?
• How do you think you might best go about it?
• Is it worth the effort that is going to be required?

When, and only when, we have affirmative answers to those questions do you and I have any reasonable right to offer someone the encouragement that “You can do it.”

In the past few days, I’ve had at least three experiences reminding me that there are things that, in all likelihood, I can’t do. At least, in all likelihood, I’m not going to do them, and so, on these subjects, I fail The 3-Way Test of Achievability.

1) Sitting in our neighborhood deli, Sherry and I were still waiting on our food when the private envelope of our morning conversation was suddenly pierced by a sheet of drawing paper. On the paper, with remarkable fidelity to visages we both were used to observing in the bathroom mirror, were two people seated at a deli restaurant table, having their morning conversation. When we looked up, the artist was beaming at us. He’d been sitting at the table across the aisle, sketching away, unnoticed by either of us. I’m quite sure I’ll never be able to do what he had just done because my brain doesn’t work that way. He said his gift was something he had discovered in himself. He doesn’t use it professionally but, wanting to do something with it, he does things like draw unsuspecting strangers in their morning conversation and spring their portraits on them.

2) One of our local high school seniors has taken the three-hour exam that’s supposed to measure a high school student’s chance of academic success in the first year of college—the dread SAT—twice . . . and achieved a perfect score both times. Asked to explain how he does this, the best he could offer was, “It helps to remember what you have studied.” I don’t need to test this talented mind to be very suspicious that he can’t help but remember what he has studied. This is just the way his brain works. I’ve always marveled at how quickly and totally my brain erases what I’ve just studied once the immediate reason for cramming has been satisfied. I’m quite sure I was not designed to achieve perfect scores on the SAT. Not even once, much less twice.

3) At a used book sale the other day, I spotted a thin, jacket-less little volume titled Mind’s Eye of Richard Buckminster Fuller. There was a time when I spent a lot of time devouring Bucky Fuller’s writings—and pretending to understand most of what I’d just read. Two things in life I’m pretty certain of: (1) Buckminster Fuller was a genius. (2) Virtually no one really understands very much of what he had to say. A really gifted mind can understand a part of it. But by the time you understand that part, Bucky is off rattling the tea cups in some other authority’s buffet. Here, though, was a guy—Bucky’s patent attorney!—ready to show us how Mr. Fuller’s mind worked. So I snatched up Donald W. Robertson’s book (it’s only 109 pages long) and figured I was about to be handed the secret to deciphering one of the 20th Century’s most creative intellects. But no such luck. All that attorney Robertson knew was how to describe approximately how Bucky happened to think up an invention so it stood a chance of being awarded a patent. (Robertson’s applications weren’t always successful because sometimes the patent office attorneys didn’t understand Robertson well enough to understand if Bucky, on that occasion, could be understood).

Three more things in life I’m pretty sure of. No matter how many times you tell me “you can do it!” I’ll never be able to (1) draw a detailed likeness of you eating breakfast that will cause you to say, “That’s amazing!” (2) take the SAT and get a perfect score (once, much less twice) or (3) be able to look at much of anything with the kind of unique visioning capabilities of one of modern times’ most fascinating minds, Richard Buckminster Fuller’s.

The moral of the story: Please save your encouragement for my doing something reasonably doable, and something that I really want to do (and maybe that the world would benefit from my doing), and I’ll return the favor. Thanks!

P.S. Never pass up a chance to buy a copy of Mind’s Eye of Richard Buckminster Fuller. It’s a rare book.

The above commentary has appeared previously on one of my blogs. I’m choosing to recycle it here because I think the points it makes are fascinating and important.



One week a few years ago, I chanced upon two mostly forgotten books, and probably would not have spent much time with either had not both mentioned—on the very first page—an event that itself has been mostly long forgotten: the Century of Progress Exposition that the city of Chicago staged in 1933-34 to commemorate the 100th anniversary of the city’s incorporation.

In The Next Hundred Years: The Unfinished Business of Science, Yale University chemical engineering professor C.C. Furnas lost no time in pointing out how disappointing and overblown the Hall of Science at the Chicago event was to many astute visitors.

Among his observations:

“They [visitors] found most of the loudspeakers on the grounds sadly out of adjustment and the television exhibitions to be more imagination than vision. They saw the latest, swiftest and safest airplanes on display, but during the Fair one sightseeing and one regular passenger plane fell in the vicinity of Chicago killing an even score of men and women.

“They saw exhibit after exhibit featuring the advance of modern medicine but were faced with a preventable and inexcusable outbreak of amebic dysentery, centering in two of the city’s leading hotels, which claimed 41 lives out of 721 cases….They saw a motor car assembly line in operation but, if they investigated carefully, they found that as mechanism for converting the potential energy of fuel into mechanical work the average motor car is only about 8 per cent efficient.

“They marveled at the lighting effects at night but, in talking the matter over with experts, they found that most of the lights were operating with an efficiency of less than 2 per cent.” There was much more—several more paragraphs, in fact—in the way of observations and cautions and laments from Professor Furnas based on his visit to the Century of Progress Exposition.

Bottom line to The Next Hundred Years: the Century of Progress wasn’t all it was cracked up to be.

Then I opened a copy of J.B. Bury’s The Idea of Progress and learned on the first page that the Century of Progress Exposition was partly why the Macmillan publishing house decided in 1932 to bring out an American edition of Cambridge historian Bury’s 1920 masterpiece of historical/economic analysis.

In it, Bury sought to pooh-pooh the idea that “the idea of progress” was a johnny-come-lately concept crystallized by self-promoting business people and thus was a rather superficial invention. He traced the roots of the idea back at least as far as St. Augustine in the Middle Ages (not that Augustine was a father of the idea of progress but rather that he and other Christian Fathers booted out the Greek theory of cycles and other ideas that stood in the way of a theory of progress) and characterized the idea as one of those rare world-makers.

But even so, after 300 pages of trenchant, sometimes breathtaking reporting and analysis, Bury—on the final page of his book—cautioned that the Idea of Progress might not be all it was cracked up to be. After all, he argued, the most devastating arrow in the idea’s quiver was the assertion that finality is an illusion, that the truth is that what comes, eventually goes.

Bury wrote, “Must not it (the dogma of progress), too, submit to its own negation of finality? Will not that process of change, for which Progress is the optimistic name, compel ‘Progress’ too to fall from the commanding position in which it is now, with apparent security, enthroned?…In other words, does not Progress itself suggest that its value as a doctrine is only relative, corresponding to a certain not very advanced stage of civilization; just as Providence, in its day, was an idea of relative value, corresponding to a stage somewhat less advanced?”

Bury thought it might be centuries in the future before the Idea of Progress was dethroned and replaced.

But looking at an exceedingly rough start for the 21st Century, especially in America, one may suspect that a persistent undercurrent of change is already underway less than one century after Bury raised the question of whether the Idea of Progress was going to prove insufficient and undesirable as “the directing idea of humanity.”

Never in history have the shibboleths and ideals of the Idea of Progress been praised and promoted to the extent that they have in the U.S. in the past five years. And with each passing day, the conclusion seems to be more and more unavoidable: they are only working for a tiny part of our population, the very rich and powerful.

It is becoming more and more obvious that the highly stylized, sound-bite-polished, PowerPoint-presentation-perfected, U.S. flag-draped version of the Idea of Progress isn’t all that it was cracked up to be.

Which leaves us to wonder if the time isn’t much riper than we could have imagined a few short years ago for, if not the emergence of a new directing idea of humanity, then at least the beginning of the disintegration of the current one.

For as the late Peter Drucker argued in a book published in the 1960s that perhaps should be considered the third in a trilogy of works on this whole subject of progress, it appears that we may already be much deeper into an “age of discontinuity” than we had realized.

The above commentary has appeared in a blog on another of my websites. I’m choosing to recycle it here because I think the points it makes are fascinating and important.



Anyone—and it might be anytwo, or at best anyfive or anysix—who has been paying attention to the progressive content of my thinking through the years understands that I’ve been on some sort of journey.

It is my belief that it is not all that remote from a journey in which nearly all who have ever lived have participated.

The road map that I like best, and one to which I’ve devoted a substantial part of my lifework, is that provided by the late Dr. Clare Graves, the psychologist. He traced the route as a spiral, with well-defined stops. In my most recent book, I shared the view that much of the time I’m now experiencing “life its own self” at Graves’ Stage 7 or, as I renumbered it in this work, Stage 2.0.

From the perspective of the 2.0 mind, one of the key understandings that I keep butting my nose into—like a doorjamb in the dark—is this: Everyone who has ever tried to explain why the world is, what humans are doing here, and the totality of how it all works has been guessing. Once you are armed with this insight, then it is both fascinating and sometimes a little fear-provoking to see just how many guesses have been put forth about what’s happening and how and why, and how much influence even very bad guesses can have.

A question then: Which of those guesses deserve to be labeled the best guesses ever made, even if they are no longer attention-attractors except for serious scholars, and sometimes not many of these?

Somehow, I have always intuitively suspected that the cultural mentality most likely to take such a question seriously, and attempt to answer it, would belong to a citizen of the United Kingdom. The question itself just sounds very…British.

And so it was a vindication of sorts to come across British critic, biographer and poet Martin Seymour-Smith’s book, The 100 Most Influential Books Ever Written. Published in 1998, this work was just such an attempt—to define the guesses in history that have had “the most decisive influence upon the course of human thought.”

I can’t imagine anyone ever reading Seymour-Smith’s book from cover to cover. At least, I don’t have this kind of ocular or intellectual stamina. But this is one of those books that prompts me to get it down off the shelf every once in a while, open it at random and marvel anew at the origins and consequences of all the guessing that has been going on.

This time, 100 Most Influential fell open to book No. 83, Italian intellectual Vilfredo Pareto’s The Mind and Society. I have always thought that Pareto was an economist, because of what has come to be called “Pareto’s 80/20 Principle.” (Seymour-Smith calls it “Pareto optimality,” and says it was unpopular from the first because of its “the trivial many—the critical few” character. In other words, that an economy is best off when the largest proportion of its participants are badly off.) But what do I learn? That Pareto, a congenital sourpuss of a thinker, is considered one of the fathers of sociology. And that The Mind and Society puts forth one of the best guesses for why, to use T.S. Eliot’s notion (as Seymour-Smith does), “Mankind cannot bear much reality.” Pareto’s ideas of the early 20th Century are very much in vogue again in the early 21st Century: that the foundations of the social system are very much anchored in the nonlogical, not the rational, actions of humans.

So Pareto’s best guess is, by other names and because of other systems of inquiry, back in town. I suspect that if I ever summon up the stamina to read this entire work, I’ll find that this is true again and again. That there can only be so many guesses of sufficient quality to be considered very good guesses about what’s happening here even though they all remain just that—guesses—and that most of them have already been fleshed out at one time or another by a very fine, if now perhaps largely ignored if not totally forgotten, mind. But good or bad, they remain mostly that: guesses.

Seymour-Smith died on July 1, 1998, at the age of seventy.

The above commentary has appeared in a blog on another of my websites. I’m choosing to recycle it here because I think the points it makes are fascinating and important.



I don’t often experience writer’s block. Sleeping on a topic overnight is nearly always enough to return a free flow of ideas and images. But it was not working that way with this thing called The Singularity. For days, I tried without success to tie a literary bow around a supposition that had fast become a phenomenon that is now on the verge of becoming the first Great Technological Religion. In repeated stare-downs with my computer screen, I lost.

In a moment, I’ll share what finally dissolved the plaque in my creative arteries on this subject, but first I may need to introduce you to the high drama and low wattage of the whole Singularity debate.

The word first appeared in a 1993 essay written by a California math professor, Vernor Steffen Vinge. The full title was “The Coming Technological Singularity.” Professor Vinge was not the first to raise the issue. But he was the first to supply a name worthy of building a whole “end of the world at least as we know it”-fearing movement around this idea: that computer and other technologies are hurtling toward a time when humans may not be the smartest intelligences on the planet. Why? Because some kind of artificial intelligence (“AI”) will have surpassed us, bringing an end to the human era.

Dr. Vinge is now retired. But his Singularity idea has become another of those Californications that sucked the air out of intellectually tinged, futuristically oriented salons and saloons faster than a speeding epiphany. The relentless personality under the hood of the Singularity phenomenon is a talented 70-year-old inventor and big-screen-thinking, oft-honored futurist from New York City and MIT named Ray Kurzweil.

Where “My Way” Is the Theme Song
A few years ago, I wrote about The Singularity movement just after it had finished what one irreverent observer had called Kurzweil’s “yearly Sinatra at Caesar’s.” He was referring to Singularity Summit, the annual conference of the Machine Intelligence Research Institute. As best I can tell, the last one of these events was in 2012.

Attendees usually listened to, among others, futurist Kurzweil explain how he believed with all his heart that unimaginably powerful computers are soon going to be able to simulate the human brain, then far surpass it. He thinks great, wondrous, positive things will be possible for humanity because of this new capability. If you track Kurzweil’s day-to-day activities and influence, you quickly realize that he’s not so much Singularity’s prophet as its evangelist. His zeal is messianic. And he’s constantly on the prowl for new believers in a funky techno-fringe movement that is definitely showing legs.

Consider these developments:

• Not long ago, no fewer than four documentary movies were released within a year’s time on The Singularity. One debuted at the Tribeca Film Festival and also was shown at the AFI Fest in Los Angeles. “Transcendent Man” features or rather lionizes—who else?—Ray Kurzweil. The film is loosely based on his book, The Singularity Is Near. The others were called “The Singularity Film,” “The Singularity Is Near” and “We Are the Singularity.” One admiring critic wrote of “Transcendent Man,” “[The] film is as much about Ray Kurzweil as it is about the Singularity. In fact, much of the film is concerned with whether or not Kurzweil’s predictions stem from psychological pressures in his life.”

• Meanwhile, the debate continues over how soon will be the first and only coming of The Singularity (otherwise it would be named something like The Multilarity or perhaps just The Hilarity). At the Y, PayPal co-founder Peter Thiel once gave voice to his nightmare that The Singularity may take too long, leaving the world economy short of cash. Michael Anissimov of the Singularity Institute for Artificial Intelligence and one of the movement’s most articulate voices, warned that “a singleton, a Maximillian, an unrivaled superintelligence, a transcending upload”—you name it—could arrive very quickly and covertly. Vernor Vinge continues to say before 2030. (It didn’t arrive on Dec. 21, 2012, bringing a boffo ending to the Mayan calendar, as some had predicted.)

• Science fiction writers continue to flee from the potential taint of having been believed to have authored the phrase, “the Rapture of the Nerds.” The Rapture, of course, is some fundamentalist Christians’ idea of a jolly good ending to the human adventure. Righteous people will ascend to heaven, leaving the rest of us behind to suffer. It’s probably the Singulatarians’ own fault that their ending sometimes gets mistaken for “those other people’s” ending. They can’t even talk about endings in general without “listing some ways in which the singularity and the rapture do resemble each other.”

• The Best and the Brightest among the Singulatarians don’t help much when they try to clear the air. For instance, there is this effort by Matt Mahoney, a plain-spoken Florida computer scientist, to explain why the people who are promoting the idea of a Friendly AI (an artificial intelligence that likes people) are the Don Quixotes of the 21st Century. “I do not believe the Singularity will be an apocalypse,” says Mahoney. “It will be invisible; a barrier you cannot look beyond from either side. A godlike intelligence could no more make its presence known to you than you could make your presence known to the bacteria in your gut. Asking what we should do [to try and insure a “friendly” AI] would be like bacteria asking how they can evolve into humans who won’t use antibiotics.” Thanks, Dr. Mahoney. We’re feeling better already!

• Philosopher Anders Sandberg can’t quit obsessing over the fact that the only way to AI is through the human brain. That’s because our brain is the only available working example of natural intelligence. And not just “the brain” is necessary but it will need to be a single, particular brain whose personality the great, incoming artificial brain apes. Popsci.com commentator Stuart Fox puckishly says this probably means copying the brain of a volunteer for scientific tests, which is usually “a half stoned, cash-strapped, college student.” Fox adds, “I think avoiding destruction at the hands of artificial intelligence could mean convincing a computer hardwired for a love of Asher Roth, keg stands and pornography to concentrate on helping mankind.” His suggestion for getting humanity out of The Singularity alive: “[Keep] letting our robot overlord beat us at beer pong.” (This is also the guy who says that if and when the AI of The Singularity shows up, he just hopes “it doesn’t run on Windows.”)

• Whether there is going to be a Singularity, and when, and to what ends does indeed seem to correlate closely to the personality of the explainer or predictor, whether it is overlord Kurzweil or someone else. For example, Vernor Vinge is a libertarian who tends to be intensely optimistic and likes power cut and dried and left maximally in the hands of the individual. No doubt, he really does expect the Singularity no later than 2030, bar nothing. On the other hand, James J. Hughes, an ordained Buddhist monk, wants to make sure that a sense of “radical democracy”—which sees safe, self-controllable human enhancement technologies guaranteed for everyone—is embedded in the artificial intelligence on the other side of The Singularity. One has to wonder how long it will take for the Great AI that the Singulatarians say is coming to splinter and start forming opposing political parties.

• It may be that the penultimate act of the Singulatarians is to throw The Party to End All Parties. It should be a doozy. Because you don’t have thoughts and beliefs like the Singulatarians without a personal right-angle-to-the-rest-of-humanity bend in your booties. The Singularity remains an obscurity to the masses in no small part because of the Singulatarians’ irreverence. Like calling the Christian God “a big authoritarian alpha monkey.” Or denouncing Howard Gardner’s popular theory of multiple intelligences as “something that doesn’t stand up to scientific scrutiny.” Or suggesting that most of today’s computer software is “s***”. No wonder that when the Institute for Ethics and Emerging Technologies was pondering speakers for its upcoming confab on The Singularity, among other topics, it added a comic book culture expert, the author of New Flesh A GoGo and one of the writers for TV’s Hercules and Xena, among other presenters.

All of the individuals quoted above and a lengthy parade of other highly opinionated folks (mostly males) who typically have scientific backgrounds (and often an “engineering” mentality) and who tend to see the world through “survival of the smartest” lenses are the people doing most of the talking today about The Singularity. It is a bewildering and ultimately stultifying babel of voices and opinions based on very little hard evidence and huge skeins of science-fiction-like supposition. I was about to hit delete on the whole shrill cacophony of imaginings and outcome electioneering that I’d collected when I came across a comment from one of the more sane and even-keeled Singulatarian voices.

That would be the voice of Eliezer Yudkowsky, a co-founder and research fellow of the Singularity Institute.

He writes, “A good deal of the material I have ever produced—specifically, everything dated 2002 or earlier—I now consider completely obsolete.”

As a non-scientific observer of what’s being said and written about The Singularity at the moment, making a similar declaration would seem to be a great idea for most everyone who has voiced an opinion thus far. I suspect it’s still going to be a while before anyone has an idea about The Singularity worth keeping.

The above commentary has appeared in a blog on another of my websites. I’m choosing to recycle it here because I think the points it makes are fascinating and important.



I always seek to break the news gently, but it can be disconcerting to some folks when I reveal that the two brains in history intriguing me most are Shakespeare’s and Jesus Christ’s, in that order.

Neither choice is by any means unique, and the subject of Jesus’s brain is probably the most enigmatic. What can you really think about a brain that supposedly was both a man’s and a god’s, dually occupied at the same time? Bertrand Russell thought the man suffered from schizophrenia, but Schweitzer, summoned to the truths he saw in the man’s life, argued otherwise. Psychologist Jay Haley thought the Nazarene carpenter is best understood as a master political strategist whose mind, above all, excelled at using complex power tactics to flummox and stalemate his enemies.

I’m not sure that were a small group of us to sit down to dinner with the Godspell character himself that we’d really understand how things worked inside his cranium, so that’s why I list him second. And putting J.C. Superstar second is what upsets my fundamentalist Christian friends, so I rush to assure them that I do so only because with Shakespeare, I think we’d have a better chance of coming away with more insight than heartburn.

I once happened on a book whose author shares my interest in Shakespeare’s brain and isn’t waiting on a chance dinner party encounter in some future time-warp to take the subject on. Diane Ackerman has an entire chapter in her book, An Alchemy of Mind (Scribner softcover, 2004), speculating about how the bard’s brain functioned. She opines, “Something about his brain was gloriously different.”

For example, Ackerman recalls his abilities to squeeze the most precise qualities from word combinations. Like when he described a kiss as “comfortless/As frozen water to a starved snake.” Or when his King Lear, in deep grief over Cordelia’s death, utters, “Never, never, never, never, never.” (Such feats and usages of the language led the editors of Bartlett’s Familiar Quotations to devote more than 60 pages to Shakespeare, Ackerman observes). Such precise, feeling-capturing word pictures suffuse his works, of course. “He must have … possessed a remarkable general memory, the ability to obsess for long periods of time, a superb gift for focusing his mind in the midst of commotion, quick access to word and sense memories to use in imagery, a brain open to novelty and new ideas,” she writes. And that’s just for starters.

Eventually, she asks one of two questions I’d most love to put to a large list of personages who have distinguished themselves down through the mists of time. Did Shakespeare know how different he was? Her conclusion: probably so. How alien. How “more of everything.” If scientists could study his brain today, she wonders if they’d find his brain bushy, somehow having foregone all the natural pruning away of neuronal connections that occurs in a “normal” brain.

Ackerman doesn’t see any usefulness in viewing Shakespeare as a god. “If anything, he risked being more human than most. Because he was a natural wonder,” she finishes.

It’s a beautiful chapter in a really well-done book. And her concluding thoughts about Shakespeare fit well with the second question I’d like to put to each of the great personages selected from “the bank and shoal of time” (Shakespeare again): What do you think this universe is really about? If there is a god in the group, then we should be in for a memorable evening although I can’t shake the thought that we’d probably end up learning more from Shakespeare’s reply than anyone else’s.

You can latch onto a bargain-priced copy of Ackerman’s book by going here. Haley’s fascinating arguments, by the way, are in his book, The Power Tactics of Jesus Christ and Other Essays; go here.

The above commentary has appeared in a blog on another of my websites. I’m choosing to recycle it here because I think the points it makes are fascinating and important.



A long-time friend and colleague valued for many reasons, not the least of which is the expansive range of his scholarly interests, writes:

“I often get lost in the soup of new economic titles that try to capture the behavioral side of economics, an area that has languished until recently. Books on neuroeconomics, behavioral economics, economic sociology, game theory, behavioral finance, hedonic psychology, intertemporal choice, and other such juicy domains have proliferated to a point that I cannot keep up with the reviews, let alone read all of the texts.”

For several days after receiving his note, I was in and out of a funk. If this guy—I mean, he says he’s read 600 books in preparation for finishing his doctoral dissertation—is having that much trouble staying up with today’s explosion of knowledge in fields that any decent “cutting edge”-oriented management theorist should have passing knowledge of, then what hope is there for the rest of us?

But, several days’ funk is enough. Let’s at least get more familiar with some of the more unfamiliar terms in his list:

Hedonic psychology. According to the American Psychological Association’s Observer, this is the study of pleasure and pain, happiness and misery, both as they are experienced in the present and as they are remembered later. A key researcher here is Dr. Daniel Kahneman, professor of psychology and public affairs at Princeton University. For more on Kahneman and hedonic psychology, go here: “Memory vs. Experience: Happiness is Relative”

Economic sociology. Wikipedia, the free encyclopedia, defines this field of inquiry as “the sociological analysis of economic phenomena…. Current economic sociology focuses particularly on the social consequences of economic exchanges, the social meanings they involve and the social interactions they facilitate or obstruct. Influential figures in modern economic sociology include Mark Granovetter, Harrison White, Richard Swedberg and Viviana Zelizer. To this may be added Amitai Etzioni, who has popularised the idea of socioeconomics, and Chuck Sabel and Wolfgang Streeck, who work in the tradition of political economy/sociology.” For more, go here: “Economic sociology”

Intertemporal choice. To put it simply, as an economics instructor at the University of Pennsylvania has, “Life is full of intertemporal choices: should I study for my test today or tomorrow, should I save or should I consume now?” Letting Wikipedia weigh in again: “Intertemporal choice is the study of the relative value people assign to two or more payoffs at different points in time. This relationship is usually simplified to today and some future date.” For a fascinating brain-studies-oriented discussion of “ic,” go here: “Is There A Neurobiology of Intertemporal Choice?”
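For readers who like to see the arithmetic behind the definition, intertemporal choice is commonly modeled with a discount factor: a payoff received some periods from now is worth less today. Here is a minimal sketch of that idea (the 10 percent discount rate is an arbitrary illustration of mine, not a figure from any of the sources above):

```python
# Exponential discounting: a payoff's value today shrinks by a
# fixed factor for each period you must wait to receive it.
def present_value(payoff, periods, discount_rate=0.10):
    """Value today of `payoff` received `periods` from now."""
    return payoff / (1 + discount_rate) ** periods

# "Should I consume now or save?" At a 10% discount rate,
# $100 today and $110 a year from now are exactly equivalent.
print(present_value(100, 0))   # 100.0
print(present_value(110, 1))   # 100.0
```

In other words, how steeply a person discounts the future is what separates the student who studies today from the one who puts it off until tomorrow.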

Behavioral economics. InvestorHome Web site observes, “Much of economic and financial theory is based on the notion that individuals act rationally and consider all available information in the decision-making process. However, researchers have uncovered a surprisingly large amount of evidence that this is frequently not the case…. A field known as ‘behavioral finance’ has evolved that attempts to better understand and explain how emotions and cognitive errors influence investors and the decision-making process….As an example, some believe that the outperformance of value investing results from investors’ irrational overconfidence in exciting growth companies and from the fact that investors generate pleasure and pride from owning growth stocks. Many researchers (not all) believe that these human flaws are consistent, predictable, and can be exploited for profit.” “Applied Behavioral Finance: An Introduction”

Maybe a worthy perspective on the subject matter of all the above fields of inquiry and other contemporary scholarly endeavors along these lines is provided by two quotes from the article referenced immediately above:

• “Recently we worked on a project that involved users rating their experience with a computer. When we had the computer the users had worked with ask for an evaluation of its performance, the responses tended to be positive. But when we had a second computer ask the same people to evaluate their encounters with the first machine, the people were significantly more critical. Their reluctance to criticize the first computer ‘face to face’ suggested they didn’t want to hurt its feelings, even though they knew it was only a machine.”—Bill Gates in The Road Ahead

• “Only two things are infinite, the universe and human stupidity, and I’m not sure about the former.”—Albert Einstein

The above commentary has appeared in a blog on another of my websites. I’m choosing to recycle it here because I think the points it makes are important.



I am accustomed to being questioned by prospective business clients on all kinds of issues. What I’m not accustomed to is having them ask me, unexpectedly and point-blank, as happened over dinner not long ago, “Do you believe in God?”

But it happened, and I replied immediately, “I don’t believe in your God.”

I think that’s the right thing to do in such circumstances, and the right way to do it. I encourage such a response, instantly and emphatically, if you find yourself in similar circumstances. It is certainly a return volley across the net that keeps the discussion from degenerating quickly into strained politeness, vague assurances or simply plain silliness.

What you believe about this nettlesome, never-seemingly-laid-to-rest-in-human-affairs-and-discussions issue should never, in my opinion, hinge on what someone else thinks. So I’m never going to encourage anyone to base their conclusions on the issue on my opinions. Over the eons, there have been a thousand and one opinions, multiplied a thousand and one times, voiced on the subject. Mine is just one more (and, to paraphrase the late Kingsley Amis, “a person’s view of what he is doing is no more valid than anyone else’s.”)

But this individual was put off only momentarily. After an instant, she said, “But are you a believer?”

“I don’t know about believing,” I replied. “But I don’t believe in believers.”

“So you are a non-believer?” she inquired.

“Let me tell you what I think I might be,” I offered as it became clearer that she was asking with a personal earnestness that seemed to have nothing to do with whether she was going to engage my services as a change agent and thinking skills authority.

“I think I might be an accepter.”

Blank stare. Good. A blank stare is usually a good place to begin when people are, even if not consciously, trying to see if they can categorize you and your slate of opinions using their own.

Then I told her what I was willing to accept:

• Most humans seem to be incorrigibly religious. Most want to believe in something “beyond” themselves. The search for “God”—or something out there or up there or in there—never ends. It has been that way from day one.

• What most humans think about God they’ve really never thought much about, if they’ve thought about it at all. What most humans think about God they’ve inherited. It’s a family matter or a neighborhood matter or a community matter. Saint Paul was absolutely right: raise a child in the family faith, and you’ve nearly always got them for that faith for life.

• The “theological history” that all faiths cite as proof of their views of God eventually grows murky and, like the record of evolutionary biology, becomes riddled with gaps and uncertainties as the mists of time close in. There’s no real, dependable footing for anyone’s view of God. That’s why there is no one theology. It is always a Muslim theology or a Christian theology or a Methodist theology or a Christian Science theology. Really good, smart theologians understand this, which is why a Paul Tillich or a Pierre Teilhard de Chardin can be so maddeningly vague and full of double-speak so much of the time and, in the end, pretty much incomprehensible.

• Everything that has ever been said about religion or theology or God was generated by the same source: the human brain. There’s no getting around it. There is no other way to say anything. If God speaks in the forest and there is no brain, it’s pretty clear: there’s no sound—nothing said, nothing heard, nothing remembered, nothing recorded. So the brain first draws the outline and then colors in the spaces about everything, including all that has been said and thought about the issue of whether there’s a God and whether we can know anything worth knowing about such a being, if there happens to be One.

• What the brain thinks and says about God is simply one more assignment for a brain seeking to understand how it has come to be plopped in the middle of, to use astrophysicist Freeman Dyson’s apt phrase, “infinity in all directions.” The thing about this assignment is that explaining ultimate causes in such an environment is proving to be enormously difficult, perhaps even—no, I’ll say that stronger—probably even beyond our capabilities on many subjects, including this one. We simply don’t appear to have the brainpower to pull it off.

• So the issue of God has come to bore me. There are so many other interesting questions where I stand a chance of finding some answers. None of the answers offered up by any other brain that has ever spoken out or written something down on this particular issue—and believe me, I’ve read a ton of them—interests me any longer. I don’t mind people having strong spiritual beliefs if they will use them responsibly. I know from the research—brain research!—that having strong spiritual beliefs can be very healthy. The brain likes believing it can know. But then my brain knows it can believe almost anything about anything. And so can any other brain. So that makes my brain very leery of believing in believers.

“So this is what I accept, and why I think I may be an accepter,” I told my dinner guest.

“You’re right,” my guest replied. “I don’t believe you believe in my God.”

But I still got the job and, I think, made a new friend. And she left the table looking very thoughtful.

The above commentary has appeared in another blog on another of my websites. I’m choosing to recycle it here because I think the points it makes are important.



Sometimes I’m bemused at the question, sometimes a little exasperated: Can people really change?

I suspect the reason that anyone would ask the question has more to do with the nature of consciousness than anything else. Consciousness appears to be the paragon of immediacy. “We” may have trouble staying in the here and now, but consciousness doesn’t. It is either here, or not here. (There’s a school of thought that suggests consciousness is away as much as it is here, even when we are awake and neurologically whole and competent and thinking our consciousness is present. These theorists think consciousness is as hole-y as Swiss cheese, online one instant, off-line an instant later, and then back again, and yet we never miss it when it’s gone.)

Because consciousness is so “now” centered, at least when it is here, it’s apparently easy for us to forget how much we ourselves change over time. Or how much others we know have changed.

Yesterday, I took my consciousness (or it took me) off to see one of my professional vendors. I’m sitting there, and he’s behind me. Because of my congenital hearing losses, I must read lips a great deal of the time to know what others are saying. I picked up on part of one of his comments while his back was turned, but asked him to face me and repeat it.

“I was married for 32 years,” he repeated, then shrugged. “She changed.” And that, he was suggesting, was why, in his late 50s, he was now living alone.

Here’s another example. A couple of days ago, I received an e-mail from one of our BTC associates in a land and culture far, far away. For the first time in more than a year, she had taken BTC’s BrainMap® instrument. To her surprise, her new results indicated a pronounced shift toward the top and left of the instrument and away from the center and right compared to the previous time she took it.

What, she asked, did it mean?

Before attempting to answer her, I suggested she self-test with another of BTC’s tools, MindMaker6®. If the BrainMap score is not an aberration, I told her, then she was likely to see changes in her MindMaker6 scores, too. The scores should show a shift of personal values emphasis out of the Alpha worldviews and into the new Beta worldview. She did, and it did.

“What caused this?” she asked.

Knowing a good deal about the circumstances of her life for the past many months, I replied:

“Oh, probably a number of things. You’ve burned some bridges to parts of your past. You’ve made some decisions to allow parts of you that have been held back to enjoy more freedom. You’ve made some important decisions about your future. You are more sure that you know your purpose and are committed to staying on purpose. You are probably getting more encouragement now from people important to you. Those kinds of things.”

Within moments, she e-mailed back, “So TRUE!”

Thinking we know how to change people is one kind of issue. Knowing whether people can change is another kind.

Of course people can change!

We do it all the time.


Email comments to dudley@braintechnologies.com



My all-time favorite description of what time is comes not from a scientist but from a writer of pulp science fiction, the late Ray Cummings. In 1922, he observed that time is “what keeps everything from happening at once.”

This is more than a not-half-way-bad way of describing time. In fact, it’s such a doggone-good way that even some very reputable scientists say it is hard to beat.

Today, professionals in a variety of fields are recognizing the importance of “keeping everything from happening at once.” And when the time crunch can’t be kept out of unfolding events, they are recognizing the importance of understanding how the brain seeks to cope when everything seems to be happening at once, and of making allowances for all-too-brief tick-tocks in time.

In California not too long ago, Sergeant Steve “Pappy” Papenfuhs, a police training expert, took up this subject with 275 lawyers who defend cops and their municipalities in lawsuits. The plaintiffs in these suits are often alleging wrongful deaths from police bullets.

When a mole hill looks like a mountain

Papenfuhs is a great fan of Dr. Matthew J. Sharps, a psychology professor at California State University, Fresno, who has made a career of studying the actions of people who must make split-second, life-and-death-affecting decisions. Sharps has even gone so far as to do cognitive and psychological post-mortems of events like Custer’s last stand, the Battle of Mogadishu and the Battle of the Bulge.

He learned that cavalry soldiers at Little Big Horn tried to take cover behind small piles of soft soil, where they died. Because they were stupid? No, Sharps concluded, because when everything is happening at once, the brain has a tendency to grab at the first apparent possibility. There isn’t a lot of natural cover on the American Great Plains. And Custer’s men hadn’t been trained to think about beating a zigzag retreat until they could reach an arroyo or a big rock or something else more solid to duck behind than a prairie dog mound.

But it wasn’t what happened at Little Big Horn that, according to Papenfuhs, caused gasps of disbelief from the lawyers present at his recent lecture. Rather, it was one of Sharps’ experiments, and its evidence of what the brain may decide when there’s very little time, and often very little information.

Sharps’ discoveries that most dumbfounded the cop-defending lawyers were these: (A) Ordinary people have an overwhelming tendency to shoot people they believe are threatening them with a gun. (B) They will do so even if the perpetrator is holding a power screwdriver that they have mistaken for a weapon. (C) But only about one in 10 people believes it is appropriate for a police officer to fire under the same circumstances.

All these cops saw was the hair

In his book, Processing Under Pressure: Stress, Memory and Decision-Making in Law Enforcement, Sharps offers his G/FI (Gestalt/Feature Intensive) Processing Theory. Boiled to a few words, it says that when everything is happening at once, the brain defaults to what it feels is most right (that’s the “gestalt” part). It really doesn’t even have to think about it; in fact, it usually doesn’t. If you want it to do something else—in cop talk, make good tactical decisions—then you better spend a lot of time upfront explicitly teaching the brain about what to look for and what to do when it finds it (that’s the “feature intensive” part).

Rapid cognition—or the lack of it—was, of course, the subject matter that The New Yorker magazine’s curiosity hog, Malcolm Gladwell, wrote about in Blink: The Power of Thinking Without Thinking. Interestingly, he got the idea for the book from—who else?—a bunch of cops. It happened when, on a whim, he let his hair grow wild like it had been as a teenager and suddenly started getting stopped a lot by the fuzz. One time he was grilled for twenty minutes as a rape suspect when his skin color, age, height and weight were all wrong. “All we [he and the actual rapist] had in common was a large head of curly hair,” he notes.

That piqued Gladwell’s interest. “Something about the first impression created by my hair derailed every other consideration in the hunt for the rapist, and the impression formed in those first two seconds exerted a powerful hold over the officers’ thinking over the next twenty minutes,” he says. “That episode on the street got me thinking about the weird power of first impressions.”

Like Professor Sharps, Gladwell was often riveted by how the brain responds—and sometimes how good it is when it does—to situations where everything is happening at once. Nor by any means are those two the first to pursue this. For years, research psychologist Gary Klein has been studying how people make decisions when pressured for time. When he first started, he assumed that people thought rationally even when time was being sliced thin. But then he met a fire commander who demurred when asked how he made difficult decisions. “I don’t remember when I’ve ever made a decision,” the firefighter said. So what does he do? He replied that he just does what is obviously the right thing to do.

On thin ice, it’s good to do thin slicing

This was the beginning of Klein’s years-long inquiry into what he ended up calling “Recognition-Primed Decision-Making.” It’s not a cut-and-dried process, since the decision-maker can change his or her mind from moment to moment and often needs to.

Say a fire commander goes into a burning house, believing it to be a regular kitchen fire. But as he’s scouting around he realizes that things are too quiet and too hot. He’s uncomfortable, so he orders his team out—just before the floor collapses. The big fire was in the basement. The guy didn’t even know the house had a basement; he just knew this fire was not behaving like other fires in his experience. Klein calls this “seeing the invisible.” In Blink, Gladwell borrowed a phrase from psychologists: “the power of thin slicing.” Like Klein, he marvels at how capable the human brain can be at making sense of situations based on the thinnest slice of experience.

There is growing evidence that in situations where there is incessantly too much information incoming and not nearly enough time to come to a decision in classic laboratory (“non-garbage-in, non-garbage-out”) fashion, it behooves someone needing a favorable decision from the decider to appeal to the brain’s “powers of thin slicing.”

Literary agent Jillian Manus offers such advice at writers’ conferences to wannabe authors who are battling uphill odds that their ideas for books will ever get the full attention of a reputable agent, much less get an offer of representation. The really good (“successful”) agents get hundreds of snail mail and/or e-mail queries weekly, if not daily. This is another of those “everything is happening at once” realities. So it is critical that a writer do everything possible to instantly engage an agent’s powers of thin-slicing.

Who knows what cagier blinks will turn up?

One of Manus’s suggestions is to give an agent a comparative pitch in the very first words of a query letter. That is, tell the agent that the work is “somewhat like a this and a this.” Jane Smiley’s 1992 Pulitzer Prize-winning novel, A Thousand Acres? It’s King Lear in a corn field. Clueless, the movie? Emma meets Beverly Hills 90210. The war drama, Cold Mountain? Gone With the Wind meets Faulkner. The science fiction novel, The Last Day? Manus successfully pitched it to a publisher as Michael Crichton meets the Celestine Prophecy.

Some of the more daring minds in our midst think that the universe itself has taken steps to avoid being taxed with unmanageable demands on its processing power. Science fiction writer/astrophysicist David Brin speculates that the 186,000-miles-per-second limit on how fast light can travel may be an artifact “introduced in order not to have to deal with the software loads of modeling a cosmos that is infinitely observable.” Or at the level of the quantum, “the division of reality into ‘quanta’ that are fundamentally indivisible, like the submicroscopic Planck length, below which no questions may be asked.”

Though he doesn’t talk about it exactly in these terms, Brin even wonders if our growing powers of thin slicing have us on the verge of figuring out or at least strongly suspecting that we are all reconstituted virtual people living out our lives in a reconstituted virtual reality. A simulation created by greater-intelligences-than-are-we operating way out in front of us, time-wise.

On his blog, Brin once wrote: “Take the coincidence of names that keep cropping up, almost as if the ‘author’ of our cosmic simulation were having a little joke. Like the almost unlimited amount of fun you can have with Barack Obama’s name. Or the fact that World War II featured a battle in which Adolf the Wolf attacked the Church on the Hill, who begged help from the Field of Roses, which asked its Marshall to send an Iron-hewer to fight in the Old World and a Man of Arthur to fight across the greatest lake (the Pacific) … does the Designer really think we don’t notice stuff like this? Or maybe this designer just doesn’t care.”

As we get better and better at deciphering what goes on in our minds in a blink in time, maybe we’ll begin to notice all kinds of things that have been eluding our powers of thin slicing. Meanwhile, our interest in what we are already noticing can only grow.



Quote of the day from James Scott’s new book, Information Warfare: The Meme is the Embryo of the Narrative Illusion:

“The meme is the embryo of the narrative. Therefore, controlling the meme renders control of the ideas; control the ideas and you control the belief system; control the belief system and you control the narrative; control the narrative and you control the population without firing a single bullet.”

Those are exactly the points we’ve been making for more than 40 years at Brain Technologies Corporation with assessment tools like The BrainMap® and MindMaker6® and our books, beginning with Your High-Performance Business Brain and Strategy of the Dolphin: Scoring a Win in a Chaotic World.

Welcome aboard, Mr. Scott. Good to have you helping spread the message.
