Wednesday, May 24, 2006

Simulating Life

Why are people so fascinated with computer games? Novels? TV shows? Stories? Gossip? What do all these have in common? Each of these pursuits provides a person with a compact simulation of life. "What happened to person X when they did such and such? Why, they lived happily ever after or were miserable failures." They give you valuable information about how other people fared when they tried particular strategies.

I personally prefer games, novels and gossip. They have different, but related, value to me. I like to play strategy games - like Civilization, Pharaoh and Age of Empires. These games have taught me many strategies that I use in everyday life. From Civ, I learned that democracies don't tolerate long, unsuccessful wars. True, isn't it? I've learned that you have to address problems early, or they will bite you later. I've learned to balance my attention. I've learned that you have to have a strong advantage over your rival before you take them on, and that you need good defenses to stay at peace.

I've read at least a thousand books of fiction. I treasure many of these and appreciate all of them on one level. The author has shared a life with me in the pages of each book - in many cases, many lives. From these I have learned the wisdom of many lives. It's hard to imagine what kind of person I would be without reading.

From gossip, I learn more near-term information. What are the people around me likely to do? Where can I get a good dinner? What strategies work in motivating your children?

In all cases, these are examples of "living life more fully" - experiencing many lives in a compressed way. This gives a person an advantage over someone living a singular life.

People can get fooled into thinking that these simulations are real life and forget to live their own. Sure, watching TV or reading a book provides you with a compact form of a life, probably more interesting than your own. But it is still not real life.

One thing I have learned over the years is that you have to be brave about living. It is important not to put things off due to indecision - or fear that things might go wrong. Life goes by more quickly than you think. People who put off marriage, career changes, or having children may find that life has passed them by. If there is something that you really want to do, do it. Don't live vicariously. Take your chances and do your best.

Thursday, May 18, 2006

The Change Function

Since I am perennially interested in scientific evolutions/revolutions and seek to understand them better, change is a topic I am always thinking about - specifically, changes to human behavior and concepts. One of my observations is that some of us love change far more than others. I am a change lover. I live for the thrill of the paradigm shift. I love them and collect them, somewhat like stamps or baseball cards. They feel good to me. I am painfully aware that others do not share my love of change.

My current theory on this is that there is actually a brain structure feature at play here. Some of us have better restructuring capabilities than others, somehow. For most people, the resistance to changing the way they think about something is directly proportional to how much that something is networked to other concepts in their brain. If a concept is far out on the edge of a network of concepts in the brain, it is far easier to change than something at the center of a lot of related concepts.
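
To make the proportionality concrete, here is a toy sketch in Python - pure speculation on my part, with invented example concepts, not a model of any real brain. The idea is simply that a concept's resistance to change scales with its degree in the concept graph:

# Toy model: resistance to revising a concept is proportional to how many
# other concepts connect to it (its degree in the concept graph).
concept_graph = {
    "gravity": ["mass", "orbits", "falling", "tides", "relativity"],
    "pluto_is_a_planet": ["solar_system"],  # out on the edge: easy to revise
}

def change_resistance(concept: str) -> int:
    return len(concept_graph.get(concept, []))

print(change_resistance("gravity"))            # 5: central, hard to restructure
print(change_resistance("pluto_is_a_planet"))  # 1: peripheral, easy to change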

I ran across a review of a book today in MIT Technology Review. The book is The Change Function: Why Some New Technologies Take Off and Others Crash and Burn, by Pip Coburn. Pip seems to agree with me completely. He says technologists are all like me - change junkies. They love complicated new things that make them adapt. They just don't understand that most of their potential customers don't. He has an equation for the non-technologists:

The Change Function = f(perceived crisis vs. total perceived pain of adoption)

and there is the techie version:

Supplier-centric adoption model = f(Grove's law of 10x disruptive technology x Moore's law).

The supplier-centric model basically means that techies think people want things that are "way cool" and "much better".

I like Coburn's change function. It captures my concept succinctly. People won't change their way of thinking about things unless their current way of thinking is failing so badly that continuing to think that way would be more trouble than changing. This is true in all kinds of situations - not just scientific evolution/revolution or technology adoption.
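
Restated as a decision rule - my own toy formulation in Python, not Coburn's notation, and the numbers are made up:

# Change happens only when the perceived crisis of the status quo outweighs
# the total perceived pain of adoption.
def will_change(perceived_crisis: float, perceived_pain_of_adoption: float) -> bool:
    return perceived_crisis > perceived_pain_of_adoption

# A gadget can be 10x cooler, but if the user's current way of working
# isn't in crisis, adoption still fails:
print(will_change(perceived_crisis=2.0, perceived_pain_of_adoption=7.0))  # False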

In science, you can see this during paradigm shifts. Scientists will not abandon current theories until those theories are clearly "broken" or significantly less useful than a new theory.

This seems like an adaptive mechanism. As long as things are working for you, why change? Expending energy to reorganize your brain should be motivated by a good reason. So, the question is: why are my fellow techies and I so happy to change? Are some people naturally change agents? Do they seek out new ideas, regardless of their value, presenting them to the slow changers to evaluate?

Sunday, May 07, 2006

Models Within Models

This is only a half-baked idea, which needs more thought (a small disclaimer). Political Reasoning and Cognition by Rosenberg et al. categorizes cognitive levels as sequential, linear, and systematic. These are increasingly sophisticated views of the world that the authors apply to political reasoning. One of their hypotheses is that people operate at different cognitive levels and that this affects their political reasoning. This was (and, I think, still is) controversial, and in conflict with many social scientists who reject the idea that one person's thinking is more sophisticated than another's. Of course, I am with Rosenberg on that. The egalitarian's odd view, that we are all the same, is the sort of thinking that makes non-academics roll their eyes and gripe about political correctness. It's not science to take a position because it is "nice" and reject reality because it is "unfair".

But, back to the model. The levels are part of the "structural developmental" school of developmental psychology. They are well-developed concepts, with related behaviors. But, to me, they resemble a model that is still too high-level to be fundamental. I think there is an underlying model that interacts with other capabilities. My hypothesis is that what is really at work here is the level of abstraction of a cognitive modeling capability.

Humans observe patterns in the world and construct models based on their observations. Their models help them predict the outcomes of actions and allow them to make choices and intervene to bring about outcomes more favorable to themselves. This is not to say that every observation and every instance of model construction is deliberate and consciously intended to improve prediction and help you survive. This modeling capability is a big part of our cognition, and we use it all the time. Creatures that model all the time, building more and more accurate and predictive models, survive better. We are the descendants of those survivors. We model, model, model.

As babies, we start out observing and modeling the physical world. Over time, our models get better at working with longer time frames. People become adept at modeling and become aware of the models themselves - and are able to think about them as if they were physical objects - these are abstract concepts. As we advance in capability, we can observe and model abstract items, and the processes of abstraction themselves. This is basically a recursive process, building level on level. On this blog, I am observing and modeling the process of modeling itself. More and more sophisticated thinking requires more levels of abstraction.

Put another way: first people understand the behavior of objects, then they understand rules about them, then they understand how rules are made and how to make them, then they understand which rules are better than others and what constitutes an effective rule-making process - they become experts on rulemaking itself.
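
Since I think best in code, here is a minimal sketch of the recursion in Python - my own toy construction, with invented names, not anyone's formal theory. A Model can take another Model as its subject, so each level of nesting is one more level of abstraction:

class Model:
    def __init__(self, subject):
        self.subject = subject  # a thing in the world, or another Model

    def abstraction_level(self):
        # A model of X sits one level above X; physical things sit at level 0.
        if isinstance(self.subject, Model):
            return 1 + self.subject.abstraction_level()
        return 1

world = "falling apples"                  # the physical world
physics = Model(world)                    # level 1: modeling the world
philosophy = Model(physics)               # level 2: modeling the model
this_blog_post = Model(philosophy)        # level 3: modeling the modeling

print(this_blog_post.abstraction_level())  # 3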

Religion provides an example. To a baby, religion has no meaning - it is too abstract. A school-age child can learn the rules of a religion and understand how to apply them. A more sophisticated person can understand the morality and values behind the rules and apply them more effectively to special cases. Another level of sophistication beyond that, and a person can change the rules to fit evolving circumstances. Beyond that, a person can create a new religion, defining the rules that its followers would adhere to. There are even more levels of sophistication - someone could become an expert at religion-defining, with a model of how religions work in general.

Thursday, May 04, 2006

Transhuman Conundrums

A meeting at work today started off with a discussion of Kurzweil and the transhumanists. If you are not familiar with transhumanism, Kurzweil's book is an interesting read. The basic idea of transhumanism is that a time will come in the near future when computers are powerful enough to emulate the human mind. At that point, you could copy the "data" in a person's brain and load it into a computer and start the emulation. "You" would now be alive in the computer.

I was quite taken with this idea and wanted to be the first to volunteer. But then my clever son pointed out that the copy would be a copy - like spawning a new process in a computer. You would die - and blink out of consciousness. Sure, your copy would live on as you, but it wouldn't actually be you. This took some of the fun away for me.
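
My son's analogy can be made literal. Here is a minimal sketch in Python (POSIX only) - just the analogy, nothing more: after os.fork() there are two processes with identical memories, each of which continues as if it were the original.

import os

memories = ["childhood", "first job", "this very thought"]

pid = os.fork()
if pid == 0:
    # The child: a perfect copy with the same memories, but a new process.
    print("copy    :", os.getpid(), memories)
else:
    # The parent: the original continues (and, one day, terminates).
    os.waitpid(pid, 0)
    print("original:", os.getpid(), memories)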

I blogged about this on my CR blog once, and a clever reader had a solution. He would replace your brain, part by part, in a gradual substitution. Then you would continue on indefinitely - wouldn't you? Hmm, maybe that would work. I'd be willing to try it!

We had this discussion at dinner again. This is a great framework for discussing many of the more vexing questions of consciousness. If I lose consciousness and wake up again - is it really the same "process" that was there before - or a new one, that has my memory and my old "hardware"? How would that be any different than the Kurzweil copy? Do we "die" when we go to sleep and wake up a new "consciousness" every day?

Clearly, the me that is thinking now is the active conscious process of the physical me. If I lose consciousness and deactivate the physical me, the newly conscious me that wakes up still thinks it is me. The Kurzweil copy will think it is me. It will be me in a very serious sense. "I" will continue on. But the me that is in this physical body will die. The copy will be as distinct from that me as you are. So, in some ways, transhumanism will not let you live forever. It will create an entity with consciousness, with your memories and, perhaps, your mental capabilities (or better), that could live forever.

What would be the advantage of this? Really, there is no advantage to you and me. We will no doubt actually die one day. But, we could grant eternal life to our clone. Even then, it is unlikely that a clone could be built that would last forever. If you shut down one clone and transferred its data to a new, improved set of hardware, it would "die" too. In fact, every time you "rebooted" it, or restored a backup of it, it would "die" and be "reborn". Hardware improvements could happen much faster than genes allow. Direct connection to the internet? Much faster clock cycles (like 1,000 or 1,000,000 times faster than people), encyclopedic memory, built-in macros? It boggles.

In some ways, it hardly matters, does it? Here I am, typing this post, with my lifetime of memories housed in an organic processor. This lifetime of memories creates me every time I awake and the organic processor starts back up. If I copied the whole set to a new physical processor every night and threw the old one away, it would still think it was "itself" when it started back up in the morning. The only reason we care so much is that our brains have a strong instinct for self-preservation. We are wired to struggle mightily to stay alive. We find the notion of the copying distressing. Try it on your friends and family. You can become persona non grata very quickly at a family reunion. I know.

This is related to the paradox of "why me?" I am sure this paradox has a formal name, but I don't know what it is. I call it "why me?" This is the paradox that causes people to exclaim in wonder "Aren't we lucky that we are on the Earth - a planet that is so amenable to human life?" A silly statement indeed, because anyone who is alive would be on an amenable planet, ipso facto. Another example is "I am so lucky to be the only survivor - why me?" Someone has to be the last survivor. They were not chosen, they just are.

You are reading this because you are your brain's consciousness. Your brain's consciousness is all you are. You are not a separate thing that just happened to lodge in your brain - "picking it" in some way. You are your brain's consciousness trying to understand itself. Quite a struggle, isn't it?

Tuesday, May 02, 2006

Connecting Piaget, Rosenberg (et al.) and Judith Rich Harris

It seemed to me that something was missing in Piaget's concepts, as described by Rosenberg. There was something too idealistic about it all - with the purpose of cognitive development being the achievement of cooperation. It's missing competition. I've always enjoyed a good debate about cooperation vs. competition. I've come to understand them as the yin and yang of things. Both are necessary for survival. Neither is "better". But both have their proponents - and their detractors. As if you could eliminate one or the other!

In Judith Rich Harris' model, there is a place for both. Cooperation is part of the socialization subsystem, developed to make you function as a group member. But competition is there, too - in the status subsystem.

I am not sure how you would work competition into Piaget's concepts. Perhaps I can make an attempt. Piaget has humans developing ever-increasing levels of cognitive capability, going at each stage through a transformation from egocentricity to sociocentricity. Rosenberg et al. characterize this as culminating in effective cooperation. But wouldn't it also culminate in effective competition? The models built at each stage increase the capacity for both cooperation and competition.

I am still in the middle of Political Reasoning and Cognition, and it has raised another question - actually several. The authors cite studies by Ward and Lane, who interviewed subjects to determine their cognitive level and its effect on their political beliefs. The levels seem to be discrete, rather than continuous. You are at the "low" level or the "high" one. This is an interesting idea, and seems possible to me. I am sure Piagetians have an opinion on this.

This book was written in 1988. The ideas seem promising, but it's not clear that they've had a major impact on political science. Rosenberg's current program is a very interesting one, a cross-disciplinary political science and psychology program. He describes the issues:
Interdisciplinary programs are difficult to design. Crossing intellectual boundaries creates unusual demands both with regard to the theoretical definition of the subject matter and the empirical methods which may be used to examine it. Typically graduate programs adopt the perspective of either political science or psychology. Within this context they then tend to emphasize either theoretical issues or empirical ones.

Somewhat daunting from my perspective, and exactly the reason I shied away from academia. Crossing intellectual boundaries seems necessary to me.

Monday, May 01, 2006

The Stuff of Science

One of my fantasy project ideas - one that could keep a small army of graduate students busy - and potentially be a tool for the human race - is what I've been calling CLEW - the causal logic evidence web. This would be an internet-based sort of scientific-hypothesis Wikipedia: a networked hypothesis space that links to the evidence behind each hypothesis. I've thought about this a fair amount. For a while, I thought I had figured out how to do it. But then I began to wonder if science and hypotheses were really so interchangeable. This is when I began to think I needed to do more groundwork.

Here's the question: What is the stuff of science? In 5th grade, they taught you all about hypotheses, evidence, double-blind experiments and such. If you are a scientist, you might have listened. If not, you probably were annoyed and daydreamed.

I think hypotheses are an artifact of the human brain - and not the real stuff of science. Humans don't think in "systems" - they think more simply. Science is really a model of the system of the world. If we make the mistake of limiting science to techniques that are easy for humans to think about, then we will really limit our understanding and exploration.

Most science is posed in one of two forms: mathematical or cause/effect relationships. Both are pretty compact and easily communicated to someone who understands math or the vocabulary of their field. But, systems require more than math or cause/effect to describe them effectively. For example, a car is a system of sorts. You can describe many things about a car with math and cause/effect relationships. Acceleration is math. Gas mileage is math. "Turning the key starts the engine" is cause/effect. You will know a lot about a car from these descriptions. But, it's not enough. Physics, astronomy, and chemistry lean heavily on math. But biology is big on cause/effect. My CLEW idea was born out of medicine - so, I was thinking cause/effect.

Now, rumbling around in the back of my mind is all my experience as a computer scientist - especially as a computer language specialist. I invented languages in my early career. It was lots of fun, and I understand a lot about computer languages at a theoretical level. There are very sparse ways to describe computational capability, like Turing's. But, in practice, people program with more descriptive and powerful tools. If I want to specify a system - like the system of the world - I can do this most effectively with a slightly larger toolbox. Functions are there - the math. Cause/effect forms are there - conditional execution like "if ... then ... else". But there is also sequence and iteration. There are objects, attributes, and inheritance. There are variables and constants - the "x" of algebra and the numbers that the "x" can really be. The "mother" of "mother knows best" and your actual mother.

I've programmed in languages that are mostly math (APL) and languages that are mostly cause/effect (CLIPS, Prolog). These languages are pretty limited at describing things. All commonly used languages have all the features; you need them to effectively simulate the real world. And you must understand that almost all programming has an element of simulation or description. Even a banking system is simulating the physical banking process. You have a logical account - ultimately little magnetized spots on a hard drive platter somewhere - and can withdraw abstracted money from it and deposit abstracted money to it. These operations are simulations that mimic the process of putting money in a box and taking it out - the actions that would have happened 300 years ago.
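
Here is how even a trivial banking simulation uses the whole toolbox - a toy sketch of my own in Python, not any real banking system: objects and attributes, functions (the math), cause/effect (conditionals), sequence, and iteration.

class Account:                        # an object with attributes
    def __init__(self, owner, balance=0.0):
        self.owner = owner
        self.balance = balance        # the logical "money in the box"

    def deposit(self, amount):        # a function: the math is addition
        self.balance += amount

    def withdraw(self, amount):       # cause/effect: if funds exist, pay out
        if amount <= self.balance:
            self.balance -= amount
            return amount
        return 0.0                    # insufficient funds: nothing happens

# Sequence and iteration: replay a day's transactions in order.
account = Account("Alice", 100.0)
for op, amount in [("deposit", 50.0), ("withdraw", 30.0), ("withdraw", 500.0)]:
    if op == "deposit":
        account.deposit(amount)
    else:
        account.withdraw(amount)

print(account.owner, account.balance)  # Alice 120.0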

So, I am thinking about whether science is better described with a richer language - one that has all the features of a good programming language. A good test of this would be the Piagetian concepts described in the book I am reading (Political Reasoning and Cognition: A Piagetian View). There is sequence there - the four levels. There is iteration over each conceptual focus as it transitions to the new level. There are "objects" - the concepts. Could you describe Piaget's concepts with a "science language"?
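
As a first attempt, here is a hedged sketch in Python. The four stage names are Piaget's standard ones, but the advancement rule is my own illustrative invention, not a claim about the actual theory:

from dataclasses import dataclass

@dataclass
class Stage:          # an "object": one level in the developmental sequence
    name: str
    order: int

# Sequence: the four levels, in order.
STAGES = [
    Stage("sensorimotor", 1),
    Stage("preoperational", 2),
    Stage("concrete operational", 3),
    Stage("formal operational", 4),
]

def develop_one_step(focus_levels: dict) -> dict:
    # Iteration over each conceptual focus; cause/effect per focus: if it is
    # not yet at the top level, it transitions exactly one stage.
    for focus, level in focus_levels.items():
        if level < len(STAGES):
            focus_levels[focus] = level + 1
    return focus_levels

# Two conceptual foci at different levels, each advancing one level at a time.
print(develop_one_step({"number": 1, "causality": 3}))  # {'number': 2, 'causality': 4}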

Interestingly, I suspect a major objection to this approach will be its complexity. That seems funny to me to even think about. If I could describe Piaget's ideas in 3 or 4 pages of formal language, that would be far more succinct than the hundreds of pages of text they occupy now. If a system is complex, then it cannot be described with a few equations. Simplicity would be misleading - and ultimately not very useful at predicting behavior. There is no point in pretending that the simple view is adequate. So, if science is going to scale up and address complex systems, like living things, maybe we need better tools.
