On Sat, Jul 19, 2003 at 04:49:49PM -0500, Dan Minette wrote:

> This was clearly in the best interest of the Iroquois, but not the
> slaughtered tribes, nor humanity in general. Yet, it was a perfectly
> rational act, if you assume the Iroquois acted in the self interest of
> their own tribe.
Until they got slaughtered in turn by a stronger "tribe". Not so
rational, after all. Perhaps if they had cooperated with the other
tribes, they (collectively) would have been much stronger when the
Europeans arrived, and could have negotiated a peace from a position of
strength.

> Fine, but again, that misses the question at hand. The question is
> whether it is better for person X if X behaves in a given manner.
> Self interest, pretty well by definition, looks myopically at what
> benefits one person: oneself.

Then we are arguing semantics. What I mean by self-interest is what is
best for oneself in the long term. If you want, I will call that
long-term-self-interest, since I don't want to argue semantics.

> To use your language, the question at hand is "if one considers their
> own self interest only in a myopic fashion, why worry about others?"

Frequently, not much reason to. Why only consider
short-term-self-interest?

> So, are you agreeing that the progress is inconsistent with people
> acting only in their own self interest?

Yes, short-term-self-interest.

> This seems to refute the contention that cooperation is reducible to
> enlightened self interest.

No, long-term-self-interest.

> That's an interesting question, and one that would take an L5
> post...but I'm not sure how it relates to the question of whether
> morality can be shown to be derived from self interest.

I explain a huge chunk of it by "free" markets, capitalism, and the
rule of law (including property laws). Once you set up such a system,
individual greed and the long-term-self-interest of groups are forced
to overlap quite a bit. You have both competition and cooperation at
the same time. This is the most efficient system in history for
progress in our mastery of the world.

> But, no reasonable person would think that. Even if they are willing,
> what are the odds of them being in a position to do that?
> Even if you assume that the risks of death assumed by someone going
> door to door in a smoke filled building (to the point where they had
> to be hospitalized) are only 1%, one can clearly see by looking at
> the frequency of life threatening fires, the mobility of people, the
> number of people that he saved, etc., that it was not a cost
> effective strategy. If, for example, you were to have a game theory
> model with multiple scenarios during which people would either act in
> their immediate self interest, or act in a manner that helped others
> immediately and stored good will for the future, it would be a no
> brainer to run as fast as possible in this scenario. One would just
> do the numbers, and program accordingly.

No one BUT reasonable people would think that. There are many
scenarios other than fires where this behavior will come up. The group
benefits from cooperation in a huge variety of situations.

> This certainly excludes the widows and orphans problem. It also
> excludes slaughtering people and taking their land. Further, it
> excludes using military power to set up an unequal system; to
> maintain oneself in power. In short, it excludes many/most situations
> where morality comes into play.

It is a simple model. More work is required, but the results are
highly suggestive.

> From my perspective, you are so sure that faith is bad, even when it
> proves beneficial, when the benefit is tangible and measurable, it is
> still bad because it is at odds with your metaphysics.

No, it is bad because it does not prove beneficial, overall. The bad
outweighs the good.

> You and I made very different types of statements. When I say I
> believe in something, I acknowledge that there is no proof; no
> empirical basis. You claim an empirical basis for morality: it is
> the behavior that occurs when someone pursues their enlightened self
> interest, because harming others harms oneself.
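As an aside, the game-theory simulation described above (agents
choosing each round between immediate self-interest and costly helping
that banks goodwill) is easy to sketch. This is only a toy
illustration of that idea; the payoff numbers, round count, and
strategy names below are my own assumptions, not anything from this
thread.

```python
# Toy iterated "help or defect" game. All payoff values are assumed
# for illustration: helping costs the helper 1 now and gives the other
# agent 3, so mutual reciprocation pays off over repeated meetings.
HELP_COST = 1
HELP_BENEFIT = 3

def play(strategy_a, strategy_b, rounds=100):
    """Play two strategies against each other; return total payoffs."""
    score_a = score_b = 0
    last_a = last_b = True  # assume initial goodwill on first meeting
    for _ in range(rounds):
        a = strategy_a(last_b)  # each strategy sees the other's last move
        b = strategy_b(last_a)
        if a:
            score_a -= HELP_COST
            score_b += HELP_BENEFIT
        if b:
            score_b -= HELP_COST
            score_a += HELP_BENEFIT
        last_a, last_b = a, b
    return score_a, score_b

# Short-term-self-interest: never pay the cost of helping.
myopic = lambda other_helped_last: False
# Long-term-self-interest: help whoever helped you last (tit for tat).
reciprocator = lambda other_helped_last: other_helped_last

print(play(reciprocator, reciprocator))  # -> (200, 200)
print(play(myopic, myopic))              # -> (0, 0)
```

With these numbers, mutual reciprocators each net +2 per round while
mutual defectors net nothing, which is the "no brainer" the numbers
are supposed to show.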
> So, where we differ is that you believe a number of things that are
> not derivable from the empirical;

Wrong, they are verifiable. Just not easily. This is far better than
your baloney, which is DESIGNED to be unverifiable. If someone comes
along with a system that is better than mine (one example: if it is
more easily verifiable), then I am certainly flexible.

> Indeed, what your posts indicate as your basic metaphysical position:
> strong realism, needs a lot of contortions to be at all consistent
> with experimental results of modern physics collected over the last
> century.

No contortions are required. You seem to be confusing the mental
gymnastics required for your position with that of others.

> Right now the best realistic interpretation of modern physics assumes
> that there is a rich infinity of inherently undetectable universes
> that contain a rich infinity of variations of you and me (as well as
> an even richer infinity that don't) created every nanosecond.

This is irrelevant to any useful sort of experiment. It is a waste of
time unless you can put it into a falsifiable hypothesis.

> Why not change metaphysics instead of doing this type of contortion?

You are projecting your own contortions again. I am not making any.

> They did not act in enlightened self interest.

Yes, they did. Long-term-self-interest.

> Sure, but you cannot control how others react.

Yes, you can. Not perfectly, but to a useful degree.

> No, the Minds are the foundation of the society. The odds on them
> existing are slim and none, with slim walking out the door.

The chance of a very advanced artificial intelligence being built some
day is nearly 100%. You are letting your irrational beliefs cloud your
thinking.

> One would have to suppose that there are straightforwardly tappable
> laws of physics that are undetectable over roughly 40 orders of
> magnitude.

I do not take it so literally. I ignored the numbers given in, was it
Consider Phlebas?
We will certainly be able someday to build machines as good as the
human brain, and quite possibly find a way to squeeze MORE
intelligence per unit mass or volume than the human brain. And we
could always make the mass or volume much larger than the brain's. In
short, it is quite likely that someday something much more powerful
than the brain will be built.

> Yet, you appear to believe that science is about the truth.

No. Science is about testing knowledge by experiment.

-- 
"Erik Reuter" <[EMAIL PROTECTED]>   http://www.erikreuter.net/

_______________________________________________
http://www.mccmedia.com/mailman/listinfo/brin-l
