Friday, December 23, 2005

The Relationship of Consciousness to Rationality, Responsibility, and Will

As an immediate follow-up to my discussion of scenario formulation and selection, in addressing Blake’s emphases on ‘rational thought’ and ‘moral responsibility,’ I think it may be useful to recognize, and perhaps put in proper place, the role of consciousness. Humans, apparently uniquely, have a cognitive capacity for recursion that seems important for both language and consciousness. If I understand correctly, other apes have some capacity for symbolic communication, but not the ability to talk about what Alice says about what Bob is talking about (this sort of multi-leveled ‘nesting’ is what I mean by ‘recursion’). Similarly, it may be that apes ‘think’ in some immediate sense, while not having the recursive ability to think about their thinking—the phenomenon we call consciousness.
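To make the ‘nesting’ concrete, here is a toy sketch of recursively embedded thoughts; the names and the structure are purely illustrative, not a claim about how minds actually represent such things.

```python
# A toy representation of multi-leveled 'nesting': a thought whose content
# may itself be another thought, to arbitrary depth. Purely illustrative.
from dataclasses import dataclass
from typing import Union

@dataclass
class Thought:
    thinker: str
    content: Union[str, "Thought"]  # the content can itself be a Thought

def depth(t: Thought) -> int:
    """How many levels of 'thinking about thinking' are nested here?"""
    return 1 + depth(t.content) if isinstance(t.content, Thought) else 1

# Carol talking about what Alice says about what Bob is talking about:
nested = Thought("Carol", Thought("Alice", Thought("Bob", "the storm")))
print(depth(nested))  # 3
```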

Now, Blake has emphasized the term ‘rational thought.’ I am not sure I understand the motivation behind the emphasis, and suspect that behind it lies an undue weighting of consciousness. Consider again the elaboration and assessment of, and selection among, future scenarios by an expert chess program. Would Blake call it rational or irrational? I would prefer to call it ‘rational’ (centuries of philosophical tradition be damned if necessary!). For me, and I suspect in contrast to Blake, rationality is a matter of accuracy, reliability, logical rigor, and so on, rather than a conscious feeling of control (see below on mental illness). In terms of freedom, what distinguishes chess programs from humans is less rationality (which I would say they share), or even consciousness (which humans have and chess programs lack), than the auto-rewritability of humans’ rules and values (see the immediately preceding post).
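For concreteness, here is a minimal sketch of what such formulation, assessment, and selection among future scenarios looks like as plain look-ahead search; the game hooks moves, apply_move, and evaluate are hypothetical placeholders, not any particular chess engine.

```python
# A minimal sketch of scenario formulation and selection: enumerate legal
# moves, look ahead a few plies, score the resulting positions, pick the best.
# The hooks `moves`, `apply_move`, and `evaluate` are hypothetical placeholders.
def best_move(state, depth, maximizing, moves, apply_move, evaluate):
    """Plain minimax: return (score, move) for the side to play."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best, best_score = None, float("-inf") if maximizing else float("inf")
    for m in options:
        score, _ = best_move(apply_move(state, m), depth - 1,
                             not maximizing, moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best = score, m
    return best_score, best
```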

On to ‘moral responsibility,’ which I would say involves the initiation of, regulation of the nature of, and termination of social relationships (note this description encompasses even judgment by God and assignment to kingdoms of glory). If I declare expectations to someone, spelling out what will happen to our relationship based on various behaviors she might exhibit, and she can therefore (like an expert chess program) accurately formulate and assess future scenarios, then she is ‘responsible’—she can respond to these expectations according to her values. (Similarly but more primitively, in an airplane on autopilot we may say that the onboard computer is ‘responsible’ for the maneuvering of the plane.) Along with this, I ‘hold her responsible’ by regulating my participation in the relationship in accordance with the expectations I laid down. Now if someone is mentally ill, they are incapable of accurate scenario formulation and/or assessment, and therefore cannot respond accurately—they are not ‘responsible.’ Likewise, recognizing this incapacity, I may not ‘hold them responsible’—lay down expectations or regulate the relationship in the same manner as with someone sane.

Now, there are degrees of responsibility, and we may choose to distinguish the higher degrees that require consciousness by the name moral responsibility. In the documentary March of the Penguins, the colony does not allow a mother penguin who has lost her egg or offspring to steal that of another. The enforcement of the relationship would be the same whether or not the mother penguin’s behavior results from faulty brain wiring (‘mental illness’). This is because penguins, not having the recursive capacity of consciousness, cannot think about other penguins’ thinking, or assess their assessments. In contrast, the regulation of human (and presumably divine) relationships proceeds in part on the basis of thinking about and assessing the thinking and assessments of others—a recursive activity requiring consciousness. None of these features of responsibility, however—whether amoral and unconscious, or moral and conscious—seem to require the absence of determinism.

Finally, on the relationship of consciousness to ‘free will.’ Blake tries to evade causal determinism, while disclaiming “mere indeterminism,” by saying that “What accounts for why an agent chooses A rather than B is that the agent has a power to agent cause the decision that is inherent [in] the very fact of having a will that is free.” But it seems to me that this statement is meaningless without an operationally useful notion of a ‘will.’ I think it would be useful and meaningful to use the term ‘will’ to describe a situation in which the assessment of future scenarios is itself being self-assessed—in colloquial terms, that a conscious mind is ‘observing’ its own deliberations. While we do not presently understand how consciousness arises, I do not know of anything precluding the possibility that this entire process of our brain or eternal intelligence monitoring its own assessments proceeds entirely under causal determinism.
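As a toy illustration of that last point, here is a sketch in which a deterministic scoring of options is itself monitored by a second, equally deterministic level; the names and the ‘values’ check are illustrative assumptions, not a model of any actual mind.

```python
# First-order deliberation: score each future scenario deterministically.
def assess(options, evaluate):
    return {opt: evaluate(opt) for opt in options}

# Second-order monitoring: the deliberation itself is observed and checked
# against the agent's standing values -- still a fully deterministic process.
def self_assess(scores, values):
    choice = max(scores, key=scores.get)
    return {"choice": choice, "consistent_with_values": values(choice)}

# Example run; scenario scores and values are invented for illustration.
scores = assess(["study", "procrastinate"],
                evaluate=lambda o: {"study": 8, "procrastinate": 3}[o])
print(self_assess(scores, values=lambda o: o == "study"))
# {'choice': 'study', 'consistent_with_values': True}
```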

21 Comments:

Oh Christian, how far you've come since this comment.

It sounds like you have many of the same problems with Blake's position that I do. It seems that he wants to have his cake (causal determinism) and eat it (LFW) too. Even Geoff seems to get the problem to a certain degree, for while he identifies his position with Blake's, he's never protested that he in reality accepts full event causality. While I still need to think about what Blake's argument is really doing, I can't shake the suspicion that some sort of hand waving is trying to create counter-causal free will from the fully deterministic material underpinnings. This doesn't sit well with me at all.

Comment by Jeffrey Giliam | 12/23/2005 07:10:00 PM  

Wow. I've skimmed over some of your recent posts, meaning to come back to them and dive into them. Some days I have more . . . cognitive attention in my bank than others. Capacity to focus on things that require . . . wrapping one's brain around a concept, holding it, and following the trains of thought, process, and discussion that the writer proceeds with.

Today I happened to have enough in my "bank". I hate feeling so . . . mentally lowered in capacity. I used to love reading and thinking and pondering all sorts of intellectual things, subjects, etc. Anyway, hopefully I'm regaining some of that.

I have a post, which I wrote in September as a new medication was starting to kick in, where I was trying to describe the indescribable sense of expanding capacity of self, capacity to self-activate and/or direct my will, and other things. I reflected a bit more on this recently in a comment on this thread at BCC, comment 39.

I must say it is a VERY unique and wondrous experience to feel these "senses" of capacity, of self, of capability, ability, direction of will, and other things, expand or even FEEL them for the first time, in some things, in some regards. I will grasp on to any happiness I can find in my situation, and this "blossoming" of mind, mental functioning, emotional functioning, functional functioning (if that makes any sense; the capacities of self which most everyone takes for granted), is . . . a unique and deeply profound experience. I am still assessing it, as I was on too high a dose for a month or two and thus was rather woozy. Anyway, it is . . . beyond description, but I try to describe it anyway. Lol!

I also speak to some issues of taking responsibility for myself (even though at times it takes returning to a more . . . rational? state of mind before I can do so, but I then do), on this thread, comment 39, and to a lesser extent (more physically based), comment 55. Although I think it is a different sort of responsibility, at least in some sense, than the one you discuss in this post.

Anyway, I'm going to go poke around the previous threads, here. Spend my "mental" currency as I have it, over the next week or two.

Thanks for this post!

Oh yeah, if you didn't know, I'm bipolar with a variety of anxiety disorders. I speak about my experiences a lot on my blog; some occasionally well-written posts (I used to be a great writer; some glimmers of it are returning to me, at times), and some rambly stuff, in my more unfocused moods, but anyway, I speak on how a lot of these sorts of issues feel from my point of view, from time to time.

Comment by sarebear | 12/23/2005 07:20:00 PM  

Jeffrey, wow, blast from the past, seems like forever ago even though it was just a few months. Yes, that exchange makes a lot more sense to me now... Now I can see where some of these thoughts were planted. 

Comment by Christian Y. Cardall | 12/23/2005 10:59:00 PM  

sarebear, a very interesting perspective. A lot of interesting things to think about.  

Comment by Christian Y. Cardall | 12/23/2005 11:36:00 PM  

Thank you. I know you are probably posting on this sort of thing from a more philosophical point of view, but since I recently experienced the very, to me, new and unique awareness and growth inside me of feeling, for the first time in my life, some aspects of focus of will, fullness of self, and other things implicit in your discussion, I thought I'd try to describe what is possibly effectually indescribable. But I try nonetheless. Maybe I'm too obsessed with trying to convey my experiences and perspective, but hey, if it gives me a sense of purpose in the purposelessness storms that batter 'round me, then what the hey. 

Comment by sarebear | 12/24/2005 12:23:00 AM  

Jeff: It seems that he wants to have his cake (causal determinism) and eat it (LFW) too. Even Geoff seems to get the problem to a certain degree 

Interestingly, that is the same thing I (and apparently Blake) have been thinking about your and Christian's (and Clark's) positions.

I have been tied up recently but after catching up on discussions tonight I am very interested in Blake's recent comments on "agent causal libertarianism". I think he might be on to something with that explanation of all of our views.

More next week after I too read up on those links Blake provided. 

Comment by Geoff J | 12/25/2005 01:18:00 AM  

Actually, "auto-rewriteability" of rules is something computers are capable of. I've only seen it done at the tic-tac-toe level, not the chess level, but it's not a uniquely human capability. The program is able to analyze how effective the rules it's been following have been and modify them accordingly. 

Comment by Wm Jas | 12/26/2005 10:30:00 AM  

Wm: which rules? The rules governing the underlying software or the rules that the software follows to generate the game?  

Comment by Blake | 12/26/2005 04:08:00 PM  

Geoff and Blake: I find absurd the claim that under causal determinism there can be no moral responsibility. Even believing in causal determinism, there are two practical reasons why I hold others responsible (and do not begrudge others doing the same to me): (1) The laying out of expectations and administration of associated consequences becomes a causal feed into others' deterministic decision-making, so why not get into this deterministic game and see if outcomes can be successfully influenced? (2) If such influence does not result in suitable outcomes, measures can be taken to keep separate from individuals exhibiting unacceptable behavior.

I have skimmed the articles on agent causation Blake linked to on the other thread, and they seem obsessed with the question of the existence of moral responsibility as some sort of detached, independent, absolute principle. This approach (as indeed the entire libertarian free will agenda) seems inspired by the problems and outlook of traditional Christianity that result from ex nihilo creation of souls. I find my operational, pragmatic, naturalistic account (including a Mormon variety of naturalism if we must be religious) adequate to the explanatory task, and find nothing of practical utility lost; hence my interest in and patience for the construction of elaborate technical arguments aimed at reaching transcendent absolutes dwindle quickly...

Comment by Christian Y. Cardall | 12/26/2005 08:11:00 PM  

Wm Jas, that's an interesting example that of course appeals to the naturalistic (non-religious) track of my thinking.

Of course in my haste I neglected to mention that, in terms of autonomy or freedom, animals and humans also differ from computer programs in their ability to regulate and sustain the homeostasis of the physical substrate in which their logical operations are carried out. As Damasio argues, it seems plausible that this is closely related to other features of human existence that we cherish---feelings and emotions. (Though one sees lines beginning to be blurred here too---my laptop turns its fan on and off to regulate temperature---when it begins to whir it is almost as if it is emoting in distress... ;-> )
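In the same half-joking spirit, the fan is just a feedback loop monitoring the machine's own substrate; a toy sketch, with thresholds and heating/cooling rates invented for illustration:

```python
# Toy homeostatic regulation: the system monitors a variable of its own
# substrate (temperature) and acts to keep it within bounds.
# The thresholds and rates below are invented.
def regulate(temp, fan_on, high=80.0, low=70.0):
    """Simple hysteresis: fan turns on above `high`, off below `low`."""
    if temp > high:
        return True
    if temp < low:
        return False
    return fan_on

temp, fan_on = 75.0, False
for _ in range(20):
    temp += -2.0 if fan_on else 1.5  # fan cools; idle load warms
    fan_on = regulate(temp, fan_on)
    print(round(temp, 1), "fan on" if fan_on else "fan off")
```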

Comment by Christian Y. Cardall | 12/26/2005 08:22:00 PM  

Christian: There are multiple problems with your response to morality and free will (or determinism). First, if I ought to do something, it follows that I must be able to do it in the sense that it is genuinely open to me to do it given all of the circumstances that obtain. You must reject this ought-implies-can principle. To show this point (and the failure of your position), assume that scientists want you to steal a Mars bar. To do so they implant electrodes in your brain to create in you a desire that, given your character, deterministically requires you to steal the Mars bar. Are you morally responsible for stealing the Mars bar? Well, obviously not, because stealing the Mars bar was beyond your control: you didn't have control over the causes that led you to do so.

Now replace these scientists with the causal laws that lead you to have the desires you do that, given your formed character, result in your stealing the Mars bar (given determinism). The deterministic laws play the same role as the scientists. However, would your suggestions make sense in this scenario? First of all, you suggest that you are still entitled to hold others responsible because you could then become a causal feed into their education. This response leads to the famous "philosopher outside the causal box" scenario, in which what you suggest only makes sense if you are yourself outside the causal chain, so that what you want to teach isn't just the result of prior causes. Otherwise, you yourself are just a causal result as well, and what you want to teach is only what the causes dictate (without any rationality to them).

Further, teaching someone is not the same as holding someone responsible. I think there is a missing step in what you say, i.e., it is OK to punish someone because punishment becomes a causal link in the causal chain. But you could teach a monkey or dog in the same way without holding them responsible. I suggest that, like Jeff, you have an illusory sense of moral responsibility. You administer punishment not because someone is deserving of punishment, but because it is a way of behavioristically or operantly conditioning their behavior. Who operantly conditioned you to give this teaching, and what makes you think you have the right to choose the operant conditioning?

So what you suggest isn't a response to an argument about moral responsibility, but about why corrective punishment is OK without morality (a lot of determinists take this tack, and it shows that you really don't have a sense of someone being morally responsible and thus being held morally accountable). You don't punish someone because they are accountable, but because it will create a world more to your liking (and less to the person being punished). I have noticed that determinists like you and Jeff are real short on accountability and long on talking about forgiveness -- a forgiveness that is never necessary because no one is really ever accountable!

So the bottom line is that your response doesn't require any morality; it just requires a program of operant conditioning so that you can get the desired result -- a result that you desire for reasons also beyond your control, because they are deterministically caused in you. Would you ever accept punishing or praising someone because they are accountable for what they did, rather than merely as a means of conditioning their behavior?

Comment by Blake | 12/26/2005 09:21:00 PM  

Blake, as far as I can tell, God's (or society's) promises of rewards and punishments are aimed at motivating particular behaviors---behaviors that, according to God's knowledge (or society's experience), lead to peaceful societies---Zion, in scriptural terms. God's (or society's) subsequent implementation of promised consequences is either by way of fostering subsequent learning, or, when the limits of an individual's learning have been reached with unsatisfactory and unchangeable results, enforced separation. If God himself proceeds this way, why denigrate this process as 'mere' conditioning?

Because God (or society) does not know beforehand the limits of any particular individual's learning---whether because of artificial electrodes, the nature of their uncreated eternal intelligence, or any other internal constraint beyond God's (or society's) control---God (or society) operates under the presumption  that learning can proceed to the level requisite for God's (or society's) companionship. To the extent this presumption is proved false, separation is enforced.  

Comment by Christian Y. Cardall | 12/26/2005 10:48:00 PM  

Blake: I don't understand the distinction between "the rules governing the underlying software" and "the rules the software follows."

The program rewrites itself as it goes, modifying the algorithm that determines which moves it makes. This is the only part of the program that gets modified -- the rules of the game stay the same, as do the procedures for displaying the gameboard on the screen, and so on. But this is for practical reasons only -- because we want the program to play tic-tac-toe, not do something else. In theory you could write a program capable of modifying every part of itself, including modifying its capacity for self-modification. 

Comment by Wm Jas | 12/26/2005 11:07:00 PM  

Wm: What we do is to give the algorithm a progressive formula -- it doesn't really modify itself.  

Comment by Wm | 12/27/2005 12:07:00 AM  

Christian: My point is that you don't have a morality at all but an operant conditioning program. It isn't like we have an obligation to avoid hurting another; it is just that we want peace, so we find an instrumental way to institute it -- as I understand what you are now saying. However, why is peace something to be sought? Do we or do we not have moral obligation in your view, rather than just instrumental ways of achieving desired ends?

Comment by Blake | 12/27/2005 12:10:00 AM  

Blake: Once again you speak of "morality" and "moral obligation" as if they were ethereal absolute principles floating on their own out there somewhere, independent of anyone's mind or being. (Maybe this is what philosophers would call transcendent principles?) As I hinted at above, unless I can be shown why such is necessary or useful, I don't have much sympathy for or interest in this kind of approach.

Why not instead ground "morality" in God's being, defining it as that behavior which is desirable to him (or in an earthbound approach, desirable by consensus to some social group)---behaviors that, by God's knowledge (or society's experience), lead to a Zion society? With Zion as the desired end, what is wrong with or inaccurate about describing the plan of salvation, and God's actions in its execution, as "instrumental ways of achieving desired ends"?  

Comment by Christian Y. Cardall | 12/27/2005 08:25:00 AM  

Christian: No, I don't talk as if moral principles are "just out there" independent of us, but I do believe that they are not merely subjective preference and not merely based on social contracts such that we could change them by agreement. Moral principles arise because of the type of eternal nature we have, the type of potential we have, and the kind of relationships (loving ones) that actualize our nature to realize our potential to be as God is. See my discussion of moral meta-ethics at http://www.fairlds.org/apol/TNMC/TNMC06.html

So while I think that you are on the right track, there are still moral obligations that obtain between us that cannot be reduced to mere social contracts; violation of them merits disapprobation (blame) and keeping them merits praise. The problem is that you treat moral principles as if they were merely a tool for placing another link in the chain of causation, and there is no real moral obligation at all. Now I don't believe that you really believe what you argue -- for this last post suggests that you would accept the same kinds of grounds of moral obligation as I do.

However, let me say that moral obligation doesn't exist merely as an agreement, for there are moral obligations whether we assent to them or not. It happens to be wrong to torture little children and there is no agreement, no mere social contract that could make it otherwise. In other words, Jeff's acceptance of the very weak "morality" proposed by Dennett and others is completely inadequate for someone who doesn't believe that "morality" is mere social convention. There are a number of talks by GAs asserting that there are moral absolutes and not the mere moral relativity you appear to me to accept -- if I have properly understood you.

So it would be nice if you answered some of my questions -- they are designed to get at whether you believe in any moral principles or merely social convention as the basis of morality. Your determinist response as to the reason for punishment, to put another link in the causal chain to get the results you desire, assumes that there is no real moral obligation at all. As I understand your argument, we don't punish or praise someone because they deserve punishment or praise, but merely as a link in the causal chain to achieve an end desired by God or some group -- and these principles could change if God changed his mind or the group changed its collective mind.

Now let me add that because moral principles only arise in relationships between persons on my view, they are not person-independent, but neither are they merely personal whim and agreement as you seem to think. So there are absolute moral principles, but they are not ethereal or "just out there" as you assert of my position.

So let me ask this, why is a Zion society desirable? Do we have some obligation to bring it about or is it merely something that we could reject (and adopt hatred and violence as our way of being and then that would be "good") and we violate no moral duties at all in doing so?

Finally, it seems rather clear that God is not putting in place the "instrumental ways of achieving his ends" as the causal conditions that will causally determine us to achieve Zion as you suggest -- if I have properly understood you. He leaves it up to us and not to the causal conditions that either he institutes or just happen to exist. There seems to be a fairly strong assumption of Calvinism in your views no matter how hard you verbally try to deny it. I believe that God does indeed put in place conditions that will allow us to choose to grow to be like him, or to reject him, but it is not merely by placing causal links in a chain that causally determine us to do as he desires as you have argued here.  

Comment by Blake | 12/27/2005 11:53:00 AM  

Blake, I've read your comment and skimmed your FAIR article and am thinking about my response, but have to leave the computer now and probably won't be able to respond until later this evening. 

Comment by Christian Y. Cardall | 12/27/2005 04:23:00 PM  

Christian & Jeff: I believe that the discussion of meta-ethics is unfruitful because our views of meta-ethics differs so greatly. However, perhaps you could discuss a response to the examples I have given about the scientists who can cause us to have the desires and springs of motivation that we do that lead to our "decisions" that I discussed above. This is a case where the conditions of causal determinist choice obtain but we are clearly not morally responsible for what we do. So I see it as a counter-example to the assertion that we can be causally determined and also morally accountable. Perhaps this is a more fruitful way to approach the issue rather than to discuss meta-ethics (and let me stress that we may well not disagree on what is in fact valuable or what is ethical, we just don't have any common ground on what grounds ethcis).  

Comment by Blake | 12/29/2005 10:55:00 AM  

Jeff & Christian: There were some comments on another blog that I like to govern civil discussion of issues on which we disagree. Since I am likely the main offender, I thought it may be useful to lay it out so that you can call me on it when I don't comply:

What I would suggest ... is that in a situation such as you describe, in which we disagree, one would be civil, give one's opponent the benefit of the doubt (after all, we might have undetected errors in our own reasoning), and go over the arguments together. This should lead to one of several results (not necessarily an exhaustive list):

1) We cannot agree on our premises, and no further progress is possible.

2) We find a place where, given our premises, your reasoning is at fault.

3) We come to a place where my reasoning is at fault.

4) We follow the argument as far as it will take us, but the result is inconclusive.

If we arrive at 2), I can still decide not to beat you over the head with it, out of a general sense of decency. But I am at that point, I think, entitled to consider you unreasonable.
 

Comment by Blake | 12/29/2005 11:04:00 AM  

Blake, I also desire that discussions remain civil, and would hope they proceed in such a way as to avoid bad feelings. I also agree that there come points when further discussion becomes unfruitful. It's not clear to me that we've reached that point yet, however.

I agree in principle with your outlined points (1)-(4), but would point out that in practice interactive argument does not typically follow the pattern linearly, mostly because premises are often (unintentionally) hidden or implicit.

I am interested in continuing the discussion and will work on it as I can through the day. I am a rather ponderous thinker and writer, and am not on vacation, so I apologize if my responses seem to dribble out slowly or if at some point I seem to be unresponsive. It's not because I have bad feelings or anything. 

Comment by Christian Y. Cardall | 12/29/2005 11:36:00 AM  
