Wednesday, April 29, 2015

Our Empirical Education

I have had the privilege of studying at four different post-secondary institutions in Canada over the last 14 years. Two were private institutions, and two public. Redeemer University College (Ancaster, Ontario), a private liberal arts institution, is my undergraduate alma mater. From there I went to do a Master's in History at McMaster University (Hamilton, Ontario) and a Master's in Philosophy at the Institute for Christian Studies (Toronto, Ontario), another private institution, which was closely associated with the Toronto School of Theology at the University of Toronto. I am presently finishing up a Doctorate in Religious Studies at McGill University (Montreal, Quebec).

I did not consciously set out to get where I am today. I made choices when I needed to make choices. I walked through doors that opened. In retrospect, none of the decisions along the way look very smart. My undergrad education was too expensive. My graduate education has been perpetuated by dismal economic prospects elsewhere. In the succession of moments, though, I cannot claim to have made any particularly grievous decisions. Nor do I have any obvious regrets. The privilege of pursuing higher education for as long as I have is a rare opportunity for a farm kid like myself. Just 40 or 50 years ago (and for the remainder of human history before that) this would have been a near impossibility.

My path appears an entirely predictable set of steps through Canadian post-secondary institutions. The steps even make sense in geographical terms, starting out from the small town of Ancaster, down the 'mountain' into the larger city of Hamilton, then down the QEW to Toronto, and finally down the 401 to Montreal. The one thing that strikes me, looking back over the past number of years, is the fact that the public and private institutions are all cut from the same cloth. Individual professors and the content of the courses, especially at the private institutions, could be quite diverse--or quite homogeneous, depending on your point of reference. But the essential 'structure' of higher education hardly varied at all. With the exception of the Institute for Christian Studies, which was dedicated solely to the study of philosophy, the institutions of higher education are organized according to faculties, departments, and fields of study. Depending on the size of the institution in question, the faculties of the arts or the sciences, for example, are subdivided into departments of English, History, Philosophy, and Social Studies, or Physics, Chemistry, Biology, and Environmental Studies. Within each of the departmental sub-divisions, individual professors specialize in their particular fields of study, say, 19th century English literature, late medieval nominalism, personal earning patterns in developed economies, the high energy creation of exotic particles, or the chemical composition of crystal formations, and so on. The world is divided neatly into tiny boxes, in which different professors ply their trade. These are regarded as essentially the same--as fields of study--despite superficial differences.

Why should this be the case? The structure of modern higher education is organized around the basic principle of empirical specialization. Today everyone in higher education is an empiricist--even, and especially, the vocal critics of empiricism, who must work in their particular fields of study, with their particular specializations. It is the water that an academic must swim in, without which the academic would be a proverbial fish out of water--which is to say, not in academia, but perhaps engaged in some much more practical, productive endeavour.

There is, of course, a certain inevitability to specialization. This is not to be denied, nor should it be entirely rejected. There are no Renaissance men, no polymaths these days, as there is simply too much for any one person to learn. Even so, a healthy suspicion of specialization's excesses is not out of order.

Despite being cut from the same institutional cloth, the institutions that I attended differed from one another in genuine ways. The differences centred on their cultures--on what people made of empirical specialization.

The most sophisticated account was at Redeemer University College, where I was introduced to the philosophy of the Dutch Neo-Kantian (or Neo-Calvinist) Herman Dooyeweerd. A relatively unknown figure, Dooyeweerd divided the human world into 15 different modalities, which correspond to possible divisions between academic disciplines (mathematical, spatial, kinematic, physical, biological, psychological, logical, historical, lingual, social, economic, aesthetic, legal, ethical, and theological). He argued that every particular thing that a person might encounter in the world can be thought of as 'participating' in different ways in each of the modalities: either in relation to the natural order or in relation to the human order. Something as mundane as a tree, for example, could be thought of as functioning in the first five, natural modalities. Once human beings got their hands on it, bending it towards their own ends, the tree could also be thought of as functioning in the last ten modalities. The modalities, in this sense, might be thought of as the many different dimensions of a thing. Dooyeweerd's point, which was drilled into my head, was to underscore the beautiful complexity of empirical reality, to take its objectivity seriously, and to guard against the cardinal sin of reductionism--of explaining everything else in terms of one of reality's discrete dimensions. Persons would inevitably end up specializing. But they would know what they were specializing in, and how it related to other specializations.
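
For readers who find a structured rendering helpful, here is a minimal sketch (my own illustration, not anything Dooyeweerd wrote) of the scheme exactly as described above, with the fifteen modalities and the five/ten division between the natural and human orders:

```python
# A toy rendering of the modal scheme as described above; the names and the
# five/ten split follow the paragraph, not Dooyeweerd's own texts.
MODALITIES = [
    "mathematical", "spatial", "kinematic", "physical", "biological",   # the 'natural' order
    "psychological", "logical", "historical", "lingual", "social",
    "economic", "aesthetic", "legal", "ethical", "theological",         # the 'human' order
]

def modal_functions(shaped_by_humans: bool) -> list[str]:
    """Return the modalities in which a thing can be thought of as functioning."""
    return MODALITIES if shaped_by_humans else MODALITIES[:5]

# A tree in the forest functions in the first five modalities; a tree bent to
# human ends (lumber, a table, a line in a poem) functions in all fifteen.
print(modal_functions(shaped_by_humans=False))
print(modal_functions(shaped_by_humans=True))
```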

Redeemer wanted to give its students a properly liberal arts education. It proposed to achieve this both by requiring a broad sampling of courses and by providing a conceptual framework within which to situate their particular fields of study. Different academic disciplines weren't studying different things; they were studying the same things in different ways. The public schools, McMaster and McGill, probably tried to accomplish something similar for their undergrad students. My impression, gleaned through conversations with undergrad students, though, was that the public institutions encouraged a lot more specialization. They lacked a comprehensive account of how everything was supposed to fit together in a complex, coherent whole. This meant much more attention was paid to mastering particular details.

The fact was impressed upon me in my own work especially by just how 'research-oriented' even the study of history, philosophy, and theology in public institutions is. One didn't just read texts to understand what other persons thought or did. One 'excavated' them for meaning. One worked to extract some implication that lay just below the surface or between the lines on the page. In McMaster University's History Department, this impulse played out in a dialectical relation between the creative use of 'progressive' concepts (gender, class, race, etc.) and close attention paid to the textual evidence. McGill University's Faculty of Religious Studies, by comparison, is more 'conservative' in its outlook in the specific sense that the emphasis is not placed on a certain set of concepts (what one proposes to read into/out of a text), but more generally on refining one's methodology (how one proposes to go about reading a text). An emphasis on methodology discourages overtly 'creative' readings of the textual evidence, demanding instead that you engage with what is being said.

The Institute for Christian Studies was a bit of a different beast, again, owing to its very small size and narrower focus. The Institute was born from the same community of scholars as Redeemer, but ended up taking the Dutch Neo-Kantian (or Neo-Calvinist) tradition represented by Dooyeweerd in a more progressive direction. Its faculty was deeply animated by ethical concerns: criticizing oppressive regimes (intellectual, political, social) and remaining open to new human possibilities. Its scholarship was similarly motivated to unlock interpretive potentials in texts. Like Redeemer, the Institute was convinced of the persuasiveness of Dooyeweerd's modal theory as an empirical description of human reality. But it tended not to be as confident as Redeemer had been that human reality could be exhaustively described in empirical terms. Cutting the human world into ever smaller pieces, after all, can't be all that this life is about.

These are, of course, only my own impressions. I formed them in the particular situations to which I belonged. This is not to say that there is nothing in what I have said here, but to admit that my judgments have always been partial. I can well understand, for example, that some persons from Redeemer or the Institute for Christian Studies might object to my characterization of what their school was about; or, as is more likely, to my presentation of Dooyeweerd. What about his critique of Kant? or the 'supra-temporal' dimension of human life? or the religious Idea? or the specifically biblical ground-motive? All good questions, no doubt. All for insiders, though, who are intent on mastering the particular tradition of scholarship that Dooyeweerd represented--on fitting themselves into that particular box.

What compels us to reduce the entirety of human reality to a merely empirical description where everything has its own box? where every box has its own professional journal? and every journal has its own small group of contributors scattered around the world? Where did this system of education come from in the first place?

The modern university has its institutional origins in medieval Bologna (1088). The subjects taught in the medieval university were ideally divided according to the classical disciplinary divisions between the Trivium and Quadrivium. The Trivium was modeled, in very generic terms, on how persons process information. It was divided between grammar (input), logic (mental processing), and rhetoric (output). The Quadrivium divided up the objective world: arithmetic (or math), geometry (math in space), music (math in time), and astronomy (math in space and time).

The Trivium and Quadrivium, of course, could never hope to account for the many different sorts of knowledge that we today think of as legitimate. But it says something, I think, that a person can see themselves in its distinctions: taking information in, thinking about the information, and conveying it to other people; or counting, doing basic physical measurements, listening to music, or looking up at the stars.

The same cannot be said of our empirical education, which arranges knowledge like books on a shelf--like so many little boxes. So how do we get from the classical ideal to today? The most obvious answer is the radically de-centering consequences of modern science, which says that since the human being is not at the center of the physical universe, the human being has no business being at the center of our knowledge of the universe. Cutting the world into ever smaller boxes, in fact, has proved highly productive for natural scientific inquiry. The smaller the box, the more precise the answer. The more precise the answer, the better control we have over its application. The rest, of course, is in the histories: of the atom bomb, of the moon landing, of the Pill, of the computer, of the Large Hadron Collider.

But is the human world--i.e. the liberal arts--to be divided along the same lines? How much sense does it make to decenter humanity from the humanities?

The contemporary push for empirical specialization has its origins in the late 19th century, as the professoriate began to treat the study of their subject matter like a profession. They established new faculties in medicine and in the natural sciences, which had the consequence of firming up distinctions between faculties in the university. They established professional journals dedicated to particular topics--still much too broad-ranging for our empirical tastes, but nonetheless specialized by the standards of the day. And they asked, much like Dooyeweerd would ask in the middle of the 20th century, how all these different subjects fit together in a single, consistent, complex, coherent whole. Their outlook was evolutionary, progressive, and hopeful. Humanity had a beastly past, but things were going to get better.

Some time in the late 60s and the early 70s, the generation that had been born either during or immediately after World War II abandoned the grand synthetic visions of their parents and grandparents. The consequence has been an increasingly fragmented academic world. This had, in part, to do with an understandable loss of confidence in grand projects. It also had to do with simple numbers. In the United States, for example, the GI Bill (1944) meant that the number of doctorates awarded annually rose from 8000+ in 1953 to 33750+ in 1971. The average number of years covered in a history dissertation, specifically, fell from 75-100 years in 1900 to less than 30 years in 1975. The expansion of the academic industry meant that people were doing more and more work that covered smaller and smaller tracts of space and time. Many more 'things' were being sought in those smaller tracts as well: not just politics, but society, economics, psychology, etc. Philosophy and English departments got on board the radically empiricistic bandwagon by thinking about the nature of historical context, deconstructing abstract knowledge regimes, and contemplating the impossibility of thinking the entire world in a single movement of thought.

As I said above, there is a certain inevitability to empirical specialization. Its value is not to be entirely discounted. On the other hand, too much specialization, particularly in the liberal arts, seems to misunderstand the nature of the subject matter.

The academic world seems to sense it too. Big History is back in a big way. Not, of course, the big histories of Hegel and Marx. Big History has taken the nihilistic responses of Nietzsche and Foucault to heart, and is itself founded on big science: grand evolutionary narratives in which humanity occupies just a small, but infinitely complex, fraction of the picture frame. Today's gods are funded by Bill Gates. They stride the length and breadth of our universe, from its origins 14 billion years ago across the tens of billions of light-years it takes to get from one side to the other. They fret about what it means for us to live in the Anthropocene, a radically new epoch in our planet's history, in which the human race is no longer subject to nature, but now participates, for better or worse, in its recreation. That's a good place to start.

Saturday, April 25, 2015

Conservative Catholicism and Climate Change

With a new papal encyclical on climate change on its way and a papal visit to the United States scheduled for this fall, it has been interesting to watch different groups of American Catholics grapple with the 'meaning' of Pope Francis' ministry.

The Roman Catholic Church, of course, embraces a plurality of outlooks on almost every topic under the sun. This may come as a surprise to some outsiders, but the Church is as much a debating club as it is a community of worship and service. When a pope makes an off-the-cuff remark--like Francis did when he said, in response to a question about homosexuality, 'Who am I to judge?'--broad segments of the faithful scurry to appropriate for themselves or distance themselves from his statements. They affirm what they agree with, and claim it for their own. They caution their constituencies about following this or that line of argument too far.

News that Francis would promulgate an encyclical on climate change sent similar aftershocks through the online Catholic community. Many of the more left-leaning Catholics eagerly embraced the announcement. About 70% of Catholics in the United States, for example, are supposed to be concerned about climate change, a number that is much higher than in the Protestant churches.

Not so the more ideologically conservative Catholic pundits and their readership. The National Catholic Reporter's Brian Roewe has a good summary of the different things being said, as well as the criticisms being offered. 'Some conservative corners,' he writes, 'have a more tepid take, welcoming papal guidance on environmental issues, while voicing concerns about the document's ultimate direction: toward a reaffirmation of the stewardship role over creation, or into the boiler of the contentious climate debate still firing in the U.S.' ('Conservative corners have tepid take on Francis's environmental encyclical,' Apr. 21, 2015.)

One of the more impressive things about the discussion is how measured the criticisms are. Even when they disagree with the pope, conservative pundits will not openly break with the authority that he represents. The result is a sort of hostile deference. Rather than do the online equivalent of throwing up their hands and storming out of the room because Francis is obviously being unreasonable for not seeing the world the way that they see the world, the conservative Catholic punditry resorts to 'raising' the level of debate, refining their conceptual distinctions, and expounding at great length on the dangerous intellectual precedents set by the environmental movement. You will not find any overt suggestions that Francis has abandoned the fundamental principles of Roman Catholicism. But you will find almost everything just shy of that mark.

I happen to think that the conservative punditry will end up looking foolish in retrospect. The document will be theologically moderate. It will assert the sovereignty of God, the dignity of the person, and the gift of the creation, while holding out two possibilities: to either live well with one's fellow human beings in this world of God's creation; or to live poorly by always seeking one's own advantage, regardless of the cost to one's fellow human beings and to the world. Its basic message will be summarizable in Jesus' two commandments to love God and to love one's neighbour as oneself.

Because the Catholic Church assumes a much longer perspective on who counts as one's neighbour than is typically the case, the document will affirm a continuity with sacred tradition and include warnings not to exhaust the world's natural resources in the effort to enrich ourselves at the expense of future generations. It will explore the destructive potentials of the merely material, capitalist ideology of economic development for its own sake. It will point out that the wealthy half of humanity who benefit from the breakneck pace of development are in a much better position to weather the effects of climate change than the poor half. And it will argue, on that basis, that climate change is a fundamentally moral issue, because Jesus came to both rich and poor alike, to make all men fathers, brothers, and sons, and all women mothers, sisters, and daughters.

My reason for thinking the conservative punditry will end up looking foolish has to do with the sorts of arguments that they are making against the idea that Francis is going to promulgate an encyclical on climate change. They forget that Francis will not squirrel himself away in a room for a couple of weeks, emerging with a complete draft the way Athena sprang from the forehead of Zeus. The encyclical, the highest platform for church teaching, is born from a consultative and collaborative process. In the absence of something to read, they are grasping for something to say.

The conservative punditry trots out the old hobby horses that it likes to ride when called upon to talk about issues in the general vicinity of climate change.

The first, and the most obvious, is the incipient anti-humanism of the modern environmentalist movement. Human beings are the problem. The solution is to get rid of them.

This is closely related to a second theological conviction of the modern environmentalist movement, the Gaia Hypothesis: the earth itself is alive and godlike. Humans come from the earth and return to the earth. Or, we can state this more expansively and say that we are all stardust--which we are, but that's not my point.

These are stock-in-trade objections. The punditry will point out that they preclude seeing human beings as independent actors, who are responsible for their actions. They will also point out that if you *really* believe this, you will promptly kill yourself. Whining about how other people are destroying the planet while your own footprint continues to grow is either duplicitous or self-deceiving.

However, the problem for the conservative punditry is that Francis will not make such elementary theological mistakes. The pundits can speak darkly about 'intellectual influences' and the consequences of certain tendencies of thought. Their objections will have no traction when the encyclical is released.

They do, however, offer slightly more sophisticated arguments about Francis' 'sphere of competency.' As Roewe shows, it is widely assumed that Francis will be deciding on scientific questions regarding climate change, while the pundits believe he should restrict himself to moral and spiritual direction. Robert P. George has pointed out in a short reflection at First Things that, as far as the empirical science goes, Francis stands exactly where the rest of us stand--which is to say, on less than certain grounds.

The Heartland Institute, a Chicago-based, conservative think tank funded in small part by Charles Koch (to the tune of $25,000), has even said it will send a group of scientists to the Vatican to discredit the existing climate science consensus. The Guardian reports, 'Jim Lakely, a Heartland spokesman, said the thinktank was “working on” securing a meeting with the Vatican. “I think Catholics should examine the evidence for themselves, and understand that the Holy Father is an authority on spiritual matters, not scientific ones,” he said.' ('Conservative think tanks seek to change Pope Francis' mind on climate change,' April 24, 2015)

These are bait-and-switch arguments. They are ostensibly directed towards different ends, and they serve to restrict what Francis might say to a limited moral or spiritual 'sphere of competence.' But what they actually accomplish is to prevent Francis from making things like economic inequality or access to natural resources into moral issues.

Everyone knows that Francis is not a scientist, and so not competent to judge climate science. The objection is therefore hardly novel. What is novel is the use of scientific objections to deflect from moral issues. So when Francis says we ought to do this and ought not do that, and his conservative Catholic critics respond by saying that he is not competent to judge the science (which is never conclusive anyway), the critics are either fooling themselves, or trying to fool the rest of us.

Francis has never claimed to be competent to judge the science. But if you are a Catholic (which I am not), then you defer to his judgment regarding what it means to love God and love one's neighbour as oneself--in principle, at least. As he is reported to have said immediately following his election, 'one of the reasons he took the name Francis was because St. Francis of Assisi is "the man who loves and protects creation."' He went on to say, 'These days we do not have a very good relationship with creation, do we?' ('Encyclical on environment stimulates hope among academics and activists,' Apr. 24, 2015)

The line that the conservative punditry is walking is a dangerous one. Taken to its conclusion, their argument says that moral and spiritual values are subjective and private, and have nothing to do with bodies and society. No pundit, of course, would agree to this wholeheartedly, since it would undercut their arguments about abortion and the nature of the family. On the other hand, if moral and spiritual values do have something to do with bodies and society, then an encyclical on the environment that draws attention to social justice issues is entirely apropos.

Friday, April 24, 2015

Nietzsche: Living Unhistorically

Students of history face the same difficulty faced by students of religion when trying to communicate what their intellectual interests are about. Since I happen to stand in both camps, both my feet are put to the fire. The difficulty is that the subjects are peculiarly modern in a way that the study of the natural sciences is not and can never be, but the steadily increasing age of their subject matter gives the impression of their steadily increasing irrelevance.

Let us grant at the outset that modern scientific discoveries are also characteristically modern. Copernicus' heliocentrism, Galilean frames of reference, Newton's Three Laws of Motion and Law of Gravity, Lavoisier's Table of Elements, Darwin's Theory of Evolution by Natural Selection, Clerk Maxwell's Theory of Electromagnetism, Einstein's Theory of Relativity, and Bohr's Quantum Theory (the list could go on) all fit this bill. Neither the ancient nor the medieval mind was observationally or technically inclined enough to have generated modern scientific discoveries of these orders.

This may be entirely true. My argument remains unchanged. Why? As long as human beings have recorded their thoughts on tablet, parchment, or paper, they have been interested in the physical world. They may have understood it on very different terms. They may have interpreted it as a creation of the gods, an illusion, or a source of moral order. They were nevertheless trying to understand the order of the natural world: to measure distance and time; to understand the relation of the sun, moon, and stars to the ebb and flow of the tides or the changing seasons; to discern the tell-tale signs of new life, or illness, or death. The maintenance of human life depended on these sorts of knowledge.

However, what we call the liberal arts (and we might also include the social sciences here) are peculiarly modern in the sense that humanity's interest in its own self is peculiarly modern. Contemporary academic disciplines like history and religion, or economics and social studies for that matter, have no analogue in the pre-modern world. Nor does the plentiful and widely available popular literature on the same subjects that you can pick up at Borders (for Americans) or Indigo (for Canadians). There are anticipations, yes. But nothing that approaches the level of sustained inquiry or critical reflection, not just on what is studied but also on how it is studied, that we begin to see in the 18th and 19th centuries in Europe and the Americas.

The liberal arts are peculiarly modern in the second sense that their subject matter often is not modern at all. Take religion as an obvious example. It may seem that the study of religion is many centuries old because religious figures like the Prophet Muhammad, Jesus Christ, and the Buddha lived many centuries ago.

But this is not strictly the case. The study of comparative religion, which acclimatizes us to making comparative statements like the one immediately above, is only a century and a half old. The basic non-anthropocentric assumptions of modern astronomy, by comparison, are at least five centuries old; modern physics, three and a half centuries old; modern chemistry, three centuries old; modern biology, two centuries old; and so on. Only particle physics, which shed its anthropocentric assumption of the impenetrability of the atom in the last century, can claim to be younger.

So, whence comes the instinctive assumption that the relevance of a subject matter is inversely proportional to its age? Just so the reader doesn't pass this question over too quickly, pause for a moment to consider what is being asked. The question can be stated in simpler terms: why is it that the older something is, the less relevant it must be? Certainly we may grant that this is the case in the natural sciences, where the best information is usually drawn from the most recent studies. But does the same apply to the liberal arts?

Perhaps the most erudite reflection on this is found in Friedrich Nietzsche's On the Advantage and Disadvantage of History for Life (1874). The question implied in the title finds us in roughly the same territory that we have already covered. It asks: in what sense is knowledge of the past conducive to living well?

Nietzsche invites his readers: 'Consider the herd grazing before you.' Not exactly what we would consider a familiar place to begin. But okay; let's walk with him for a little while more.
Consider the herd grazing before you. These animals do not know what yesterday and today are but leap about, eat, rest, digest and leap again; and so from morning to night and from day to day, only briefly concerned with their pleasure and displeasure, enthralled by the moment and for that reason neither melancholy nor bored. It is hard for a man to see this, for he is proud of being human and not an animal and yet regards its happiness with envy because he wants nothing other than to live like the animal, neither bored nor in pain, yet wants it in vain because he does not want it like the animal. Man may well ask the animal: why do you not speak to me of your happiness but only look at me. The animal does want to answer and say: because I always immediately forget what I wanted to say--but then it already forgot this answer and remained silent: so that man could only wonder.
What to make of this... Nietzsche's discussion assumes an account of the differences between human beings and animals that dates back at least as far as Aristotle, but is now largely discredited in the intellectual wake of the publication of Charles Darwin's The Origin of Species. He assumes that human beings are rational or self-conscious, while animals are merely sensitive or conscious beings that react to the pain or pleasure induced by external stimuli. On this account, human beings possess memory and have emotion, whereas animals neither remember nor possess any emotion. The human ability to 'stand back' and reflect on themselves gives rise to existential discord. Animals are 'happy' in their comparative simplicity.

Whether or not this is a fair comparison is immaterial. We are interested in what Nietzsche thinks it says about human beings.

The problem that Nietzsche identifies is that human beings possess knowledge--even the desire to know, in Aristotle's telling--but too much knowledge hinders action. This is a familiar trope. We encounter it in such turns of phrase as 'too much theory, not enough practice,' 'you can't just talk the talk, you have to walk the walk,' or, in its medieval iteration, 'too heavenly minded to be any worldly good.'

Nietzsche brought the charge against the 19th century German professoriate, who had become enamoured with their historical methods. These men (for they were predominantly men) filled tomes and volumes with ever smaller pieces of historical minutiae. But it was never clear what it all amounted to, or what wisdom was gained in the process. How did any of it help people live today? How did it help people govern today?

Nietzsche has his finger on a genuine problem. The decision to act, to do this and not that or the other thing, eliminates the possibility of doing that and the other thing. That's just the way things are. A decision one moment sets the course of action for the next. A series of decisions through successive moments establishes a habit or pattern, or commits a person to pursuing this goal and not that. More information is not always the answer. More information, in fact, may be part of the problem. Too much information can so burden a person that they are reduced to indecision. So, Nietzsche says, 'Consider the herd grazing before you.' Consider how blissful they are in their forgetfulness--their lack of knowledge. Look at them 'leap about, eat, rest, digest and leap again.'

Nietzsche's answer to our question about why the older something is, the less relevant it must be, is straight-forward enough. In order to act, persons must drastically curtail the amount of information deemed relevant to deciding their course of action. Really long historical perspectives are out of the question. They only clutter our heads with useless information.

This is, of course, just a different version of the modern scientific conviction that the only actionable knowledge will be found in the most recent scientific study. People with a longer historical perspective--for example, people who can list off major scientific discoveries over the last five centuries, but who can't find their way around a laboratory--don't possess actionable knowledge.

This would also seem to throw into doubt whether the liberal arts can ever serve purely utilitarian purposes. Therein, it seems to me, also lies the rub. Only the longer historical perspective puts us in touch with what is quintessentially human in human beings. The much shorter perspectives, which yield actionable knowledge, show us something other than, or less than, the whole of our humanity. Indeed, they must, in order to be actionable.

The problem of modernity is the problem posed by our memory and our history. By virtue of being modern, we have it, but we are not sure what to do with it. That leaves us, paradoxically, wanting to become like Nietzsche's herd; wishing desperately that we could, but knowing, deep down, that it is not even possible.

Monday, April 20, 2015

Hegel: The Master-Slave Dialectic

This is the third in a series of three sketches of the modern problem of the relation between empirical fact and moral value. The first sketch looked at Kant's question, How are synthetic a priori judgments possible? Though the language used to formulate the question was highly technical, we saw that the question got at the nature of our confidence in the assumptions and methods of natural scientific study. The question, we also saw, framed the world in such a way that it was incapable of dealing with the concrete character of moral judgments. These are always crossing the supposed gap between an objective fact and a subjective value by ascribing a moral value to objects in the world, including, and especially, human bodies.

The second sketch looked at Hegel's initial formulation of a response to Kant's question in a little-studied section of his Phenomenology of Spirit called 'Sense-Certainty.' Hegel's response was at the same time more straight-forward and more difficult to grasp than the formulation of Kant's question. He argued that persons carried around in their heads a complex, concrete universal idea of an 'I-This-Here-Now,' into which they fitted their thoughts of particular things. For example, I might think about this tree, this building, or this sentence on the computer screen. Regardless of what it is that I think about, it remains 'I' who think about 'This,' which is locatable in some 'Here' and 'Now.' By tethering the 'I' to 'This-Here-Now,' we saw that Hegel, in effect, showed how moral judgments were possible.

This is all well and good, someone might say, but what does any of this have to do with actual moral judgments? The 'I-This-Here-Now' does seem a rather uninspiring framework in which to analyze the moral character of human thought and action. I grant the point. Hegel's response to Kant, however, is significant both for what it affirms and for what it denies.

The essence of Hegel's response is concentrated in a much studied section of the Phenomenology titled 'Independence and Dependence of Self-Consciousness: Lordship and Bondage.' I have identified it here by its better known moniker, 'the master-slave dialectic.'

Hegel's argument begins by assuming that human beings are always and everywhere self-conscious. This is the moral ideal, the end of human nature. Persons are self-conscious, but they may not themselves be conscious of that fact. It makes little sense, after all, to say a new-born baby is actually self-conscious. So instead we say that a baby is potentially self-conscious--which is to say, self-conscious by nature, something which they can achieve in the course of their lifetime, though not in present actuality. Similar considerations can be offered for the infatuated teenager, the inebriated party-goer, or the distracted mother. These people are self-conscious, in the sense that they have the potential capacity for rational self-reflection. But they aren't always--or even ever--realizing their complete self-conscious potential. They might be thinking about this, that, or the other thing. Just not their ownmost selves everywhere and all the time.

Hegel's idea of self-consciousness also has the important feature of being intrinsically communal. Using his own technical philosophical vocabulary, we would say: 'Self-consciousness exists in and for itself when, and by the fact that, it so exists for another; that is, it exists only in being acknowledged.' Persons achieve self-consciousness only to the degree that they also think of other persons as self-conscious like themselves; or to the degree, if you will, that they see themselves in other persons. The obvious example here is an encounter between two adults in full possession of their rational faculties who come together for some purpose like marriage, a business deal, or some other sort of collaboration. The ideal of self-consciousness holds that persons deal with each other as if they were dealing with themselves.

Encounters between social equals, of course, account for only a very limited portion of the encounters in any given population. As likely, or even more likely, are encounters between persons of differing social standing, which introduces elements of authority into the relation. The time and energy that parents pour into their children or that teachers give to the education of their students are good examples of this second case. Authority clearly rests in the hands of one of the parties and not the other. Yet, Hegel claims, the nature of these relationships is such that they aim at the eventual dissolution of authority--or, at least, at the dissolution of the minor party's dependence on the major party's authority. And, in this sense, we may say in every case the end goal is that 'they recognize themselves as mutually recognizing one another.'

What Hegel terms 'the master-slave dialectic' is symptomatic of the failure to see other persons as essentially like one's own self. By this, of course, Hegel means something more than merely thinking positive thoughts about other persons. Thought is always implied in a person's course of action. How a person thinks will therefore inform how they act towards other persons.

Usually the dialectic arises as a consequence of one person usurping another person's 'labour' for their own gain, with little or no recompense. Hegel understands 'labour' to be an investment of one's very self into some material reality. One imparts an intelligible form to a material medium: by planting a field, building a shelter, domesticating animals, or, to bring things into our virtual age, designing a website, auditing a set of financial books, or setting out instructions to assemble a car. People perform these sorts of labour, of course, to secure the means for life: food, clothing, shelter, and the like. But, in the case of the master-slave dialectic, the master takes the products of the slave's labour, with little or no attention to the slave's need to secure the means for life. The master fails, in other words, to recognize the slave as a being like themselves, and sees them merely as a means to their own end. If any of this is beginning to sound at all like Karl Marx's critique of capitalism, then you are getting warm.

Let's take a second look at things and get a little more precise with our description by re-introducing some of the terminology from the previous reflection on 'Sense-Certainty.' A self-conscious person thinks about themselves in terms of the complex, concrete universal idea of 'I-This-Here-Now.' They are conscious of themselves as being this body, here and now (perhaps sitting somewhere like a living room or a library in front of a computer). A person who is not merely self-conscious by nature, but actually thinks of themselves on such terms, will also think of other persons in the same way: as other 'I-This-Here-Nows' (perhaps sitting in another living room or another library in front of a computer). Suffice it to say that the self-conscious person knows themselves to be this body, and knows that other self-conscious persons are those bodies.

The master-slave dialectic, it follows, arises as a consequence of a very natural two-fold mistake. First, people fall into a primitive version of Kant's mistake: they distinguish absolutely between what they themselves necessarily are (the 'I') and their contingent existence (the 'This-Here-Now'), which is always changing with the passage of time and with different sets of circumstances. And second, they consequently think about and act towards other persons as if those persons were merely 'This-Here-Nows,' or merely objects to be manipulated. Since there is no room for other persons in each other's heads, the others are objectified and reduced to the status of merely contingent existences, whom one encounters from place to place and time to time.

Both the master's consciousness and the slave's consciousness, Hegel insists, are incomplete, each finding in the other only half of a whole. The slave is nevertheless closer to the truth of self-consciousness than the master ever can be. The reason why this should be the case is quite straight-forward. By comparison to the master, who steals the products of the slave's labour, the slave is actually labouring; they are actually imparting an intelligible form to a material medium. The slave, therefore, thinks about something through the complex, concrete universal idea of 'I-This-Here-Now.' Whatever the something is, of course, it will at the very beginning be something other than themselves. But the more the slave labours for the master, the more of themselves they will find in the products of their labour. In time, this dialectical line of thought continues, the slave will accumulate such knowledge of themselves that they will rise up against their master. And when that happens, the master will find themselves entirely defenseless, because their life has been perpetuated by the slave's labour all along.

Hegel's account of the master-slave dialectic reconnects the modern problem of the relation between empirical fact and moral value with the longer Western intellectual tradition. The account clearly owes something to Aristotle's Politics, where the master is described as the slave's soul, making use of the slave's body for their own purpose. It also owes something to the long tradition of Christian reflection on Jesus' command to love God and one's neighbour as oneself. In his On Christian Doctrine, for example, Augustine argues that the love of self and love of neighbour are properly ordered by the love of God, because one sees God in one's neighbour. And it is possible now to see God in one's neighbour because God has revealed himself in Jesus (which was not unimportant to Hegel).

The dialectic also informs subsequent generations of thinkers like Karl Marx, who adapts the master-slave relation to the relation between capital and labour. The classless communist utopia, in this picture, is simply Hegel's communal ideal of self-consciousness turned on its head. The ideal of mutual self-recognition has been replaced by the ideal of 'from each according to his ability, to each according to his need.' And its influence can be found in the work of the arch-anti-humanist Friedrich Nietzsche, who absolutely despised slave-morality, proposing instead the ideal of the Overman, who would surpass all merely human limitations. The progress of 20th century Western thought can be discerned in broad outline from there.

The important point to take away from all this, it seems to me, is that Hegel has never not been talking about how people make sense of their bodily lives. Even if you are inclined to think of his ideal of communal self-consciousness as more than a little hopeful, you are still compelled to admit that you live an embodied life, something you share in common with every other human being who has ever lived.

Saturday, April 18, 2015

Hegel: The Dialectic of Sense-Certainty

Kant's question, How are synthetic a priori judgments possible? creates significant difficulties for subsequent generations of 'moderns' like ourselves. The wording of the question may appear obscure, but it gets at something very basic that we take for granted every day. It asks how it is possible for us to be confident in our methods of natural scientific study. How do we know that our conclusions correspond to objective reality? The way that Kant formulates the question, however, assumes that the human world can be divided into two distinct parts: 'the starry heavens above' and 'the moral law within.' By the former Kant meant the whole order of nature, everything in the spatiotemporal world; and by the latter he meant something like a universally accessible, rationally determinable standard for moral conduct. This should sound familiar to anyone familiar with the distinction that people like to draw between facts and values.

The problem that Kant's formulation could not overcome, as we saw last time, was that moral judgments always pertain to specifically human thought and action, which is always locatable somewhere in space (or place) and time. Moral judgments do not 'terminate' in an abstracted universal idea of what human beings ought to be or ought to do, but concretely in human bodies. A moral judgment, in this sense, bridges the gap between the realms of fact and value. It ascribes moral value to what would otherwise seem a merely empirical reality. The moral injunction against murder (or rape, or theft, or lying), for example, seems to be stated in abstractly universal terms. Don't murder: full stop. But the injunction only makes sense as a moral injunction if it universally applies in every individual instance: that is, to actual living, breathing human beings. Assenting to the abstract truth that one ought not murder one's fellow human beings and then going out and murdering actual people is only moral in the sense that it is a violation of the moral norm.

One of the most persuasive solutions to Kant's question was offered by G.W.F. Hegel in a little-studied section at the very beginning of his Phenomenology of Spirit titled 'Sense-Certainty.' In effect, what Hegel claims is that Kant misunderstands the moral nature of human embodiment.

Let's do a brief recap. Kant claimed all of human knowledge was the product of one of three types of judgment: analytic a priori, synthetic a posteriori, and synthetic a priori. These are as follows:

Analytic a priori judgments are those sorts of judgments that you can make in and of themselves without reference to anything 'external.' Examples would include 'triangles have three sides' and 'mothers have children.' The predicate of the statement ('three sides,' 'children') is already contained in the subject of the statement ('triangles,' 'mothers'). To wit: a triangle isn't a triangle if it doesn't have three sides, and a mother isn't a mother if she doesn't have children.

Synthetic a posteriori judgments, by contrast, are those judgments that you make by reference to something else. Examples would include 'the grass is green,' 'I am 33 years old,' and 'the weather is cold for the time of year.' These things might be true. They also might not be true. Whether they are true might change from place to place and time to time. Late in the fall, the grass will die and turn brown. Next year at this time, I will be 34 years old. And if I step outside, I might discover that the weather is quite pleasant, about what a person expects for the time of year.

The third category of synthetic a priori judgments bridges the gap between analytic a priori judgments, which are necessary (always and everywhere the same), and synthetic a posteriori judgments, which are contingent (which might vary from place to place and time to time). It tries to make sense of the predictive ability of natural scientific claims. For example, we are able to determine that objects near the earth's surface fall with an acceleration of 9.8 m/s2. We don't have to keep on verifying this fact ad nauseam. It doesn't need to be tested over and over again in order to determine that it is still true. We can demonstrate it, confirm it, assume it, and then move on to other more complex questions.
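
To make that last point concrete, here is a minimal sketch (my own illustration, borrowing only the 9.8 m/s2 figure from the paragraph above) of what it looks like to take the established value for granted and move on to a further question:

```python
# A small illustration of building on an established result rather than re-verifying it.
import math

G = 9.8  # acceleration near the earth's surface, in m/s^2, taken as settled

def fall_time(height_m: float) -> float:
    """Seconds for an object dropped from rest to fall height_m metres,
    ignoring air resistance: h = (1/2) * g * t^2, so t = sqrt(2h / g)."""
    return math.sqrt(2 * height_m / G)

print(round(fall_time(20.0), 2))  # a 20 m drop takes roughly 2.02 seconds
```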

Hegel noticed what has already been pointed out: that Kant's three categories of judgment could not locate specifically human thought and action anywhere specifically within the spatiotemporal universe. His three categories of judgment were abstract in the sense that they were judgments of an ideal knower who was unrestricted by place and time, not an actual knower (like yourself or myself), who is restricted by place and time. That was (and still should be) genuinely perplexing. If you think about yourself, you will immediately notice that all your thought and action is tethered to your particular bodily existence.

Consider that you can think about and do many things. You can even think about (though not do) everything. But you can neither think about nor do anything without taking into account a fundamental relation between, on the one hand, your subjective thought and, on the other hand, your particular objective situation. I might reflect, for example, for a moment on the 'Big History' of the universe, which is billions of years old and tens of billions of light-years across. I would have done so, however, in particular places and times--maybe by reading a book or by joining an online course and watching Youtube videos.

Hegel makes roughly this argument in the Phenomenology. He notices that all our thoughts about objects in the world contain a reference to an 'I' which thinks, a 'This' which designates the thing being thought about, and a 'Here' and a 'Now' in which the 'This' is situated. So, for example, you (the 'I') might be thinking about a tree or a house or your wife (the particular 'This' in question). Regardless of whether you are thinking about the tree, the house, or the wife, your thought about 'This' has an 'I-This-Here-Now' structure.

Note the comparison to Kant's different forms of judgment. Hegel's 'I' never changes. It is always you (or always me) who is doing the thinking. In this sense, the 'I' is necessary like Kant's analytic a priori judgment. But the reference of Hegel's 'This-Here-Now' can and does change. If you let your mind wander from This (what you had for breakfast this morning) to This (what you need to pick up from the store after work) to This (the funny line from the movie last night), you will see what I mean. The 'This-Here-Now' is therefore contingent like Kant's synthetic a posteriori judgment.

Hegel uses the generic examples of day and night to illustrate the point he wants to make. He says, 'To the question, "What is Now?", let us answer, e.g. "Now is Night."' He continues, 'In order to test the truth of this sense-certainty a simple experiment will suffice. We write down this truth [since] a truth cannot lose anything by being written down.'

So we write down 'Now is Night.' But now we have a problem. Time passes us by, night becomes day. What we have written down will no longer be strictly true. It was true when we wrote it down. It is no longer true. What was is no longer. Or, in the technical terminology that Hegel uses, Being has passed over into Non-Being.

Hegel repeats the demonstration with the examples of a house and tree. He says, 'The same will be the case with the other form of the "This," with "Here". "Here" is, e.g., the tree. If I turn round, this truth has vanished and is converted into its opposite: "No tree is here, but a house instead".'

No reference is made to writing down the word tree in order to test the truth of the assertion this second time around. Still, Hegel's basic point remains the same. As our perception alights on a different 'This-Here,' what was is no longer. Being has once more passed over into Non-Being.

There is a more sophisticated analysis of Hegel's argument which would point out that our perception of things in space differs from our perception of the same things in time, where Hegel seems to treat them as fundamentally equivalent. This line of inquiry would problematize the argument considerably--and needlessly at this point. Our interest will remain with what Hegel actually does say. We want to know what he does say about the complex 'I-This-Here-Now.'

Well. Stuff very quickly gets strange. One would expect Hegel to say he has found a complex, concrete universal, something a person can take with themselves everywhere and apply to every situation they find themselves in. Thinking about buying a house? 'I-This-Here-Now.' Thinking about breaking up with your girlfriend? 'I-This-Here-Now.' Wondering where you are going to vacation in the summer? 'I-This-Here-Now.' And so on, and on, and on, and on, ad infinitum (or until the moment that you fall asleep or that you die).

But Hegel doesn't do this. Or, if he does, he doesn't make it easy for his readers to see him doing this. So what does Hegel actually do? He applies what is known as the logic of 'the negation of the negation' to his argument. Returning to his examples for 'This-Now' and 'This-Here,' he points out that in both cases the 'This' passes over into 'Not-This.' Night becomes day. The tree is now the house. This is the first step of 'the negation of the negation': Being becomes Non-Being. What was is no longer. The day is not the night. The house is not the tree.

But we need to step back, so to speak, and reflect on what has occurred. Hegel also wants us to note carefully what has happened: the Not-This is not simply a Not-This. It is a This in its own right. This was night. This was a tree. In actual fact, however, This is now the day or a house.

The new 'This-Now' or the new 'This-Here,' in their turn, also pass over into 'Not-This.' Either time passes or a person's attention wanders to something else. This is the second step of 'the negation of the negation': the new This is itself negated, and becomes 'Not-This.'

Here is the really perplexing part. You would expect that Hegel would simply repeat the procedure over and over again, showing that This becomes Not-This, which is shown to be a This in its own right, which becomes Not-This, and so on, and on, and on. It wouldn't matter what particular things you were thinking about. The 'I-This-Here-Now' structure would be recycled over and over again to make sense of every new thought that a person has about different things.

But Hegel argues that all you need to demonstrate the logic of 'the negation of the negation' is two negations. Additional negations would be superfluous and would distract from the essential lesson. Two negations are all that is needed to demonstrate that I am not actually thinking about particular things, but about a universal 'This-Here-Now,' which I discern in and through particular things. The 'externals' might change; the essentials do not. And this means, Hegel says, that every particular 'This-Here-Now' is implied in every other 'This-Here-Now.' The entire spatiotemporal universe, and every particular thing in it, is joined together in this complex, concrete universal thought, which every person carries around with them in their own heads.

So where does Hegel leave us? He sees the deficiency of Kant's formulation of the question, How are synthetic a priori judgments possible? He sees that human thought and action must, in principle, be locatable in the spatiotemporal world. Even so, it is still not clear whether he leaves persons in the position of an ideal knower (a divine being), who knows all things in all places and all times in exactly the same way. In this case, the whole of human history will simply be the story of God's thoughts and actions manifesting themselves in and through human lives. But he might also leave persons in the position of an actual knower (a human being), who knows things in particular places and particular times. In this case, the whole of human history is made up of human thoughts and actions, which manifest themselves in and through human lives.

Which reading is correct is not entirely clear from the actual wording of Hegel's text. Both readings commend themselves in different ways. The former reading, which sees Hegel claiming to occupy the position of an ideal knower, has been the default reading in Continental philosophy down through the 19th and 20th centuries. Nietzsche, for example, famously derided Hegel for assuming that the whole of human history culminated in his own Berlin existence. The latter reading, which sees Hegel describing the situation of an actual human knower, is the reading given by the Anglo-American idealists, or Progressives, at the end of the 19th and the beginning of the 20th centuries--a group of thinkers who have largely been forgotten. This reading crops up from time to time, most recently in the work of the former Archbishop of Canterbury, Rowan Williams. Otherwise, it has fallen out of favour.

The problem, however, is one worthy of further reflection. We moderns, following Kant, have gotten into very bad habits when we try to think of facts in relation to values. We cite scientific studies at each other when it is convenient, and take principled stands when it is not. Starting from what we share in common, a bodily situation, may help to cut through some of the bullshit.

Wednesday, April 15, 2015

Kant: How are Synthetic A Priori Judgments Possible?

This rather obtuse question stands at the intellectual boundary between the early modern and modern worlds. The question is the philosophical equivalent of a 'shot heard around the world.' You can find it at the heart of how we 'moderns' (among whom I include the so-called 'post-moderns') distinguish between fundamentally basic things like empirical fact and moral value. The question frames the boundaries of acceptable public debate, including where the line between public and private is drawn. It divides our cultural world up into progressive and conservative forces.

The title question was first asked by a gregarious, though mild-mannered, Prussian (or German) professor of philosophy by the name of Immanuel Kant.

Let's first start with what a synthetic a priori judgment is.

Kant divided all of the bits of knowledge floating around in a person's head into three types. The first, analytic a priori judgments, designate knowledge that is 'self-contained.' These are the sort of judgments that you can make in and of themselves, without reference to anything 'external.' An example of an analytic a priori judgment is 'squares have four sides' or 'all bachelors are unmarried.' Squares have four sides. Bachelors are unmarried. If the object didn't have four sides, it wouldn't be a square. The same goes for bachelors: if the man in question were married, he wouldn't be a bachelor. He'd be a married man.

This, of course, doesn't seem like a very profound revelation. The intellectual traction of Kant's argument comes when you start comparing the different forms of judgment.

The exact opposite of analytic a priori judgments are synthetic a posteriori judgments. These are judgments that you make with reference to 'something' external. Examples would include: 'The sky is blue,' 'Kant was born in 1724,' or 'Game of Thrones is fantasy fiction.' The sky might be blue. Kant might have been born in 1724. Game of Thrones might be fantasy fiction. All these things might be true. The difference in this case is that you will have to go and find out whether thus and such is actually the case. The sky, for example, might be grey or black, depending on the time of day or the weather conditions. Kant might have been born in 1723 or 1725. The sources that we possess might be wrong. And Game of Thrones might be better described as a medieval soap opera with fantasy fiction elements (like dragons, White Walkers, and shadows that look like Stannis Baratheon).

Note carefully the differences. Analytic a priori judgments are necessary in that they are always and everywhere true. Synthetic a posteriori judgments are contingent insofar as they can change as situations change--though they don't necessarily have to. The question that concerns us here is whether these two forms of judgment can account for all of our knowledge of the world. And evidently they do not.

Take, for example, the prediction of a solar eclipse. We can predict when and where a solar eclipse will be visible with an amazing degree of accuracy. Our ability to predict, however, obviously does not fall into the category of an analytic a priori judgment. An eclipse is not defined essentially by its being visible then and there. It might be visible at some other time and place, but that doesn't negate the fact that it still is an eclipse. Our ability to predict also does not fit into the category of a synthetic a posteriori judgment. What is at stake is our ability to predict that the eclipse will happen. We don't need to wait for it to happen to see if it actually does. We already know it is going to happen before it does. Our calculations are good enough to predict these things. But how do we know it is going to happen? How can we be certain?

Or, more to the point, how are synthetic a priori judgments possible? (This is not a small matter, as you should now be able to see.)

Kant was fully aware of the significance of his question. In his book Prolegomena to Any Future Metaphysics (1783), he charged all his readers to consider his question carefully before they made any metaphysical claims. In the term 'metaphysical,' he included claims about the nature of God (and presumably questions about how many angels could dance on the head of a pin) as well as the fundamental constitution of the natural world. If so-called scientists were going to claim anything with certainty about the world, Kant wanted them to show that they had understood what was at stake.

Kant's question (which was formulated with the help of Newton's Principia Mathematica, which first sets out, as we presently understand them, the Three Laws of Motion and the Law of Gravity) explains why we no longer think of the planets as moving through an ether, or think about heat in terms of phlogiston, or think of biological species as always and everywhere the same. In the longer run, it explains why we don't think the sun, moon, planets, and stars revolve around the earth, or that the orbits of 'celestial' objects are perfectly circular.

The question puts a brake on attributing divine eternality, or self-sameness (which takes the form of an analytic a priori judgment), to anything in the natural world. Once that brake is in place, you start to observe how things actually behave. The question also directed people to think more carefully about those features of the world that they could claim to know with certainty. Kant intends his third category of synthetic a priori judgments to show how we can be confident in the predictive claims of modern natural scientific inquiry, which are peculiar in that they are necessary, in the sense that they purport to be always and everywhere true, and yet hold good for contingent situations that can change.

How are they possible? Kant says: by the a priori forms of perception, space and time, and the a priori categories of the understanding: quantity, quality, relation, and modality. The latter categories need not detain us very long. Suffice it to say that they are a straitjacket on Kant's thinking, in the way that they suppose the world can be combined and divided in order to make it intelligible. The former forms, however, are very interesting. To say that space and time are a priori forms of perception is to say that every potential object of perception is locatable somewhere in space and time relative to other spatiotemporal objects (and so, by implication, is not divinely self-same).

By every potential object of perception, I mean absolutely everything one might come across in a universe that is 14-billion-odd years old and tens of billions of light-years across. From the atoms to the primordial soup, to the Andromeda Galaxy and everything else in between. Kant didn't explicitly mean this, of course. His conception of the actual dimensions of the spatiotemporal extent of the universe was comparatively smaller, in line with the science of his times. But the basic principle, that space and time are a priori forms of perception, remains the same for Kant as it does for us. The actual dimensions of the universe are an a posteriori consideration--not something presupposed, but determined after the fact. So Kant's question, we may say, helps to explain how it is possible for us to think of the universe and all things in it on these terms.

The problem with Kant's question, as Kant himself well knew, was that moral judgments regarding human thought and action always take the form of an analytic a priori judgment. Persons can marshal all the evidence they want to 'prove' that something is good or bad, but at the end of the day we think things are good or bad because we think so. There is a 'subjective' element in a moral judgment that cannot be reduced to an objective state of affairs. Many reasons can be offered, for example, for why murder is wrong. Because another person's life ends much too soon. Because you will go to jail. Because it is not conducive to social harmony to be arbitrarily off-ing members of a community. And so on, and so forth. But all of these are synthetic a posteriori reasons, none of which are ultimately persuasive in every case. If, on the other hand, we say that murder is wrong because it is a violation of an intrinsic human right--namely, the right to life--then we have offered an analytic a priori reason. It is wrong to murder a person because it is wrong to murder a person.

The problem of moral judgments is actually a little more difficult than even Kant allowed. His question implicitly assumes that the human world can be divided into two separate worlds: 'the starry heavens above' (by which he meant the natural order of the world given in space and time) and 'the moral law within' (by which he meant something like a universally accessible, rationally determinable standard for moral conduct). This distinction creates a huge problem for moral judgment. Why? Moral judgment is applied to human thought and action, which is always and everywhere locatable in space and time.

Take the case of murder. Murder is a grossly immoral act against a person's body. Bodies are locatable in space and time. They just are. There is no way around it. There is no such thing as murder in the abstract. Jesus suggested that murder in one's heart is tantamount to actual murder, but this is not a prosecutable offence. The same goes for stealing, destroying property, defaming, and so on. These are all acts committed against the bodies of persons or 'bodies' in a person's possession.

So in the case of moral judgments regarding the specifically human body, you have this curious situation where divine self-sameness lives on in space and time. Kant doesn't account for it. His question, in fact, cannot account for it. We 'moderns,' who like to think like Kant in these matters and pretend there is a hard and fast distinction between facts and values, aren't able to identify precisely where the line between them lies either. We 'moderns' can all agree in very rough terms about what constitutes a scientific fact. But we disagree vehemently about how these facts relate to our values--and, more specifically, to which set of values.

People will always find reasons, of course, to talk past each other. The reasons they use today go back to Kant's critical question. And that may help to shed some light on the present state of public discussion.

Sunday, April 12, 2015

Being Modern

What is it to be modern? What, in other words, differentiates being modern from being pre-modern, i.e. being medieval or ancient, or being non-Western?

The standard answer given through the 19th and 20th centuries was that modernity went hand in hand with a trust in rationality and science, while medieval-ness and antiquity rested on faith and miracles. The Ancient Greek and Roman philosophers were an exception to the rule, insofar as their confidence in the nascent powers of human rationality anticipated the modern Enlightenment. They were reasonable, where the rest of the pre-modern (and non-Western) world was inveterately religious. Theravada Buddhism might also have been an exception, but that is a story for another day.

The picture of human history as an inevitable march towards a reasonable, secular conclusion suggested itself to most every 20th century Western thinker, regardless of whether they were secular or religious. The ideologies that dominated the 20th century were explicitly materialistic: Western capitalism and Eastern communism. And they impressed themselves on everything, even if only as a negative image. The most trenchant religious criticisms of a secular order to this day remain critiques of materialism: both in the naturalistic and economic senses of the term. The secular order denies that human life possesses a spiritual dimension and debases human interaction with the mindless pursuit of more things.

Then the Iranian Revolution happened in 1979. The event interrupted what seemed an otherwise smooth secular ascent. To many, it seemed as though history had reversed itself.

The resurgence of religious conservatisms and fundamentalisms around the world in the years following the Iranian Revolution forced scholars to reassess the standard answer. Modernity was not as secular as it once seemed. It too had its characteristic form of religion: one that tended to be intolerant, irrational, anti-scientific, and authoritarian. Of course, the extremities didn't manifest themselves everywhere. But the general form could easily be discerned: in an assertive American evangelicalism, Pentecostal revivalism in central and southern Africa, resurgent Islam in the Middle East and North Africa, Hindu nationalism in India, Buddhist insurgencies in Southeast Asia. The list goes on. One might also add, for example, atheist fundamentalism in Oxfordshire, England.

In his A Secular Age (2007), the Canadian Catholic philosopher Charles Taylor has suggested that our generic categories need to be rethought in the light of the resurgence of religion around the world. Since religion did not wither under the secular sun, but seems to have flourished, how modernity differs from the pre-modern (and non-Western) world requires a more nuanced explanation.

Taylor suggests that modernity differs from what came before in the proliferation of choices. Our pre-modern ancestors were born, lived their lives, and died rarely ever being confronted by a difference of opinion in matters divine. What difference of belief did exist was usually condemned as evil or heretical. The scale of life was small; the opportunities it afforded smaller. Whereas today we are born into a world that is saturated with a smorgasbord of religious options. Even the staunchly orthodox must now define their orthodoxy over against a plethora of choices that did not exist before. The scale of life has expanded exponentially; and with it, an almost infinite number of possible options.

I am more or less on board with Taylor's account of a secular age: defined not by the absence of religion, but by a plethora of choices about what or what not to believe in. I appreciate, for example, how Taylor's account encourages you to inhabit the mental spaces of other persons. Subtract the internet, television, telephones, newspapers, mass-produced books. Add a strong sense of life's uncertainty, whether the roof is going to hold up under the next storm, where the next meal is coming from. And yes, subsistence living without a whole lot of outside information does look a lot more confined. In such circumstances, it is not hard to imagine how it would be impossible not to believe in the gods or God. Conversely, if you add a world of information and relative material security, then a plethora of choices would seem to be the natural result.

Even so, in my estimation, Taylor's account doesn't go deep enough to get at what differentiates modernity from its pre-modern antecedents. My discomfort arises from his subtle insistence that there is something about our modernity that is fundamentally different, something that sets us apart from everyone who came before. It defines us as special, unique. Or, it is supposed to. Our democracy, science, technology, etc., have precipitated a radically new epistemic situation. These are supposed to fundamentally separate us from what came before.

This is not very likely. As shiny and new as our ideas about ourselves may seem to us, it is best to remind ourselves that everything is relative, every perspective is partial, and it will all be gone tomorrow. Put it this way: the more special we moderns think we are, the more we are fooling ourselves. We have extended human life, improved health care, sent men to the moon, unlocked the atom. But these constitute merely incremental improvements, which will never add up to anything radically new.

I propose that if we want to determine what essentially differentiates modernity from its pre-modern antecedents, our inquiry must begin by assuming how unexceptional we moderns actually are--in relation to each other, to our forebears, in the grand scheme of things.

If we do so, some rather obvious points of comparison suggest themselves. Ancient and medieval ways of thinking about things tend to be defined by a spatial orientation: up and down. God is above, the earth below. The king is above, the subjects below. Etc. Modernity--how we think about things--tends to be defined by a temporal orientation: past and future. Religion belongs to the past, secularity to the future. Monarchy to the past, democracy to the future. Etc.

These are ultimately not comparable, you might say. We know that there is no God above us in the spatial sense of the word. Yes, I grant the point. We will get to it in a minute. My contention is that the spatial and temporal orientations are comparable because they are essentially moral orientations. They denote how things ought to be. Earthly things ought to be subject to heavenly things, just as subjects ought to be 'subject' to the rule of kings. Religion ought to belong to the past because the future is secular, just as monarchy ought to belong to the past because the future belongs to much more egalitarian forms of democracy.

To illustrate what I mean, consider that the biblical texts repeatedly claim that God is 'enthroned above the circle of the earth.' Modern exegetes would love to regard statements that seem to place God 'up there' above the clouds as spiritual metaphors. But that is not what they were for the original audience. Statements to this effect can be found in most ancient cultures. The Ancient Greeks and Romans were no exception. Aristotle thought of reason in the human being as divine. In his On the Heavens, he also claims that the perfectly circular nature of reason was best exemplified in the perfectly circular motions of celestial objects: sun, moon, planets, and stars. Like the ancient Hebrews, the Roman encyclopedist Pliny the Elder thought of the heavens as a dome encompassing the earth. What was beyond the dome? Pliny did not think it fit for human beings to inquire. Human understanding was suited only to terrestrial things. The next time you are standing in an open field, throw your head back and look up. The sky looks like a shallow over-turned bowl: the 'firmament.' God is above, in the spatial sense, whatever you find within your field of vision.

On the other hand, modern Western thought after the Enlightenment becomes progressively more temporal in orientation. One of the most influential thinkers of the 19th and early 20th centuries, the German philosopher G.W.F. Hegel, argued that people formerly thought of God as 'above' and humanity as 'below,' but now they think of God as being everywhere and in everyone and everything. Christians, of course, believed that God revealed himself in a specific person: Jesus. Hegel believed the time had come when God could be found in every person. Karl Marx would later slightly alter the form of Hegel's argument to say that formerly people thought that kings and lords ruled by divine right and commoners had a duty to submit to their rulers as they would to God. Whereas now, Marx thought, people can see these claims for what they actually are: false consciousness. Social divisions between rulers and subjects--which, in the 19th century, meant capital and labour--could only continue to grow more and more intolerable. What humanity desperately needs (in Marx's estimation) is a classless society. Friedrich Nietzsche would take the argument one step further: each of us, individually, must strive to overcome every distinction imposed on us from the outside. God has died, and we must become over-men.

Examples of such outlooks, both from the ancient and medieval worlds and from the modern world, can be multiplied indefinitely. The appearances of these outlooks are many and various, but the essential spatial vs. temporal orientation remains roughly the same. The most recent versions of the modern outlook, for example, are generally evolutionary in character: they show us a world 14 billion years old, with an evolutionary history of life extending back 3.5 billion years. The question, What is it to be modern? concerns how one gets from the spatial orientation to the temporal orientation; i.e. how to get from thinking about the world in terms of up and down to thinking about it in terms of past and future.

Now, I grant that many 'causes' for how one gets from one to the other may be offered. My question asks about the necessary 'intellectual conditions.' The most basic requirement is that God cannot be thought of as being 'up there' in any literal spatial sense. We have sent men to the moon and satellites to the most distant planets in our solar system and beyond. No God anywhere to be found.

Once God is figuratively 'driven' from a literal place in space ('up there'), it becomes possible to conceive of space as (potentially) infinite. It becomes possible to conceive of the universe as tens of billions of light-years across, not just hundreds or maybe thousands of miles across. From here it is a short step to conceiving of time as (potentially) infinite as well. A spatially infinite world is not very interesting, since such a world never changes--at least not in any significant sense. It is always what it presently is. But we know that things change. The evidence is all around us--if only we know how to interpret it. If the world is spatially infinite, this line of argument goes, perhaps it is temporally infinite as well. If it is, then we should be able to figure out where things came from (in the past) and make some educated guesses about where they are going (in the future).

The transformation of these intellectual conditions began with Nicholas Copernicus and Galileo Galilei (16th and 17th centuries) and came to a provisional conclusion with Isaac Newton and Immanuel Kant (17th and 18th centuries). They made it possible for someone like Charles Darwin to come along in the 19th century and think the way that he does about The Origin of Species. Their influence was so pervasive, you will notice, that even my discussion here is framed by a temporal orientation: past, present, and future.

To be modern, then, is to think about the world in terms of before and after, and to be implicitly uncomfortable with thinking about it in terms of up and down. This is the point that Taylor, who thinks about the world in terms of past, present, and future, does not quite grasp.

If this is what it is to be modern, why does religion persist, and even thrive, in modernity? Isn't it obvious that all of that is behind us?

Actually, no. It is to misunderstand what religion is about to say that because God's existence cannot be demonstrated scientifically, religion must wither away. The clue to why religion will not die is found at the intersection of space and time: in the conscious human awareness of being situated here and now in a world filled with many other beings and things, including other persons. Religion has always been about what persons have made of their individual bodily lives: where the person comes from before they are born into the world, what they should and should not do while they are in the world (e.g. how people behave towards others in community), and what happens after they die.

Claims that seem to have been disproved by modern science, like the claim that God is literally above the sky, do not actually discredit religion. The moral meaning is that the God who 'sits enthroned above the earth' is ultimately inaccessible to everyone in the same way. No person, therefore, is in a position to claim to be naturally closer to God.

Religion will persist and thrive in modernity because religion gets at the one thing modern science cannot assign: moral value to a person's bodily life. Yes, modern science can understand how bodies work. It may even suggest how human life can be extended and improved. But its statements are always generic and empirical, never individual and moral. And it is the latter, not the former, that people require to make sense of their day-to-day interactions with other people.

Friday, April 10, 2015

What 'Good' are the Humanities?

You may have heard that the so-called S(cience) T(echnology) E(ngineering) M(ath) disciplines have taken the field of public discourse. The humanities (or liberal arts) have been routed, and are off licking their wounds.

This is more or less true. The persuasive arguments all seem to be on the side of the STEM advocates. The humanities still have their distinguished representatives, certainly. These doyens of cultured inquiry can be counted on to crawl out of the woodwork to prognosticate on the general decline of the humanities.

They are fun to read and disagree with.

For example, the English literary critic Terry Eagleton has very recently published an article in The Chronicle of Higher Education titled ‘The Slow Death of the University.’ Unfortunately the article is behind a pay-wall, or I would encourage you to follow the link. The conclusion he shepherds his readers towards is that he has begun to ask prospective students whether they can afford his very expensive insights into Wordsworth and Eliot, Kafka and Proust. Unlike his own seven years of virtually free education, his students must now contend with the fact that what were formerly public expenses are now downloaded onto private persons.

Ever the tragic humanist, Eagleton sees the double standard for what it is. Students pay for an increasingly large part of their education because, the argument goes, they are the primary beneficiaries. The professoriate’s hands are increasingly tied by departmental budgetary requirements implemented — wait for it, wait for it — by the same people who themselves benefited from a largely free public education.

The discourse is telling. If the humanities are no longer reasonably accessible to the broad spectrum of humanity, then what? Are the finer things in life to be cultivated only by those with the private means to study the finer things in life? Must humanity be measured against the Almighty Dollar (the euro, the Japanese yen, or the Chinese yuan)?

The question is a reasonable one to ask. The most common defense of the humanities that you come across today is that they encourage critical, creative thinking. Fareed Zakaria made precisely this argument in ‘Why America’s obsession with STEM education is dangerous.’ Why has America traditionally been successful? What has made it wealthy beyond belief? Zakaria’s answer: its bottomless capacity to envision solutions to problems that haven’t even been encountered yet, which is nurtured by a tradition of humanistic inquiry.

The difficulty with this sort of defense, as others have pointed out, is that it buys into the essential premise driving the STEM push: that what is ultimately valuable in education is its ability to generate wealth.

Not a small number of classical texts would temper our embrace of a fundamentally materialistic motivation. The Gospels record Jesus saying that you can either serve God or Money (but certainly not both), with the not-so-subtle implication that something is debased in humanity when Money takes the place of God. Karl Marx adapted Hegel’s analysis of the master-slave dialectic to the realities of modern capitalism to show how the un-virtuous cycles of wealth accumulation can only end in violent social upheaval.

Such is the interminable dialectic of intellectual debate. One party says yes, and offers its reasons. The other party says no, and offers its reasons. Proponents of the STEM disciplines say the humanities are superfluous. They demand that the doyens of cultural inquiry justify lavish expenditures on their abstruse intellectual pursuits. For their part, the doyens protest that the money is not what studying the humanities is about. It is, in fact, to misunderstand what is valuable in studying the humanities.

Which would be what, exactly? What do the doyens say is valuable in studying the humanities? That’s where, as you can see with the examples of Eagleton and Zakaria, things get a bit muddled.

A more considered perspective would point out that the humanities have been perpetually in crisis. Sort of like humanity itself is perpetually in crisis. It may seem to gray-hairs that things were better in the past. But this is only a function of how the world can seem a much brighter place, filled with much more possibility, in one’s youth. The truth of the matter is that every generation must inevitably confront its demons — which is not to downplay the seriousness of the contemporary challenge to humanistic study, but rather to say, in the words of the arch-anti-humanist Friedrich Nietzsche, that it is human, all too human.

I will permit myself a personal aside at this point. My entire (which is to say, very short) academic life has been filled with a fascination for historical narratives. Not with historical facts, at least not for their own sake, but with how people write history. The bigger the narrative, the better. And it has seemed to me, as I have worked to make sense of how other people write history, that the humanities have been dying a long death since the beginning of the 20th century. There have naturally been better days and worse days in the course of that drawn-out affair. Though the end was never in doubt.

The beginning of the 20th century was witness to a couple of fairly esoteric discussions about the nature of humanistic study. The most revealing of these concerned whether history was an art or a science. Did writing history require the brilliant flash of an artist’s inspiration to organize its materials? Or was there an objective basis against which all historical claims could be measured? The better responses to this question always decided on the study of history being a bit of both.

The difficulty is that if you ask the question in the either/or form, you have already conceded the field. The binary logic, which relentlessly cuts the human world into smaller bits, may take a few decades to work itself out. But it will be worked out. And when it is, people will be left wondering what happened to the study of the humanities.

To illustrate what I mean, I will make use of J.M. Roberts’ famous The New Penguin History of the World (6th ed.; 2013). The illustration will require a bit of engagement on the part of the reader. I will make use of the Socratic method and ask whether thus and such makes sense of the world as the reader understands it.

Given the amount of abuse the humanities have taken recently, I think it safe to assume that most people think of the study of history as an ‘empirical’ science. Flipping through Roberts' History of the World, then, the discussion of ‘Peking man,’ who lived approximately 600,000 years ago in present-day China (page 126), the ‘Gupta era of Indian civilization,’ which ran from the fourth to the sixth centuries A.D./C.E. and produced such classic texts as the Kama Sutra (pages 307–8), and the spread of democratic institutions through the Anglo-American world (Australia, Great Britain, Canada, New Zealand, and the United States) in the latter part of the 19th and early part of the 20th centuries (page 770) will each be understood to constitute a different ‘field’ of study.

Many more examples can be offered than there are pages in Roberts' book. The general conclusion is that a specialist in any one of these fields will not be a specialist in any of the others. The archaeologist working on ‘Peking man’ won't have much to say to the Indologist working on the Kama Sutra. Nor will specialists in the 19th century Anglo-American world have much to say about the earlier periods. There is now simply too much material to assimilate. This is simply the way things are. This is also, by the way, the way that Ph.D. programs are designed. Students are trained to specialize.

Of course, there is a measure of truth in the drive towards specialization. But to give specialists the upper hand lands the study of history (and, by extension, the humanities) in a mess of trouble.

How so? When the rest of the world stands on the outside and looks in on a historian’s field of specialization, the rest of the world is quite naturally inclined to ask what any of it has to do with them. Why should they be paying money out of the public purse to fund what is so obviously a self-indulgent inquiry? What does the study of something way back there and then have to do with the here and now? Fair point.

Let’s take a second look at our brief perusal of Roberts' History of the World. Do you notice anything odd about flipping through the pages of Roberts' book? Did it strike you at all as strange that you were crossing many millennia to alight in China more than 600,000 years ago? Or that you leapt halfway around the world to go from Gupta India to the 19th century Anglo-American world (to the North Atlantic or the South Pacific)?

If you are one of those who think of history as a field of empirical study, which can be sub-divided into many distinct fields of study, you probably didn’t notice anything. Now that I point it out, it is more than likely that you are wondering what the ultimate point could be.

Contrast this with your actual experience of the world. You always seem to find yourself in one here and one now. Unlike the experience of paging through Roberts' book, you are constrained to move from place to place with considerable difficulty, and with a much greater expenditure of energy and resources; and you are unable to move about through time except at the single plodding pace at which time allows you to move.

The difference between the two perspectives can be given a bit more definition. In the former, you move across space and time with relative ease, following paths through history laid down by Roberts. In the latter, you are able to move through local space with relative ease (say, to walk to a shelf and pick up a book, or to travel to work and home from work, etc.). You also move through time, but with the caveat that you must do so at a pace that time itself sets. The former perspective envisions the world as a seamless spatiotemporal extension. The latter would seem to divide the world into your experience of things as they are in space and your experience of things as they are in time.

Perhaps most importantly, your own body (i.e. yourself) only appears in the latter perspective. You don’t figure into Roberts' narrative. It will be possible, in very rough terms, to find references to the place where you presently are, but not to the time when you presently are. I am presently writing these lines, for example, in my apartment in downtown Montreal, Quebec. Roberts describes in broad terms the settlement of the St. Lawrence Valley by the French through the 16th and 17th centuries (page 655). This helps me situate my present situation in a much longer temporal frame of reference. But that is different from saying I figure into the narrative. Very obviously I don’t. I am the one flipping through the pages of the book. I am the one whose mind wanders up and down the corridors of history. (So are you, by the way.)

What do these two different perspectives say about the value of the humanities? Many things, actually. One especially significant thing should be brought to the fore. When history (and, by extension, the humanities) are studied as fields of empirical study, what goes missing are the bodies of persons. Scholars go looking for facts or ideas. They don’t go looking for, and so don’t see, actual people.

So quite naturally the rest of the world looks at what they are doing and asks why they should be paying money out of the public purse to fund what can only appear a self-indulgent inquiry.

Of course, the advocates of the STEM disciplines also don’t see people, but they can be absolved. The study of the STEM disciplines was never about the study of persons as persons.

The difficulty in this whole business is that the value of the humanities is intrinsically self-referential. The study of the humanities ought to remind people of what they actually are — and, by implication, what they are not. People need to be reminded from time to time that other people are not reducible to a dollar value. And I am tempted to conclude that when the humanities lose their focus on this truth, they should also lose their funding.

Tuesday, March 24, 2015

Is Belief in God Immoral?

I have an ongoing conversation with a friend about belief in God, specifically whether there is anything 'redeemable' in it. We post things on Facebook in each other's general direction, baiting the other to respond. Last evening he posted an opinion piece by Michael Ruse published in today's New York Times, 'Why God is a Moral Issue.' The article examines why the so-called 'New Atheists' think belief in God is immoral.

So I'll bite. I'll respond to the general question of whether belief in God is moral or not.

There was a time, not so long ago, when it was well-nigh impossible to doubt the existence of divinity--whatever name that might go by--or to doubt that divinity laid down moral norms for how persons were supposed to think and act.

This is not to say that people didn't mock the gods or behave badly. They did. It is to say, on the other hand, that when they mocked the gods or behaved badly, they did not have an alternative system of belief like atheism to justify mocking the gods or behaving badly.

There were divine laws. People did what people do, and disobeyed them. That's what laws are for, and why they are promulgated in the first place: because people disobey them, and need to be 'nudged' back into conformity with the law. The term atheism, in fact, did not always stand for a system of belief devoid of all but a negative reference to divinity. To be an atheist formerly meant to willfully break God's law; i.e. an atheist was someone who acted as if there were no God.

Sometime in the course of the last 400 years, what was before impossible to doubt was increasingly called into question--until, in the 19th and 20th centuries, whole sectors of Western societies ceased to profess a belief in God. This process gradually gained speed, especially in the last half of the 20th century, after two world wars, the growth of widespread material prosperity, and the increasing success of scientific investigation to explain the natural world.

What changed? A good many things, to be sure. I want to focus on how it is possible for belief in God to become immoral.

The usual story is that our ability to believe in God (as well as the human soul, angels, miracles, etc.) was gradually undermined by the explanatory successes of the natural sciences. The essential points of this story have been debated at very great length. The story has certain merits. It makes sense of things from a certain vantage. But it also hides as much as it reveals.

The usual story requires a bit of nuance. One of the essential things that sets our modern scientific age apart from previous 'religious' ages is that claims about whether something exists or not and whether something is good or not came to be regarded as essentially different.

I am referring to the so-called difference between the is and the ought. Prior to our modern scientific age, it would have been difficult to ultimately distinguish between them. In our modern scientific age, distinguishing between them has become almost second nature.

The following example may sound a bit simplistic, but it represents the main point. To a modern scientific age, the fact that gravity causes an object to fall is neither good nor bad. It just is. The question of what is good or bad pertains solely to human desires and motivations, and not to the natural world considered in and of itself. Whereas prior to our modern scientific age, it was good that objects fell to the ground because that is what objects are supposed to do. That is what God created them to do.

To illustrate the difference a little more concretely, to a modern scientific age, it is not assumed that a person's physical make-up is determinative of who/what they are. A person might be born with the genitalia of a woman, but discover in the course of their lives that they identify as a man. This possibility only becomes conceivable in our modern scientific age. Whereas prior to our modern scientific age, a person who is born with the physical make-up of a woman is a woman is a woman, and a person who is born with the physical make-up of a man is a man is a man. Other possibilities deviate from the moral norm for being women and being men.

The difference between the two perspectives is the difference between night and day. The switch between them doesn't happen all at once; and doesn't happen to entire societies all at once. But once the switch is made, it is very difficult to imagine the world on other terms. Even religious believers get caught up in the switch. The result is that strange phenomenon we call religious fundamentalism, which insists on a literal interpretation of the scriptures, but uses the language of modern science to justify its position.

Michael Ruse's basic argument is that it is immoral to believe in something for which there is little or no evidence. I agree with the sentiment of the claim, but not the claim itself. It is immoral to believe contrary to the evidence. But this does not settle the very important matter of what actually counts as evidence.

The one thing that Ruse and his 'New Atheist' compatriots cannot seem to wrap their heads around is that belief in God ultimately has nothing to do with the scientific evidence.

If one accepts that natural scientific inquiry leads to genuine knowledge--and I do--then one has already accepted that the switch described above has taken place. Questions of morality will now be judged on terms other than the scientific evidence.

Think about it carefully and follow the argument to its conclusion. If it were immoral to believe anything except that which conformed to the latest and best scientific information, then questions of morality must ultimately be submitted to the scientist in the laboratory or the statistician at a computer for determination. The most moral persons, in this picture, would be the scientist and the statistician, since they are the ones drawing the conclusions and disseminating the information. (If that doesn't deserve an LOL, I don't know what does.)

This makes very little sense; not in the least because most people are too busy living their lives, making sure they have a roof over their heads, clothes on their backs, and food on the table, to consult the latest scientific journals.

Scientific conclusions are always contingent and provisional--and, perhaps more importantly, merely descriptive. They do not, and cannot, have the character of a moral judgment, which asks whether this course of action is better or worse than another course of action.

All of the scientific evidence in the world, in fact, will not produce a definitive moral conclusion. Science describes a universe that is billions of years old and tens of billions of light-years across. It describes increasingly complex natural and biological organisms. It even deigns to analyze the complexities of human relationships. But for all its prescience, science never tells me whether it is better for me to do this or that, or to believe this or that.

Moral judgments, in this sense, differ from scientific conclusions like my body differs from the entire spatiotemporal extent of the universe. The difference is not a trivial one. Whether humanity evolved from 'lower' forms of life, for example, says nothing definitively about whether I should or should not do harm to the next person in this or that situation. The best an evolutionary explanation can do is offer reasons for why I did or did not harm the next person in this or that situation.

On the specific question about whether belief in God is immoral, Ruse is wrong to suggest it is immoral because the scientific evidence suggests that God does not exist. He is wrong because the evidence can suggest nothing of the sort.