No shit, there I was (see Libby, I do listen to you), standing in a field, giving serious consideration to an upright concrete pipe. It was old enough to qualify as a historic resource (markings on it indicated that it was erected in the early 1950s), but I am pretty sure that the authors of the National Historic Preservation Act weren't thinking of concrete pipes when they wrote the legislation. Still, it met the criteria that require federal agencies to consider impacts to it. Yet here we were in the construction phase of the project, and this particular historic bit of field hardware had never been included in the assessment of historic impacts, because the archaeologist leading this segment of the survey had not taken note of it.
Now, if you were to see this pipe, it would likely escape your notice. It's a vent for an irrigation system, the sort that is ubiquitous throughout California's agricultural lands. It's the sort of thing that simply forms part of the background of most of California (remember, despite its reputation as an urban, beach-covered state, the vast majority of California's land is inland and rural). And I think that this is probably why the thing was overlooked by the people who should have at least made note of it.
Archaeologists are similar to medical doctors: we all get a basic course of training in anthropology* and archaeological methods, but then we gain a specialty. We first divide into historic and prehistoric archaeologists: the historic archaeologists study people who left behind written records (so, in California, they focus on the region after the missions began to be established in the late 18th century) and prehistoric archaeologists focus on people who had no writing system and therefore no written history (so, the Native Californians before the Spanish showed up)**.
And that is likely where the problem came from in this case. Prehistoric archaeologists tend to be very cognizant of the likelihood of prehistoric artifacts in a given area. They can identify historic materials, and are sometimes very knowledgeable about them, but they tend not to notice the historic materials that blend into the landscape. So, a historic irrigation feature in an agricultural area may be missed by a prehistoric archaeologist precisely because there is nothing odd about it; it is exactly what one would expect to find in the area.
To be fair, this goes both ways. I have met historic archaeologists who have missed carved rock art that any prehistoric archaeologist would immediately pick up on, as such rock art sometimes looks like it might be a weird erosion pattern. In one particularly alarming case, I met a historic archaeologist who didn't know how to identify a bedrock mortar*** - one of the most common types of prehistoric archaeological features.
Still, when you are doing fieldwork, it is good to be aware of your limitations, and to be extra vigilant because of them.
*With the exception of many Classical Archaeologists, who forgo training in anthropology and focus instead on history.
**It gets more complicated, though. Most North American prehistoric archaeologists can identify prehistoric sites and materials across North America, and most North American historic archaeologists can do likewise for their time period. But, nonetheless, there are regional divisions. I am focused on California, for example, so while I can work competently in other regions, I will have more to learn and will have to work harder to do a good job than I would in California. And this is without even getting into material specialties - some people are specialized in stone tool analysis, others in faunal bone identification, others in geomorphology, etc. etc. etc.
***This was made especially annoying by the fact that he insisted on sending me to the field with "a more experienced archaeologist" who actually had less experience than me, kept missing bedrock mortars as well as other types of sites, showed up severely hung over and couldn't even manage to get to the correct location for work. What's more, if the guy who had sent us out had bothered to listen to anything that I had ever said to him, he'd have known that I am a prehistoric archaeologist with many years of Californian field experience. In other words, he'd have known that I am perfectly capable of identifying a bedrock mortar.
Friday, January 28, 2011
Archaeological Blind Spots
Earlier today, I was reading a report on excavations performed in the San Gabriel Valley, which is one of the many valleys in the hills of Los Angeles County. It was noted in the report that there is not much information available in the archaeological literature regarding these inland valleys, and that most of the research done in Los Angeles County is focused on the coast and the Mojave Desert. In fact, if you were to look only at the published work, you'd get the impression that people lived in the desert and on the coast, and nowhere else in southern California, which is not actually true.
When I was working on my Master's Thesis, I encountered a similar issue in the Santa Barbara Channel area. Almost all of the archaeological research on the area had been performed on the coast or, more often, on the Channel Islands. There was one excavation report that was only a few years old, one MA thesis that was ten years old, and a smattering of MA theses, doctoral dissertations, and articles ranging in age from 25 to 40 years old, almost all of which contained woefully out-of-date information. In the Santa Barbara area, the focus on the Channel Islands is so pronounced that a recent book that is allegedly about the socio-political complexity of the Chumash (the native peoples of the area) has only one chapter on the mainland (focused exclusively on the coast, naturally) and otherwise deals entirely with the islands, despite the fact that the majority of the people lived on the mainland and that the interior is a pretty big chunk of real estate that is often vaguely invoked in discussions of the complexity of the area.
Talking with colleagues in other parts of the U.S., and even in other countries, it becomes apparent that this is a common problem: certain areas get the bulk of the attention, while other areas which are arguably equally important are simply ignored or glossed over. The end result is that our view of the history and prehistory of certain parts of the world is rather skewed.
Sometimes the skewing is due to simple site availability: if sites are located on private land, they are harder or even impossible to access legally; if an area is heavily urbanized, there may be very few sites left, and so only those still in fair condition can be examined; and in some places, the geology itself may be against you and the deposition of sediments during floods may cover sites and make them either highly difficult to find or else prohibitively difficult to access and study.
Other times, the skewing is due to site condition: We all like pristine sites, but in many places these are hard to come by. The California Channel Islands lack the burrowing rodents that are the bane of the mainland archaeologist's existence, and have also not been subjected to the urban development that mainland sites often have. As a result, they are very appealing...I would say too appealing, as this is one of the factors that has resulted in archaeologists extrapolating conclusions drawn about the very specialized island environment onto the mainland of North America. Likewise, archaeologists may be dissuaded from studying a site if it has been subject to ground disturbance due to construction, but that site, even with the disturbances, may still hold important information.
Another factor that shouldn't be overlooked is the location. Sites on the coast or in the mountains offer beautiful scenery. Sites in an inland valley may be hot and unpleasant. While most archaeologists won't admit that this influences their decisions in public, it tends to come out when we are talking to each other.
It should also be considered that people's research questions are influenced by what has come before. If your goal is to study the socio-political complexity of the Chumash, the fact that nearly every article and book on the subject mentions the importance of the inland and interior peoples as trade partners within the complex system doesn't change the fact that almost all of the research has been done on the Channel Islands or on large coastal villages, and so that is where you go to do such work. For example, another recent book on the subject of Chumash complexity did a decent job of placing more attention on the mainland - where most of the people actually lived - but still largely glossed over everything that wasn't on the coast.
And, of course, we shouldn't forget that most archaeologists want to find spectacular sites. Most, if not all, of us want to see cool stuff and work in neat places. And some (though thankfully not all) funding sources are more likely to fund research at known spectacular sites, even if the research might be better directed elsewhere. As a result, the inclinations of the researcher and occasionally the money may sway research towards places known to have cool sites even when the active research questions would be better served by looking into new or less-explored areas.
One added issue is that many of these less-spectacular locations have been subject to archaeological work via CRM archaeology - the sort of consulting work that I do - but this data frequently doesn't make it into the published research. Part of the reason is likely that, in my experience, many academic archaeologists remain suspicious of CRM as a line of employment for those with "weak minds and strong backs" - an assessment that is inaccurate. However, many academic archaeologists are free of such prejudices, and yet find themselves confronted by the problem of the "grey literature" - the volumes of data and observations generated by CRM archaeologists, which are deposited in information centers and archives, and which are difficult to navigate even for those of us who know how they are organized. As a result, work has been done in areas that are under-represented in the published record, but it is never brought in to alleviate the problem because it remains unknown or hard to find.
I have focused on southern California because that is the area that I know the most about, but this problem is not unique to the region. I have spoken at conferences with archaeologists from across North America and even other parts of the world, and this issue has come up on nearly every continent. I don't know how to solve the problem, but simply being aware of it is often the first step.
Wednesday, January 26, 2011
Heroes, Villains, and History
I was listening to a podcast yesterday in which the host described the experience of watching Dr. Martin Luther King Jr. go from being a very real person who was a political leader of the radical variety (remember, in the 1950s and 1960s his ideas, though widely accepted five decades later, were radical) to his posthumous transformation into a white-washed hero figure. The irony is that by turning him into this heroic figure, we have stripped Dr. King of those things that truly made him a great man, as well as the warts that made him a real human being. We hear about the non-violent approach to civil rights, complete with the March on Washington and his famous "I Have a Dream" speech, but we don't hear as much about his opposition to the War in Vietnam, his criticisms of capitalism, or the fact that factions within the federal government considered him a dangerous radical. We hear that there were those amongst the racists who wanted to kill him, but we don't often hear that there were those within the FBI who wanted him "neutralized," despite the fact that this is now public record readily accessible to anyone who is willing to look. We also strip away many of his flaws - he was human and, like all of us, had them - leaving not a historical figure of great importance but an idol who could be knocked off of his pedestal should we learn of his shortcomings.
The fact of the matter is that Dr. King was someone who took controversial positions with regard to race and civil rights, yes, but also with regard to a variety of other issues. He was someone who faced the wrath not only of the rednecks, but of the G-Men. He was someone who had many flaws, some of which even the less-uptight among us would find upsetting, and yet he accomplished great things at considerable personal risk. In other words, he was a great man, but he was a man, and it is when you realize that he had the same frailties as the rest of us that the truly amazing nature of what he did becomes clear. Otherwise he's just another face without depth in our public history pantheon; he ceases to have been a human and simply becomes another set of names and dates.
Dr. King is, of course, only one of many historical figures to whom this has happened. We all know of George Washington's position as a Revolutionary War general and as the first president of the United States under the Constitution, but few know of his early military failures or his rather loose social life, or, for that matter, of his troubles as a political leader after becoming president. It's hard to come out of elementary school without knowing the inspirational story of how Helen Keller learned to communicate with the outside world, but few people know that she was a suffragist, a pacifist, a socialist, a birth control supporter, and one of the founders of the ACLU. And yet, without these pieces of information, our understandings of these people and their times are incomplete. George Washington's shortcomings as a husband and his military defeats are part of the frame that shows the sort of man he was and why it was he, as opposed to others, who was suited to doing what he did. Helen Keller's early life may be the stuff of Hallmark Hall of Fame inspirational movies, but it is her later life, and the fact that she was an articulate person brave enough to stake out positions that were controversial at best and unpopular to the point of marking those who held them as targets for violence, that made her an important historical figure - indeed, if someone only knows of her early experience with Anne Sullivan, one would be fair in asking why she is so famous.
This is a process called heroification. I first heard the term when reading the book History on Trial, about the 1990s political fight over establishing history standards for public schools. It is a process by which actual people who achieved undeniably great things are transformed in the public consciousness from the difficult, flawed, and often controversial figures that they really were into one-dimensional idols stripped of much of their true meaning, for the purpose of providing the public with an inspirational, and decidedly safer, figure. This process happens through a variety of different, concurrent activities. Those who feel that people (especially children) need clean-cut heroes begin shaping the mythology of the individual by writing books and articles and by using their influence on school boards to push a cleaned-up version of the person's life history. The process of producing school curricula and textbooks often results in controversial elements of history being downplayed or left out (which is why so many people are shocked when they find out that history was rather different, and grittier, than they had been taught in school). When these individuals die, there are few who wish to "speak ill of the dead," and so much of the later journalism written about them either leaves out difficult information or gives it only lip service while focusing on their accomplishments (witness what happened after Reagan's death, when very few public figures, including journalists, were willing to spend much time talking about the divisive nature of many of Reagan's policies - yes, there was some discussion, but it was drowned out by the eulogizing). Much of the public, wanting to see things in terms of black and white, will latch on to the hero figure, and any criticism of the individual becomes a mark of ill character*. Over time, sometimes a very short period of time, all of these factors together detach the individual's name from the real person and attach it to a calcified and inhuman false memory. Although portions of this are directed by people with a media and/or political interest, much of it just happens because, for whatever reason, it is one of the things that we humans do: we want our historical figures to be safe. It provides us with heroes to worship, but it also robs us of the ability to truly believe that we might be like these flawless men and women of the past.
Of course, there is also an equal and opposite process, which you could call vilification. Here, rather than the uncontroversially good being accentuated, the uncontroversially bad is accentuated. Even the difficult but true fact that the bad could sometimes serve a greater good is cast away. The accomplishments of these individuals are forgotten, and their faults remembered. As a result, we have Nixon being synonymous with Watergate, with few people (outside of those trying, just as falsely, to turn him into a hero) remembering that he had many truly great accomplishments. Stalin is remembered, legitimately, as an amoral and paranoid dictator, but few people outside of Eastern Europe realize that Stalin's brutality is arguably the thing that stopped Hitler's conquests, and that the Allied Powers would have fought longer, and possibly lost, without him.
There are numerous problems with this reduction of historical figures. The primary problem is that it is simply not true, and I believe that the truth matters. All of the individuals that we cast as heroes and villains were real people, who had flaws and good points, who did good and bad - even if the good outweighed the bad or vice-versa. When we forget this, we forget the reality of our species and our history.
Another problem, perhaps a more practical one, is that in removing these figures from humanity, we reduce their ability to inspire or warn us. It is easy to look at the caricature of Dr. King, know that one cannot match it, and perhaps not bother to try when we weigh our own flaws. But when we know of his flaws, we learn that our own flaws shouldn't hold us back from doing great things, as they didn't hold him back. At the same time, if we view Stalin and Hitler as inhuman monsters, we fail to see that they were humans like the rest of us, that their societies were reacting to pressures both from within and without, and that we and our society have the same potential for evil if we are not watchful. A true history of the rise of these two would show how they used and manipulated their political systems, their national narratives and mythologies, and their populaces for their own ends, and would reveal that there are politicians throughout the world capable of doing the same under the right conditions - some of them in the places where we are most certain such things could never occur.
A third problem is that this "boiling down" of historical figures contributes to the names/dates approach to history and robs us of a true understanding of the past both by making such an understanding hard to get, and by making history appear dull and uninteresting to those who might benefit from it.
In the end, nobody truly benefits from this sort of history. It may be superficially satisfying and make these figures seem safer, but in truth it leaves us worse off, failing to provide real inspirational figures and failing to warn us of real dangers. It may make history seem more grand, but it truly makes it duller. And, ultimately, it just isn't the truth.
*I fully expect that I will get emails claiming that I am trying to tear down Dr. King or Ms. Keller, which is ironic since, knowing about their controversial positions and their flaws, I respect them more now than I did when I saw them as historical idols. Before, to me, they were just names I was forced to learn, now they are individuals who achieved great things.
Monday, January 24, 2011
Reservoir Sites
I have spent a surprisingly large part of my time as a professional archaeologist dealing with hydroelectric projects. Most hydroelectric projects fall under the jurisdiction of the Federal Energy Regulatory Commission, better known as FERC, and environmental review of these projects must meet the standards of both FERC and whatever agency is responsible for the land on which the project facilities sit (usually, though not always, the Forest Service). As hydroelectric facilities typically involve dams and reservoirs, this means that I get to spend a lot of time thinking about the effects of reservoirs on archaeological sites.
The effects of submersion vary due both to the type of site in question and to where in the reservoir it ends up. For example, I have just finished assessing the condition of a historic-era site, the remains of an old building now long-since fallen over, and it's doing all right. The reservoir water covers part of the site for a couple of months every year, which results in sediments being brought to and taken from the site by the changing reservoir levels (reservoirs fill and are emptied at various times throughout the year) and by the impact of waves. In this location, a species of grass has grown in that manages to survive the occasional inundation, and it is helping to hold more of the sediment in place than would otherwise be the case. The building remains are concrete, and while the reservoir is wearing them down, it is doing so slowly, and most of the material appears to be more-or-less intact.
By contrast, a prehistoric site nearby sits just a few feet lower on its slope, so the reservoir spends a bit more time inundating it. All of the vegetation has been removed, leaving the ground bare, and the site soils have been leached of some of their organic materials and generally eroded away as the reservoir rises and lowers and as waves hit the site. An area that was once midden* is now barren sand, and fragments of flaked stone debitage**, as well as some of the obsidian tools whose manufacture created the debitage, have been moved around (and some likely removed) by the water - although it should be noted that in some cases the reservoir actually deposits sand on sites and prevents this kind of damage. A combination of soaking, drying out, and pounding action from the waves has caused the bedrock mortars to erode, to the point that some of the mortar cups are barely recognizable anymore. Materials that can be important to figuring out how old a site is or what plants were used, such as charcoal, tend to be moved and/or destroyed by the water.
There are other sites in the vicinity that are always submerged below the reservoir. The effects of the reservoir on these sites are open to question, but based on examples from other places it is likely that much of their organic material (midden, pollens, animal bone, etc.) has been damaged, while materials such as flaked stone and bedrock features may have been preserved under layers of deposited sediments (though it is also possible that these have been damaged by subsurface water movement). Regardless of a site's preservation or lack thereof, submersion effectively removes it from both archaeological study and use by Native Americans descended from the original site occupants.
Impacts don't just come from the reservoirs themselves, though. The presence of a reservoir often results in recreation as swimmers, fishers, and boaters flock to the area. The effects of this can include damage to sites due to boat propellers (I was recently on a site where the bedrock mortars had marks from where boat propellers had hit them during high-water levels), artifacts being inadvertently destroyed by people walking on them (or in some cases doing things like building campfires on top of site features), and looting due to more people being aware of the site's presence because they are around it more often. All of this gets intensified when, as often happens, communities appear around reservoirs due to people wanting lakefront houses.
So, reservoirs have detrimental effects on archaeological sites, to be certain. However, this is a classic example of the trade-offs that one sees when working in environmental consulting. Reservoirs are expensive to create and maintain, and they are always built for a reason, the supply of water and the generation of hydroelectric power being two big ones. The need for water - for drinking and for agriculture - is obvious. Hydroelectric facilities have their own set of environmental problems, but they also help to reduce the need/desire for the construction of coal- and petroleum-burning power plants. So, reservoirs are typically a case where archaeology has had to suffer for what society at large has deemed a greater good. It's this sort of trade-off that has caused many of my academic colleagues to reject cultural resource management as a career choice, but it's also these sorts of trade-offs that point to the need for someone to speak for these resources so that they aren't forgotten in the constellation of other issues surrounding land modification, even if that simply means that sites are excavated prior to being inundated.
*Middens are dark, nutrient-rich soils that contain artifacts - the remains of prehistoric trash dumps, and treasure-houses of archaeological information.
**The rock that is chipped away when making flaked stone tools.
One of the many reservoirs that I have worked around.
Another view of the same reservoir.
Friday, January 21, 2011
Philosophers Without Fieldwork
I remember the late-night college conversations, those in which someone would make a claim about the basics of all human behavior and then back it up by pointing out that their claim must be true because it was a view espoused by Friedrich Nietzsche, or Adam Smith, or Karl Marx, or Immanuel Kant, or...well, you get the point. I had thought that this was the sort of thing that people grew out of shortly after they graduated from college, but then I shared an apartment with a friend who would frequently make statements about the state of humanity and felt that his positions were unassailable because they were espoused by Thomas Hobbes or John Locke (the philosopher, not the guy from Lost); I had family members become heavily influenced by Libertarianism, and cite Adam Smith's arguments in favor of capitalism as if they were unquestionably proven fact; and I lived in Santa Cruz, so of course I knew more than a few folks who were utterly convinced that anything that Karl Marx wrote was a revelation from on high*.
What was so strange to me about this is that all of these philosophers were laying out what amounted to untested or semi-tested hypotheses about human behavior. Yes, the philosophers themselves didn't phrase them as hypotheses - they were usually assertions about "the truth" - but the fact of the matter is that they were testable notions about which data could be gathered, and so they were hypotheses, whether intended to be or not. These philosophers laid the intellectual groundwork for what would become modern economics, sociology, anthropology, and the social sciences in general. But we wouldn't take the writings of a 17th century "natural philosopher" as the final word on modern chemistry, so why do we so often take the writings of 17th, 18th, and 19th century social and political philosophers as the final word on human nature, despite the fact that we now have over a century's worth of data on the subject?
I think there are a couple of answers for that. One is that these philosophers all had agendas that are still at play in today's politics**. Adam Smith argued in favor of liberalizing economies for the good of society, Karl Marx insisted that the downtrodden needed to be considered and not oppressed, John Locke argued for a model of government in which the people chose to be governed rather than having it imposed upon them, and so on. All of these ideas are still at play in modern global politics, and when one can pull out the writings of someone who was clearly intelligent and articulate, and whose name is considered near-sacred and above reproach, it serves well to further that agenda.
Also, all of these individuals had brilliant insights that actually did improve the general understanding of how humans function. Smith had an understanding of abstract economics that nobody before him could claim. Locke understood the structure of social hierarchies in the context of government in a way that few others of or before his time could rival. Marx grasped how the working class was impacted by the conditions of the Industrial Revolution in a way that few others of his time could, or cared to. But all of them were also very wrong on certain points. To take the examples of Smith and Marx: Smith failed to sufficiently account for the ability of those who were particularly successful in aggregating money and power via capitalist economics to manipulate the markets artificially, and he failed to grasp that the factors that influence the marketplace are so varied that models based on rational actors will ultimately fail to describe the marketplace in sufficient detail. Marx, by contrast, failed to grasp that, while class identity and power relations are important to the formation of group and individual identity, they are not necessarily the most important factors and will only motivate most people so far***. And this is not the end of the places where these two got things wrong. And outside of those two, don't get me started on why the "Social Contract" is a nice idea that nonetheless defines the relationship between government and the governed only in the vaguest and most abstract of terms.
But, remember, these people were trailblazers. They were dealing with issues that others had not, and naturally they were going to get points wrong. That's not a knock against them; it's just the way that things work - the early thinkers will always get things wrong, which sets later people working on the same issues looking for better answers. As complex as the models of society constructed by these people were, the reality is much more complex, and people simply don't consistently behave in the ways that one would predict from the writings of these philosophers, even if their writings do hold many truthful elements. These men were writing based on their personal experiences and their own insights, but they did so with a relative dearth of fieldwork and data collection (yes, I know many of them traveled extensively to gather information, but they generally did so in a less systematic and coherent way than a professional researcher would), and much of the information that has been gathered in fields as diverse as anthropology and neuroscience since their writing has required the modification or even abandonment of their conclusions.
Ultimately, this freedom from disconfirming or complicating information may be part of the reason why these philosophers remain so popular. If you can boil government down to the social contract, or economics down to abstract market forces, or power relations down to a struggle between identifiable classes, then the world seems clearer and easier to navigate. When I have had conversations with devotees of any of these thinkers and pointed out the flaws in their arguments in light of the information now available, I have found that they simply re-state the argument without bothering to account for the new information. It becomes something of a matter of faith, and the advocates of these positions tend to simply ignore or rationalize anything that doesn't quite work out (I remember one occasion when a Marxist told me that I wouldn't understand Marxism unless I was a Marxist...which is pretty much what many religious people tell me when trying to recruit me). That the arguments seem internally consistent and robust is more important than whether or not they actually describe the real world.
All of which would be amusing but irrelevant if it weren't for the fact that these philosophers are still cited and remain influential in modern political discourse. The end result is that, very often, politicians and voters put an ideology shaped by these early philosophers ahead of reality when it comes to shaping policy. It creates a huge potential for a lowering tide that grounds all boats.
*Cue the Marxists pointing out that Marx didn't believe in divine revelation in 3...2...1...
**Of course, you could argue that the reason, at least in part, that these issues are still in play in politics is that these individuals were so articulate in putting them forth in the first place.
***It's worth noting that the societies that really took Marx's ideas to heart were not the industrial nations where Marx thought that the workers would rise up, but rather the primarily agrarian nations where there was a much starker difference between the agrarian workers and the urbanized landowners and politicians. Of course, Smith wrote largely about an agrarian society (remember, he was pre-industrial revolution), but his ideas were largely taken up by industrial nations, so there ya' go.
What was so strange to me about this is that all of these philosophers were laying out what amounted to untested or semi-tested hypotheses about human behavior. Yes, the philosophers themselves didn't phrase them as hypotheses, they were usually assertions about "the truth", but the fact of the matter is that they were essentially testable notions regarding which data could be gathered, and so they were hypotheses, whether intended to be or not. These philosophers laid intellectual groundwork for what would become modern economics, sociology, anthropology, and the modern social sciences in general. But we wouldn't take the writings of a 17th century "natural philosopher" as the final word on modern chemistry, so why do we so often take the writings of 17th, 18th, and 19th century social and political philosophers as the final word on human nature, despite the fact that we now have over a century's worth of data on the subject?
I think there's a couple of answers for that. One is that these philosophers all had agendas that are still at play in today's politics**. Adam Smith argued in favor of liberalizing economies for the good of society, Karl Marx was insisting that the downtrodden needed to be considered and not oppressed, John Locke argued for a model of government in which the people chose to be governed rather than having it imposed upon them, and so on. All of these ideas are still at play in modern global politics, and when one can pull out the writings of someone who was clearly intelligent and articulate, and who's name is considered near-sacred and above reproach, it serves well to further that agenda.
Also, all of these individuals had brilliant insights that actually did improve the general understanding of how humans function. Smith had an understanding of abstract economics that nobody before him could claim. Locke understood the structure of social hierarchies in the context of government in a way that few others of or before his time could rival. Marx grasped how the working class was impacted by the conditions of the Industrial Revolution in a way that few others of his time could, or cared to. But all of them were also very wrong on certain points. To take the examples of Smith and Marx: Smith failed to sufficiently account for the ability of those who were particularly successful in aggregating money and power via capitalist economics to manipulate the markets artificially, and he failed to grasp that the factors that influence the marketplace are so varied that models based on rational actors will ultimately fail to describe the marketplace in sufficient detail. Marx, by contrast, failed to grasp that, while class identity and power relations are important to the formation of group and individual identity, they are not necessarily the most important factors and will only motivate most people so far***. And this is not the end of the places where these two got things wrong. And outside of those two, don't get me started on why the "Social Contract" of is a nice idea but only abstractly defines the nature of government and the governed in a very vague way.
But, remember, these people were trailblazers. They were dealing with issues that others had not, and naturally they're going to get points wrong. That's not a knock against them, it's just the way that things work - the early thinkers will always get things wrong, which sets later people working on the same issues looking for better answers. As complex as the models of society constructed by these people were, the reality is much more complex, and people simply don't consistently behave in the ways that one would predict from the writings of these philosophers, even if their writings do hold many truthful elements. These men were writing based on their personal experiences and their own insights, but they did so with a relative dearth if fieldwork and data collection (yes, I know many of them traveled extensively to gather information, but they generally did so in a less systematic and coherent way than a professional researcher would), and much of the information that has been gathered in fields as diverse as anthropology and neuroscience since their writings has required the modification or even abandonment of their results.
Ultimately, the fact that these thinkers wrote without that disconfirming or modifying information may be part of the reason why they remain so popular. If you can ultimately boil government down to the social contract, or economics down to abstract market forces, or power relations down to a struggle between identifiable classes, then the world seems clearer and easier to navigate. When I have had conversations with devotees of any of these people and pointed out the flaws in their arguments in light of the information now available, I have found that they simply re-state the argument without bothering to account for the new information. It becomes something of a matter of faith, and the advocates of these positions will tend to simply ignore or rationalize anything that doesn't quite work out (I remember one occasion when a Marxist told me that I wouldn't understand Marxism unless I was a Marxist...which is pretty much what many religious people tell me when trying to recruit me). That the arguments seem internally consistent and robust becomes more important than whether or not they actually describe the real world.
All of which would be amusing but irrelevant if it weren't for the fact that these philosophers are still cited and remain influential in modern political discourse. The end result is that, very often, politicians and voters put an ideology shaped by these early philosophers ahead of reality when it comes to shaping policy. It creates a huge potential for a lowering tide that grounds all boats.
*Cue the Marxists pointing out that Marx didn't believe in divine revelation in 3...2...1...
**Of course, you could argue that the reason, at least in part, that these issues are still in play in politics is that these individuals were so articulate in putting them forth in the first place.
***It's worth noting that the societies that really took Marx's ideas to heart were not the industrial nations where Marx thought that the workers would rise up, but rather the primarily agrarian nations where there was a much starker difference between the agrarian workers and the urbanized landowners and politicians. Of course, Smith wrote largely about an agrarian society (remember, he was pre-industrial revolution), but his ideas were largely taken up by industrial nations, so there ya' go.
Thursday, January 20, 2011
Television, Hope, and a Doctor
I think that television gets a bad rap. Now, don't get me wrong, I do agree with those who hold that the average American (and likely the average person from many industrialized nations) watches far more television than is healthy, but this does not mean that television itself is a great evil.
Thinking back to my childhood, there was one show that was very important to me. It was a science fiction/fantasy show, and it was fun, but that's not what made it important to me. What made it important was that its lead character was one that I could identify with, and it gave me a sense of hope that was extremely important given where I was living and who I grew up around. The show was Doctor Who. The original run of the series aired in the United States on PBS stations during the 1980s, and it wasn't the slickly produced show that people viewing the current run of the series would know. It was (like many BBC productions) a low-budget affair (at least as compared to the American equivalents) with abundant over-acting, silly costumes, laughable sets, and special effects that, as I once read somewhere, inspire the sort of affection that one might have for a three-legged dog.
Here's a couple of images to give you a taste:
So, why did a show that truly was lacking, when compared to the production values and acting of most American science fiction shows, become so important to a kid growing up in California in the 1980s? The answer to that question lies in two places: who I was as a child, and the lead character of the show.
I grew up in Salida, California. It's a small town that is now essentially a suburb of the city of Modesto, but was a bit more isolated when I was a kid. California's Central Valley, in which Salida is situated, was and remains one of the world's agricultural powerhouses. Local industries were largely based around agriculture and the transport of agricultural products, and Salida had a very blue collar and rural character when I was a kid*. There was nothing wrong with this, in and of itself, but it meant that I didn't fit in.
I was a brainy kid who enjoyed reading and enjoyed imagination, but who lacked coordination or muscle strength. I was the definition of unathletic, and was uninterested in sports. Given that sports were important to many of the people - adult and child alike - where I lived, this marked me as an outsider. Worse was the fact that the community in which I lived didn't much value education past the high school level (I have written about this before), and so my interest in reading, and the fact that I was just as interested in reading non-fiction as fiction, was one factor that led to me being branded a "nerd."
Another factor was the fact that I just didn't get how to socialize. I found it difficult to understand subjects that were popular (football, professional wrestling, pop music), which made it difficult to talk about them. I had, and often exercised, the potential to be very boring by talking about topics that interested me and none of my peers. I couldn't comprehend many of the more subtle social cues that the other kids were learning and employing, and therefore any attempt that I made to relate to them fell flat, and often led to ridicule. Even something as seemingly simple as dressing to fit in failed, as I failed to grasp the often small differences that made one article of clothing popular and another undesirable. There are a lot of potential explanations for this, but I'm not interested in getting into them here. The end result, though, is that I was the oddball, and in the neighborhood in which I lived, this meant that I was constantly tormented by bullies, and was even bullied and tormented by the kids who were known for being kind and gentle to pretty much everybody else. In the town in which I lived, few adults noticed this, and most of the few who did (including many school personnel) ignored it, and a few even approved of and encouraged it (I remember one occasion when the father of one of the worst bullies - one who actually made death threats to me - informed my father that if I was incapable of fighting off six other kids at once, then I deserved to get beaten up).
At least once a week I came home from school with bruises and cuts from getting beaten up by a group of kids, and having objects including metal shards, rocks, and broken glass thrown at me as I walked home from school was not uncommon. Even trying to isolate myself didn't help, as I would be sought out by kids wanting to cause me grief - to the point that I could be sitting in my yard at home and have someone walk up and start throwing punches at me. With the exception of two kids who were genuinely my friends, most of those I called friends were just the kids who let me hang out with them so that they would have someone to pick on. It got to the point that, by the time I was 13, I assumed that all friendly gestures were simply a set-up for me to be embarrassed.
My parents did what they could, but their help ranged from simply being a sympathetic ear, to trying to get uninterested school administrators to do something, to encouraging me to "try to fit in" while not listening when I tried to explain that that was precisely what I had been doing.
And so when I discovered Doctor Who, it was a validation.
You see, the lead character, known only as The Doctor, was a time-traveling scientist-adventurer from another world given to weird eccentricities. His eccentricities weren't due solely to his being an alien; he was a definite oddball even when he met with members of his own species. But the Doctor was never a "fish out of water." He always knew what he was doing, and how he was doing it, and if other people took issue with his general weirdness, well, that was their problem, not his. The Doctor had all of the traits of the nerd or geek - strong intellect, esoteric interests, an aversion to violence, a tendency to monologue on matters of interest to nobody but him, and fashion sense so atrocious as to become weirdly brilliant (see the photo below) - and yet he was neither a nerd nor a geek. He had transcended such labels. And unlike the brainy characters of so many television shows, the Doctor was not the sidekick or technical support to an action hero; he was the hero, and he often put a stop to the violence and chaos caused by "action hero" type characters when they appeared. He used his intellect, sense of compassion, and strong sense of justice to put an end to the villains, and he did so while never carrying a weapon and only occasionally resorting to so much as hitting anyone (and most of that during the time that the character was in his third incarnation - oh yeah, the Doctor could change bodies rather than dying, which allowed the production staff to hire a new actor when the old one decided to go).
All of this stood in contrast to the rest of the pop culture to which I was exposed. The Doctor was usually played by a middle-aged actor in fairly average physical condition, quite the contrast to the stars of the action movies that were popular at the time. The Doctor always solved problems via his intellect and eschewed violence (the only time in the original series that I can recall him picking up a gun with an intention to fire it, it was made very clear that he was disgusted with himself for doing so and that he couldn't bring himself to use it in the end), in contrast to most of the other media role-models offered up to boys. Although the Doctor was willing to work with a team, he was just as willing to go it alone, unlike most other oddball fictional characters (contrast this with another geek culture icon, Mr. Spock, who longs to fit in and reaches his apex as a member of a team following Captain Kirk).
This was the message that I needed to hear: "don't worry about not fitting in, the problem isn't you, and just because you are the outsider doesn't mean that you can't be brilliant." Seeing this on a television show made for a broad audience (albeit a British rather than American audience), and a television show that I quickly learned had been running for several decades, meant that enough actors, producers, television executives, etc. etc. - and most importantly, enough of an audience - found the notion of the heroic eccentric outsider plausible or compelling enough for the show to have been successful. This meant, I reasoned, that as alone as I usually felt, I really wasn't - there were plenty of other people out there like me, and I would find them eventually.
So, really, television did one very positive thing for me. It introduced me to the concept that I was not alone, that I was not doomed to be the butt of ridicule and a literal punching bag forever, and that I might even come into my own and become someone of importance, however small and lonely I often felt. I needed that as a child, and so when I hear that television is somehow inherently bad for children, I often think that the person making the claim is cherry-picking their evidence. Yes, a lot of kids watch too much television to the exclusion of other things. But even the silliest of shows can be a window into a wider world that a child may need to see.
*This began to change in the late 80s, and is only somewhat true now. Lower land costs in the Central Valley during the 80s and 90s led to many Silicon Valley professionals buying houses and commuting between 4 and 6 hours on their daily round trip. This changed a lot of things about the area, including the general character of the cities and towns, and they have become rather "yuppified" as compared to what they were when I was a kid.
Wednesday, January 19, 2011
Degrees and Career
I used to think that I would get a PhD. I wanted a career in academics, and that requires a PhD*. Then I went to graduate school. I had entered UC Santa Barbara's MA program intending to switch over to the PhD program if I thought I could handle it after the first year (the first two years for the MA and PhD students were essentially identical, except that the PhD students generally had more funding opportunities). At the end of the first year, I knew I could handle the PhD program, but I no longer wanted a PhD. I saw how the faculty had to structure their lives because of their work demands, and decided that I didn't want to be an academic. Generally, if one is an archaeologist and one is not an academic, then one works in cultural resource management, and here I am today.
In order to be successful in academics, you have to live, breathe, eat, and drink your subject. Especially early in the career, when you're seeking tenure and trying to establish yourself as a researcher and a teacher, you can expect to take low-paying, lousy jobs and work to the exclusion of other parts of your life. There are people who thrive in this sort of setting, who do truly brilliant work and love the research enough to make the sacrifices. I'm not one of them. I like a steady paycheck, I like having numerous hobbies outside of work. I like having a personal life in which I get to spend significant amounts of time with my partner. In short, I like having the sort of life that an academic career would make difficult, at least until I was well into my career and had a tenured position.
What's more, there are damn few academic jobs. The last time I bothered to look up the statistics, some time around 2005, there was something in the neighborhood of 10 PhDs granted per job opening per year. And you weren't just competing for that job with the other nine people who had earned a degree that year, you were also competing with the nine people who hadn't gotten a job the year before, the nine the year before that, the nine before that, etc. By contrast, in the year before I had finished my MA, I had received an average of one unsolicited job offer per month, and I could expect to work an average of 40-50 hours per week at most of these jobs.
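To make that arithmetic concrete, here's a rough back-of-the-envelope sketch in Python. The ten-new-PhDs-per-one-opening figure is just my memory of those circa-2005 numbers, and the assumption that nobody ever gives up and leaves the pool is obviously an oversimplification - treat this as an illustration of the snowballing backlog, not a real model of the job market:

# A crude, purely illustrative model of the academic job market described above.
# The figures (10 new PhDs per year, 1 opening per year) are rough recollections,
# and the no-attrition assumption is mine.

def applicant_pool(years, new_phds_per_year=10, openings_per_year=1):
    """Return, for each year, how many people are competing for that year's opening(s)."""
    pool = 0
    competing_each_year = []
    for _ in range(years):
        pool += new_phds_per_year         # this year's graduates join the backlog
        competing_each_year.append(pool)  # everyone still unhired competes
        pool -= openings_per_year         # the lucky few land the job
    return competing_each_year

print(applicant_pool(5))  # [10, 19, 28, 37, 46] -- the backlog just keeps growing

Run it out five years and the crowd competing for each single opening has gone from 10 to 46, and that is with the generous assumption of an opening every single year.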
Now, you can certainly get a CRM job with a PhD, and many successful and excellent CRM archaeologists have such degrees; I have had the good fortune to work with many of them. But there are few, if any, CRM jobs that require that the job holder have a PhD. The reason for this is simple - the CRM industry was created by law and regulation, and while those regulations vary a bit from government agency to government agency, none of them require a Principal Investigator (the head-honcho archaeologist) to have a degree higher than an MA, which means that no other position within the hierarchy is required to have anything higher than an MA. In fact, it is usually assumed by CRM firms (rather unjustly, I might add) that if someone has a PhD, then it means that they want to be an academic but couldn't find a job, and so many hiring managers will pass up someone with a PhD in favor of someone with an MA.
Yeah, CRM was the career path for me, and an MA was the degree for me.
Strangely, not everyone saw it this way. My advisor, knowing the academic and CRM job markets, certainly was supportive of my decision. Brian Fagan, whose last year of teaching was my first year as a grad student, was encouraging of both me and the other fellow in the grad program looking at a CRM career. The people I knew who worked in CRM were all extremely encouraging, including those who were themselves working towards PhDs. However, a few other grad students and faculty members had a different attitude.
I have spoken with other people who have attended grad schools with both MA and PhD programs, and they have told me stories of just how obnoxious the PhD students could be towards the MA students. This was not my experience. While there were a small number of snobbish PhD students who treated us MA folks badly, the majority considered us colleagues and treated us no differently than the other grad students, which was appropriate seeing as how the only real difference between the MA and PhD programs was that PhD students wrote a dissertation that was potentially longer and more complex than the standard MA thesis (I say potentially because several MA students, myself included, wrote MA theses that were longer and more complex than was required of the PhD dissertations).
No, the attitude that we encountered was one that was intended to be encouraging - and the good intention was appreciated - but which grew tiresome. Most of the PhD students wanted careers in academia, and were willing to put up with all that such a career entailed. Like many people who are passionate about something, they assumed that other interested people are just as passionate as they are. In truth, we MA students differed from the PhD students not in terms of our merit, but in that we were simply less passionate about our topic - we were not willing to make, or interested in making, the necessary sacrifices to have an academic career, but this was something that most of the nascent academics had a hard time understanding (though, again, there were a few who did get it). Of course, it didn't help that my department had a large number of people who were interested in Peruvian temples, and as such had a hard time grasping that there are, in fact, people who are genuinely interested in hunter-gatherer archaeology and not interested in massive temples.
But, again, this was at least usually an encouraging attitude. Stranger were the sorts of things that I got from family members, primarily my mother. My father granted that I had a pretty good idea of what my career required and expected that I knew what I was doing. My sisters generally got that, but occasionally would get confused and pester me about my "stopping with just a Masters degree**." My mother seems to have now accepted that I am in a line of work where an MA is the ideal degree, but from the time I started graduate school until about a year after I finished, she would routinely express her disappointment that I wasn't earning a PhD, and kept insisting that I would be better off with a PhD than an MA even though this was demonstrably false. Her basic logic was that a PhD would open up academic jobs, and therefore give me a wider career field. Of course, as already stated, the academic job market is so terrible that opening it up only marginally widens the job search field, and having a PhD often limits the willingness of CRM firms to hire a person, so having a PhD may actually reduce employment opportunities overall. It took six years of explaining this to my mother before she finally got it.
Ultimately, this seemed to be the basic pattern: people assume that a PhD is more prestigious (arguably true) and therefore will lead to greater career success (demonstrably false), and most laypeople assume that all or most archaeologists work in a research/academic setting (about as far from true as you can get and still be in the same galaxy). As a result, when people hear what I do for a living, they start addressing me as "Dr.," and when I point out that I hold an MA and not a PhD, they become confused and tend to ask why I dropped out of my PhD program, never thinking that someone in my line would actually seek an MA.
So it goes.
*With the exception of teaching at community colleges, which is actually a pretty sweet gig, but for which full-time jobs are increasingly rare.
**I always found the "you only have a Masters degree" attitude to be bizarre. It's a difficult degree to get, and only a small portion of the population has it, but because people expect someone in my line of work to have a PhD (even though most of us don't), there's this weird tendency for people outside of my profession to try to shame archaeologists who have Masters degrees.
Monday, January 17, 2011
Me and Noah's Ark
One morning, about four years ago, I arrived at work, turned on my computer, and checked my email to find that two friends of mine, both field archaeologists who had worked for me, had each sent me an email around the same time the previous night wishing me luck on my search for Noah's Ark. Knowing these two, I just figured that they were a few bottles in and had decided that it would be funny to send me emails wishing me well on an imaginary Indiana Jones-esque expedition.
As the day went on, I received further emails from these two, and it eventually became clear that both of them were referring to something and assuming that I knew what the hell they were talking about. So, naturally, I sent them emails asking them just what the hell they were talking about. As it turns out, they had received emails from my University of California account that requested donations in order to sponsor an expedition to search for Noah's Ark...an expedition that the emails stated I was going on.
Obviously, I wasn't keen on someone sending out emails to my colleagues, allegedly from me, asking for money for an idiotic pseudo-scientific pursuit. So, I went to a university laboratory computer that evening and followed the link, which led to a Paypal page. I worked out who the owner of the account was, and found their web page (it is no longer up, so I cannot link to it here). They were, indeed, planning a trip in search of Noah's Ark.
I wrote to them and asked if they could explain why my email account was hacked and an email sent out with a link to their Paypal account claiming that I was working for them. A few hours later, I received an email from the fellow behind the website, the would-be explorer, who wrote that he had no idea how the emails had been sent out, but that he would look into it. I responded asking that he keep his word on that. I never heard back.
To this day, I'm not sure what happened. It's possible that the fellow is telling the truth - that he's deluded but honest, and someone hacked my account as a prank on him and me (and maybe a few other archaeologists).
On the other hand, a promised expedition to find Noah's Ark would draw donations from many people who were looking for physical evidence to support their beliefs, and as such would be a smooth operation for a con man. In that case, I wouldn't put it past someone to hack the email of actual archaeologists in an attempt to con the faithful out of their cash.
In the end, I never did find out what, precisely, was going on. So it goes.
Friday, January 14, 2011
Acorn Economics
For those who attended school in California, there is a better than even chance of being exposed to the fact that acorns were an important part of the diet of the Native Californians up to European colonization (and for many people, up through the early 20th century). When you begin looking into it, the reasons are pretty simple: the acorns are storeable (some species keep for up to a couple of years after being picked), predictable (you know where they'll be year after year), semi-stable (unlike many other food resources, when you go out looking for acorns, you will come back with some, even if some years were leaner than others), and very nutritious. However, while acorns are the emblematic food of California, they weren't used with any regularity until approximately 4,000 to 3,000 years ago in much of California, or 7,000 to 9,000 years after California was first populated (though there is evidence that acorns were used intensively as early as 8-9,000 years ago in some locations, especially in northwest California). Given the advantages of acorns this seems pretty strange, until you consider the cost of acorns in terms of time and labor.
Picking acorns, peeling/shelling them, pounding them into mush, and then leaching them of tannic acid is a long, laborious process. It can consume the efforts of an entire community, and can also lead to greater social rigidity due both to the semi-fixed locations for gathering and the social organization needed for efficient acorn gathering and processing. To put it in modern economic terms, I recently learned that a gallon of acorn mush made by members of a local tribe costs $50 - and that is after it is made using the more efficient technology of blenders and cheesecloth rather than stone mortars and woven baskets to process the acorn. So, we have a staple food, something that you would eat as part of your most basic diet, requiring so much work that with modern labor-saving devices it prices out at $50 a gallon.
Knowing that, it's fair to ask why people began eating acorns at all. Of course, like most things in archaeology, we don't know for certain, but, at least in California, we have some pretty good ideas.
The dominant idea regarding this, and I think it's generally a pretty sound one, is that the answer lies in a basic economic principle. Acorns are an expensive resource in that they take so much time, energy, and specific forms of social organization to gather and process, so it seems reasonable to expect that they would not be used if a cheaper (that is, more convenient) resource were available - say, if something cost $10 per gallon as opposed to $50 a gallon. Now, I usually argue against over-extending these sorts of simplistic economic arguments - if they were as strong as many advocates of Adam Smith's writings believe, then things such as designer jeans and urban SUVs would not exist - but they are useful if thought of as general trends rather than hard-and-fast laws of commerce. Proceeding this way, we can see that, while there are many, many exceptions, as a general rule it is true that when presented with two options for feeding a society, people tend to choose the easier/less expensive option* provided that it produces comparable nutritional value. In the case of acorns, there are numerous grass seeds, tubers, and other such basic foods that can serve much the same purpose and that are significantly easier to gather and process, and in pre-acorn sites we see very few mortars but many, many milling basins and milling slicks of the sort that would be used for processing such plants. Over time, however, these milling tools, while never completely going away, begin to lose favor as compared to the mortar, which is good for processing oily seeds such as acorns.
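If it helps to see the logic laid out, here is a toy sketch in Python of the sort of cost-per-return comparison the model assumes people were implicitly making. Every number in it is invented for the sake of illustration - these are not real figures for acorns, grass seeds, or tubers, just placeholders showing how an "expensive" staple only wins out when the cheaper options can no longer do the job:

# Toy illustration of the resource-choice logic described above.
# All numbers are invented for the example; they are NOT real figures
# for acorn, grass seed, or tuber gathering and processing.

resources = {
    # name: (hours of gathering + processing per batch, calories per batch)
    "grass seed": (3.0, 4000),
    "tubers":     (4.0, 4500),
    "acorns":     (12.0, 9000),
}

def cost_per_calorie(hours, calories):
    """Handling cost expressed as hours of labor per 1,000 calories returned."""
    return hours / (calories / 1000)

# Rank resources from cheapest to most expensive per calorie returned.
ranked = sorted(resources.items(), key=lambda kv: cost_per_calorie(*kv[1]))
for name, (hours, cals) in ranked:
    print(f"{name}: {cost_per_calorie(hours, cals):.2f} hours per 1,000 kcal")

Ranked that way, the expensive resource sits at the bottom of the list, and the model's prediction is simply that people don't move down the list until something forces them to.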
There are a lot of explanations for what happened, and given that acorns became the staple food at different times in different parts of California, I am of the opinion that several different things happened in several different places. In some places, the population may have grown sufficiently that it outstripped the capacity of the convenient resources, forcing people to use the more difficult ones. In other places, ecological change may have resulted in plant die-off that made the previously used resources more difficult to obtain or not sufficient to support the population. In still others, growing populations (and perhaps growing territorialism) may have resulted in less seasonal mobility and the need to use more local resources to make up for the fact that travel to resources was no longer feasible.
I could go on for a while, but I think you get the point - changes occurred which either made the previous staple foods more expensive in terms of time and labor involved in obtaining and processing them, or else rendered them insufficient for providing for the needs of the people depending on them. Generally the archaeological evidence seems to be consistent with this model - though it is one that is rather damnably difficult to test - but every new archaeologist looking at it teases out new and interesting elements. While most of us working in California would agree to the overall use of the economic model here, there are probably as many permutations of it as there are people looking at it.
*In case you're trying to think of an exception and are stuck, consider the number of foods that we eat that are actually quite expensive in terms of either money or time to prepare or obtain, but which we nonetheless consume whenever we get the chance because of intangibles such as taste or social prestige (filet mignon is a wildly impractical but delicious food, while I find it hard to imagine that anyone likes caviar enough to justify the price tag unless you take into account the social prestige that comes from having and serving it). Also, consider that many more difficult/expensive foods are consumed for reasons of ideology rather than reasons of economics, which explains the success of Whole Foods and similar stores. These types of behaviors all make sense, but only if you allow that economics is only one of many things that humans consider when making their choices.
Tuesday, January 11, 2011
Arizona Shootings and Jumping to Conclusions
You have likely heard about Arizona Congresswoman Gabrielle Giffords having been the target of a shooting that also left six other victims dead, including Federal Judge John Roll and a 9-year-old girl. The Congresswoman is in the hospital, not in good shape, but may pull through. In the immediate aftermath, much has been made of the fact that the Congresswoman is a Democrat in a state in which the Tea Party holds a good deal of sway, and that the apparent shooter posted politically-oriented videos online (though it should be noted that he apparently also produced a number in which he ranted about grammar for no apparent reason), and this has led to people, including the local sheriff, arguing that the increasingly toxic and divorced-from-reality political rhetoric in the U.S. was a factor in the shooting.
I understand why people are feeling this way, but I think they're looking for an answer that doesn't exist. While the current nature of political rhetoric is toxic to us as a nation, based on what I have read, I rather sincerely doubt that it had all that much to do with this particular crime. What little information has been released on the shooter (or, I suppose I should say shooting suspect, as he has not yet been convicted), Jared Loughner, indicates that while he would go off on rants similar in nature and content to what one hears from many Tea Party members, he is likely mentally ill. Perhaps political rhetoric directed him in a way that he would not have otherwise gone, and perhaps it did not. At this point, all of the shouting about the role that Fox News and right-wing talk radio played in the crime is premature*. We know little about this individual, and less about his motives.
Still, the narrative seems to be forming: the right-wing radio and television networks are dangerous, and must be stopped before they kill again! I have heard it repeated, and it might become part of the mythology surrounding this shooting. Interestingly, the only outlet that seems to be stepping back and placing an actual attempt to make sense of this ahead of sensationalistic finger-pointing or counter-recriminations is the allegedly "hard-left" National Public Radio.
I am reminded of the Columbine shootings. In the days following the shootings, we heard about how the shooters, Eric Harris and Dylan Klebold, were loners, how they had been taunted and picked on and ostracized, how they were members of a group of students known as the "trench coat mafia" and part of the Goth scene, how their interest in role-playing games such as Vampire: The Masquerade led to their nihilistic view of life, and how their enjoyment of the music of Marilyn Manson fueled their sense of rage. If you ask most people about this shooting, you will hear some or all of this repeated as fact, and even most journalists who covered the event now remember these details.
But none of it was true. The two had a close circle of friends and were not ostracized. Their taste in music and entertainment was not what early media reports claimed. They were not members of the Goth sub-culture. The "trench coat mafia" had long since graduated before these two went on their attack. They may have been occasional victims of bullies, but neither seems to have been the main target of anyone (and there may be some reason to think that Harris was a bully). One book, Columbine, argues that Harris appears to have been a sociopath, and that Klebold was so desperate for Harris's approval that he was ready to go along with whatever the other wanted. The narrative is wrong; it was based on supposition and a need to find something, anything, to blame.
And it seems that the same thing is now happening in Arizona.
Did rhetoric and the frankly abhorrent claims spewed by people such as Hannity, Limbaugh, and O'Reilly play a role in the shooting? Perhaps, but we simply don't know yet. Was the shooting part of an actual political assassination? It's possible, such things do happen, but, again, we don't have any real information, and claiming that it was when information is lacking says more about the beliefs of the claimant than about the motives of the criminal. When one is confronted with the, frankly, anti-reality, demonizing, fear-mongering, and vitriolic statements put forward by much of the political punditry, it is perfectly understandable that we want to blame them, and there have even been those who warned that this sort of rhetoric would eventually lead to pointless violence. So, I understand the urge, but the fact remains that there is little evidence.
There is an on-going investigation, and part of the investigation is focusing ont he question of whether there was a conspiracy and if the crime had political motives. But until the investigation concludes and the prosecution begins to make public it's case, we should avoid coming to any conclusions. We know little about the shooter himself, and even less about whether or not other people were involved in the shooting. This could be little more than a mentally ill man acting out of insanity, or it could be something that reflects poorly on where our national politics are headed, or it could be something else entirely. We simply don't know much right now, and to claim that we do is an act of ignorance.
*And on the off-chance that you think I'm taking the side of the pundits, try looking through some of the other entries on thei blog in which I discuss the media or politics - I have no love for these people, but that doesn't mean that I am willing to blame them for a crime rather than look for the real cause.
Monday, January 10, 2011
Bereavement Leave
Yesterday morning, there was a death in my family. While it wasn't expected, it wasn't a surprise either. The family member, my grandmother, was elderly and her health had been flagging for several years - though she remained healthy enough to live on her own up to the very end. She died in her house, in her own bed, apparently in her sleep, which seems a good way to go. Because of this, I may or may not end up helping family with funeral arrangements and associated activities, so my writing on this blog might flag over the next week or so.
It has been interesting to see how everyone describes their feelings and how they are coming to terms with this. The religious members of my family talk about her being in a "better place" and look forward to seeing her again, while I hold out no such hopes and simply mourn the loss.
It is common for me to hear my fellow atheists deride belief in the afterlife as a fear-motivated notion from people who wish to never die. That may be true for some, but I don't think it's that simple for many people. I know that while I do not believe in an afterlife myself, I do find the idea attractive not because I fear death, but because I miss those who have died. It's not that I want to live forever, it's that I very much want to see other people again. Unfortunately, wanting to believe something has never been sufficient for me to make myself believe it.
At any rate, I have other entries I am working on, and I will get back to a semi-normal posting schedule (I might even have one this week), but if nothing posts for a little bit, that's the reason why.
Friday, January 7, 2011
This Location Intentionally Left Blank
A frustration of archaeology is the fact that while we can easily identify places where people left things, we can't always identify places where they intentionally didn't. A place that is devoid of artifacts or features because it was a forbidden place, a sacred space, or a contested boundary between two potentially hostile groups of people looks pretty much the same as a place that is devoid of artifacts or features because people just didn't bother going there out of general mundane disinterest.
To be certain, there are places where we can tell that a lack of features or artifacts is an anomaly - a lack of archaeological sites around a spring in an otherwise arid environment is pretty strange and is likely to grab our attention. But a lack of sites around a stream in an area where there are several other sources of water? Well, that could be intentional avoidance or the intentional clean-up of artifacts, or it could just be that nobody bothered to go to the location because, hey, it's just another stream.
This becomes important because of the basic purpose of archaeology: to investigate and explain the past of humanity based on the material record. Through ethnography and history, we know that there are places where people did not leave materials because either the place was actively avoided or because the area was intentionally not "polluted" with artifacts and other materials. Knowing where these places are is valuable because it can help us tease out information regarding the ways in which humans interacted with their environments, and in cases where areas were left empty for social reasons (such as the previously mentioned borders between two groups) how they interacted with each other.
Which leads to the obvious question: What's an archaeologist to do about this? There are many answers, some better than others.
One answer is to try to determine what areas would have been left empty based on some set of criteria. At its best, this method produces predictive models based on resource distribution, known site locations, and criteria relevant to the area based on what has been previously learned, and then looks for places where sites should be, but aren't. At worst, you get new-agey bullshit like this. But even at its best, all that this approach can do is tell us that a given place has a higher likelihood of having a site than not; it can't tell us whether its lack of sites is due to intentional disuse or simply a bit of random chance.
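To make the predictive-model idea a little more concrete, here is a deliberately toy sketch of the approach in Python. Everything in it - the grid cells, the distance-to-water and slope variables, the weights, and the threshold - is invented for illustration; a real model would be built from regional survey data and tested statistically. The point is only to show the logic: score locations on environmental criteria, then flag the "blanks" where the score says a site ought to be but none has been recorded.

```python
# Toy sketch of an archaeological predictive model. All variables, weights,
# and thresholds below are hypothetical and exist only to illustrate the logic.

from dataclasses import dataclass


@dataclass
class Cell:
    cell_id: str              # survey grid cell identifier (hypothetical)
    dist_to_water_km: float   # distance to nearest reliable water source
    slope_pct: float          # average slope of the cell
    has_known_site: bool      # whether survey has recorded a site here


def site_likelihood(cell: Cell) -> float:
    """Toy likelihood score in [0, 1]: flat ground near water scores high."""
    water_score = max(0.0, 1.0 - cell.dist_to_water_km / 5.0)  # decays over 5 km
    slope_score = max(0.0, 1.0 - cell.slope_pct / 30.0)        # decays over 30% slope
    return 0.7 * water_score + 0.3 * slope_score               # arbitrary weights


def flag_anomalous_blanks(cells, threshold=0.6):
    """Return cells where the model says a site 'should' be, but none is recorded."""
    return [c for c in cells
            if site_likelihood(c) >= threshold and not c.has_known_site]


if __name__ == "__main__":
    survey_area = [
        Cell("A1", dist_to_water_km=0.2, slope_pct=3.0,  has_known_site=True),
        Cell("A2", dist_to_water_km=0.4, slope_pct=5.0,  has_known_site=False),
        Cell("B1", dist_to_water_km=4.5, slope_pct=25.0, has_known_site=False),
    ]
    for cell in flag_anomalous_blanks(survey_area):
        print(f"{cell.cell_id}: predicted likelihood "
              f"{site_likelihood(cell):.2f}, but no recorded site")
```

Even in this cartoon version, the limitation described above is visible: the model can flag cell A2 as an anomalous blank, but it has no way of saying whether that blank reflects intentional avoidance or mere chance.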
Another approach, and one that can be used in conjunction with the previous one, is to use ethnographic data to figure out what areas were "left intentionally blank" rather than simply not occupied. Ethnographers have collected huge amounts of information regarding the lore and habits of the descendants of many of the people studied by archaeologists. When such information exists, it can be used to help make sense out of blank spaces, but it must always be remembered that even when the information gathered by ethnography is correct, it always reflects the time in which it was gathered and may not reflect the past.
In the end, I don't know that there is a good solution to the problem, just approaches of varying degrees of dubious effectiveness. It's a fascinating idea - it makes you think - but it is also a bit frustrating. Regardless, it is something that we have to keep in mind when we are performing both research and fieldwork.
Thursday, January 6, 2011
Not Archaeology, but Important
So, in case you haven't heard, the British Medical Journal is doing something unusual and producing journalism that pretty well shows that Andrew Wakefield's study - the one that has been used to claim a link between vaccines and autism - was not only conducted unethically, but was outright fraudulent.
Take a look.
Wednesday, January 5, 2011
Why You Shouldn't Discuss the Origins of Religion While Drunk
Ten or eleven years ago, I was at a party at a friend's house, a fellow by the name of Paul. Paul was a psychologist, and so it was no surprise that many of his guests were also psychologists. Upon introducing me to one fellow, Paul made a point of telling the guy that I had a degree in anthropology, focused in archaeology - I noticed Paul's grin, but didn't know quite what it meant at first.
The fellow to whom Paul had introduced me spent a few minutes conversing with me about various odds and ends, before he paused, and then asked "So, Mr. Archaeology, what's the origin of religion?"
"well," I started to answer, "what aspect of religion are we talking about?"
He got a sadistic grin on his face and said "Oh no, you're not going to weasel out of this one! What is the origin of religion?"
I attempted to explain that religion is a rather complicated phenomenon, that different aspects of it likely had different origins, and that what we call religion today is simply the amalgamation of all of these aspects into the various belief systems that hover around our cultures, and therefore it wasn't possible to answer his question without breaking it down into several more specific questions. Again, I attempted to explain this, but he kept interrupting me and claiming that I was trying to "weasel out" of an answer.
Finally he stopped and said, in the most condescending tones I have ever heard come out of a mental health professional, "I know what the origin of religion is, and I can tell you."
"Oh?"
"Yeah, it's pretty damn simple. See, one person with schizophrenia begins to hear voices, and decides that this must be god, so he goes on to tell someone else, who then tells another person that the first guy is hearing god, and it keeps going until the schizophrenic is a priest or a prophet, and everyone thinks that he's hearing god."
This sort of explanation for the origins of religion is common amongst the non-believers, but it is deeply flawed. I would have tried to point out the problems with it to this guy, but given his behavior up to that point, it was pretty clear that he wouldn't bother listening - he knew the answer (even if it was wrong) and plainly didn't want anyone trying to change his mind with inconvenient things like facts and evidence.
Still, his assertion had a number of problems. The first problem is simply that the notion of a god (or gods) has to come from somewhere before A) a schizophrenic can interpret hearing voices as communications from a god, and B) people other than the schizophrenic can accept the notion that a god is speaking through someone as a plausible thing. The psychologist Bruce Hood argues that a belief in spirits that animate the world is due to the way that our brains are designed: that (when lacking materialistic explanations, or sometimes in spite of them) we naturally see agency, and therefore intelligence, in the inanimate. This may be true, but if it is, then beliefs in gods and spirits would arise naturally from physically and psychologically normal people, with no need for schizophrenia or any other form of mental illness. So, if Bruce Hood is correct (and I think he makes some compelling arguments), this renders schizophrenia unnecessary for the formation of the supernatural beliefs that are the basis of most religious systems.
Another problem is that, on those occasions where it has been possible to examine shamans - probably our best analog for the earliest clergy - for mental illness, the results have been decidedly mixed. Some studies have found that they exhibit signs of various mental illnesses, while others indicate that they are either no more likely to suffer mental illness than the average person, or even less likely, due to careful screening by the elder shamans. What are we to make of this? Well, likely, this means that whether mental illness is a boon or a block to becoming a shaman depends on the culture in question, and we have absolutely no way of knowing which it was in the earliest human cultures. So, while the first problem makes mental illness unnecessary for the origin of religion, this one makes it untestable, and makes the party-goer's explanation an increasingly tenuous "just-so" story.
Also, this fellow's explanation ignores the rather inconvenient fact that most religions provide a social code that is rather specific to the conditions of the society in question, and in most pre-literate societies these codes are malleable based on current conditions. Even if the belief in the supernatural were due to a mental illness, it wouldn't change the fact that by the time it becomes religion, complete with the trappings thereof, it has been heavily modified to suit the needs of society, or at least the desires of the society's louder voices.
So, the notion that religion began simply as a schizophrenic's ravings is untestable at best, and rather spurious. And understand, I am not an apologist for religion - if you click the tags on the side bar for "atheism" or "religion" you will quickly see my views on the subject - but that doesn't change the fact that I have a serious problem with people pushing their pet hypotheses without bothering to look at evidence. To be fair, this guy dealt with mental illness for a living, so it's no surprise that this is the first place he would think to look, but that doesn't excuse the fact that it is also the only place he was willing to look.
Monday, January 3, 2011
Books I Love: Frauds, Myths, and Mysteries
In the early days of this blog, I wrote a post about a book that I love - Dr. Milton Love's Probably More Than You Want to Know About the Fishes of the Pacific Coast. I wanted to belatedly continue that and write about another book that I love: Kenneth Feder's Frauds, Myths, and Mysteries: Science and Pseudo-Science in Archaeology.
Although Brian Fagan is probably the best known popular archaeology writer alive today, Kenneth Feder has written some of my personal favorite books. A Village of Outcasts is one of the best non-technical narrative descriptions of archaeological research that I have read, but Frauds, Myths, and Mysteries is a perennial favorite.
This book serves three basic purposes: 1) it is a good primer on the sorts of nonsense that get propagated as "the REAL TRUTH about the human past" in pop culture; 2) it provides a good introduction to the basic methods used by archaeologists and historians to sort out information and evaluate claims; and 3) it provides an entertaining primer in basic critical thinking, dealing more with the sorts of claims that crackpots make, and with the tendency for people to be fooled when facing claims outside their area of expertise, than with the formal names of logical fallacies. The book is entertaining in equal parts because of Feder's enjoyable writing style and because he chooses case studies that are themselves entertaining.
The case studies within this book run the gamut from the Piltdown Man and the Cardiff Giant to "scientific" creationism to people who believe that human civilization is the result of intervention by aliens from other worlds. In each case, Feder gives a fair description of the claim, points towards its main (or in some cases, original) proponents, and then proceeds to explain what the archaeological record actually shows.
Throughout the book, he never takes the attitude of "the professional archaeologists know the truth and everyone else is a fool!" He starts the book by pointing out that he once believed some pretty strange things about physics, biology, and other fields outside of archaeology, indicating that one can be intelligent and still be taken in by some of these claims, and he dedicates page space to explaining how archaeological views have changed over time, showing that archaeologists are open to correction. In fact, it is this last point that, he holds (and I agree), really separates science from pseudoscience - if you dismiss disconfirming evidence because you don't like the implication rather than because of the nature of the evidence, then you are practicing pseudoscience. And the book finishes up by discussing actual archaeological mysteries - subjects on which the professionals are well and truly stumped - showing that archaeologists neither know it all nor claim to, but are simply willing to put in the time and effort to reach strong conclusions rather than promote their pet hypotheses.
I once owned two copies of this book, and would loan one out to friends and family members who came to me with bizarre claims about human history. Unfortunately (for me) one of the borrowers never returned it, so now I am down to one...which might be a good excuse to get the new edition.
Labels: Archaeology, Books, Critical Thinking, Pseudo-Science, Science
Saturday, January 1, 2011
Out and About With Olive
Kaylia and I got bored a while back, and made this video:
I want to stress that I AM PLAYING A CHARACTER. I am not that much of a dick in real life, and we have a very good relationship. I am actually supportive, and not dismissive, and Kaylia is not a ditz.
Okay, with that in mind, here's the next one: