Friday, April 29, 2011
Many counties have lists of approved archaeologists. These lists dictate who is, and often who is not, allowed to perform basic compliance work on projects permitted or funded by the county, and often by cities within the county. Most of the time, being placed on one of these lists is a simple matter: you send in proof of your credentials, and your name is added. Sometimes, however, there's a kink in the system.
Several years back, I worked for a large company out of their Santa Barbara office. As San Luis Obispo County was one of our neighboring counties, it made good sense to be on their approved list. So, I contacted the county, found out what they needed, sent it in, and voila! A few weeks later, my company and my name appeared on the list. At the time, individual archaeologists were listed and shown as qualified.
A couple of years later, I moved to Santa Cruz and went to work for a different company. As there were some good business opportunities in San Luis Obispo County at that point in time, I wanted to make sure that the county's approved list reflected where I was. So, I sent an email to the county employee who kept the list explaining that I had changed employers, and requesting that my listing be changed to reflect this. I received an email in response telling me what documentation I had to turn in in order to be listed. Thinking that the fellow had simply misunderstood my request - the county had already been provided with all of the documentation he was requesting when last I applied - I wrote back explaining that I was already on the list and was simply seeking to have my listing modified. I again received an email telling me that I needed to turn in proof of my credentials in order to be listed.
A bit nonplussed, I called the county offices to speak with the fellow. I explained that, based on his emails, it looked as if he thought I was asking to be newly listed, which I was not, and that I had already turned in the requested documents. I was simply asking that my contact information be changed to reflect my current employer.
The response? I was told that the county had kept poor records of what they received in the past, and therefore he didn't have my past credentials on file, and therefore, if I wanted to have my contact information changed, I'd have to send them all again. Through all of this, it was never mentioned that I might be removed from the list, so as far as I could tell, the county still considered me qualified; they just didn't want to change my contact information to reflect reality.
Deciding that putting up with this nonsense wasn't worth my time, I just gathered up and sent the documentation in. I looked up the approved list a few weeks later, and found that I was not listed under my current employer, but was still listed under my previous employer.
I contacted the fellow at the county offices again and asked what was up. His response? He didn't like my employer. He claimed that they had screwed up a big project several years back, and he was considering whether or not he wanted me to be on the list at all now that I worked for "the enemy". When I went and looked up the project in question, it was not my company that performed it, but another one altogether. I contacted him to point this out, and was told that he didn't like my company anyway, so I shouldn't hold my breath on being listed.
This entire time I was still listed with my old contact information. And, again, it was individuals who were listed, not companies at that time. So, regardless of my employer, I was qualified and there was no legitimate reason to keep me off of the list*.
This was a classic "petty tyrant" as far as I could tell. The guy seemed to have very little power, and so he enjoyed exercising what little power he did have, however arbitrarily or poorly. I have run into these guys in plenty of other places. They are common in municipal and county governments, but also in most large businesses, where they can find a niche and use it to push people around - and, in my experience, because they tend to be sycophantic towards a few well-placed higher-ups, they are often hard or even impossible to fire. When I worked on a military base I saw several of them there as well, and they seem to breed at universities.
After several emails, with varying degrees of "you can't push me around" attitude coming from me, and a bit of "you know, we could take legal action" coming from my employer, the fellow finally agreed to change my contact information on the list. I figured it was done. However, about a year later, a possible project came up in San Luis Obispo County, and so I checked to make sure that we were listed. I was still listed under my previous employer, but not under my new employer.
The dick hadn't actually made the changes that he had agreed - in writing I will add - to make!
I contacted the county, and discovered that this guy had been let go, much to the joy of the other county employees. It seems that I had nailed the "petty tyrant" thing, and that he had been trying to throw his weight around with everybody, until someone with actual power had enough of it and pushed him out. I explained what I needed to the new fellow in charge of the list, who was sympathetic and quickly had me placed on the list (within a week, very fast as these things go).
*The people who tended to be listed were project managers, who wield a fair amount of power concerning how work is done. So, even if everyone else at a company is bad at their job, a good project manager can still do things well.
Wednesday, April 27, 2011
Not Archaeology?
In 1998, I participated in Cabrillo College's field school at the Presidio of San Francisco. We were excavating the original Presidio chapel, erected by Spanish missionaries in the late 18th century and subsequently buried by the development of the Presidio grounds into a modern Army base. Our location was next to a large parking lot, near a very pleasant grassy area, and very close to many offices and recreation trails. San Francisco tour buses routinely passed by us, as did motorists and many joggers, office workers on their lunch breaks, and people just out for walks. Because of this volume of traffic and the curious onlookers it drew, the head of the field school, Rob Edwards, assigned a different person each day to be responsible for talking with members of the public who passed by.
On one of my days, a man came by, read our sign, and asked "so, you've got archaeologists out here? What other kind of scientists do you have?" I explained that it was just archaeologists, and that this was a field school aimed at training young archaeologists in the discipline. He then proceeded to explain to me that if we didn't have geneticists on-hand, we wouldn't be able to do anything with any bones we found*. He then asked what, specifically, we were finding. I explained that we were digging up the original Spanish Chapel, and that the materials found in the remains of the chapel were telling us more about the people who lived here than had been recorded in the historical record. The guy shook his head, gave me a pitying "you poor, ignorant bastard" kind of look, and said "well, that's not really archaeology. That's cultural anthropology."**
It was the beginning of a thread that has run through my career ever since. I will describe what I am doing or what my research questions are to somebody, and they will, in all seriousness, attempt to inform me that it's not archaeology because it doesn't conform to some weird-ass notion that they have about what archaeology is***.
Sometimes the reason for the person denying that something is archaeology makes a certain degree of contextual sense - clients of mine will often claim that materials on land that they wish to develop are not archaeological in nature, even though they clearly are. This includes everything from historic trash to old contaminated soils to (bizarrely) rock art in one case. Some may have honestly convinced themselves of this position through sheer force of will, while others are simply trying to push me away. Underlying the denial, whether the person honestly believes it or not, is a simple desire not to have to deal with whatever mitigation measures may be required. But, in each case, the materials, however much they may fail to meet the general public's vision of archaeology, were certainly archaeological in nature.
Stranger are the cases where someone not involved in a project attempts to inform me of "what archaeologists do" because my actual job doesn't match their fantasies. I have been told that historic-era materials are not archaeological. I have been told that performing survey (walking the land and looking for sites) is not archaeology because it doesn't involve digging (at least on the west coast). I have been told that there are no archaeological sites in North America because the people up here didn't build temples or palaces (there are so many things wrong with that particular claim that I didn't even know where to start). And I have been told that I am not an archaeologist because I am not trying to prove either the Bible or the Book of Mormon true.
What I want to know is whether other professionals get this sort of treatment. Are lawyers who specialize in creating contracts told that they aren't practicing law because they aren't involved in the criminal courts? Do podiatrists get told that they aren't practicing medicine because they aren't performing heart surgery? Yeah, they probably do, but that doesn't help me, so screw it.
*An odd assertion, as archaeologists were making sense of bone long before the structure of DNA was discovered. Genetic work has led to some great discoveries in modern archaeology, but you need to have a research question requiring genetic information in order to actually be able to make use of it. Most archaeological research questions simply don't require genetic information.
**As obnoxious as this guy proved himself to be - he stuck around and kept making ignorant statements in line with those described above - at least someone else had "public information" duty on the day that a fellow came by and began going on about how the Sphinx was built to scare the dinosaurs away from the lake that used to be near the pyramids. I'm not joking.
***This is not to be confused with friends of mine who describe me as an "un-archaeologist" because I often serve a function of making sure that places are clear of archaeological sites, rather than going to places in order to study sites. I like this title enough that I have occasionally introduced myself as such.
Monday, April 25, 2011
Terry Jones and Free Speech
So, the Florida guy with a Koran-burning fetish and a xenophobic streak the width of Michigan was arrested last week for protesting outside of a Dearborn, Michigan mosque. He paid a token $1 bond to get out of jail, he is now looking at suing the local government because of his arrest, and the ACLU is coming to his assistance.
I never thought I'd write this, but I hope that Terry Jones wins.
I still don't like him. After reading through much of the material he has put out, I am pretty well convinced that he's a bigot and a hypocrite (he rants about the violence promoted in the Koran while never bothering to look at his own Bible to see much of the same).
However, any dogmatic belief system (including religions - and including both Islam and Christianity, and also including some political and social doctrines) has the potential to push people towards violent or extremist behavior. This is something that we need to be able to talk about. While Jones' own views seem to be fed by his own xenophobia more than an actual understanding of what he is criticizing, if he is denied the right to protest, that means that the rest of us may also be denied that right.
If you think that Jones is evil, and that I am being short-sighted in thinking that the right to protest should overshadow Jones' shortcomings, then all I can say is that I am in good company. This matter is of a piece with the case of the even-more-loathsome Fred Phelps, who was determined by the Supreme Court to have the right to protest at funerals without fear of being sued. I despise Phelps, but I think that the court made the right decision.
There is a tendency for people to support a right for themselves without acknowledging that the same right must then be extended to others. We routinely see people upset if someone of their political affiliation is censored in some way, but they have no problem with someone on the other side of the political spectrum receiving the same treatment. Similarly, several years back, a group of Evangelical Christian lawyers with the incorrectly-named Liberty Counsel threatened to sue the Albemarle County, Va. school district if it didn't allow distribution of a local church's flyers, and then were livid when they discovered that this meant that non-Christian groups were also allowed to make use of the same policy. I have heard people who support Rush Limbaugh argue that Michael Moore should be arrested for "hate speech," and people who support Michael Moore say the same about Limbaugh.
The simple fact of the matter is that if we live in a society that has any real freedom, rather than just claiming to have freedom, it means that we do have a right to make our views known and, conversely, don't have a right to not be upset or offended. It is important to remember that there are legitimate, reality-based reasons to oppose mosques, churches, synagogues, temples, etc., and that if we are to have a right to protest on those grounds, it means that Jones must have the right to protest as well.
If you think that Jones or Phelps is simply too loathsome, hey, you can protest outside of their churches. In fact, it would be amazing to see their reactions, to see whether or not they still support free speech while on the receiving end. But we shouldn't support the use of government force (while I believe they are necessary, we shouldn't fool ourselves: police and prisons are government uses of force) to push our viewpoint over others.
Friday, April 22, 2011
Namaste Porn
When I returned to Santa Cruz in early 2007, I began to see bumper stickers with the single word "Namaste" emblazoned on them glued to cars all over the area. After a short time, I also began to see the word on T-shirts, and occasionally worked into conversation. So, being the sort of person that I am, I decided to look it up. Namaste is a Hindi word, derived from Sanskrit, that translates literally as "I bow to you" but is more typically taken to mean "My soul bows to yours," likely because modern Sanskrit is largely a liturgical language. In India, Nepal, and parts of Pakistan, it is used as a greeting and farewell, is accompanied by a bow, and in hierarchical settings (such as a young person meeting an older person, or an employee meeting an employer) it is initiated by the junior person.
So, it's a greeting that has a specific meaning within a deeply hierarchical culture. Why is it showing up on the back bumper of every Prius in the Bay Area? Well, it's associated with India. That's pretty much all you need to know*.
But, I've never let necessity stop me from blathering on, so I'm going to write some more. Wheee!
As I say, the term comes from India**, and is, in the minds of a particular sub-set of (usually white) Americans and Europeans, therefore connected with deep spirituality and mysticism. Or, rather, connected with western stereotypes of deep spirituality and mysticism. That it actually derives from a strongly hierarchical social system with which most people in Europe and North America (especially most of the people who like to throw the term around) would be deeply uncomfortable is lost. This is not surprising: to most of us in Europe and North America, the fact that India itself is a rising technological power and a place of tremendous trouble and promise is also lost. We know it as a land of gurus and magic, and the fact that we now tend to view the gurus and magicians as wise sages, rather than as the superstitious fools our ancestors saw, in no way changes the fact that the notion of India as a place of mysticism outside of our mundane world is just a continuation of the racist attitudes of our ancestors.
India is a fascinating place. It is a place with an amazing history, and with a potential for a very bright future based on the resources that it dedicates to the training of scientists and engineers, as well as the willingness of its business community and government to take advantage of the opportunities available. It is also a place of deep social problems, often crushing poverty, and forms and degrees of inequality that would make most modern westerners' heads spin. But it is a place very much of this world, and the Indian people are living in the here and now, with the rest of us. The relegation of this huge number of people to the mystic ghetto is both arrogant and stupid. The notion that the traditional religious practices of India should serve as a ready-made balm for our western psyches, bored from privilege and affluence, is absurd and demonstrates our willingness to take part in multi-culturalism when, and only when, the other cultures fill the roles that we deem they should. The conversion of a greeting that has a specific social and religious meaning into a bumper sticker simply shows how frivolous we are when we claim that we are "enlightened."
In other words, the appearance and spread of the term "namaste" within Santa Cruz is, once again, another example of culture porn. The people who have this bumper sticker or who use the term in conversation are, in my experience, not even vaguely interested in what is really happening in India, nor have they an interest in actual enlightenment. They are interested in consumer products, and ideas that can be commodified like consumer products, that make them look "deep" or "spiritual" or "enlightened" without ever actually having to leave their comfort zones.
*Well, that, and a group of frequently violent people used it on the show Lost, which I always found funny and I assume was intended to be ironic.
**Actually, the term comes from the larger region of south Asia, but in most American/European eyes, it's identified with India.
Wednesday, April 20, 2011
Economics, Books, and Incomplete Models
Over the weekend, a conversation between me, Kaylia, and our friends John and Jen regarding the publishing industry brought up an issue that is directly related to the ways in which social scientists, including archaeologists, attempt to make sense of resource procurement. Although our discussion centered on paperback books and e-readers (such as the Kindle), the basic argument is applicable to everything from hunter-gatherer food procurement to broad discussions about economic policy. It essentially pits a philosophical model against questions of material reality.
The way that the discussion came up is as follows: a couple of months ago, Kaylia attended a writer's conference in San Francisco. Also in attendance were several representatives of the publishing industry, who discussed what they predicted for the future of their industry. One common claim went something like this: just as the CD and DVD production industry took a hit from internet distribution of music and movies/television shows, it should be expected that the printing industry will take a hit from the advent of e-readers. As prices are adjusted, it should eventually become less expensive to buy the e-reader and electronic books than to buy an equivalent number of paperback books. In the model described to us, the price of the e-reader remains more-or-less constant, but the cost of the electronic books decreases dramatically.
The model makes perfect intuitive sense: if someone can save money over the long term by making a larger one-time purchase (the e-reader) and then spending less on each book thereafter, then they would be wise to do so. It sounds correct - someone will choose saving money over spending money, right?
Well, not necessarily. Mind you, I'm not saying that this isn't what will happen - it may well be the case that e-readers will eventually drive the paperback out of existence - but the model itself is based on a principle that, while it seems to make perfect sense, doesn't take the realities of human behavior into account.
The first problem with the model is that it fails to take into account that the one-time purchase is a new cost. The comparison to the CD and DVD market is flawed for the simple reason that one has to have a CD or DVD player if one is to play the music, so there is an equipment cost right up front that is not present for the paperback book market. If one is going to spend the money to buy a CD player anyway, why not spend that money on another device (say, an iPod) which is in the same general price bracket but has improved function? What's more, the electronic delivery method for CDs and DVDs uses personal computers and, increasingly, video game consoles - devices which the consumer already owns but which can be turned to an additional purpose. While it is true that electronic versions of many books can be bought for use on a personal computer, the reality is that very few people read them that way due to limitations of physical space and readability. So, the changeover to electronic distribution of music and video is due in large part to the use of technology that most people already possess, and to the fact that the new devices purchased replace devices that the consumer would have had to purchase anyway.
Still, it follows that someone would opt to spend less money in the long run. Even if there is a large up-front cost, if it saves money over time, then people would want to do it, right?
Well, again, not necessarily. One of the things that has bedeviled social science researchers for well over a century is that people's behavior is often based on perception rather than actual measurable practicality. So, we'll predict that phenomenon X will occur because it is the most cost-effective course of action (whether in terms of actual money, or energy expended, or time spent on task), only to find that phenomenon X doesn't occur, or that it sort-of occurs, but only within certain limited parameters. In the case at hand, the long-term savings that come with a large initial investment can easily be overshadowed by the fact that buying individual paperbacks costs less in the short term. And given that the cost of a Kindle is somewhere in the neighborhood of the cost of ten paperback books, this may create an illusion of lesser cost if one simply buys the books - whether true or not, it may be perceived that way. Also, if someone is of more limited means, they may not have the one-time cost of the e-reader at hand, but may have the money to purchase a single paperback. You could make the argument that the person could simply save the paperback money until they had enough for the e-reader, but the fact of the matter is that only a small portion of people will actually behave in that fashion, rather than simply buying the paperbacks as they have the means.
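To see why the publishers' pitch sounds so compelling on paper, here's a minimal back-of-the-envelope sketch in Python. The prices are made up; the only figure borrowed from above is that an e-reader costs roughly ten paperbacks' worth:

import math

# Break-even sketch for the publishers' model: how many books must someone
# buy before the e-reader plus cheaper e-books beats paperbacks?
# All prices here are hypothetical.
def books_to_break_even(reader_cost, ebook_price, paperback_price):
    savings_per_book = paperback_price - ebook_price
    if savings_per_book <= 0:
        return None  # e-books don't undercut paper, so the device never pays off
    # Smallest n such that reader_cost + n * ebook_price < n * paperback_price
    return math.floor(reader_cost / savings_per_book) + 1

# A $140 reader (about ten $14 paperbacks), with $10 e-books:
print(books_to_break_even(140, 10, 14))  # -> 36 books before you come out ahead

Three dozen books is a long horizon, and the model assumes the buyer weighs that horizon rationally rather than seeing only the large sum up front - which is precisely the assumption that tends to fail.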
And understand that this is not a behavior limited to modern people living in the U.S. Archaeologists and historians have long created models for everything from the purchasing habits of historic peoples to the foraging habits of hunter-gatherers based on the principle that the option producing the best long-term return on investment (whether measured in money spent on goods or calories burnt while gathering and hunting) will be the option chosen. Time and again we have found that this is only partially true, and that nobody ever seems to behave in a truly optimal fashion. People will often go for a "good enough" return rather than a really good one, and there are often factors that make a lousy return on investment attractive.
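The gap between the two decision rules is easy to make concrete. Here's a toy sketch - hypothetical numbers, not any published foraging model - showing how an "optimal return" rule and a "good enough" rule can predict entirely different choices from the same data:

# Hypothetical return rates (calories per hour) for a toy forager.
resources = {"acorns": 900, "shellfish": 700, "rabbit": 1200}

def optimal_choice(options):
    # The classic model: always take the best long-term return.
    return max(options, key=options.get)

def satisficing_choice(options, threshold, encounter_order):
    # A satisficer takes the first option that clears a "good enough" bar.
    for name in encounter_order:
        if options[name] >= threshold:
            return name
    return optimal_choice(options)  # nothing clears the bar, so fall back

print(optimal_choice(resources))  # -> "rabbit"
print(satisficing_choice(resources, 800, ["acorns", "shellfish", "rabbit"]))  # -> "acorns"

The two rules agree only when the best option happens to come along first or the bar is set very high - which is one reason models built purely on optimal returns keep turning out to be only partially right.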
To this last point, there is one other thing that may protect the paperback market, at least to a degree. For most people, buying books is not purely about economics - yes, economics plays a role, and prices will influence whether and what people buy, but if it were solely economic, most book buyers would be more likely to simply go to the library. There is an experience involved in browsing book stores, in being able to pick up a book that you want and to find one that you had not been aware of but now really want to read. The smell, appearance, and layout of book stores are part of the experience. Many readers find the purchase and possession of a paperback to be satisfying. All of this is part of the book market, but none of it gets factored in when one only considers basic financial transactions.
Now, will e-readers eliminate the paperback market? Quite possibly. They do seem to have economics on their side. However, one should always be wary of predictions made on the basic equation of "X costs less, therefore it will spell the doom of Y" - there are many factors that may prevent such predictions from coming true, or may only allow them to become partially true. Whether you're building models for human behavior (like I do) or pondering national policy, these types of arguments sound really good until you look at the data and find out that it's never THAT simple. No model that fails to account for complicating factors should ever be taken as gospel (in other words, no model at all should ever be taken as gospel).
Monday, April 18, 2011
Toxic Archaeology
Very often, archaeological remains take a form that most folks (and even most archaeologists) never think of. And this can create odd situations and interesting opportunities.
Case in point: I have been working on a historic-era archaeological site that is comprised of the remains of an old manufactured gas plant. Manufactured gas is a product of coal*: coal was heated in oxygen-poor environments to release combustible gases, which, during the 19th and early 20th centuries, were piped to street lamps and homes to provide fuel for gas lamps.
The production of manufactured gas results in the production of a range of waste products, most of them toxic to some extent. Given that manufactured gas was primarily used during the 19th and 20th centuries, the waste was not typically disposed of in an environmentally safe fashion. The end result is that former manufactured gas facilities are very often waste sites, and that the companies (often, though not always, public utilities) that own them are now saddled with the responsibility of cleaning up the mess made by previous generations.
Because these plants are typically well over 50 years old, and the National Historic Preservation Act (and, in California, the California Environmental Quality Act) requires the evaluation of archaeological sites over 50 years in age, it is common for archaeologists to be present monitoring the clean-up and remediation work done at these sites. Which brings us to the interesting point - the chemical waste itself is a product of the use of the manufactured gas facilities that comprise these sites, meaning that the chemical waste is an archaeological deposit.
No sensible person would recommend leaving these waste materials in the ground, and so I am not advocating that they be protected under the relevant regulations - nor would they be, as safety regulation typically trumps historic preservation. But, as I have been doing this work, I have found myself taking much more extensive notes than I typically would for a monitoring project. Given that the waste will eventually be removed and processed, the monitoring of the clean-up provides a weird opportunity to fully document the archaeological site as it is dismantled for proper waste treatment. In fact, as some of these deposits represent the intact remains of the disposal of waste from the manufactured gas plants, they are intact features by any reasonable definition.
Of course, few people, including few archaeologists, think of these materials this way. This is usually considered nothing but toxic waste, and the notion that it represents an intact archaeological deposit, devoid as it is of tools and construction materials, would strike most as odd. But, well, there you have it.
Okay, not the most exciting archaeological site ever, and probably not that interesting to most people. But these are the sorts of things I think about while I stand around and watch people produce soil cores. I have to convince myself that there's a silver lining sometimes.
*Other combustible materials, such as wood and oil, could also work, but it was almost always made from coal.
Friday, April 15, 2011
The Vague Illusion of Safety First
I have found that many people seem to look down on industrial safety measures, viewing them as overkill and somehow unnecessary. Of course, most of these people probably don't have vivid memories of a past life as a 19th-century Welsh miner or a 1910s-era child factory laborer, which may account for why they tend to view safety regulations without the appropriate respect. The reality is that the safety measures that are often derided as silly or extraneous are very often the reason why it is no longer common to have multiple deaths on the average building construction project.
However, like anything, the safety-measure pendulum can swing too far in the other direction, and when it does, it can become a safety hazard in and of itself.
When I am doing fieldwork, I am required to have a minimum amount of safety gear. At minimum, I wear long-sleeved shirts and long pants, as well as boots and usually an orange safety vest (good for being seen by vehicle and equipment operators in urban and construction environments; in rural environments it also allows crew members to keep track of each other and notifies hunters that we are not strange, clothed, flesh-colored deer). When working around heavy equipment, this is usually supplemented by safety glasses, a hard hat, steel-toed boots (oh so comfortable), and hearing protection. Usually the requirements for the higher level of PPE (personal protective equipment) make perfect sense, and we're happy to have it.
Sometimes, however, the higher PPE requirements reflect the draconian application of a policy to a situation to which no sensible person would try to apply it.
For example, a couple of years ago I worked in the oil fields of Kern County. We were required to have the full deal - steel-toed boots, hard hat, safety glasses, etc. When we were in active oil production fields, this was sensible - there's enough heavy equipment and vehicle traffic to make the safety equipment a wise investment in such areas. For much of the project, though, we were on land that was technically part of the oil fields, but which had no pumps, no vehicles, no equipment - nothing at all that could blow up, pop out, fly away, or drop on our heads. Nothing that necessitated the additional safety equipment. Now, this would have been annoying but nothing more, if not for the fact that we were working there during the summer, when temperatures were normally over 110 degrees Fahrenheit. We were being required to lug around extra equipment that not only weighed us down, but also restricted air movement in and through our clothing and prevented us from wearing other safety equipment made for hot weather (such as special-made vests and hats that stored and released water) that would have helped us lower our body temperatures. The safety equipment had become a safety hazard.
Another case comes to mind, one where I was not in the field, but did work for the company that had the problem. My previous employer, a large multi-national company that provided a wide range of environmental and engineering services, had a contract with a petroleum company to provide environmental services in their fields in southern San Luis Obispo County. During fieldwork, one of the wildlife biologists was bitten by a soft-bodied tick. Soft-bodied ticks are particularly nasty, and severe adverse reactions to their bites are not uncommon; the biologist ended up in the hospital (they fully recovered, by the way).
The client was outraged. They demanded to know how this could have happened, and were unwilling to accept that fieldwork, by its very nature, has risks associated with it. One of these risks is the possibility of being injured by wildlife. I left the company before the matter was resolved, but the last that I had heard, the client was demanding that all field workers be dressed in head-to-toe protective garments that would prevent ticks from entering their clothing. The problem, of course, is that the garment in question would also diminish the wearer's ability to cool off via sweat, which would greatly increase the chance of heat injury, including the very deadly condition of heat stroke. So, the oil company executives were so obsessed with preventing a low-probability event that had occurred once (the tick bite) that they were willing to require measures that created an even greater danger.
Attempts to explain this to the petroleum company executives fell on deaf ears, with one of them going so far as to invoke the danger of snakes as further justification for the garments. The executive then ignored everyone who pointed out that, in order to prevent the highly improbable (in fact, I'd go so far as to say practically impossible) snake, he was greatly increasing the chances of the very likely death of someone due to heat stroke. It was a classic case of the unlikely event that happened eclipsing the very likely event that was avoided - a classic error that leads to extremely poor risk management*.
One more example comes from the same employer. The upper management decided that they wished to improve our company's safety record, and to that end, they assigned a team to develop safety standards that could be followed by all company personnel. Now, keep in mind that this was a big, multinational company that provided a wide variety of services (yes, I worked for Veridian Dynamics), and so a wise set of safety standards would allow flexibility for the people performing different types of tasks under different conditions.
Needless to say, we did not receive a wise set of safety standards. The problem with the new standards can best be summed up by the recommended use of a ladder for crossing over all walls, fences, and other obstructions 3' tall or higher. Recommending the use of a ladder for such purposes is fine, if perhaps a bit overzealous, when one is working within a small area where the ladders can be put in place at the beginning of the work day and left in place throughout the day. When one is performing archaeological or biological survey, though, it becomes wildly impractical. The usual method for dealing with fences and walls when performing survey is to crawl under or climb through them (in the case of barbed wire) or climb over them (in the case of sturdy wood, metal, concrete, or stone walls and fences). If done correctly, this is just as safe as climbing up and down a ladder (remember, with a ladder, you're increasing the maximum height a person can fall from, thereby potentially triggering yet another set of OSHA requirements). When there's an unlocked or open gate nearby, we obviously just go through that, but it's not uncommon for the nearest gate to be a mile or more away from where you are. Add to this that carrying a ladder means lugging another heavy object along with your other field equipment, and that ladders tend to get caught on plants, making them hazards to carry through dense vegetation - which we routinely move through. The simple fact of the matter is that carrying a ladder on survey actually creates a few new risks, while doing little to eliminate existing ones.
When the use of the ladder during survey was put to a room of archaeologists and wildlife biologists, we all looked at each other, and then at the presenter. We explained the basic problem, and had to spend the next hour describing our work in detail so that she could understand why this was a lousy way to improve safety. At least she listened, but had we actually been consulted about our work before the new safety standards were developed in the first place, more practical and useful standards might have been created.
The simple fact of the matter is that field work does have risks. These risks can be minimized, however, through the appropriate use of PPE, and through the application of both training and some basic common sense. There is always going to be the possibility of injuries, though, no matter how hard we try to prevent them. The problem is that very often we are required to work with rules written by people who don't understand the nature of our work, and therefore don't understand that what makes someone safer in one environment may actually put them in danger in another.
*A more common example: how many times have you heard someone say that they won't use a seat belt because they know of someone who was thrown free of a collision and thereby survived something that smashed the vehicle? You may have noticed that showing such a person that the odds of surviving a horrible collision because you wore your seat belt are orders of magnitude better than the odds of surviving by being thrown free never seems to sway them. The weird exception will often trump the quieter reality in people's minds. Which is, incidentally, the primary reason why we as a species are doomed.
Wednesday, April 13, 2011
So Long Lew, Glad We Met You...
In a piece of news that comes as a blow to archaeologists, Lewis Binford has died at the age of 79.
Most of my readers are not archaeologists, and so I will need to explain who Lewis Binford was and why this matters.
Lewis Binford was one of the archaeologists who, in the 1960s and 1970s, pushed for new approaches and methods to become the standard model for how archaeology is done. He is often credited with popularizing ethnoarchaeology (the study of living people and the remains that they leave behind to better understand how the peoples of the past generated the material record that we dig up today) and experimental archaeology (the practice of making and using tools similar to those made and used by the people archaeologists study), and he articulated ideas and positions that coalesced into what is now known as processual archaeology (so called because it sought to find the abstract processes that underlay human behavior, looking for basic rules analogous to the laws of physics, rather than simply describe the archaeological record), the dominant school of archaeological thought in North America. The laws of human behavior sought by processual archaeologists have yet to be found, if they even exist, but the quest itself has yielded some wonderful discoveries nonetheless.
Binford is often credited with single-handedly revolutionizing archaeology, but this isn't really true. Many of the positions that he championed were already present, but he helped to make them more popular. And while he may not have created modern archaeology, he was very much a figurehead who had a large role in guiding the course that it took. He was a brilliant thinker, a lousy writer*, and a dominant figure. Like his work or not, it's difficult to think of another archaeologist who is anywhere near as influential as Binford.
He was certainly not without many critics, and many of the criticisms are both strong and valid. Nonetheless, his was a mind to be contended with.
I wrote previously about how the giants of my field are retiring. Binford loomed large even amongst these men. I never met him, but I feel a loss nonetheless. It'll be a good long time before we see another of his kind, and we are the poorer for that.
*He was known for writing papers that his supporters had to "translate from Binford to English" in order to explain them. He was also known for making up words, although these words had a habit of becoming part of our standard technical vocabulary.
Monday, April 11, 2011
Peer Review Blues
So, a while back I wrote that I had submitted a paper for publication and that it was undergoing peer review before appearing in the journal. A couple of months ago, I received an email notifying me that there were too many papers, and that mine was one of the ones being cut. This was disappointing, but given that the other submittals came from well-established researchers and this was to be my first publication, it was not too surprising. As frustrating as it was, it made the most sense for mine to be cut.
Then, about two weeks ago, I received an email asking whether I could trim ten pages from my paper so that it could fit into the journal issue after all. I was doubtful of my ability to do this, seeing as I had already cut it down to 45 pages from 150+. Still, after some prompting from my friend (who was also one of the editors), I began to work at it again. The catch was that the peer reviewer had not yet turned their comments in to the journal. So, in order to meet the deadline, I was cutting material without knowing what the reviewer was going to ask to have changed...which is not precisely the best strategy, but there wasn't much other option.
Three days before the paper was due, I received the review comments. Most of them were pretty simple, essentially just requesting clarification of a few points, or asking that certain terms be defined more narrowly to allow for a better discussion. No problem. There was one comment that was open to interpretation, and may have been requesting that a larger body of data be added to the study (not reasonable, as the reviewer turned the comments in three days before the paper deadline), or may simply have been intended to inform me of a location where there was more data to be had.
Regardless, I got the changes made, cut as many pages as I could, and sent the paper off. Now, the waiting begins to see if it actually makes publication or gets cut again. If it gets cut, I will submit it to other journals. Now that I've put this much work into it, it's damn well going to get published.
Friday, April 8, 2011
Snow Day
This is a story of me being an idiot and not checking the weather reports. Let it be a lesson to you all.
Yesterday was one of those "joys of fieldwork" days. I was sent out with a fairly simple task: check on some potential ground disturbance near or within the boundaries of an archaeological site. Go out, use a GPS unit to map the trees and a few site features for reference, determine whether the removal of the trees was likely to impact the site, and return to make my report.
Then the weather happened.
It has been sunny and warm all of the last week. Uncomfortably warm, in fact. And so when, at the end of the day Wednesday, I was asked to go up into the mountains and check on these potential impacts, I figured it was no problem. The weather was good, the task was simple, and I was prepared.
As I left my apartment yesterday morning, it began to sprinkle. And, as I headed along the highway towards the project area, it began to rain. No problem: I had a raincoat and a notebook made out of some weird mutant paper that is not harmed by water. I was still good to go.
Then I hit the mountain pass, and I noticed that the raindrops hitting my windshield were no longer splattering in a normal fashion. I knew what this meant: snow was coming. And sure enough, as I climbed another 100 feet or so, the rain gave way to snow - splotchy, wet snow that smacked against my window at first, then flaky, powdery snow that just flitted past the car as I gained more altitude.
Damn. I didn't have snow chains.
Well, I continued to the site, figuring that, given the temperature, the snow would quickly melt away from the black asphalt, and this appeared to hold true. I reached the site and discovered that the GPS unit was not receiving any satellite signals, due to a combination of tree canopy and atmospheric disturbance. So, I pulled out a 60-meter tape measure and my compass and got to work taking the bearing and distance to each possible disturbance area from the established site features. The entire time, my boots were covered in snow and my feet were becoming cold and wet (I had left my water-resistant boots at home, thinking that the lighter boots would be sufficient), and the process took a couple of hours to complete. Up until the last twenty minutes, the snow continued to melt off of the road. And then, just as I was finishing up, the snow began to stick, and a crust of white was visible on the road surface.
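For anyone curious about the low-tech fallback: converting a compass bearing and a taped distance into map offsets from a known feature is basic trigonometry. Here's a minimal sketch in Python; the declination value is illustrative only, and you'd look up the real one for your project area:

```python
import math

def bearing_to_offset(bearing_deg, distance_m, declination_deg=13.0):
    """Convert a magnetic compass bearing and taped distance into east/north
    offsets (in meters) from a known reference feature.

    declination_deg corrects magnetic north to true north; 13.0 is an
    illustrative value, not the declination for any particular place.
    """
    true_bearing = math.radians(bearing_deg + declination_deg)
    east = distance_m * math.sin(true_bearing)
    north = distance_m * math.cos(true_bearing)
    return east, north

# E.g., a possible disturbance area taped at 42.5 m from a site datum,
# at a magnetic bearing of 118 degrees:
east, north = bearing_to_offset(118.0, 42.5)
print(f"offset from datum: {east:+.1f} m east, {north:+.1f} m north")
```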
I have worked in the snow, and I have driven in the snow, but I have not lived in areas with regular snowfall, and as such I am willing to admit that I am largely ignorant of the point at which a layer of snow on the road goes from being a "drive slowly" situation to a "you'd better have chains on that car" situation*. And I didn't want to find out right then and there.
So, I climbed into the car and headed out of the work site. The problem, though, is that the work site is in one valley and my office is in another, so I had to gain elevation again, heading into thicker snowfall and denser snow cover on the road, in order to get home.
None of the photos below are from the site (those photos are considered confidential), but to give you an idea of what this was like, the landscape in the area during this time of year normally looks like this:
Yesterday it looked like this:
And apparently the snow brought out the Sasquatch:
*No doubt a reader who lives in a colder climate is laughing at me right now. All I can say is: come spend a summer in the place where I grew up. I may not know how to deal with snow, but extreme heat is an old friend. The kind of friend who comes to your house uninvited and drinks all of your beer without asking, but an old friend nonetheless.
Wednesday, April 6, 2011
What's in a Master's Degree?
While at the SAAs last week, I got to speaking with people about the interesting phenomenon of archaeologists from the U.S. going over to the U.K. for their Master's degrees. The reason for this is that there are numerous Master's programs in the U.K. that can be completed in a year, whereas the normative time to degree in the U.S. is, depending on the program, between 2.5 and 5 years. The reason for the difference is that the U.S. MA in anthropology typically requires a larger amount of coursework, ideally preparing the student for the wide range of issues that they will confront as a professional archaeologist. As a result, 1-year U.K. Master's degrees are often looked down upon by North American archaeologists. However, I think that this is done without actually considering whether or not U.S. programs are any better.
First off, there's a fair bit of ambiguity regarding how, precisely, one earns a Master's degree. There are, of course, programs that simply offer Master's degrees. Some of these programs are aimed at producing CRM professionals (a great example being Sonoma State's program), while others are research-degrees geared towards moving a student into a PhD program. Someone who emerges from Sonoma State is much more likely to have a firm grasp on how to perform CRM than someone from a research-oriented program, but the person from the research-oriented program may be better able to incorporate new research into the work produced by a CRM firm. So, it's a trade-off.
There are, however, also Master's programs that offer a degree, but which produce neither a CRM professional nor a researcher. These tend to be, as far as I have been able to determine, modeled on MA programs for different trades, but with no real concept of the business of archaeology. As a result, there are people with MAs in archaeology and anthropology who know no more about the subjects than one would expect an advanced undergraduate to.
And then there's the matter of PhD programs that don't offer Master's degrees as such, but will award them to students who have completed the first two-to-three years of training in a PhD program, regardless of whether they complete the PhD itself. My graduate school, UC Santa Barbara, offered both a Master's track and a PhD track, where most students attended the same classes, did the same projects, etc., until the end of the second year, when the Master's students went off to work on their theses, and the PhD students wrote a "data paper" in preparation for working on their dissertations. The majority of the students who don't finish their PhD but are granted Master's degrees are competent, and I have no problem with sharing a title with them.
However, there is always a chance of someone slipping through. I remember one fellow who came in during my third year - I'll call him Gonzo - who was barely capable of tying his own shoes, much less doing independent research or running a project. He managed to squeak through classes, managed to complete his comprehensive exams (which all of the graduate students were required to take), and then was gently nudged out of the program by the faculty. Upon leaving, he was granted a Master's degree.
Meanwhile, the other Master's students and I had completed all of the classes with a minimum of an "A" in each, passed our comprehensive exams, designed and executed a research project, and produced a Master's thesis (mine clocked in at over 250 pages) in order to earn our degrees. So, even at the same institution, there were two decidedly unequal ways of earning a Master's degree.
So, there is a lot of variability as far as how well gaining a Master's degree prepares a person for a job as an archaeologist.
With this in mind, the notion that the 1-year Master's programs in the U.K. are somehow inferior strikes me as rather weird. I believe that I was better prepared than a 1-year student would be, and that someone from Sonoma State is better prepared than I am. However, I have met numerous people who were certainly no better off with a U.S. Master's degree from a poor program (or as a consolation prize after missing the PhD) than someone with a 1-year degree would have been. In fact, from what I have seen of the 1-year programs in England, I would place them above some of the U.S. programs that I am familiar with.
Monday, April 4, 2011
SAA Annual Meeting
So, I spent the weekend at the Society for American Archaeology's annual meeting. This is the reason why there was no post on Friday, and only this abbreviated one today. I will try to get back to regular posting soon.
I missed the first two days of the meeting (damn work!), but here's a few thoughts on what I did see:
- Unlike the Society for California Archaeology's annual meetings, which tend to draw a mix of research and Cultural Resource Management folks and have plenty that directly applies to both, the SAA meetings are primarily research-oriented, despite the SAA's efforts to include CRM content. This is a mixed deal - on the one hand, it means that I get less professional value from the meeting. On the other hand, it means that I get to take a break from my professional life (and the stresses contained therein) and take a mental vacation, where I remember why archaeology is fun.
- It was very good to see old friends from graduate school (though a bit unnerving to realize that I am the only one who hasn't lost weight post-degree). I hadn't seen most of them in years, and so it was amazing just to hear what they have been up to, and to be able to spend a bit of time talking with them.
- One of the things that really struck me is the division between the jack-of-all-trades attitude espoused by CRM archaeologists and the hyper-specialization espoused by many academic archaeologists. Both have benefits and problems, but while I had always been aware of the division, the degree to which it holds really struck me. I talked with people who were shocked and horrified to hear that I have excavated both prehistoric and historic sites, and also with people who were well-trained in many subjects but only really allowed to practice one of them.
- The first year I attended this conference, it was in Milwaukee, and there was a Mary Kay Cosmetics convention in the same hotel, leading to strangeness. The next year, it was in Salt Lake City, and the bi-annual Mormon convention was going on across the street. This year it was in Sacramento, and there was a cheerleader exposition going on at the same convention center as our conference. Of the three of these, I don't know which was stranger.
- I was again struck, as I pretty much always am, by the fact that so many of the people within academic archaeology are essentially unaware of how small a niche they fill, and that they are not only outnumbered, but vastly outnumbered by CRM archaeologists.
- It was pleasant to hear about archaeology that isn't tied to budget constraints or draconian expressions of regulatory requirements.
So, on the whole, I enjoyed going. I feel strongly tempted to start looking into PhD programs again, though I doubt I will actually go for one. Still, it's nice to find archaeology appealing again.