Bob Frank opens his reflections on teaching economics with a discouraging examination of how badly we do at getting our students to understand the really wonderful content of his discipline. Why do educated people think it makes them appear witty to repeat a dumb bromide like “economists know the price of everything and the value of nothing”? Statistics is in a similar position (“there are lies, damn lies, and statistics”). This kind of joke, rooted in willful ignorance, is diagnostic of an affective failure, not an intellectual one. Students are not just bored by this material but afraid of it: they experience it as something being done to them that will make them worse people in some way, something against which they need to defend themselves. How many people loved their intro stats course, and still remember the eye-opening realization that they had acquired powerful tools with which to understand and improve a complicated, random, changing world?
University statistics and economics departments are also similar in being tasked with “service” courses for students in other majors and general-education introductions, as well as professional apprenticeships for people whose careers will be spent creating new methods in the disciplines themselves. As a consumer of the former service (those introductory courses are an input to my teaching production function), I often have occasion to weep, gnash my teeth, and rend my academic regalia, even though I am only hoping for student command of the few big ideas Frank claims should constitute the entire curriculum of an introductory course. But as a disciple of Deming, I discount absolute-scale measures and prefer to manage on the derivative: no matter where we are now, could things be [even] better? How?
My colleague Philip Stark, in our statistics department, is on the job.
But first, a little background. At Berkeley, promotions and tenure for faculty are based on a “case” prepared by the department, comprising (i) all the research the candidate has ever published, plus work “in the pipeline”, and letters from outside scholars critiquing that research; (ii) a summary of the candidate’s service on committees and the like, including public service and outreach (op-ed articles, for example); and (iii) a letter from the chair asserting that the candidate is a fine teacher and teaches many courses, along with some sort of summary of student evaluations of courses taught. Under (iii): no classroom visit reports, no critiques of assignments and feedback by peers, no video of actual classroom practice; just what I said.
We have rules about this, of course, which are in section 210-1-d-1, right here.
However, I have never seen a promotion case (as a member of one of the “ad-hoc committees” that reviews them) that satisfies these rules. Indeed, on my last appointment to one of these, when I warned our previous chancellor that I would not be able to vote on the case if the teaching part of the package didn’t approximate the requirements of 210-1-d-1, he immediately removed me from the committee. So we are promoting faculty on the basis of research we look at and have experts in the field evaluate, and teaching we do not see, evaluated exclusively by students.
Student evaluations of teaching (SETs) have many important advantages as a quality-assurance mechanism. First, they are extremely cheap, requiring only a quarter of a class session or so with no significant payroll impact; in fact, they get the prof back to the lab for fifteen or twenty extra minutes each semester. Second, they completely protect faculty from engaging with each other about pedagogy, which in my experience is up there next to cleaning the break room on the scale of stuff we will avoid if we possibly can (more on this below). Third, it has never been shown conclusively that outsourcing teaching quality assurance in this way has damaged any core value: not research productivity, not the record of the football team. Nor parking, I guess.
The foregoing is a strong case, but we have to ask: do good SETs indicate more learning by students? On the Berkeley Teaching Blog, Philip, together with Richard Freishtat, the director of our Teaching and Learning Center, has posted the first and the second of three analyses of what we know about this, and his findings are devastating. Not troubling; not “maybe this needs a little fixing”; devastating to the claim that we are managing the resources society has given us in the way we say we are, for excellence in research and teaching. (If you are a student at Cal, or a taxpayer in California, you should be in the streets with pitchforks and torches. If you are our new chancellor, or our new president, fixing this should be your Job One. If you are in, or paying for, another great research university, better ask some questions before you think you’re OK.) The best part is, he and his colleagues are going at this the right way, trying to find ways to assess teaching effectiveness that will actually lead to more student learning: stay tuned for Part III of their project.
I want to highlight the contrast between our manifest institutional respect for research and real expertise in the research sphere, and the absence of both on the teaching side of our business. What Philip presents is actually not a secret from most of us. We have all had low SET scores in courses where we have other evidence that the students really learned a lot, and we know about highly rated courses that seem to be a bunch of fluff, and many of us know some of the literature he cites. So continuing to use SETs in this consequential way is behavioral evidence that we do not care enough about teaching to use all our skills and powers to advance it.
Outsourcing to students has an even more toxic effect: I have had high SETs and low ones, and the high ones are much nicer, but the fact is that I have never had any evidence of a type I can respect as a scholar that I am any good at teaching or could become so if I tried. I don’t have craft-skill reassurance either, because I never see anyone else work, and my colleagues never see me. (I sort of remember what my own profs did, back in the day, when I was cluelessly evaluating them.) So allowing SETs to displace real coaching and peer evaluation reinforces the fear of engaging with our peers to learn to teach better that everyone in a high-performance institution always feels. As regards my next few hours of work: I know I can write a paper that people I respect will value (or I wouldn’t have got here in the first place), but I really have no idea what will happen if I cook up an innovative new exercise for class and invite a colleague to hang out in my classroom and give me some coaching on it. OK, I do have an idea, rooted in my knowledge that teaching is affectively fraught, and my deep-down sense that I am not the warm, supportive, emotionally competent person I want to be. The sleep of reason breeds nightmares, and so does the drought of facts. Deming: “Drive out fear.”
But I have a nice set of powerpoints from last year, and the students didn’t ask any questions I couldn’t answer, and I can stick in a couple of slides from this fascinating paper that just came out in the Journal of Really Arcane Stuff, plus a new joke…speaking of which, here’s one that actually has some basis in our reality: “Teaching is the tax you pay to do your research. Tax avoidance (not evasion) is the duty of a citizen.”
[22/X/13: a followup to this post is here]
One: “Statistics is in a similar position (‘there are lies, damn lies, and statistics’). This kind of joke, based in willful ignorance, is diagnostic of an affective failure, not an intellectual one” is waaaaay over-analyzing the joke.
Second, I doubt it would take more than ten minutes to come up with ten misuses of statistics to mislead in any given major newspaper, and that would be before you hit the opinion pages, which proves the joke’s point. The point of the joke is not ‘Statistics will make me evil’; it’s ‘Statistics are often used to mislead.’
“Bill Gates walks into a dive bar; on average, everyone in there is now a multimillionaire.”
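The joke works because the mean is hypersensitive to a single extreme value while the median barely notices it; here is a minimal sketch of that contrast (the incomes are hypothetical, Python standard library only):

```python
import statistics

# Nine hypothetical dive-bar patrons with modest incomes.
incomes = [28_000, 31_000, 35_000, 24_000, 42_000,
           30_000, 27_000, 38_000, 33_000]

print(statistics.mean(incomes))    # 32,000: a fair summary
print(statistics.median(incomes))  # 31,000: tells the same story

# Then Bill Gates walks in (net worth standing in for income here).
incomes.append(100_000_000_000)

print(statistics.mean(incomes))    # ~10 billion: "everyone is a multimillionaire"
print(statistics.median(incomes))  # 32,000: the median barely moves
```

Which is exactly why a careful report quotes the median income of a group, not the mean.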
In fact, it’s a damned good entryway to teaching statistics in a useful and lasting way to people who are taking it because ‘it’s one of the boring requirements I have to take to get my largely unrelated degree’. Use the classic: http://en.wikipedia.org/wiki/How_to_Lie_with_Statistics
In the general sense, yes, using SETs alone, uncontextualized, as a metric is dumb, as dumb as leaning on an internet poll for accuracy, but this is what we’ve come to with the ongoing ‘business-ization’ of higher education: “The customer is always right.”
This is an inevitable consequence of “If only we ran this college like a business, it would cost less and be more effective!”.
When degrees are a product you’re selling, teaching is not nearly as important as keeping the customers satisfied.
I can’t imagine why I would start (“entryway”?) teaching something so important by telling students it’s a tool you can use to lie and mislead! Would you begin an art history course with a bunch of bad paintings and forgeries?
No, that joke is a subconscious license to ignore data and evidence.
Incidentally, I don’t regard students as my customers. My customers are citizens and taxpayers, now and in the future, for whom my students will create more value if I can give them better tools and faith that they really work.
As a non-academic I think the reason is obvious. I don’t think that the pretension to mathematical precision has been beneficial to the liberal arts. Politics, for instance, is not a science and isn’t really amenable to the scientific method. Generally speaking, when I see a “political science” paper chockablock with numbers and charts, I’m pretty sure I’m about to be misled.
Frankly, I think most schools offer a “statistics for dummies” course for much the same reason as Hogwarts offered Defense Against the Dark Arts. And I think any instructor of such a course who was sufficiently self-aware would probably approach such an introductory course in exactly that spirit.
No, I’d start by treating it as ‘it’s a tool that has been used to mislead; here’s how to use it right.’ Have you READ Huff’s book? I’m not saying that you need to dwell on the subject, but any general-studies course on statistics had BETTER teach how statistics are misused and abused, consciously or not. How DO you decide on the correct statistical method? What happens if you select the wrong one for a set of data? How are you misled by your mistake?
And yes, beginning an Art History course with stories of forgeries and bad paintings is actually an excellent way into Art History for the vast, vast majority of students taking the course, which is, again, ‘one of the boring requirements I have to take to get my largely unrelated degree’.
“Here, this is why knowing Art History is important” is a hell of a lot more interesting, and more conducive to self-learning, than “Today we look at early Paleolithic art as we drone on in our semester-long trudge through the history of art…” delivered in Ben Stein’s econ-teacher voice from “Ferris Bueller’s Day Off”, which, quite frankly, was my introduction to Art History.
People ignore data and evidence as it is, unrelated to the joke. The biggest hurdle in teaching people statistics is showing them that the way they normally think about the subject is wrong.
There are lots of bad ways to teach art history. One is failing to make it clear right away that the payoff is engaging with wonderful stuff of the type that makes life worth living: that art is an opportunity the students might want to seize, rather than one more problem they don’t want. Another, possibly, is starting at the historical beginning, with stuff the students have a lot of trouble engaging with, instead of with something accessible and a question like “Where did this come from? How was it possible?”
So why not start a stats course with an interesting, consequential, question that has a counterintuitive answer that can be found using the tools of the course?
Michael, your last sentence above illuminates the reason not to get too hung up on the semantics of the joke, like trying to differentiate between “lie” and “mislead” and “misunderstand.” WE (i.e., the smart folks who read this blog) already understand that sometimes correct answers are counterintuitive. But THEY (i.e., all the others, unwashed as they are) need to learn it.
Consider this progression:
(a) “Unscrupulous people use statistics to lie to us.”
(b) “Careless people use statistics to mislead us, whether deliberately or not.”
(c) “Some people use their misunderstanding of statistics to reach incorrect conclusions, which they then explain to us using the same incorrect understanding of the statistics, and we are thus inadvertently misled by people who are trying to get it right, but don’t know how.”
(d) “We ourselves, to the extent that we don’t have a good understanding of statistics, will not only be misled by others, but will also mislead ourselves, thinking we know how to understand the data when we actually don’t.”
That progression is sort of a “slippery slope in reverse.” If you can convince your students of statement (d), then showing them that progression will make them realize how important it is to subject alleged statistical reasoning to scrutiny, to protect against (a), (b), and (c) as well as to minimize doing (d) to themselves.
Your students are coming into the course with that joke, whether you like it or not, firmly ingrained in their culture. Acknowledging it and countering it will empower the majority of your students who do not become statisticians to recognize when they are being misled.
Here you have to stop and consider your classes. You’re teaching Stats 101, a course *lots* of people need to take. The joke is in their culture. I would take the Aikido approach and use the strength of that joke against itself.
If it were me (and I am not a professor) I would start the first day of class with just two things written on the blackboard, or PowerPoint, or whatever futuristic devices you use in Berkeley, in umpty-thousand-point type:
“There are lies, damned lies and statistics”
“With great power comes great responsibility”
And start by pointing out how being statistically literate inoculates you against the first, and imposes on you the second.
Statistical analysis IS important and a wonderful tool. Taking many individual data points and deriving insight that could not be gained from any single one is a monumental achievement. John Snow’s map of cholera cases is an immediate, visceral, and instantly understood example, even if the analysis is graphical rather than mathematical.
Moreover, this is (I believe) a good approach to the flipped classroom (something my college is dealing with, and struggling with, as well). Find examples of both kinds, misleading and counterintuitive, and, without telling the students which is which, get them to figure it out.
Teaching statistics without teaching how statistics are used to mislead is a little like teaching history and skipping over all the wars.
I think we’re converging on the same point, and I think Ken is getting at my point as well: students come in with this meme in their heads, and by not using it to leverage your instruction, you’re missing a great opportunity to bring them around to your POV.
Ahh, damn lack of an ‘edit’ button on this blog; and, for a l’esprit de l’escalier moment, I’ll disclose a personal bias which influenced my John Snow example.
I think one of the most important and informative statistical works of all time, which should be on every academic’s bookshelf and committed to memory, is Tufte’s “The Visual Display of Quantitative Information”.
It applies equally to art history AND statistics! It’s got what plants want! 🙂
If we could just get that one work into the hands of many more people, the world would be a more literate place.
I had occasion to teach intro statistics earlier in my life, and I took the opposite approach — a value of learning some statistics was to defend yourself against being lied to and misled by less scrupulous others. (The same might be said about economics.)
“Why do educated people think it makes them appear witty to repeat a dumb bromide …”
I’ll attempt an answer to that one. What is economics *for*? Surely to make the economy work better. But for most people, the economy hasn’t worked all that well for a long time: median real income is stagnant, healthcare costs keep rising, housing costs are high, colleges are exorbitant, GDP growth since 2000 has been anemic. So what are economists doing?
Either they’re wrong (and the experience since the crash of 2008 indicates that most can’t tell how big a crash is even after it has happened), or they’re ineffectual.
On top of that, I think there’s a feeling (which surely an economist can appreciate) that anyone who really had the necessary skills and knowledge to analyze and predict economic outcomes would be using those skills to make money, not writing academic papers.
Which is not to say that some of the content isn’t “wonderful”. But if you’re saying that it’s interesting because it gives insight into the real world, as your analogy with statistics would suggest, then I think many people doubt that these days. If you’re saying it’s interesting for its elegance, then sure, but so are chess and poetry and a host of other disciplines.
A large part of the reason it’s so ineffectual is that so many people know so little about it that they swallow all sorts of crap.
Including politicians, of course. But no matter what kind of craziness the politicians propose, they can always find a number of highly qualified economists to support it. And if the people who *do* know a lot about economics end up on both sides of every practical question, then that leaves me highly skeptical about the value of economics as a model of the real world.
I’m only skeptical of those whose models do not reflect the real world, or, as DeLong puts it much more succinctly, those who don’t “mark their views to market”.
A model should have predictive value. If it doesn’t, it’s wrong (in the hands of an honest economist) or propaganda (in the hands of a dishonest one).
It seems to me that throughout the current slump just about everyone’s models have been too optimistic: first about the possibility of a crash, then about the depth of the recession, and then about the speed of the recovery.
So I’m highly skeptical of the whole bunch, though in general terms the UK and much of Europe have tried austerity, while China has had stimulative infrastructure spending, and in both cases the results are in line with Keynesian models. So the right-wingers are completely disconnected from reality, while the Keynesians just don’t have good enough data or accurate enough models to overcome their optimism.
There’s also a perception that economists are always trying to tell us how we *ought* to behave. Which is aggravated by the tendency to indulge in false advertising by taking rather narrow technical concepts from economics and giving them names which imply a value judgment, e.g. “free market”, “free trade”, “efficient”. Disciplines which keep their noses out of other people’s business, and stick to arcane jargon, are less likely to be the butt of pointed jokes.
The overreach of economics, especially in the ’80s and ’90s, has damaged the brand. And I think it’s a shame, because more recent work on the actual behavior of (often flawed) markets, “irrational” behavior, agency problems, externalities, etc., represents genuine advances. Though it seems to turn out that the more we know about the real world, the more humility is needed about the chance of predicting or influencing it.
There is no evidence that economics is studied to benefit the health of economies. The problem with conflating statistics and economics is that statistics is a science, where true knowledge inexorably drives out error and our understanding monotonically increases. Nobody has to know or care who Gauss or Poisson or Bernoulli or Tukey was, or agree with them about anything other than the basics of math, to be forced to concede all that follows from their insights.
Economics, on the far distant other hand, is simply religion in drag, wearing the clothes that became fashionable after the Enlightenment, with penis envy replaced by physics envy. The schools of economics, like the denominations of Christianity, offer ample proof that economics is a con game, where the same old doctrinal arguments are endlessly re-fought, because that is how a priest wins status in a cult: by proving himself clever with rhetoric and storytelling.
Just as religion under that name is about persuading the masses to accept the rule of the priests and to give the priests status and power, religion under the name economics is about persuading the masses to give economists status and power, the better to cement the power of those whom the priests worship, the wealthy.
True, there are a tiny handful of thinkers who bear the title economist and yet are honest nonetheless — the Krugmans, the Stiglitzes, the Hendersons. But what is most interesting about their work is that they are so aware of the limits of the knowledge in the field, and of how most of what passes under the name economics is simple fraud.
It’s interesting to note that a smart person with no training in the seminary (as schools of economics should be called) can use statistics to show whether an economic argument is the standard BS that is run of the mill in the field, whereas a smart graduate of the econ seminary cannot apply anything learned there to detecting whether a statistical argument is sound or not.
Maybe. But there are genuinely useful economic insights. It just seems to go pear-shaped when economists try to get quantitative.
I think the limitations of economics have had an especially pernicious effect on the management of US companies. Economics has a lot to say about simple commodity markets, where everybody is offering the same product, but at different prices. And mostly what it says is that the correct strategy for a producer is to cut costs and increase volume. And I strongly suspect that’s how the US auto industry came to produce a whole lot of small, badly designed, low-quality cars that no one wants to buy, allowing Honda and Toyota, with an emphasis on good design and high-quality manufacturing, to eat their lunch. It turns out that cars are not a commodity.
And it’s also how we got a host of low-cost PC/laptop manufacturers who concentrated on buying components cheap, reducing inventory costs, outsourcing assembly, etc. And all those operations are losing money, while Apple builds well-designed, high-quality products and sells them at a massive profit. So computers aren’t a commodity either. And the biggest success in supermarkets? Whole Foods Market, selling quality produce at a premium price. So food isn’t a commodity either.
So how does this happen? Well, the economists have a neat model for the simple case of commodity markets. And they don’t have a good model for the effects of differentiated products and high quality. I mean, they know those features are good, in a qualitative way, but they can’t put a number on what they’re worth. So they go on and on about the importance of cutting costs, and we get a bunch of outsourcing and layoffs and under-investment and cost-cutting and crappy design and products that no one likes.
In contrast, Apple had the great good fortune to be led by someone who had a strong interest in aesthetics and design and quality and user interfaces, and AFAIK zero training in economics.
Another data point would be the incredible success of the quality-focused ideas of W. Edwards Deming; see http://en.wikipedia.org/wiki/W._Edwards_Deming#Key_principles for his key principles. Deming’s ideas are widely recognized as a key contribution to Japan’s postwar economic success. But they’re just about the opposite of what economists would recommend. Deming’s training was EE/math/physics, and then statistics.
It is SO encouraging to find all of you honest, smart bloggers at this site. Speaking of honest, true, smart, kind, etc, this is Kevin Drum’s birthday.
Ferd: Who are you buttering up, and why? (The “economist” in me assumes you have a self-interest at heart 😉)
And BTW, thanx for the note on Kevin Drum’s birthday. I sent him a happy birthday email. And also BTW, can you believe that “kid” is 55?
Frankly, your final joke (“Teaching is the tax you pay…”) doesn’t seem to be much of a joke at all. The review process you describe reflects the real values of the University: Senate faculty are researchers first and teachers second (or third). I am assuming, by the way, that by “we” and by “faculty” you mean only Senate faculty, and not the large group of Non-Senate faculty who teach at Berkeley and elsewhere in the UC System. Lecturers, who deliver almost half of undergraduate instruction at the UC, are evaluated solely on the quality of their teaching. They’re also paid far, far less than their Senate colleagues, a fact that also reflects the University’s priorities.
“Lecturers, who deliver almost half of undergraduate instruction at the UC” isn’t correct, and it’s getting less correct.
http://accountability.universityofcalifornia.edu/index/9.3.1
When a university education was cheap, this situation was unfortunate. Today, when students literally mortgage their lives in order to pay for what the university claims to give them, this situation is grossly immoral. Every university professor is a knowing conspirator in a system of fraud on the same moral and financial scale as the mortgage-backed securities fraud that caused the financial crisis.
The university claims to give them an education. What it has always actually given most of them is certification and socialization, with a bit of French and piano on top. (I except engineering from this: real high-level vocational education.) A few of the students (largely those headed for Ph.D. programs, and first-generation college types) take the education seriously. Most know better. They’re there to get a degree.
Yep, the university is a fraud with regard to undergraduate education. But it is one of those pious frauds that is designed to deceive nobody, but rather just to make everybody feel better about what they’re doing. Bloix, from some previous comments of yours, I believe you’re an attorney. Consider our own field of practice, especially criminal law or compliance.
People tell hardly any economist jokes compared to lawyer jokes. So I am not sure why the whining here. Buck up, man. Buck up.
But here is the best economist joke I heard. It was from a CFO who said, “Economists are people too stupid to become accountants.”
The joke rings true to me because when I see economists on television and read their op-ed pieces telling us minimum wage increases are bad because they increase unemployment or cause inflation, I am inclined to agree that most economists are idiots. And the ones I see as experts in employment cases, personal injury cases, and the like are also big on garbage in, garbage out. Most economists, with models that most often resemble garbage in, garbage out, are in dire need of sociology courses so they don’t fall into their usual trap of thinking the rational person is Ayn Rand. That, to this lawyer who has seen his fair share of economist experts, is the heart of the problem.
price of nothing…
because, as far as I can tell, economists believe that, in its majestic equality, the economy provides equally for the rich man and the poor man the opportunity to cheat workers of their wages, or to buy congressmen to fix the tax code
because, despite the overwhelming evidence not just of all of human history, but of every single interaction with another human, economists believe in rational people
because when they publish papers, they don’t include raw data, even when it is just a few numbers, but only derived stats
because they honor people like Robert Barro of Harvard, who, if Krugman’s book is to be believed, actually published a paper saying ordinary people adjust their behavior after studying and absorbing news about the Federal Reserve’s latest actions
because they assert that they know stuff, and every fifty years, when the economy crashes and all their knowledge turns out to be wrong, they admit failure grudgingly, for a short period
because field work like Truman Bewley’s Yale employment studies is so rare
because although Kahneman’s prospect theory is superior, they don’t teach it or discard the old theory
well, you and Philip certainly get a C- for brevity
“Statistics is in a similar position (‘there are lies, damn lies, and statistics’). This kind of joke, based in willful ignorance, is diagnostic of an affective failure, not an intellectual one.”
This kind of joke is based on excellent insight and, like many of the best jokes, is funny because it is sad and because it is true. If you could teach a large majority of your students how to *merely* avoid being taken by the five or six most basic misuses/confusions of statistics, you would be doing as great a service as I can imagine one teacher doing in a single course at the university level. If you taught them nothing else at all, you would be improving their ability to be good citizens immensely. The fact that you are resisting this approach is mystifying.
Don’t teach it as “how to trick people with statistics”. Teach it as “how not to keep getting tricked by statistics”.
I’ve taught Stat 100 at the University of Illinois (~2,200 students/year) for the past 12 years, and have found the approach that Bruce J and Sebastian H advocate to be very effective.
On the first day of class each semester I ask my students the same two questions:
“Think about all the studies reported in the popular press. Do you generally trust the results of these studies? Do you read the statistics presented in these articles?”
All but a handful answer “No” to both questions. A discussion follows, and the general consensus of the students is that people lie with statistics and it’s best to ignore the studies and trust your own intuition and experience.
Then I ask two more questions:
“Do people lie with words? Have you personally ever lied with words?”
Everyone answers “Yes” to both questions.
The point is that lying, in itself, is not the problem; lies in print don’t stop students from reading. The problem is they have no way to detect the lies in statistical arguments like they do in other arguments, so, just as Michael O’Hare says, they feel powerless and mistrustful.
How do you get students interested enough to learn the basic statistical concepts they need to evaluate studies and make informed decisions? Most people are interested in themselves, so in Stat 100 we learn statistics by analyzing our own anonymous survey data. We trust the data since it’s our own. We even do randomized experiments on ourselves. When we get surprising results we don’t dismiss them as lies, we wonder why… we formulate hypotheses and test them. In other words, we act like scientists because we’re completely familiar with and interested in the data.
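To give a concrete flavor of what “we formulate hypotheses and test them” can look like, here is a minimal sketch of a permutation test (my own illustration with made-up numbers, not Stat 100’s actual materials; plain Python):

```python
import random

# Hypothetical class-survey data: hours of sleep reported by two
# randomly assigned groups in an in-class experiment.
group_a = [7.0, 6.5, 8.0, 5.5, 7.5, 6.0, 8.5, 7.0]
group_b = [6.0, 5.5, 6.5, 7.0, 5.0, 6.5, 6.0, 5.5]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(group_a) - mean(group_b)

# If the group labels were arbitrary, how often would a random
# relabeling of the pooled data produce a difference at least this large?
pooled = group_a + group_b
n_a = len(group_a)
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
    if abs(diff) >= abs(observed):
        extreme += 1

print(f"observed difference: {observed:.2f} hours")
print(f"approximate p-value: {extreme / trials:.3f}")
```

A small p-value says the surprising result is hard to explain away as an accident of who landed in which group; that is the “we wonder why” step made mechanical.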
Most students think it’s fun.
I guess the question I have, which comes up over and over as I confront job search after job search after promotion case, is this:
Why is it that at an R1 university, the minute you mention that someone might have serious pedagogical value, their research value appears to have been diminished? Why is there apparently just one finite pie of excellence — if you are excellent at teaching, then there just isn’t enough “excellent” left over for you to be a superstar in the research realm? Drives me nuts.
And this is gonna get me into trouble, but why is it that so many people think that, just because lecturers mostly teach, they are inherently better teachers than Senate faculty?
Questions with no answers, but that’s my two cents worth of a reply.
Hi, Leslea
Well, the presumption in your first observation certainly entails the second one! They aren’t silly, because specialization is not chopped liver; heart surgeons do better if they do a lot of heart surgery. But they have a faulty built-in assumption.
Economics is good for thinking about this, and the key tool is the production possibility frontier (PPF), the line in operational space along which you have to trade off one good thing against another, like quality versus low cost, or research versus teaching, for an individual or an institution.
But it’s not enough, because if you only understand it that far, you assume that an organization or a person is actually operating on its PPF, just as students misunderstand market equilibrium as a place the world is at rather than a target it keeps groping toward. No organization is at its PPF (and even if it is, it won’t be next week, as organizational learning and technological progress move the PPF outward); managerially, the good bet is always to try to get more of both and pretend the PPF is far away (as it may actually be). If you think you are on it, you will pace back and forth making unnecessary tradeoffs, like the tiger pacing along the bars of an imaginary cage.
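To put the same idea in symbols (a minimal formalization I am adding here, not anything from the original post): write research output as r, teaching output as t, and the resource constraint as g(r, t) ≤ c.

```latex
% Feasible set: everything the organization can produce with its resources.
%   F = \{ (r, t) : g(r, t) \le c \}
% The PPF is the boundary, where the constraint binds; only there does
% more teaching force less research, at the marginal rate of transformation.
\[
  \mathrm{PPF} = \{\, (r, t) : g(r, t) = c \,\}, \qquad
  \left.\frac{dt}{dr}\right|_{g = c}
    = -\,\frac{\partial g / \partial r}{\partial g / \partial t}
\]
% At an interior point, g(r, t) < c, both r and t can still be increased
% at once: the tradeoff the organization agonizes over is not yet binding.
```

The managerial advice above is just the last comment restated: until you have evidence the constraint binds, behave as if it does not.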
Hi L. Hlusko,
In my case, teaching thousands of students and writing all my own materials based on the students’ own data, it would be impossible to also do research.
The ability to teach and do research are probably positively correlated, but I don’t think you understand how time consuming and intellectually absorbing it is to teach at the level we’re aiming for.
You could probably teach small classes of upper-level students quite easily. They’re already speaking the same language you are so half the battle is won.
But teaching large intro courses, which was the original point Michael O’Hare raised (giving students “the eye-opening realization that they had acquired powerful tools with which to understand and improve a complicated, random, changing world”), is an overwhelming job and can’t easily be meshed with research.
Dr. Fireman,
I do not wish to say anything negative about the good work you assuredly do, but I think that Professor Hlusko (http://ib.berkeley.edu/labs/hlusko/cv.php) may have some idea of the effort involved. I find it unfortunate that folks in our esteemed profession seem to find it difficult to understand that there are many ways to remove the epidermis from this particular feline. Some people are very good at balancing research and teaching (and service…) to optimize their work across all areas, and can cross-fertilize their various duties, while others find it most fulfilling to focus their labor on one area above others. This is not a problem! It is only a problem when we decide that all faculty must be evaluated using a narrow set of criteria (and this varies by institution; at mine you get much less credit for research productivity than at a UC).
Just to toss in a bit of humor, when I was teaching sections of a large intro to statistics offering, a minor exam question went something like this, “Suppose your friend said {a false statement} about the Law of Large Numbers. Explain the error to your friend and give a correct accounting of the meaning of the Law of Large Numbers.”
One wag answered simply, “I would not have as a friend anyone who cared about the Law of Large Numbers.” One point for chutzpah.
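For anyone who wants the “correct accounting” the question asked for: a minimal simulation (my own illustration, plain Python) shows the sample mean of fair-die rolls settling toward the expected value of 3.5 as the number of rolls grows.

```python
import random

# Law of Large Numbers: the running average of independent, identically
# distributed draws converges to the true mean. For a fair six-sided
# die the true mean is (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5.
random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]

for n in (10, 100, 1_000, 10_000, 100_000):
    sample_mean = sum(rolls[:n]) / n
    print(f"after {n:>6} rolls, sample mean = {sample_mean:.3f}")
```

The law says nothing about any individual roll “evening out”; only the long-run average is tamed, which is the sort of false statement such exam questions are built around.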