If students are customers, why don’t they do their research?

A discussion I have each semester with new students is whether they consider themselves to be customers or not. The distinction I’m trying to get them to grapple with is that as students, they are themselves a key determinant of how good the “product” ends up being. In other words, they cannot just place their orders and all expect to get the same result in terms of knowledge acquired. While there are certainly some aspects of the relationship between educators and students that are analogous to suppliers and customers, it’s an incredibly poor model to base one’s academic interactions on, as it encourages passivity on the part of the student, as well as a mindset which focuses on the student’s rights, rather than their responsibilities.

Teaching EBMgt: developing better managers, or educating critical thinkers?

On the Facebook group Evidence-Based Management, Laura Guerrero asks:

In terms of the big picture, I wonder what people think in terms of why we ought to teach using EBMgt.

I hear people talk about studies and research. The way they talk about these suggests to me that they do not understand what they are or how to evaluate them. For example, there is a study that says that plastic water bottles leach a substance into your water and this is bad for you.

Here in Canada, a number of people threw out their water bottles and bought metal water bottles and now city parks want to ban bottled water. A number of stores have stopped selling this type of plastic bottle. I wonder if people have a sense of what ‘bad for you’ means, how this finding was reached, whether they should trust their morning news anchor to deliver scientific news, and so forth.

I wonder if it is my responsibility as a professor-to-be to instruct students on how to be critical thinkers and skeptical consumers of information. In other words, I think that EBMgt is important to develop better managers who will make better decisions. But I wonder if there is also a bigger purpose: to educate critical thinkers.

The students in my Evidence-Based Management course at the University of Cape Town are almost all just out of secondary school, and I suspect that my answer to Laura’s question would be very different if the course was being taught to graduate students in an MBA class. Generally, I’d have to argue that teaching Evidence-Based anything requires the students to have some understanding of what evidence is, when it is needed, and what to do with it. So while we would hope that graduate students know some of this already, we can’t take that knowledge for granted. If your students don’t understand the basics of scientific reasoning, teaching them EBMgt may well end up being a mere installation of various principles that they could proceed to treat as dogma, thereby remaining as uncritical as they were when starting the course. So yes, where students don’t have the knowledge in question, it would be a professor’s responsibility to instruct them in how to be critical thinkers first, before embarking on any discussion of applying principles of critical reasoning, such as EBMgt.

The job is perhaps easier at undergraduate level, such as in my course. There, it’s almost invariably the case that students have not been exposed to the principles of drawing conclusions based on the available evidence, and are quite comfortable with holding contradictory or incoherent beliefs, simply because they have never been exposed to the contradictions or incoherencies. In this context, teaching EBMgt starts with general principles of scientific thinking and critical reasoning, and often ends there too, because as anyone who teaches this material knows, there is much work to do in terms of undermining the prejudices and lazy thinking habits that permeate the cognitive processes of the average student. It’s only once the fundamentals of reasoning are in place that we can begin to talk, and think, about more complex cases of evidence-based reasoning in professional practice.

A fantastic recent book that I’ll be adding to my course as suggested reading is Ben Goldacre’s Bad Science, which does a terrific job (as regular readers of his blog and Guardian column will know) of highlighting and explaining some of the obvious ways in which we make life so much harder for ourselves as a species, by constantly believing the craziest things simply because we’re too lazy (and often unprepared) to think about them.

Developing better managers is certainly a positive result, but it pales into insignificance when compared with developing better thinkers more generally. Some of those thinkers may go on to be good managers, but in the meanwhile, we’ve also hopefully helped to produce a few good teachers, plumbers, doctors and parents.

They tried to teach my baby science…


Confirmation bias

All of you religious nutters out there probably believe you’ve known this for some time, but I’m discovering that atheist/agnostic students can be just as unreasonable, pig-headed, irrational, rude, lazy and just plain stoopid as any given believer. In an atheist/agnostic discussion forum at my university in which I regularly participate, infantile debates are raging on vegetarianism and evolution, and some parties to these debates seem to have decided that – once they give up on god, Santa and the Tooth Fairy – their logical fortress can no longer be breached and they no longer have any obligation to even try to present coherent arguments. It’s all very sad and tawdry.

Further work on food?

Modernity amounts, to some extent, to a plethora of choices with very little guidance as to which choices to make. Traditional moral scripts fail, yet self-identity still develops through the choices we make. Our relationship to food expresses many of those choices, and to some extent becomes an application/manifestation of virtue, especially with regard to organic food/slow food/cloned food. Of relevance here: risk society & conspicuous consumption.

Experiment: to what extent do food choices reveal conceptions of personal identity?

Is belief in god rational?

The question of whether belief in god is rational or not seems to presume an answer to a prior, and perhaps more important, question – namely: do we want belief in god to be rational, as opposed to being fruitful, joyous, beautiful, etc.? To put it another way, it’s long been of interest to me why this contest is often fought in the domain of rationality, where everyone who is not a supernaturalist of some sort agrees that there is no possibility of providing any sort of knock-down argument for belief in god, at least where arguments are understood to follow standard rules of logic, involving non-contradiction, the possibility of refutation, and where conclusions are adopted once they are shown to be the best justified of available alternatives.

Rather, the more compelling arguments in favour of belief in god point to various benefits of believing in god, whether these benefits are social, psychological or moral. While it’s far from clear that any of these other purported benefits hold up to scrutiny, or can’t be purchased at lower cost from other sources, it seems to me that we’d need to adopt a definition of “rational” that is essentially teleological (goal-based), rather than one that aims at truth, for it to be possible for belief in god to be described as rational.

How safe is safe enough? Cloned food and moral panics

Weeks prior to the FDA’s declaration that milk and meat from cloned animals was safe for human consumption, the Wall Street Journal observed that consumers have a history of being cautious in adopting technological innovations in food. Pasteurised milk took years to gain acceptance, and “some consumers and consumer groups still refer to genetically altered foods, like those that contain genetically modified corn or soybeans, as ‘Frankenfood’” (Zhang et al., 2008), more than a decade after such products appeared on the market.

Preliminary thoughts

Much of what I’ve been interested in over the last decade or so has revolved around epistemology, and in particular virtue epistemology – in other words, questions around what it is that we should believe, and how we should form our beliefs. These are normative questions, and raise a whole bunch of issues relating to the extent to which we are in fact able to be rational epistemic agents; what such agents would look like; and whether we would want to be disposed in this way at all.

Am I an idiot?

This was the question a student asked me 10 minutes before his supplementary exam, a week or two ago. Supplementary exams, for those not familiar with them, are a second chance offered to students who end the semester with a final mark of 45%–49%. Seeing as a pass is 50%, the thinking is that they may simply have had an off-day during the initial examination, and deserve a second chance.

Seeing as he would have to repeat the entire semester course if he failed this supplementary exam, and seeing as he knew me as an honest person, and also as one not afraid of speaking the truth about idiocy, it was peculiar that he wanted to hear my answer at that particular time, when you’d presume his state of mind to be somewhat fragile. But the question was asked.

Put a contract out on yourself

It will be interesting to track the success (or lack thereof) of this idea: stickK.com:

On stickK, you draw up an official commitment contract that binds you to achieving a personal goal, be it big or small. By agreeing to this contract, you publicly state your goal and commit to achieving it. Or, if grand public pronouncements aren’t your style, you can tell only people you select. Either way, you’ve committed to a goal and people know about it – so now it’s your reputation at stake!

To make you accountable as you work toward your goal, you file weekly reports on your success. (And don’t even think about lying — because you appoint someone you know as a “referee” to verify the accuracy of your reporting!) You also enlist as many Supporters as you’d like to encourage you, via the website, every step of the way.

If humans functioned as rational economic agents, it should be a roaring success, leading many of us to find the motivation required to finish those Ph.D.s, stop smoking, eat less spam, or whatever. But as Herbert Simon (and common sense) tells us, while we’re certainly economic agents, we’re also very infrequently rational – often through little fault of our own. StickK provides an interesting thought-experiment, though, in that the first impulse that comes to mind (in my case, at least) is that – if I wanted to – I could quit smoking. But we’re often willing to leave that commitment in the hypothetical realm, and StickK offers a cheap, yet still incentivising, way to put your money where your mouth is. The key economic question that remains, however, is whether self-deception has a larger payoff than achieving one’s goals. For many of us, self-deception is so ingrained that we see little or no alternative to keep taking that payoff, even when it’s smaller than the alternative rewards available. So again, we’re left with the essential prerequisite of escaping the circularity of our definitions of self. And this, fellow humans, requires a significant infusion of courage, as well as friends who are willing to tell you the truth.