H809: Reading 21 Hammersley (2006) Ethnography: Problems and prospects

I came to this paper not knowing anything about ethnography, and, while Hammersley writes clearly and gives a useful overview of the area, I get the impression that it’s one of those ‘everything-you-know-is-wrong’ topics!

We are reading this with a view to exploring the notion of ‘virtual ethnography’, which appears to be the poor sister of ‘real’ ethnography, but Hammersley’s paper shows that much current debate on ethnography is centred on working out just what ethnography is. And this reflection suggests that a redefinition is needed, one that takes into account the blurring boundaries between online and offline activity.

Hammersley rounds up some of the debates within and about ethnography and helpfully offers a definition that is broad enough to be representative while being succinct enough for newbies to understand:

[It is] a form of social and educational research that emphasises the importance of studying at first hand what people do and say in certain contexts. (p4).

It usually involves ‘fairly lengthy contact, through participant observation in relevant settings, and/or through relatively open-ended interviews designed to understand people’s perspectives’. Thus far, I see no great issue with applying this to participants engaged in online activities, although ‘fairly lengthy’ is a problematic term (I note that there has been a shift towards shorter studies given the changing cultures in universities).

I digress (one of the affordances of a blog is the way it allows one to think out loud. I’m writing this not as a means of sharing so much as a way of constructing my own knowledge. Sorry. 🙂  ) We H809 students have our instructions!

13.5: Ethnographic understandings of context

Hammersley describes the tension between observations made at a micro level which are held up to represent a big picture. There is this ‘holistic’ location of the thing being studied, but perhaps the thing should be studied in greater detail at a more local level (micro-ethnography).

We then have the difficulty of determining whether context is ‘discovered or constructed’. It’s at this stage that I began to remember the discussions of deconstruction and post-modernism from my undergrad years and I also recalled the Sokal Affair. Take this point of view: ‘any attempt by an analyst to place actors and their activities in a different ‘external’, context can only be an imposition, a matter of analytic act, perhaps even an act of symbolic violence‘ [my italics]. Hammersley doesn’t endorse this view, but he does say there is a ‘grain of truth’ in it. My fellow H809 student, John Kuti, sums things up far more succinctly than I do.

13.6: Virtual context

Hammersley points out that in traditional ethnography, great emphasis is placed on ‘the researcher’s participation in, and first-hand observation of, the culture being investigated’. Internet ethnography, however, involves no face-to-face communication, collecting the data online instead. I’m not quite sure again why this is problematic, as I don’t see why physical presence is so vital. With the increasing ubiquity of powerful audio/visual communications technology, surely there is nothing inherent to f2f that would in itself devalue online research?*

* As a total newbie to ethnography, I’m aware that I could be in dangerous territory here. Please Prof. Ethnography, don’t hurt me!

Another difficulty of online ethnography for traditionalists is the problem of not knowing anything about online contributors ‘beyond what they tell us’ [his italics]. But, as he points out, this is something of a straw man, as most online interaction ‘operates in an orderly fashion’ and ‘participants obviously display enough about themselves through their contributions to be able to understand one another’.

When I think of the amount of data that is freely available about me online, the relationships that exist only online and the traceable interplay between my profession and recreation, I’d imagine a face-to-face interview would be redundant!

H809: Week 11 round up

The H809 blogosphere has all but vanished! (Head over to James Aczel’s course blog for a round up of the latest entries) The postings have slowed to a crawl since we got over the last TMA with its somewhat complex part 1. Maybe we’re sulking! 😉

Anyway, in week 11 we were to look at three readings, each a published piece of research broadly examining educational technology. We were to read them and think about how ‘new’ the research might be.

Reading 14 is a 2002 paper by Bos et al. presenting a quite interesting examination of how trust is influenced by the medium of communication. Participants in the study were divided into teams and had to play a ‘social dilemma’ game where the best result emerges when people sacrifice personal gain for the good of the team. They compared trust between team members who played face-to-face, using videoconferencing, phone conferencing and text. They found that (unsurprisingly) the face-to-face teams established trust best, but video and phone were not far behind.

I found this study fascinating and well-executed. Having four distinct methods of communication with a broadly similar group of participants should tell us something valuable about the effect of the medium on communication. This of course has implications for instructional designers and online educators. (Applying it to H809, which is entirely text-based, might explain the relative lack of participation?) I’m unsure if the research is ‘new’ as such. The technology is certainly not new and there appear to have been similar studies, particularly in the fields of business and psychology (Valley, Moag & Bazerman 1996; Jensen, Farnham, Drucker & Kollock 2000; Fletcher & Major 2006).

Reading 15 from Joinson & Reips (2005) considers the effect of personalised salutations and the seniority of the sender on the response rates to web-based surveys. Ever the eager-beaver guinea pigs (!), OU students received invitations to participate in surveys. Some individuals received generic salutations (‘Dear Student’ or ‘Dear Open University Student’) while others received more personal ones (‘Dear John Doe’ and ‘Dear John’). Three studies were carried out to refine the overall study and lots of, I’m sure, very meaningful things were done with chi-squares and logistic regressions (whatever they are).
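For fellow statistics novices, a chi-square test on a 2×2 table is one standard way to check whether something like personalisation affects a response rate. The figures below are invented purely for illustration and are not taken from Joinson & Reips:

```python
# Chi-square test for a 2x2 contingency table.
# Invented example data (NOT from the paper):
#                  responded   no response
# personalised        120           80
# generic              90          110

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

chi2 = chi_square_2x2(120, 80, 90, 110)
print(round(chi2, 2))  # 9.02 -- above the 5% critical value of 3.84,
                       # so this (made-up) difference would be significant
```

In other words, the test asks how surprising the observed split would be if salutation style made no difference at all.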

Again, what is being tested here is not so new. The medium (email) is at least ten years old and personalised surveys are not new either. What is new, perhaps, is the ubiquity of email as a communication tool. Getting an email in 1997 was novel; after ten years of spam, less so. Therefore their study has relevance and purpose. The sophistication of the technology has increased also, but has this just allowed data to be collected faster rather than offering a new research method? All in all, this was a convincing study given the way it was refined in three phases. I wonder if the participants are truly representative of the general population though (if that is important). OU students are perhaps aspirational, possess goodwill towards the (academic) institution and may be more inclined towards helping. It would be interesting to repeat the process with a more commercial agenda.

Reading 16, Ryokai, Vaucelle and Cassell (2003), concerns the effect of a virtual peer on literacy and storytelling in children. They created a (slightly creepy) virtual character called Sam who appeared to engage young children in modelling storytelling and linguistic devices. They found that the children learned linguistic structures from Sam, and they posit that there are potential benefits in employing such tools in developing children’s literacy.

Initially, I thought this a little gimmicky. My superficial reading of the study led me to think that the paper was less about the research than it was about how novel Sam was. Could they not have used a puppet or a disguised adult? What was the benefit of using a simulated peer? But then I realised that the whole point was to suggest that software such as Sam could be an easily distributed and employed tool in classrooms, giving learners opportunities to learn without the need for close interaction with a teacher. The paper advocates more ICT in the classroom and their study gave them evidence to support that assertion. The main drawback in my mind (and they do acknowledge this) is the very small scale of the study. It was limited to a relatively small group of 5-year-old girls who played with Sam for only 15 minutes. The paper seems a little premature.

Overall, these readings were much easier to get a handle on than previous ones. All three had very tangible aims and outcomes. The work we have done on theoretical frameworks has made it easier to contextualise the papers (even if it is still rather difficult to spell Vygotsky).

3.2 Examining impact – Mason reviews OECD

[I’m engaging in a risky gambit at the moment of trying to compress 30 hours of study into about half the time. Basically, I don’t wish to continue playing catch up as the course progresses and the TMA appears rapidly on my horizon. I toyed briefly with dropping out and getting a refund, but hey, we’re built of sterner stuff round here. Or we just have unrealistic expectations. You choose.]

A few points in Mason’s review struck me. It seems that based on the data, or rather this data [Mason’s italics], ‘e-learning has failed to emerge as a significant activity or market, although there is evidence that online learning is growing’ (287). Also, elearning has not had the ‘revolutionary’ effect many predicted. Being familiar with Mason from H808 (and the findings of the OECD report are indeed very pertinent to that course), I was surprised to find myself reading her review with an H809 hat on. Essentially, I wasn’t so interested in the findings; I was more interested in her critique of the methodology, that italicised ‘this‘.

Learning about questioning, and learning to question research, is core to the course thus far. From this review, we learn that Mason has doubts about the efficacy of the questionnaire approach to gathering data employed by the OECD because this method ‘would probably never capture the subtleties of slow, personal changes in the processes of teaching and learning’ (287). That raises the question of whether these slow, personal changes can be captured and, if so, whether anything meaningful can be extracted and applied in a general way (back to the quandary Wegerif and Mercer hoped to overcome).

Mason also makes a point of telling us that the survey has produced findings that she herself reported 15 years ago. So, she reminds us that elearning is best suited to motivated postgraduates who need flexible delivery, and that certain courses (Business Studies, Management, IT and Education) use elearning more. Does this indicate that certain characteristics of elearning remain constant regardless of the changes and developments over the years?

We also learn that while elearning is broadly viewed as positive, little ‘substantive internal research evidence’ (288) was presented to support this belief. One could (as I did) see this as a potential weakness were one to attempt to promote elearning, but Mason correctly asks whether similar evidence could be presented for the efficacy of lecturing.

Ultimately, she chimes in with the assertion that governments should cultivate patience and resist the urge to micro-manage change. Given her earlier comment about the difficulty of quantifying the slow nature of change in education, perhaps it suits her to urge patience? Given that the report proffers 15-year-old findings, perhaps things should speed up a little? But those comments are more suited to H808, so I’ll move along…

[Edited 7 March 08 on learning that Robin Mason is a woman]

H809 Activity 2.4


1. Why is transcript data to be preferred to the video data for such a visual task?

They did use video, so the question I presume is why did they not present video as evidence in their findings?

  • Practicality
    • Academic research is distributed through text (especially in 1997)
  • Ethics
    • Anonymity and the protection of these children can be better maintained with textual descriptions. Non-verbal, facial methods of communication can be shown on video, but this raises difficult questions technically and ethically.
  • Software analyses text
    • Their chosen software analyses text, relying on quick searching. To date, a reliable method of searching video has yet to be developed.

2. Is it possible to avoid the use of preconceived categories when analysing this data?

This is a question of linguistics and epistemology. The research is investigating the significance or not of certain linguistic utterances. All language is made of signifiers (as opposed to the signified), and is, therefore, always at some remove from the original thing (emotion, idea, physical thing). Others must attempt to interpret linguistic utterances based on context and knowledge of systems such as languages.

Whenever any form of interpretation is executed, there is a risk of misinterpretation. It is unlikely that this ‘ambiguity’ can ever be fully removed, so it seems a little pointless to do too much second guessing. We could say that these researchers are rational, educated people and as such we can trust their interpretations of the children’s utterances in the contexts in which they were made.

3. What evidence might support this claim?: “In the context of John’s vocal objections to previous assertions made by his two partners his silence at this point implies a tacit agreement with their decision.”

The fact that these are educated, rational academics who are unlikely to wish to mislead or deceive is the strongest point for me.

  • If they have observed that in the previous instances, John objected, and that in this third situation – with all other variables being equal – he remained silent, then their claim is reasonable.
  • However, if they neglected to mention that John was chewing a lollipop, distracted by something else or some other difference, then we could challenge their interpretation.

4. Did you ask yourself if this was true of the control group?

No, because I didn’t think of that. And this is not just because I’m new to this area, but also because one assumes that in a peer-reviewed report/article, such fundamental parts of a piece of research would need to be present in order to be credible.

5. Lack of unambiguous word – how can this be dealt with?

  • Analyse the contextual data
    • Can we identify from intonation, facial expression, physical gestures?
  • Analyse the participant’s linguistic style in similar contexts.
    • Does he frequently leave out these words even though it is clear that he means them?

6. Are you convinced that the study effectively demonstrates the authors’ case that the software gets over the qualitative/quantitative distinction?

I don’t feel confident that I actually understand the way the technology works here. I can see how it could be effective with quantitative data (which needs to be identified by someone), but I’m not entirely certain how it works with qualitative data. OK, one can switch rapidly from, say, a mention of the number of occurrences of a keyword to the specific usage of that word in the context of the transcript, but to my mind this is not qualitative data; it’s raw data. Could one say that this transcript needs some degree of evaluation/processing before it is qualitative?

7. What does the computer add to the analysis?

A way to quantify and categorise data. In this case, as I understand it, the authors identify phrases/linguistic occurrences and these are sorted by the software. The software can count occurrences, place them in context (concordance) and allow fast switching between ‘levels of abstraction’.

Apart from efficiency, one assumes that the computer adds some degree of objectivity to the quantitative data.
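For what it’s worth, the counting-and-concordance step can be sketched in a few lines of Python. This is my own illustrative sketch of a keyword-in-context (KWIC) listing, not the software the authors actually used:

```python
def concordance(text, keyword, width=3):
    """List each occurrence of `keyword` with up to `width` words of
    context on either side -- a simple keyword-in-context (KWIC) view."""
    words = text.lower().split()
    hits = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            hits.append(f"{left} [{w}] {right}")
    return hits

# Invented one-line 'transcript' for demonstration:
transcript = "I think it is the triangle because the triangle has three sides"
lines = concordance(transcript, "triangle")
print(len(lines))   # 2 -- the quantitative count of occurrences
print(lines[0])     # it is the [triangle] because the triangle
```

The count of hits is the quantitative ‘level of abstraction’; clicking through to each contextual line is the qualitative one, which is roughly the rapid switching the authors describe.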

8. Computer-based analysis 10 years on?

With the relative ubiquity of electronic texts, the term ‘computer-based’ has become redundant. Similarly, just mining a document for text is not as meaningful as searching for ‘content’, so the name has changed a little.

  • It is used to evaluate documents, especially for presentation by a search engine such as Google Scholar.
  • The commercial potential appears to have been harnessed, with the technique seemingly widely used in Public Relations.

9. How does this paper compare with Reading 1?

Reading 1 was concerned with the effectiveness of technology in learning, particularly the methods of interaction made possible by technology. Reading 2 is more concerned with the use of a technology to enhance the quality of research. So, reading 1 is investigating the effectiveness of a tool while reading 2 is investigating a hypothesis using a tool.

H809 begins

Back to the grindstone with the OU. With H808 put to bed, I had hoped for a less frantic run of things. Alas NUIM got in the way and I had to write up a presentation I gave in Greece a couple of years ago at the European Access Network’s Annual Conference in gorgeous Thessaloniki.

H809 has, of course, been busily getting about its business for the last three weeks and I’ve not been able to engage as I had wanted to. The first week was OK as it was largely making ourselves familiar with the technology as well as one reading. Happily all the struggle with H808 was at least worth it as one was already au fait with FirstClass, wikis, blogs and podcasts etc.

H809 seems to be based around a weekly reading, each illustrating some important aspect of research that we budding research professionals should know. The first reading, by Hiltz and Meinke, dates back to 1989 and tests the merits of using a ‘virtual classroom’ to teach Sociology. I was struck less by the outcomes of the research (both virtual and physical are pretty much the same, with some benefits to one over the other here and there) than by the fact that these discussions are still taking place twenty years on. “Is elearning as effective as face-to-face learning?” is the contemporary language, but the song remains the same (especially in my institution).

Anyway, I’m on a major catch-up these next few days, so expect a glut of blog entries! Another feature of this module is the expectation that we keep a blog and interact with those of others before bringing our observations back to FirstClass. H809 is a first-run module, so it will be interesting to see if this system works. In H808, when faced with the pressures of the TMAs and the ECA, learners abandoned any optional activities very quickly, so we’ll see how this module copes!