H809 Activity 2.4


1. Why is transcript data to be preferred to the video data for such a visual task?

They did use video, so I presume the question is really: why did they not present video as evidence in their findings?

  • Practicality
    • Academic research is distributed through text (especially in 1997)
  • Ethics
    • Anonymity and the protection of these children can be better maintained with textual descriptions. Non-verbal and facial communication can be shown on video, but including it raises difficult technical and ethical questions.
  • Software analyses text
    • Their chosen software analyses text, and its value rests on fast searching; a comparably reliable method of searching video has yet to be developed.

2. Is it possible to avoid the use of preconceived categories when analysing this data?

This is a question of linguistics and epistemology. The research investigates whether certain linguistic utterances are significant. All language is made of signifiers (as opposed to the signified) and is therefore always at some remove from the original thing (an emotion, an idea, a physical object). Others must attempt to interpret linguistic utterances based on context and on knowledge of shared systems such as language.

Whenever any form of interpretation is executed, there is a risk of misinterpretation. It is unlikely that this ‘ambiguity’ can ever be fully removed, so it seems a little pointless to do too much second-guessing. We could say that these researchers are rational, educated people and, as such, we can trust their interpretations of the children’s utterances in the contexts in which they were made.

3. What evidence might support this claim: “In the context of John’s vocal objections to previous assertions made by his two partners his silence at this point implies a tacit agreement with their decision.”

The fact that these are educated, rational academics who are unlikely to wish to mislead or deceive is the strongest point for me.

  • If they have observed that John objected in the previous instances, and that in this third situation – all other variables being equal – he remained silent, then their claim is reasonable.
  • However, if they neglected to mention that John was chewing a lollipop, distracted by something else or some other difference, then we could challenge their interpretation.

4. Did you ask yourself if this was true of the control group?

No, because I didn’t think of that. And this is not just because I’m new to this area, but also because one assumes that in a peer-reviewed report or article such fundamental parts of the research would need to be present for it to be credible.

5. Lack of an unambiguous word – how can this be dealt with?

  • Analyse the contextual data
    • Can we infer meaning from intonation, facial expression or physical gestures?
  • Analyse the participant’s linguistic style in similar contexts.
    • Does he frequently leave out these words even though it is clear that he means them? (A rough sketch of how software might support such a check follows this list.)
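
The second point is the kind of check a concordancer makes feasible. Below is a minimal Python sketch, purely my own illustration rather than anything from the paper: the transcript format, the speaker names and the ‘omitted pronoun’ test are all invented assumptions.

```python
# A sketch (not the authors' actual software) of checking a participant's
# linguistic habits across a transcript. Transcript and names are invented.

import re

transcript = [
    ("John", "think we should use the red one"),   # leading "I" omitted
    ("Sara", "I think the blue one is better"),
    ("John", "don't agree with that"),             # omitted again
    ("John", "I want to finish this first"),
]

def turns_by(speaker, turns):
    """All utterances attributed to one speaker."""
    return [text for who, text in turns if who == speaker]

def omits_leading_pronoun(utterance):
    """True if the turn does not begin with 'I'."""
    return not re.match(r"^\s*I\b", utterance)

johns_turns = turns_by("John", transcript)
omitted = [t for t in johns_turns if omits_leading_pronoun(t)]
print(f"John omits the leading 'I' in {len(omitted)} of {len(johns_turns)} turns")
```

If a speaker routinely drops a word in contexts where his meaning is clear, its absence in the ambiguous passage carries less weight; that is all this toy check tries to expose.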

6. Are you convinced that the study effectively demonstrates the authors’ case that the software gets over the qualitative/quantitative distinction?

I don’t feel confident that I actually understand the way the technology works here. I can see how it could be effective with quantitative data (which needs to be identified by someone), but I’m not entirely certain how it works with qualitative data. OK, one can switch rapidly from, say, a mention of the number of occurrences of a keyword to the specific usage of that word in the context of the transcript, but to my mind this is not qualitative data; it’s raw data. Could one say that this transcript needs some degree of evaluation/processing before it is qualitative?

7. What does the computer add to the analysis?

A way to quantify and categorise data. In this case, as I understand it, the authors identify phrases/linguistic occurrences and these are sorted by the software. The software can count occurrences, place them in context (concordance) and allow fast switching between ‘levels of abstraction’.

Apart from efficiency, one assumes that the computer adds some degree of objectivity to the quantitative data.
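
To make the ‘levels of abstraction’ idea concrete, here is a minimal keyword-in-context (KWIC) sketch in Python. It is not the authors’ software, and the sample sentence is invented; it simply shows how a single pass over a text can yield both a count and each occurrence in its surrounding context.

```python
# A minimal keyword-in-context (KWIC) concordance: count a keyword's
# occurrences and show each one with a few words of context either side.

def concordance(text, keyword, width=3):
    """Return (count, lines) for `keyword`, with `width` words of
    context on either side of each occurrence."""
    words = text.split()
    hits = []
    for i, w in enumerate(words):
        if w.lower().strip(".,!?") == keyword.lower():
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            hits.append(f"{left} [{w}] {right}")
    return len(hits), hits

sample = "I think we should try it. No, I think it is wrong. We think so too."
count, lines = concordance(sample, "think")
print(f"'think' occurs {count} times:")
for line in lines:
    print(" ", line)
```

Flipping between the printed count and the context lines is, in miniature, the rapid switching between levels of abstraction described above.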

8. Computer-based analysis 10 years on?

With the relative ubiquity of electronic texts, the term ‘computer-based’ has become redundant. Similarly, merely mining a document for text is less meaningful than searching for ‘content’, so the terminology has shifted a little.

  • It is used to evaluate documents, especially for presentation by a search engine such as Google Scholar.
  • The commercial potential appears to have been harnessed, with the technique seemingly widely used in Public Relations.

9. How does this paper compare with Reading 1?

Reading 1 was concerned with the effectiveness of technology in learning, particularly the methods of interaction made possible by technology. Reading 2 is more concerned with the use of a technology to enhance the quality of research. So, Reading 1 investigates the effectiveness of a tool, while Reading 2 investigates a hypothesis using a tool.
