Website for using discursive psychology in research:
http://www.qualityresearchinternational.com/methodology/RRW6pt5Discursivepsychology.php
Casteel Discursive Psychology
Friday, December 4, 2015
Monday, January 12, 2015
Meeting with Angelle- January 12, 2015
Deadlines
Chapter 1- due by Feb. 12
Chapter 2- outline of headings and subheadings due by email Jan. 26
Chapter 1
Full chapter 1 due in 30 days.
national, state focus on eval, teacher reform.
Problem is that what teachers are evaluated on is not the same as the professional standards for art educators. Therefore, people are being evaluated under one methodology and are being asked to practice under another methodology. (Or maybe use the phrase set of "best practices" instead of the word "methodology" here?) So the voice of teachers in this situation is not part of the overall conversation. It would be useful for the field of instructional practices to hear from these teachers. (Maybe I'm doing critical discourse analysis?)
Look in the literature for "one-size-fits-all" instruments. What do policymakers say about using OSFA rubrics for evaluating teachers? (Something like: we can't make lots of different rubrics, nor train evaluators how to use them.) What does the ROL say about using OSFA? (For example, the article about music teacher evaluations.) Specifically, do these journal articles say that there is a need for future research about OSFA instruments? Showing both sides of this OSFA debate is part of the statement of the problem.
The FA Portfolio is an attempt to address the needs of art educators in the evaluation process, but it does not address 50% of the teachers' summative score (which is observation-based). This also leads to the statement of the problem.
Intro of Chapter 1- background. What is evaluation? What is an effective art teacher? Build the case for the problem. Keep it simple.
Question- what is effective art teaching?
Chapter 2
Make a list of headings and sub-headings as an outline and email.
Main parts of ROL
I. evaluation
A. supervision vs. evaluation (not a lot about supervision, just what is important to this situation- for example, TEAM is an evaluation model, but has all the parts of supervision.)
II. one-size-fits-all instruments
A. effective teaching
B. effective art teaching (similarities and differences- focus on differences)
a. unique art education pedagogy/best practices/methods (how is art different from regular classes? example- visual literacy. Help readers understand art education so they understand what teachers say in the data.)
III. Framework
A. discursive psychology (definition) (If I reference discourse analysis, refer readers to Chapter 3.)
Chapter 3
methodology- include info on DA. Use what I have in Chapter 2 right now, but pare it down. Use enough to understand the methodology, but don't go into lots of detail.
Use the two new books (below) to help with Chapter 3. (Work on Chapter 3 while waiting on feedback from Chapter 1.)
Other Notes
Ordered two books today to help with Chapter 3.
- Focus Group Practice by Puchta and Potter
- Discourse as Data: A Guide for Analysis by Wetherell, Taylor, and Yates
Tuesday, December 16, 2014
Meeting with Paulus Dec 16, 2014
Reviewed Chapter 1
Chapter 1 builds the case for the study. Start broad with National climate of evaluation and some history, connected to NCLB, move to TN, TEAM, art teacher evaluation. Add in the role of discourse/talk- just a little to begin an introduction to DP. The rest of DP should be in Chapter 2.
Chapter 1 is the "so, what" of the dissertation. Chapter 2 goes into detail based on Chapter 1. Include theoretical framework in Chapter 2.
Since there are only 2 articles on effective art teaching, I will need to look at related fields: teacher effectiveness and other disciplines (other fine arts). This creates a gap in the literature, but why is this a problem for ed leadership/admin? This goes back to the culture of evaluation and the need to identify effective teachers.
Statement of the Problem
- add citations
- The purpose of this study is to examine how art teachers construct what an effective art educator is.
- The purpose of this study is to examine effective art educators
- Use Creswell's script: The purpose of this study is to (explore/discover/understand/describe) (the central phenomenon) of/for (participants) at/in (research site).
- How do art teachers construct effective art education?
- What aspects of supervision and evaluation do teachers make relevant?
- Move the 3 analytical questions to Chapter 3.
- Use EAFG group that is already in existence (natural talk)
- 3 groups, one elem, one middle, one high. Current teachers in TEAM schools.
- Facilitator keeps group on track. Has starter questions.
- Possible questions: What does a good art class look like? What is a good art program? What is a good art educator network? (Possibly use the NAEA Art Educator Standards to guide this.)
- The TEAM process is described as a supervisory model in that it includes words like "coaching," "pre- and post-conferences," and "growth." However, it is an evaluation model in that teachers receive a score that is connected to tenure and their job status.
- Compare what teachers say with the TEAM rubric and the NAEA Art Educator Standards. What aspects of supervision and evaluation do teachers make relevant?
Next Steps
- Review chair's other dissertations.
- Create a Chapter 1/2 outline. These should be similar.
- Read Rachel Gabriel's article on TEAM discourse.
- Read focus group book by Potter and ?
- Create possible facilitation questions- have a pilot group to fine tune facilitation
Wednesday, November 20, 2013
Readings Nov. 14
Hutchby, I., & Wooffitt, R. (2008). Conversation analysis (2nd ed.). Cambridge, MA: Polity Press.
Potter, J., & Hepburn, A. (2012). Eight challenges for interview researchers. In J. F. Gubrium & J. S. Holstein (Eds.), Handbook of interview research (2nd ed.). London: Sage.
Antaki, C., Billig, M., Edwards, D., & Potter, J. (2003). Discourse analysis means doing analysis: A critique of six analytic shortcomings. Discourse Analysis Online, 1.
Goodman, S. (2008). The generalizability of discursive research. Qualitative Research in Psychology, 5(4), 265-275.
***********************
Reflection on ATLAS.ti and Technology
I like to rate technology based on two things: is it user-friendly and does it do what I need it to? ATLAS.ti allowed me to organize my documents and the analysis of those documents. It housed an audio file, a handful of pdf's, and my notes (memos) and codes. I was able to link my audio file with my transcription while I created the transcription. So, yes it did what I needed it to do.
However, the program itself is neither intuitive nor user-friendly. I had to reference Ann Bennett's notes and the help section every time I worked on my analysis, which considerably slowed down my work. I am quick to learn new technology and consider myself tech savvy, so this was a frustrating experience for me. Also, ATLAS.ti is not available on Macs right now. I had to borrow an ASUS Eee PC laptop. Combining this slow, terribly designed machine with a non-user-friendly program was not conducive to maintaining a calm working environment.
Side note: I tried the iPad app. It was a big-time failure. The audio recording does not run in the background, so when the energy saver started, the audio turned itself off. I coded some documents, and only some of the coding transferred to the PC. I would not recommend the app until some major updates have taken place.
I am very grateful that the Ed Tech people had laptops to borrow. I understand that they can't stock high quality machines. I really depend on using technology to increase the speed and lessen the frustrations of the huge electronic workload that I have to maintain. I'm just giving my opinions on the technology troubles that I've had the last semester. I think you get my point, and I am now moving on to readings.
Readings
Goodman: Wrote 5 truths that discursive work should have to claim generalizability. Mostly linking rhetorical actions to strategies, where those strategies can be found across a range of contexts doing the same action. I think this is interesting and wonder if there are enough empirical research articles to do this. However, this is helpful to me in that there are not a lot of articles on art teacher evaluation, but if I see that the art teachers use a strategy like "I was just doing x, and then y happened" to accomplish a certain thing, then I can compare this strategy and action with other contexts besides art teacher conversations. My work would then be generalizable and fruitful by providing a new context for that strategy/action. (Does that sound right?)
Antaki et al.: A practical document as I'm going through analyzing data. The authors warn of 6 mistakes made in data analysis. In my own data, I have tried to use excerpts where interesting actions were taking place. However, the actions that I am choosing to focus on are ones that I have read about, like self-repair. This is a helpful document, and one that I will go back to as I complete my data analysis paper.
Potter & Hepburn: I wish I had read this chapter before I wrote my comps... I would have used this! From other readings over the last two semesters, I understood that the interviewer's questions should be included in transcription, because what they say does something. However, I hadn't thought of the acknowledgement tokens (pp. 20, 27) as actions that push a social science agenda (yikes!).
Wednesday, November 6, 2013
Readings Nov. 7
Lester, J. N., & Paulus, T. M. (2011). Accountability and public displays of knowing in an undergraduate computer-mediated communication context. Discourse Studies, 1-16.
Paulus, T., & Lester, J. N. (2012). Making learning ordinary: Ways undergraduates display learning in a CMC task. Text & Talk, 33(1), 53-70.
************************
Paulus and Lester's (2012) work is a continuation of the 2011 article on CMC (computer-mediated communication) in Dr. Smith's nutrition course. In this study, the students were asked to write a blog post after a lecture on dietary supplements. Their writing prompt was "What did you learn or how did your understandings change?" (p. 59). The authors found three ways that students oriented themselves to learning in this situation: an extreme state, a neutral state, and no learning.
As I'm reading work by Lester and/or Paulus, I understand DP and DA better than at any other point in my studies. (So thank you.) It is a little strange writing a blog about my learning on an article about how students negotiate learning within blogs. I would say that to this point I have used a neutral state to assess my learning by reporting the news. But now, with the addition of the last two sentences, the blog has become reflexive.
I posted a full review of Lester & Paulus (2011) on July 22nd.
Wednesday, October 30, 2013
Readings Oct. 31
Gee, J.P. (2011). How to do discourse analysis: A toolkit. New York: Routledge.
************************
Peer Review- I reviewed Amanda's work and gave her feedback. I opted out of receiving peer feedback. I think that peer reviews can be beneficial.
Data Session- This was a fantastic experience. I enjoyed working with the group to try to understand what was happening in Natalia's recording. It was truly unmotivated looking on my part, and we were able to help provide an outsider viewpoint.
Gee-
Unit 3 consisted of "building" tools focused on how knowledge is socially constructed by discourse participants. Several of these tools continue to use grammatical devices that make my head hurt. Tool 14 is the Significance Building Tool. Gee started to lose me when he went into the details of clauses and phrases in the foreground or background. However, it does make sense to look at word choice to see how people use words like crucial to construct that something is more important.
(p. 88) Gee wrote that language constructs the world and proposes 7 tasks that we use in our discourse to build reality: significance, activities, identities, relationships, politics, connections, and sign-systems and knowledge. (I don't agree with Gee that the things he lists as sign-systems are not language. Maybe I could understand that mathematics is not a language but a system of signs we use. However, he also includes hip-hop and poetry. Is this just his definition of little-d discourse?)
Unit 4 included 11 theoretical tools. I really enjoyed these tools. They make a lot of sense to me. Situated meaning makes me think about the insider and outsider debate. You would simultaneously need to recognize and be able to define words or phrases with situated meaning, and make sure that you are not taking those words and phrases for granted. (The Social Languages Tool sounds like big-D Discourse?)
I'll give Gee a break. I can tell that some of the tools will help me with data analysis... not all of them, but I will probably use these as a starting point or when I'm stuck in data analysis.
Monday, October 21, 2013
Readings Oct. 24
Gee, J.P. (2011). How to do discourse analysis: A toolkit. New York: Routledge.
***********************
As I read the introduction to this book, I couldn't understand why Trena didn't like Gee's work. Then, I read Unit 1 and 2. The organization is choppy, the content takes a cognitive stance, and the work's focus is grammar. I am not a fan of reading about grammar. My eyes started rolling back in my head. I was making mad and angry faces at the pages. I included mean-spirited expletive notes in the margins. I am not a Gee fan.
However, to follow the rules, here is a summary and a synthesis of the reading. Gee outlines 12 of 27 tools for discourse analysis. I can imagine, when I am in the midst of data analysis, perusing these 27 tools to help me look at my data with a fresh lens. However, the tools are repetitive (on p. 55, Gee admits that "the why this way and not that way tool" is "not really separate from the Fill in Tool or the Doing Not Saying Tool"), and several are based on grammatical structures (stanzas, subjects, and topics and themes). I believe that I would use the Making Strange Tool. This goes back to critical discourse analysis and taken-for-granted discourse. I can see the benefit of examining assumptions and what was not in the talk. Also, the Deixis and Intonation Tools seem useful, and I've seen these in some of our other readings.
Part of DA research is that, as a constructor of discourse, I am an expert on understanding the action of talk and text. Gee mentions this as well on p. 13: the task of analyzing discourse is similar to the task of being engaged in discourse. I do not feel like I am an expert at grammar (in the sense that Gee is) and cannot imagine myself delving into participants' use of clauses, stanzas, subjects, and predicates. (Although I can see how some people might do just that.)
Here's hoping that Units 3 and 4 are more big picture like Gee describes, because I don't think the margins of my book can take any more profanity.