Exploring videos’ coding in shorter episodes?

  • #2579
    palunen
    Participant

    Hi,

     

    I am still working on coding my videos of small group interaction. The videos are 10–40 minutes long. At the moment I can investigate the changes in interaction structure across sequential sessions in separate videos, but I would like to see the changes over shorter intervals, e.g. 5-minute episodes. Is it possible to run queries on shorter parts of the videos, or do I have to make new, shorter clips and code them all over again?

     

    Hoping to get some advice,

     

    Palunen

    #2896

    Hi Palunen,

     

    I'm afraid I'm not quite understanding the question here! What kind of queries are you trying to run? Is it that you just want to break the source video up into smaller clips? If so, this is eminently doable while retaining your original coding of the longer video. Can you send me a more detailed description of the process you are engaged in? I'm sure I will then be able to give you simple instructions to achieve your desired outcome. Sorry I cannot give you more information just now, but I am very unclear from your posting as to exactly what the problem is 🙁

     

    Kind regards,

    #2897
    palunen
    Participant

    Hi Ben,

     

    Thank you for your quick reply.

     

    Let's take an example: I have coded a 30-minute video for speakers, as well as for a predefined classification. I can see the references and coverage for the whole 30-minute video. What I would like to see now is the references and coverage for every 5-minute episode, so I could see how the video-recorded group interaction changes and develops over the course of the video. It is possible to visualize the coding stripes for both coding schemes (speakers and classification coding), but what I would really appreciate is the number of references for every 5 minutes.

     

    There is also another question: is there any possibility of seeing the succession of references, e.g. who speaks most often after a certain person, or which category most often follows a particular category?

     

     

    Best regards,

     

    Palunen

     

    #2898

     

    The answer to both questions is yes, and depending on how your video has been set up after import, you could even automate the process. Let's deal with your first question initially. You can insert a customizable column into your video and enter each participant's name as they speak; you can see an example of this in the pre-loaded NVivo tutorial project that comes with the software, in interviews/Ken or interviews/Betty and Paul, where the column is called "speakers". In the tutorial the video has been transcribed verbatim in NVivo, but you could do this just as easily without the transcription by simply watching the video and entering the speakers as they talk. If you right-click on Ken and select the autocode option you can, in one click, create a node for each speaker, and each speaker's content will be automatically coded there. You can then see references and coverage just for that speaker. The same process can also be applied to themes as opposed to speakers.

    To insert a customizable column:

    1. Go to file/info/project properties

    2. Click on the audio/video tab

    3. Select the video sub-tab

    4. Select the 'new' button

    5. Name your column (speaker, for example)

    Your column is now created and will appear in your video when opened.

    To insert the names in the relevant places:

    1. Open the video and click on the ‘click to edit’ button

    2. Click on the transcribe button in the media toolbar


    3. Press play – the video will play and insert the opening time line automatically

    4. When the speaker changes – press stop (not pause)

    5. The finishing time line will be automatically inserted

    6. Enter the speaker identifier in the customizable column

    7. Press play and a new row will be created and the next time line entered automatically

    8. Repeat this process for each speaker

    9. When all speakers have been tagged – use the auto coder as outlined in the opening paragraph to create the codes for each speaker.

    If you don’t want to use the customizable columns and auto coder, you could simply manually code the speakers to their respective codes and you will achieve the same end result.
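    If you then also want the counts broken down into 5-minute episodes, as in the original question, one option is to take that last step outside NVivo: export (or type up) the coded segments with their timestamps and tally them per window. Below is a minimal Python sketch, not an NVivo feature; the file name coding_export.csv and its columns code, start_sec and end_sec are assumptions for illustration only.

        import csv
        from collections import Counter

        WINDOW = 5 * 60  # length of one episode: 5 minutes, in seconds

        counts = Counter()  # (code name, episode index) -> number of references

        # "coding_export.csv" and its column names are assumptions for this sketch
        with open("coding_export.csv", newline="") as f:
            for row in csv.DictReader(f):
                episode = int(float(row["start_sec"]) // WINDOW)  # episode the reference starts in
                counts[(row["code"], episode)] += 1

        # Report, e.g. "Speaker A, 10:00-15:00: 7 reference(s)"
        for (code, episode), n in sorted(counts.items()):
            start_min = episode * 5
            print(f"{code}, {start_min:02d}:00-{start_min + 5:02d}:00: {n} reference(s)")

    Counting a reference in the episode where it starts is a simplification; a reference spanning two episodes could instead be split proportionally if coverage matters more than the raw count.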

     

    To answer your second question, you can filter the coding stripes to see whichever codes you want to see. You don't mention in your posting whether you have set up case nodes and linked your background information (demographics, for example) to them. If you have, then follow these steps:

    1. Go to view/coding stripes/selected items

    2. Browse for your case nodes and select all

    3. Your speakers will be shown as coding stripes

    Equally, you can filter to see specific themes or codes (thematic nodes) using the same steps.

    If you have not set up case nodes, you might want to give this serious consideration, as without them you are ruling out an entire layer of possible analysis. With one-to-one interviews, this process is simple and takes seconds. With focus groups or source files with multiple participants, follow the earlier steps to create case nodes and then link them to your classifications. See the following tutorial for doing this:

    http://www.qdatraining.eu/people
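    On the succession question (who tends to speak after whom, or which category most often follows which), the coding stripes will show the order visually; if you want actual counts, the same hypothetical export described above can be tallied as adjacent pairs. A minimal sketch under the same assumptions:

        import csv
        from collections import Counter

        # Sort the coded segments by start time to recover the order of turns
        # ("coding_export.csv" and its columns are assumptions, as before).
        with open("coding_export.csv", newline="") as f:
            segments = sorted(csv.DictReader(f), key=lambda row: float(row["start_sec"]))

        transitions = Counter()  # (previous code, following code) -> count
        for prev, nxt in zip(segments, segments[1:]):
            transitions[(prev["code"], nxt["code"])] += 1

        # Most frequent successions first, e.g. "Speaker A -> Speaker B: 12 time(s)"
        for (before, after), n in transitions.most_common():
            print(f"{before} -> {after}: {n} time(s)")

    If the speaker codes and the classification codes are exported separately, the same tally answers both succession questions Palunen described.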

    I hope this helps to address your problem but if I have been unclear or you have further questions, do come back to me and I will follow up.

    Kind regards,
