When writing the scale reliabilities in the Materials section, for both our study's findings and previous research, is it necessary to say each time that it was a Cronbach's alpha coefficient, or can we just say the reliability was xx? In the data analysis section I will state that we used Cronbach's alpha, and repeating it is taking up a lot of words. Also, in the Materials section, do we need to describe the demographic questions as well?
It is good practice to use the correct term, whether that is internal reliability, Cronbach's alpha, internal consistency, or coefficient alpha (just be consistent with whichever you use). Have a look at published papers to get an idea of how you could be more concise here.
In the Materials section, you could just write that demographic questions were asked, state a few in brackets (e.g., age, sex), and mention that questions about technology usage were also included. When mentioning examples, only refer to the ones that you actually use in your report.
When writing the procedure, is it okay to just say the battery of scales was administered, or do we need to name all the scales again?
You can just use the abbreviated version of the scale names (these should be specified in the Materials section: you usually write the full scale name, then give its abbreviation in brackets along with the scale authors and year).
For the data analysis section, I haven't gone into specifically how each subscale was calculated; I have just stated that they were derived from the structures recommended in the literature. Is that enough detail?
I would mention that, to create scales and subscales, scores on the continuous measures were averaged into a mean total score to aid interpretation and provide a reference point. See Pallant (page 90 of the 2020 ebook version).
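As a rough sketch of what this means computationally (plain Python; the function name and responses are hypothetical, and in practice SPSS's Compute Variable would do this), a mean total score is just the average of a participant's item responses, which keeps the score on the original response metric:

```python
def mean_scale_score(responses):
    """Average a participant's item responses into a mean total score.

    Keeping the score on the original response metric (e.g., 1-4)
    aids interpretation, as Pallant recommends.
    """
    return sum(responses) / len(responses)

# Hypothetical participant who answered 2, 3, 3, 4 on a four-item subscale
print(mean_scale_score([2, 3, 3, 4]))  # -> 3.0
```

A mean (rather than a summed) total also stays comparable across subscales with different numbers of items.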
Do we need to run a new power analysis to work out the number of participants needed for this study now that we are doing a MANOVA, or can we use the one from our grant application?
As discussed in class and in the slides, create a new power analysis. I'm pretty sure our sample size is still adequate regardless of the number of DVs entered. In the Project Description ethics documents (under the ethics folder) you will see an example of how this sentence is written up; it is in the section where I describe the participants/sample size.
In forming the participant groups, a bully group was created. Do we leave this group out of the analysis, and if so, do we explain that in the data analysis section? Related to this, in the Participants section do we need to state how many participants were in each group, or is this better left for the Results section, as it outlines the prevalence of CB?
Yes, leave the bully-only group out of the analyses by using -999 to tell SPSS it is not of interest for your report/hypotheses. There is no need to explain in the data analysis section that you changed this group to missing; just mention the coding of the bully-victim, victim, and non-involved groups. You might mention something in your future research section about including other participant roles (bullies, bystander roles) in future.
The numbers in each group answer our Research Question on prevalence rates so save this for the Results section. We don’t want repetition.
Do we need to recode the CB scale?
What you do need to do is calculate the total score from the 20 bully items and then create a new variable coding anyone with a score of 20 as the non-bully group (they must have answered 1/never to all of the bully items) and anyone scoring 21 or higher as the bully group (they must have reported doing at least one of the behaviours once or more). We then might want to be more conservative (behaviours need to be repetitive, rather than done only 'once or twice', to classify as "cyberbullying"), so create another new variable: if a participant selected only 1s and 2s (never, and once or twice), recategorise them into the non-bully group; if they selected any 3s or higher, code them into the bully group.
We do this process for the victim items too (noting that there are 21 of these, not 20 as for the bully items).
Once we have both of these recategorised columns, we can put it all together and code who is a bully-victim, victim only, or non-involved. Anyone who is a bully-only can be left as missing (e.g., -999) so they are not included in the analyses (only look at the groups that are in line with your hypotheses).
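To make the combined recoding logic concrete, here is a minimal plain-Python sketch (the function name and the shortened item lists are mine for illustration; the real scales have 20 bully and 21 victim items, and SPSS recode syntax would do the actual work):

```python
MISSING = -999  # bully-only participants are excluded from analyses

def classify_role(bully_items, victim_items):
    """Classify a participant using the conservative (repetition) rule.

    Responses use a scale where 1 = never and 2 = once or twice, so a
    participant counts as a bully/victim only if at least one behaviour
    was endorsed at 3 or higher (i.e., repetitively).
    """
    is_bully = any(r >= 3 for r in bully_items)    # 20 bully items in the real scale
    is_victim = any(r >= 3 for r in victim_items)  # 21 victim items in the real scale
    if is_bully and is_victim:
        return "bully-victim"
    if is_victim:
        return "victim"
    if is_bully:
        return MISSING  # bully-only: coded as missing, per the plan above
    return "non-involved"

# Hypothetical participants (item lists shortened for illustration)
print(classify_role([1, 1, 2], [3, 1, 1]))  # -> victim
print(classify_role([3, 1, 1], [1, 1, 1]))  # -> -999 (bully-only)
```

The same decision table is what the chained SPSS recodes produce; seeing it in one place makes it easier to check that every combination of the two recategorised columns has a defined group.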
When trying to run Descriptives > Explore, SPSS won't let me include string variables.
When data cleaning, I recommend deleting the string variables from the data file (save it as a new dataset labelled "no string" or something like that). It is easier to check the amount of missing data and/or impute missing data without these types of variables in the file. You can copy and paste the string variables back in later, once your data file is ready, depending on whether you need any of them for your analyses.
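The same idea in a plain-Python sketch (the variable names are made up; in practice you would delete the columns in SPSS and save the reduced file under a new name):

```python
def drop_string_columns(data):
    """Return a copy of the data with string (text) columns removed.

    `data` is a dict mapping variable names to lists of values --
    a stand-in for the SPSS data file. Working from a copy keeps
    the original dataset intact for later.
    """
    return {
        name: values
        for name, values in data.items()
        if not any(isinstance(v, str) for v in values)
    }

raw = {"age": [19, 22], "comments": ["none", "ok"], "dass_total": [14, 8]}
print(sorted(drop_string_columns(raw)))  # -> ['age', 'dass_total']
```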
So we no longer need to mention in the data analysis section that reverse scoring was done?
I would mention that it was still done: if a researcher were reading our Method section, it would be good for them to know that certain RSES items were reverse-scored prior to analysis.
And we can delete the old manually reverse-scored variables from SPSS?
And we use the variables as they are (with the new values copied and pasted) to derive the total mean scores?
Yes, that is correct.
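For reference, the arithmetic of reverse scoring is simple; this plain-Python sketch assumes a 1-4 response format (adjust `scale_min`/`scale_max` to whatever response format your version of the RSES actually uses):

```python
def reverse_score(response, scale_max=4, scale_min=1):
    """Reverse-score a Likert response: new = (max + min) - old.

    Assumes a 1-4 format; e.g., a 1 becomes a 4 and a 3 becomes a 2,
    so negatively worded items line up with the rest of the scale.
    """
    return (scale_max + scale_min) - response

print([reverse_score(r) for r in [1, 2, 3, 4]])  # -> [4, 3, 2, 1]
```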
In one of our meetings you mentioned making our hypotheses specific to our analyses. If I want to hypothesise that a group will have higher scores than another on certain scales, can I use the abbreviated scale names (e.g., DASS) to state which scale, or will I need to fully introduce and reference the scales? Alternatively, if I would have to fully introduce the scales just to mention them in my hypotheses, would it be appropriate to just say XXX group will experience more symptoms of XXX (without mentioning which scale specifically measures these symptoms)? I'm just trying to save words, as I have already fully explained the scales in my Method section.
In your hypotheses you can just use the variable names, such as depression, stress, anxiety, self-esteem, and problem-focused and emotion-focused coping (or the specific coping strategies, depending on how you are using coping). The variables should already be mentioned throughout your Introduction, so they shouldn't come as a surprise to the reader. Then the Materials section is where you describe how these constructs were measured.

You might hint at certain measures in the Introduction. For example, you might discuss cyberbullying measurement and some of its past limitations, and introduce the fact that the CBCVS (Doane et al., 2016) was developed for emerging adult samples. This is probably the only measure worth discussing in the Introduction, because measurement has been such a contentious issue in the cyberbullying literature and we spent some time trying to find something suitable for emerging adults. With regard to the other measures, when mentioning past studies you might just drop in something like "XXXX found that emerging adult victims had higher levels of depression and anxiety (as scored by the DASS-21) compared to non-victims." It would then be clear to the reader that this scale might also be used in the current study, and with good reason, as it has been used in similar studies in the past. However, I wouldn't worry too much about this last point; the main one I think is important for the Introduction is cyberbullying measurement.
In the participant section I have included information about the original and revised sample size (including why the sample size was revised). Would I need to repeat this in the data analysis section as this pertains to the removal of data? Or, would you recommend simply stating the revised sample size in the participants section and then explaining the original sample and removal of participants in the data analysis section?
Yep – your last point is spot on. Just state the original and revised sample sizes in the Participants section and explain the removal in the data analysis section.
In Pallant, she recommends reporting reliability like this: “XXX has a reported reliability of XXX (citation(s) here). In our study, the reliability was found to be XXX”. Would we need to do this in the method section when describing the reliability of the scales and subscales, or is the published reliability fine (referenced of course)? If we need to report it like Pallant, can we report the reliability for the full scale? If we have to report the reliability for each subscale and explain it, I feel like this will eat a lot of valuable words.
The way Pallant reports it is the way I usually go, though if you are really struggling for words you could put the values from our study in the Descriptive Statistics table in the Results (mean, standard deviation, Cronbach's alpha). In the Materials section you can then just refer the reader to Table 1 (for example) for the Cronbach's alpha values found in the current study.
Make sure you include a reference for the reported reliability from previous studies (see the red font above in your example). Report the reliability only for the scales or subscales you are using. If you are not looking at the DASS total score (which you shouldn't be, because this is not how the scale was developed) but only at the subscales, then the subscales are what you should report. As I mentioned in the meeting on Wednesday, for the B-COPE you can just report a range: the reliability for the B-COPE subscales ranged from XX (lowest value) to XX (highest value). For any subscales on this measure with low reliability, you might then want to discuss what Pallant says about an alternative way to establish reliability, particularly for scales with only 2 items.
Also in relation to reliability: in the data analysis section, how would you recommend reporting what we did? Is "reliability analysis was conducted for the subscales of interest and was found to be XXX" okay? We ran the analysis on a lot of subscales, so I'm just trying to find a concise way of stating what we did.
Here you could mention the criteria for good reliability. This can be brief, as you will already have discussed what the values are, so there is no need to repeat information.
I can't seem to find a reference supporting not replacing missing data on the CSES because of its nature. Are you able to provide one?
Don’t worry about a reference for this. We can’t replace missing data as we don’t know what these individuals have experienced – it wouldn’t be appropriate.