PSY20016 Social Psychology Report 2 Sample

Assignment details

For this assignment, you will explore the public perceptions of artificial intelligence (AI) technology, procedural justice theory, and trust. The study used for this report examines people’s acceptance of the use of AI in a legal setting. Understanding public perceptions of AI technology holds significant importance as those public perceptions directly impact the adoption and implementation of AI. People's attitudes towards AI can be influenced by factors like media coverage, cultural beliefs, and personal experiences. If the public perceives AI as biased, unfair, or unpredictable, they may exhibit resistance to its implementation and utilisation.

Procedural justice theory centres around people's perceptions of fairness in decision-making processes. Relational theories of procedural justice (e.g. Tyler & Lind, 1992) posit that individuals are sensitive to how they are treated when interacting with authority figures. When treated with respect and provided a voice—meaning the opportunity to express opinions, concerns, and perspectives during decision-making—they are more likely to accept the outcomes of a decision. In the context of AI, people would probably be more accepting of its implementation if the AI system is explained to them, and if they are offered the chance to ask questions about the AI system, voice concerns, and have those concerns addressed openly and respectfully. In the medical domain, for example, satisfaction with AI systems is higher when patients are provided with explanations as to how AI works than when those explanations are absent (e.g. Alam & Mueller, 2021).

Trust constitutes another critical aspect of procedural justice theory. In the context of AI, trust can be achieved through the involvement of humans alongside AI in decision-making processes. Human-in-the-loop is a system grounded in the belief that human-AI teams yield better results compared to either humans or AI working independently. Moreover, human oversight in AI decision-making has the potential to increase transparency and accountability, both of which are integral to procedural justice theory. Reviews focused on human relative to machine decisions indicate mixed findings, with human decisions trusted in some contexts, and machine decisions in other contexts (Starke et al., 2020). However, very little research has directly assessed the impact of adding a human (i.e. 'human-in-the-loop') on satisfaction with AI systems. Preliminary evidence suggests people are likely to be more satisfied with AI systems with human input (human-in-the-loop) than AI systems without human input (Aoki, 2021).

Therefore, the current study examines people’s acceptance of the use of AI in a legal setting:

• Does public acceptance of AI in the legal context depend on the respectful treatment of people who are impacted by AI systems?

• Does public acceptance of AI differ depending on whether AI decision-making involves human oversight (human-in-the-loop) in AI systems?

• Does the effect of respectful treatment on acceptance of AI differ depending on whether there is human oversight (human-in-the- loop) in AI systems?

Report format

The report is worth 50% of your final grade in the unit and will be marked out of 100. The breakdown of marks (out of 100) is given as follows.

Your report should also include a title page, abstract, and references. The abstract (6 marks) should be no more than 150 words in length. Marks will also be awarded for overall integration (8 marks) and references and presentation (10 marks). The references you have been given should be sufficient to begin your literature review. However, you should explore the relevant literature further and include additional references in your report.

The research report should be a maximum of 2700 words, with +/- 10% flexibility on the word count. This word limit does NOT include:

o title page

o tables

o figures

o references.

The word count DOES include the abstract. As an approximate guide, use the following:

o Abstract (150 words)

o Introduction (approximately 900 words)

o Method and results (approximately 900 words combined)

o Discussion (approximately 900 words)
(Note that this guide already includes some of your 10% buffer on the word count, so try to stick to this limit.)

Your assignment should be typed and double-spaced using a standard 12-point font. Use APA formatting throughout, including for tables and figures. You may place your tables and figures within the results section (i.e. no need to use the manuscript submission convention of placing tables at the end of the article). Do not attach the materials, questionnaire or SPSS printouts of your results to your Research Report. As your formatting guide, refer to the APA Publication Manual. It is recommended that you read all essential readings and start to have a clear idea of what arguments you are going to make.

 

Solution

Introduction

Real-World Problem

There is a need to understand how the public views artificial intelligence (AI), especially in legal practice, because public perception will determine how effective any implemented AI system can be. Modern legal processes increasingly involve AI systems in decision-making, for instance in recommending the appropriate sentence for an offender or, in the case of predictive policing, recommending a course of action for a given event. As these systems become part of the justice system, it is in the public interest to ensure that they are fair and accurate. Procedural justice, which is key to the uptake of AI in legal domains, concerns the fairness of the processes that lead to decisions (Aoki, 2021). It implies that people will tend to endorse, and remain satisfied with, even an unfavourable decision if they believe that the process that produced it was fair. In the context of AI, procedurally just systems are therefore systems that arrive at decisions through processes that are transparent, respectful, and as inclusive as possible. This means engaging people in decision-making processes and respecting them and their opinions during those processes. Such practices are needed to build public confidence in AI systems used in legal settings and to counter the factors driving opposition to them.

This leads to the “human in the loop” notion, which reinforces procedural justice by prioritising human monitoring of the decisions made by AI. In this approach, even where the AI would otherwise reach decisions on its own, humans retain the ability to intervene. Human involvement helps counter the potential harms of automated decision-making and address any bias the algorithm may carry (Alam & Mueller, 2021). Human input is therefore important for taking into consideration cultural factors and ethical choices that an algorithm may not capture. Together, procedural justice and the human-in-the-loop idea are instrumental in identifying how AI should be designed and implemented in legal settings so that the community has a voice in these systems. They help build public trust that these systems are developed equitably and that specific AI technologies can be used to benefit society.

Theoretical Problem

Previous research on public expectations of AI in the legal arena has examined several aspects, including reliability, impartiality, and the feasibility of human supervision of AI decisions. Research on procedural justice shows that the perceived fairness of the procedures leading up to a decision shapes public attitudes towards that decision (Witkowski et al., 2024). Other studies have shown that when people feel they have been given adequate attention during decision-making, they are more willing to accept the resulting decision even when it is unfavourable to them.

The human-in-the-loop concept has likewise been discussed in research on trust in AI systems. It is an approach in which, alongside the algorithm, humans remain involved so that ethical considerations and context-specific conditions are taken into account (Starke et al., 2022). Earlier research suggests that human supervision can enhance the perceived fairness of AI systems because it provides accountability for decisions and a check on biased outcomes. However, past research has shortcomings that leave gaps in current knowledge. Much of the work has focused on theoretical proposals without examining how these concepts apply in realistic legal settings. Furthermore, few studies have adequately examined how procedural justice and human-in-the-loop arrangements interact to shape public trust across different uses of AI. There is therefore a need for studies that examine these dynamics in legal contexts and determine how varying degrees of human involvement and perceived procedural justice affect the public.

Aims

The aim is to explore how procedural justice and human oversight impact public acceptance of AI systems in legal contexts.

Research questions

- Does public acceptance of AI in the legal context depend on the respectful treatment of individuals impacted by AI systems?

- Does public acceptance of AI in the legal context vary based on whether AI decision-making involves human oversight?

- Does the impact of respectful treatment on AI acceptance differ depending on the presence or absence of human oversight in AI systems?

Methods

Design

The study used a 2 x 2 between-groups experimental design with two independent variables and one dependent variable. The first independent variable, IV_Treatment, has two levels: respectful treatment versus no respectful treatment. The second independent variable, IV_HIL (Human-in-the-Loop), also has two levels: with and without human oversight (Tyler, 1989). This design enables assessment of the main effects of each independent variable and of their interaction.

Each of the four groups represents a unique combination of the two independent variables, and participants were randomly allocated to them: respectful treatment with human oversight, respectful treatment without human oversight, no respectful treatment with human oversight, and no respectful treatment without human oversight. Satisfaction with the AI system was measured as the dependent variable (DV_Satisfaction). The design relates IV_Treatment and IV_HIL to satisfaction in order to investigate how respectful treatment and human supervision affect AI acceptance.
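
For clarity, the cell structure of the design can be represented with the study's dummy-coded variables. The sketch below is only an illustration of random allocation to the four cells; the allocation routine and participant IDs are hypothetical, not the study's actual randomisation procedure.

```python
# Minimal sketch of the 2 x 2 between-groups structure using 0/1 dummy coding.
# IV_Treatment: 1 = respectful treatment, 0 = no respectful treatment
# IV_HIL:       1 = human in the loop,    0 = no human in the loop
import itertools
import random
from collections import Counter

CELLS = list(itertools.product([0, 1], [0, 1]))  # the four cells of the design

def allocate(participant_ids, seed=1):
    """Randomly allocate each participant to one of the four cells (hypothetical)."""
    rng = random.Random(seed)
    return {pid: rng.choice(CELLS) for pid in participant_ids}

if __name__ == "__main__":
    allocation = allocate(range(1, 117))  # 116 participants, as in the study
    print(Counter(allocation.values()))   # count of participants per cell
```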

Participants

Participants were recruited from Social Psychology students attending week 1 tutorials; after unusable data sets were removed, the final sample comprised 116 participants. Of these, 83 identified as female, 29 as male, 3 as non-binary, and one participant did not state their gender. Ages ranged from 18 to 65 years, with a mean of 25.24 years (SD = 9.53). Convenience sampling was used, drawing on students from the available tutorials (Waldman & Martin, 2022). This approach made data collection straightforward, but the sample may carry biases, meaning the findings may apply mainly to this group of students rather than to other populations. The relative homogeneity of the sample should be kept in mind when considering the generalisability of the findings.

Materials/Measures

Participants were presented with scenarios describing AI decision-making in legal matters, which varied in the level of respect shown during the process and in whether human oversight was involved. The scenarios were developed with these factors in mind to assess public acceptance of AI. Participants answered questions assessing their satisfaction with the AI decisions according to the condition they experienced in terms of respectful treatment and human oversight (Love et al., 2022). Attention checks verified that participants had read the scenarios carefully, and manipulation checks assessed whether the respectful-treatment and human-in-the-loop conditions were perceived as intended. These measures were designed to confirm the experimental manipulations and to keep the study as free as possible from external interference.

Procedure

Participants were randomly assigned to one of four experimental conditions in a 2 x 2 between-groups design that manipulated two independent variables: respectful treatment (high vs. low) and human oversight (present vs. absent). Depending on the assigned condition, each participant was presented with an AI decision-making scenario set in a legal context. After reading the scenario, they answered a number of questions about their satisfaction with the AI decision, which served as the dependent variable (Lwamba et al., 2022). Attention and manipulation checks were employed to confirm that participants were attending to the materials and that the experimental manipulations worked as intended. Data were analysed using independent samples t-tests to examine differences in satisfaction between the respectful-treatment conditions and between the human-oversight conditions. These analyses examined whether human oversight, and the amount of respect participants received, affected their satisfaction with the decision made by the AI.
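
The report does not detail how unusable responses were screened before analysis. The following is a minimal pandas sketch of how failed attention checks and incomplete responses might be excluded; the column name attention_check and its coding are assumptions, not the study's actual variable names.

```python
# Hypothetical screening step: drop participants who failed the attention check
# or have missing key variables before the main analyses are run.
import pandas as pd

def screen_responses(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only rows where the attention check was passed and key variables are present."""
    passed = df["attention_check"] == 1  # assumed coding: 1 = passed
    complete = df[["DV_Satisfaction", "IV_Treatment", "IV_HIL"]].notna().all(axis=1)
    return df[passed & complete].reset_index(drop=True)

# Tiny made-up example frame (not study data):
raw = pd.DataFrame({
    "attention_check": [1, 1, 0, 1],
    "IV_Treatment":    [1, 0, 1, 1],
    "IV_HIL":          [0, 1, 1, 0],
    "DV_Satisfaction": [18, 12, 20, None],
})
clean = screen_responses(raw)  # keeps only the first two rows
```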

Results

 

Figure 1

Satisfaction

Note. The bar plot displays the frequency distribution of satisfaction scores (DV_Satisfaction). Scores can range from 4 to 24, and the height of each bar represents the number of participants reporting that score. The distribution appears somewhat skewed towards higher satisfaction, with more participants at the upper end of the scale. The plot illustrates how satisfaction with the outcome of the AI decision was distributed across the sample and gives a picture of the overall satisfaction trend in the study.
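
The original figure image is not reproduced here. As an illustration only, a frequency bar plot of this kind could be produced as follows; the DataFrame `data` is assumed to hold the study responses and is not provided.

```python
# Illustrative sketch of a frequency bar plot for DV_Satisfaction (scores 4-24).
# Assumes a pandas DataFrame `data` containing the study responses.
import matplotlib.pyplot as plt

def plot_satisfaction(data):
    counts = data["DV_Satisfaction"].value_counts().sort_index()
    plt.bar(counts.index, counts.values)
    plt.xlabel("Satisfaction score (4-24)")
    plt.ylabel("Number of participants")
    plt.title("Distribution of DV_Satisfaction")
    plt.show()
```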

 

Figure 2

Distribution of Participants by Respectful Treatment Condition

Note. The bar plot depicts the number of participants in the two treatment conditions, “No respectful treatment” and “Respectful treatment”. The bars show how many participants experienced each condition. The plot indicates slightly more participants in the respectful-treatment condition than in the no-respectful-treatment condition, illustrating the distribution of the independent variable (IV_Treatment) in the study.

 

Figure 3

Distribution of Participants by Human-in-the-Loop Condition

Note. The bar plot displays the number of participants in the two oversight conditions, “No Human in the Loop” and “Human in the Loop”. The bars indicate how many participants were allocated to the no-oversight and human-oversight conditions. The plot shows slightly more participants in the “Human in the Loop” condition than in the “No Human in the Loop” condition, illustrating the distribution of the independent variable (IV_HIL) in the study.

Evaluation of Descriptive Statistics

The descriptive statistics summarise the measures used in the study. The dependent variable, DV_Satisfaction, had a mean of 14.67 and a standard deviation of 4.69, indicating that participants were, on average, moderately satisfied with the AI decision. The independent variables, IV_Treatment and IV_HIL, are dummy-coded (0/1) with means of 0.55 and 0.56 respectively, indicating that slightly more than half of the sample fell into the respectful-treatment and human-in-the-loop conditions, so the groups were roughly balanced.
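
As a rough illustration (not the original SPSS output), descriptive statistics of this kind could be reproduced from the raw data along the following lines; the DataFrame `data` is assumed to hold the study variables, and the reported values would only be recovered with the original data.

```python
# Sketch of descriptive statistics for the study variables, assuming a pandas
# DataFrame `data` with the columns named in the report.
import pandas as pd

def describe_study_variables(data: pd.DataFrame) -> pd.DataFrame:
    cols = ["DV_Satisfaction", "IV_Treatment", "IV_HIL"]
    summary = data[cols].agg(["mean", "std", "min", "max"]).T
    summary.columns = ["Mean", "SD", "Min", "Max"]
    return summary.round(2)
```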

Results for research question 1

An independent samples t-test compared the satisfaction of participants who did and did not receive respectful treatment. Levene’s test for equality of variances was not significant (p = .678), so equal variances were assumed. The t-test showed a statistically significant difference in satisfaction between the two groups, t(114) = -2.017, p = .046. Participants who were treated with respect reported higher satisfaction (M = 15.45, SD = 4.38) than those who were not (M = 13.71, SD = 4.91). The mean difference of -1.74 (95% CI [-3.45, -0.03]) indicates that respectful treatment was associated with higher satisfaction.
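
The analyses above appear to have been run in SPSS. For readers who want to reproduce the same comparison, a hedged Python equivalent using scipy is sketched below; the DataFrame `data` and the 0/1 coding follow the variable descriptions given earlier and are assumptions rather than the study's original syntax.

```python
# Sketch of Levene's test and the independent samples t-test comparing
# satisfaction between the two respectful-treatment groups.
from scipy import stats

def compare_groups(data, group_col="IV_Treatment"):
    g0 = data.loc[data[group_col] == 0, "DV_Satisfaction"]
    g1 = data.loc[data[group_col] == 1, "DV_Satisfaction"]

    # Levene's test: a non-significant result (as reported, p = .678)
    # means equal variances can be assumed.
    levene_stat, levene_p = stats.levene(g0, g1)

    # Independent samples t-test assuming equal variances.
    t_stat, t_p = stats.ttest_ind(g0, g1, equal_var=True)
    return {"levene_p": levene_p, "t": t_stat, "p": t_p}

# The same function can be reused for research question 2 by passing
# group_col="IV_HIL".
```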

Results for research question 2

An independent samples t-test was run to compare the effect of human oversight on satisfaction scores (DV_Satisfaction). Participants in the “No Human in the Loop” condition were compared with those in the “Human in the Loop” condition, and a significant difference was found between them. The “Human in the Loop” group reported higher satisfaction (M = 15.55, SD = 4.65) than the “No Human in the Loop” group (M = 13.55, SD = 4.54), t(114) = -2.331, p = .022. This indicates that human input, particularly in the form of oversight, improves satisfaction.

Results for research question 3

The effect of respectful treatment on AI acceptance appears to differ depending on whether human oversight of the AI is present or absent. The t-test results show that satisfaction (DV_Satisfaction) is significantly higher when there is human oversight (M = 15.55) than when there is none (M = 13.55). This suggests that respectful treatment is more strongly associated with acceptance of AI when a human is involved in the decision-making process, whereas the absence of human involvement weakens the effect of respectful treatment on satisfaction, implying that human oversight plays an important role in increasing acceptance of AI.
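
The report does not describe a dedicated statistical test of the interaction. In a 2 x 2 design, a conventional approach would be a two-way ANOVA, sketched below with statsmodels; the formula and the DataFrame `data` are assumptions for illustration, not the study's original analysis.

```python
# Sketch of a two-way ANOVA testing the main effects of respectful treatment
# and human oversight and, crucially, their interaction on satisfaction.
# This is a standard analysis for a 2 x 2 design, not the analysis reported
# in the study, and assumes a pandas DataFrame `data`.
import statsmodels.api as sm
import statsmodels.formula.api as smf

def interaction_anova(data):
    model = smf.ols("DV_Satisfaction ~ C(IV_Treatment) * C(IV_HIL)", data=data).fit()
    # Type II sums of squares are a common choice for roughly balanced designs.
    return sm.stats.anova_lm(model, typ=2)

# A significant C(IV_Treatment):C(IV_HIL) row would indicate that the effect
# of respectful treatment on satisfaction depends on human oversight.
```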

Discussion

Summary of results

The results of this study address the research questions and hypotheses set out in the introduction, namely how respectful treatment and human oversight affect acceptance of AI systems. The first hypothesis, drawn from procedural justice theory and the human-in-the-loop framework, was that respectful treatment would increase acceptance of AI, and that this effect would be stronger when human oversight was involved. The results largely supported this hypothesis: participants reported higher satisfaction (DV_Satisfaction) when they felt treated with respect and when the AI system was overseen by human operators. This finding is consistent with previous research indicating that respect and human involvement are among the most important factors in building trust in and acceptance of AI systems. The evidence for the hypotheses appears reasonably sound, as the statistical comparisons of satisfaction across the experimental conditions showed reliable differences. In particular, human involvement in the AI decision-making process strengthened the effect of perceived respect on satisfaction. This suggests that the relationships outlined above reflect genuine psychological processes rather than methodological artefacts. The results of the comparisons across the statistical tests, including the t-tests, were consistent and credible.

Had the hypotheses not been supported, several explanations would need to be considered. Contextual factors such as the sample size, the way “respectful treatment” was operationalised, or the characteristics of the AI scenarios described in the survey may have influenced the results. Moreover, participants’ prior knowledge of AI or their general acceptance of technology could have interacted with the experimental effects, producing variability in the findings (Mani & Goniewicz, 2024). These considerations underline the need to interpret the results in light of the broader context of the research and the overall design of the study. The study results are highly relevant to the initial hypotheses and hold implications for the further development of AI acceptance models, with particular emphasis on respecting individuals’ dignity and providing human oversight.

Limitations of This Study

This study’s limitations concern its sampling, method, and the specific scope of the scenarios employed. First, the study used a specific demographic sample, which limits how far the findings can be generalised to the wider population. The results may not apply to participants of different ages, cultural backgrounds, or levels of pre-existing familiarity with AI (Tulcanaza-Prieto et al., 2023). Second, the scenarios used, although reasonably realistic depictions of contact between people and AI, may not have captured the full range of AI use cases, limiting the external validity of the findings. Third, the measurement of the respectful-treatment and human-oversight constructs may not have been comprehensive enough, which could have influenced the results.

Future Work

Future studies should address these shortcomings by recruiting larger and more heterogeneous samples so that the results generalise more widely. Future research might also use a greater variety of AI scenarios, or different experimental approaches, to provide a better perspective on how different contexts affect AI adoption (Wakunuma & Eke, 2024). In addition, future research could refine the measures used to assess respectful treatment and human oversight, for example by incorporating qualitative methods or longitudinal follow-ups that can better capture the temporal effects of these factors. Other variables, such as participants’ prior experience with AI systems or their demographic characteristics, might also provide more meaningful information about AI acceptance.

Conclusion

In conclusion, the present study offers theoretical contributions regarding the roles of respectful treatment and human supervision in the acceptance of AI, which can inform efforts to improve user satisfaction with AI systems. The differences in satisfaction observed across the experimental conditions imply that these factors are critical in generating trust in and acceptance of AI systems. While the present study makes some progress in addressing the research questions, it is also important to recognise its limitations as a guide for future studies that may replicate and build upon these results.

References

Alam, L., & Mueller, S. (2021). Examining the effect of explanation on satisfaction and trust in AI diagnostic systems. BMC Medical Informatics and Decision Making, 21(1), 178. https://doi.org/10.1186/s12911-021-01542-6

Aoki, N. (2021). The importance of the assurance that “humans are still in the decision loop” for public trust in artificial intelligence: Evidence from an online experiment. Computers in Human Behavior, 114, 106572. https://doi.org/10.1016/j.chb.2020.106572

Jennifer, V. J., Dembrower, K., Strand, F., & Grauman, Å. (2024). Women’s perceptions and attitudes towards the use of AI in mammography in Sweden: A qualitative interview study. BMJ Open, 14(2). https://doi.org/10.1136/bmjopen-2024-084014

Love, K., Nadeem, F., Sloan, M., Restle-Steinert, J., Deitch, M. J., Sohail, A. N., . . . Sassanelli, C. (2022). Fostering green finance for sustainable development: A focus on textile and leather small medium enterprises in Pakistan. Sustainability, 14(19), 11908. https://doi.org/10.3390/su141911908

Lwamba, E., Shisler, S., Ridlehoover, W., Kupfer, M., Tshabalala, N., Nduku, P., . . . Snilstveit, B. (2022). Strengthening women's empowerment and gender equality in fragile contexts towards peaceful and inclusive societies: A systematic review and meta-analysis. Campbell Systematic Reviews, 18(1). https://doi.org/10.1002/cl2.1214

Mani, Z. A., & Goniewicz, K. (2024). Transforming healthcare in Saudi Arabia: A comprehensive evaluation of Vision 2030’s impact. Sustainability, 16(8), 3277. https://doi.org/10.3390/su16083277

Starke, C., Baleis, J., Keller, B., & Marcinkowski, F. (2022). Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. Big Data & Society, 9(2). https://doi.org/10.1177/20539517221115189

Tulcanaza-Prieto, A., Cortez-Ordoñez, A., & Chang, W. L. (2023). Influence of customer perception factors on AI-enabled customer experience in the Ecuadorian banking environment. Sustainability, 15(16), 12441. https://doi.org/10.3390/su151612441

Tyler, T. R. (1989). The psychology of procedural justice: A test of the group-value model. Journal of Personality and Social Psychology, 57(5), 830–838. https://doi.org/10.1037/0022-3514.57.5.830

Wakunuma, K., & Eke, D. (2024). Africa, ChatGPT, and generative AI systems: Ethical benefits, concerns, and the need for governance. Philosophies, 9(3), 80. https://doi.org/10.3390/philosophies9030080

Waldman, A., & Martin, K. (2022). Governing algorithmic decisions: The role of decision importance and governance on perceived legitimacy of algorithmic decisions. Big Data & Society, 9(1). https://doi.org/10.1177/20539517221100449

Witkowski, K., Okhai, R., & Neely, S. R. (2024). Public perceptions of artificial intelligence in healthcare: Ethical concerns and opportunities for patient-centered care. BMC Medical Ethics, 25, 1-11. https://doi.org/10.1186/s12910-024-01066
