1. Purpose
1.1 This policy outlines the responsible and ethical design, development, deployment, use, and management of artificial intelligence for the purposes of learning, teaching, research, and operations at the University.
2. Scope and application
2.1 This policy applies to all students, staff, vendors, contractors, sub-contractors, and affiliates who are involved in the design, development, deployment, use, or management of artificial intelligence for learning, teaching, research, or operational purposes, regardless of their role.
2.2 This policy applies to all artificial intelligence applications used by the University and must be considered when designing, developing, deploying, using or managing any AI-enabled application.
3. Definitions
3.1 Refer to the University’s Glossary of Terms for definitions as they specifically relate to policy documents.
4. Policy statement
4.1 The University embraces the potential of artificial intelligence to advance its vision, mission, and strategic goals, provided that it is designed, developed, deployed, used, and managed in a responsible and ethical manner. The University’s principles for responsible and ethical artificial intelligence reflect a human-first approach, ensuring that artificial intelligence benefits humans, society, and the environment; upholds human agency, rights, and diversity; and does not result in unfair disadvantage, bias, or discrimination.
5. Principles
5.1 The University’s principles for the responsible and ethical use of artificial intelligence are adapted from the Australian Artificial Intelligence Ethics Framework (AAIEF).
5.2 Principle 1: Human-centred values, societal and environmental wellbeing - Artificial intelligence systems must be designed, developed, deployed, used, and managed with respect for human rights, diversity, and individual autonomy, and must benefit individuals, society, and the environment.
5.2.1 This principle ensures that the design, development, deployment, use, and management of any artificial intelligence systems used for University purposes enable, protect, and align with human and community values. Any design, development, deployment, use, and management of artificial intelligence systems should enable an equitable and democratic society by respecting, protecting, and promoting sovereignty, human rights, diversity, human agency, and individual autonomy, while also protecting the environment.
5.2.2 All people interacting with artificial intelligence systems should retain full and effective control. Artificial intelligence systems should not undertake actions that threaten individual autonomy or diverge from their disclosed purpose.
5.2.3 Artificial intelligence systems should augment, complement, and empower human cognitive, social, and cultural skills. All recruitment of University personnel involved with designing, developing, deploying, using, or managing artificial intelligence systems must be in accordance with the Recruitment, Selection and Appointment - Operational Policy.
5.2.4 Artificial intelligence systems used for internal business purposes can have broader impacts on people and on societal and environmental wellbeing. Those impacts, both positive and negative, should be accounted for throughout the artificial intelligence system’s lifecycle, including impacts outside the organisation, in accordance with the Sustainability - Operational Policy and Sustainability - Procedures.
5.3 Principle 2: Fairness - Artificial intelligence systems must be inclusive and accessible and must not involve or result in unfair discrimination against individuals, communities, or groups.
5.3.1 This principle ensures that any design, development, deployment, use, and management of artificial intelligence systems for University purposes are carried out in a fair and inclusive manner throughout their entire lifecycle.
5.3.2 All artificial intelligence systems should be user-centric, created to enable all individuals interacting with them to access the related products or services. This includes both appropriate consultation with stakeholders who may be affected by the artificial intelligence system throughout its lifecycle and ensuring people receive equitable access and treatment.
5.3.3 The University must take measures to ensure that artificial intelligence-driven decisions comply with anti-discrimination laws and align with Anti-Discrimination and Freedom from Bullying and Harassment - Operational Policy.
5.3.4 All artificial intelligence systems used for University purposes must be inclusive, accessible, and considerate of diversity and inclusion, avoiding cultural or potential prejudicial stereotypes and unfair discrimination in accordance with Equity, Diversity and Inclusion - Operational Policy.
5.4 Principle 3: Privacy protection, security and lawfulness - Artificial intelligence systems must respect and uphold privacy rights, data protection, and ensure data security. The design, development, deployment, use, and management of artificial intelligence must comply with all applicable state, federal, international, and jurisdictional laws or regulations related to learning, teaching, research, and operational activity.
5.4.1 This principle ensures that the University respects privacy and implements processes for data protection when using artificial intelligence systems. This includes establishing proper data governance and management for all data used and generated by an artificial intelligence system throughout its lifecycle and in accordance with Data Governance - Operational Policy and linked Procedures as well as Privacy and Right to Information - Operational Policy and linked Procedures.
5.4.2 This principle also ensures that the University implements appropriate data and system security measures so that all artificial intelligence applications used for University learning, teaching, research, and operational activity comply with national and international law. This includes the identification of potential security vulnerabilities and ensuring resilience to adversarial attacks. Security measures must account for unintended applications of artificial intelligence systems and potential abuse risks, with appropriate mitigation measures.
5.4.3 The design, development, deployment, use, and management of all artificial intelligence-driven technology must comply with all applicable ICT, cybersecurity, privacy, data, and information governance policies, as well as any relevant learning, teaching, and research policy documents, in addition to legislation or regulations relevant to the jurisdiction in which it operates.
5.4.4 All artificial intelligence used for University purposes must observe data security, sovereignty, and cultural sensitivity in all its forms, including as it relates to Aboriginal and Torres Strait Islander peoples. This includes their right to maintain, control, protect, and develop their cultural heritage, traditional knowledge, and traditional cultural expressions, as well as their right to maintain, control, protect and develop their intellectual property over these in accordance with Intellectual Property - Academic Policy.
5.5 Principle 4: Transparency, explainability and accountability - The design, development, deployment, use, and management of artificial intelligence systems must be transparent and include responsible disclosures so individuals understand when they are significantly impacted by artificial intelligence or engaging with an artificial intelligence system. Responsible parties throughout the artificial intelligence system’s lifecycle must be identifiable and accountable for the outcomes of these systems, and human oversight of artificial intelligence systems must be enabled.
5.5.1 The University ensures transparency in the design, development, deployment, use, and management of artificial intelligence systems through the responsible disclosure of data to key University stakeholders. All responsible disclosures related to the design, development, deployment, use, and management of artificial intelligence systems for University purposes must be provided to the relevant stakeholders in a timely manner and must include reasonable justifications for artificial intelligence system outcomes. This includes information that helps people understand outcomes, such as key factors used in decision-making.
5.5.2 This principle ensures individuals can identify when they are engaging with an artificial intelligence system (regardless of the level of impact) and can obtain a reasonable disclosure regarding the artificial intelligence system’s purpose.
5.5.3 This principle acknowledges that University departments and individuals are responsible for the outcomes of artificial intelligence systems that they design, develop, deploy, use, and manage. Mechanisms must be in place to ensure responsibility and accountability for artificial intelligence systems and their outcomes, both before and after their design, development, deployment, use, and management. The department or individual accountable for a decision must be identifiable to the extent necessary, and must consider the appropriate level of human control or oversight for each artificial intelligence system or use case.
5.5.4 Artificial intelligence systems that have a significant impact on an individual’s rights should be subject to external review. This includes providing timely, accurate, and complete information to independent oversight bodies for review purposes.
5.6 Principle 5: Reliability, safety and contestability - Artificial intelligence systems must be reliably operated in accordance with their intended purpose. When an artificial intelligence system significantly impacts a person, community, group, or environment, there must be a timely process for people to challenge the use or outcomes of the system.
5.6.1 Throughout their lifecycle, artificial intelligence systems should operate reliably in alignment with their intended purpose. This includes ensuring artificial intelligence systems are trustworthy, reliable, accurate, and reproducible as appropriate.
5.6.2 When an artificial intelligence system significantly impacts a person, community, group, or environment, a timely process should be available to allow people to challenge the use or outcomes of the system. Particular attention should be given to vulnerable persons or groups in accordance with Working with Vulnerable People (including Child Protection) - Academic Policy.
5.6.3 Artificial intelligence systems should not pose unreasonable safety risks and should implement safety measures proportionate to the magnitude of potential risks. These systems should be monitored and tested to ensure they continue to meet their intended purpose, and any identified issues should be addressed with ongoing risk management in accordance with the Risk Management - Governing Policy and Risk Management - Procedures.
5.7 Principle 6: Academic integrity - The University is committed to the professional and ethical use of artificial intelligence, including generative artificial intelligence (GenAI) technologies, in learning, teaching, and research, alongside the implementation of action plans associated with these obligations.
5.7.1 This principle ensures that curricula are designed to constructively align learning and teaching activities, as well as assessments, with intended learning outcomes. Student learning is scaffolded throughout their program at the appropriate AQF level to support the development of disciplinary knowledge and essential graduate attributes, enabling students to act ethically and with integrity.
5.7.2 The design, development, deployment, use, and management of artificial intelligence in learning, teaching, and research is grounded in the principles of responsible and ethical use of artificial intelligence. The University has created the Artificial Intelligence and Academic Integrity Plan with the following areas of action:
(a) Future-focused curricula
(i) The design, development, deployment, use, and management of artificial intelligence aim to ensure that University graduates develop the skills required to utilise artificial intelligence technologies responsibly and ethically in professional practice.
(ii) This is achieved through programmatic curriculum design, which scaffolds student learning in artificial intelligence skills at the appropriate Australian Qualifications Framework (AQF) level throughout their program of study, in accordance with the Coursework Curriculum - Academic Policy and linked Procedures.
(b) Academic, research, and professional integrity
(i) The University’s graduate attributes recognise the need for graduates to understand how to harness the power of artificial intelligence technologies in responsible and ethical ways.
(ii) The curriculum must ensure that learning and teaching activities and assessment are constructively aligned with intended learning outcomes. Student learning should be scaffolded throughout their program at the appropriate AQF level to develop disciplinary knowledge and the graduate attributes essential for ethical and integrity-based actions, in accordance with the Assessment: Courses and Coursework Programs - Academic Policy.
(iii) Researchers must understand the terms of use for artificial intelligence, including data, privacy, content responsibility, and legalities. They must ensure all research outputs are their own and properly cite the use of artificial intelligence when applicable. Additionally, researchers must verify that artificial intelligence-generated content is accurate, unbiased, and free from offensive or confidential material. All researchers must ensure that their research is in accordance with the Responsible Research Conduct - Academic Policy.
(c) Inclusive and safe digital spaces
(i) Students must have equitable access to approved artificial intelligence technologies that support their learning in a secure environment and receive appropriate training in the safe use of these technologies.
(d) Professional development and educational support
(i) Academic staff must have opportunities to participate in targeted professional development activities to enhance their skills in embedding artificial intelligence literacy in the curriculum, designing authentic assessments to assure student learning outcomes, and reducing the risks of academic misconduct. Staff should also be trained in identifying and addressing suspected academic misconduct related to the unauthorised or inappropriate use of artificial intelligence in accordance with the Student Misconduct - Procedures and the Responsible and Ethical use of Artificial Intelligence Technologies - Guidelines.
(e) Monitoring, review, and continuous improvement
(i) The University must monitor program performance with explicit reference to embedding artificial intelligence literacy in the curriculum and implementing contemporary teaching and assessment practice to reduce the risks associated with unauthorised use of artificial intelligence tools by students, in accordance with the University Reviews - Academic Policy.
6. Authorities and responsibilities
6.1 As the Approval Authority, the Vice-Chancellor and President approves this policy in accordance with the University of the Sunshine Coast Act 1998 (Qld).
6.2 As the Responsible Executive Member, the Chief Operating Officer can approve procedures and guidelines to operationalise this policy. All procedures and guidelines must be compatible with the provisions of this policy.
6.3 As the Designated Officer, the Chief Data Officer can approve associated documents to support the application of this policy. All associated documents must be compatible with the provisions of this policy.
6.4 This policy operates from the last amended date. All previous iterations of policies related to the ethical and responsible use of artificial intelligence are replaced and have no further operation from this date.
6.5 All records relating to the responsible and ethical use of artificial intelligence must be stored and managed in accordance with the Records Management - Procedures.
6.6 This policy must be maintained in accordance with the Policy Framework – Procedures and reviewed on a shortened 2-year review cycle.
6.7 Any exception to this policy to enable a more appropriate result must be approved in accordance with the Policy Framework – Procedures prior to deviation from the policy.
6.8 Refer to Schedule C of the Delegations Manual in relation to the approved delegations detailed within this policy.
END