GenAI Tools: Capabilities & Limitations
This section aims to help you to:
- identify the capabilities of GenAI tools;
- describe the limitations and risks of using these tools in teaching and learning;
- overcome risks when using GenAI.
Capabilities of GenAI: what it can do
GenAI can generate outputs in many different formats. Let’s first examine some of GenAI’s capabilities in terms of text outputs.
GenAI tools, such as ChatGPT, can generate text outputs which are:
- grammatically correct;
- academic in writing style;
- highly relevant to the stimulus prompt/question.*
*Note that OpenAI’s GPT-4 can respond to image-based inputs (e.g. photos), although the outputs are, at the time of writing, still text-based.
Thus, these tools have the potential to generate academic outputs which meet the requirements of some assessment types, such as essays and reports.
GenAI tools can also solve mathematical problems and generate programming code. The results can appear correct, though this is not always the case. They can also produce high-quality images and video presentations that appear original, creative, and unique; these too can be used to complete assessments with an apparent ‘at-a-glance’ academic credibility. For many users, GenAI has become the first choice for searching the internet for sources of information or for explanations of difficult concepts.
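To illustrate this point, here is a hypothetical sketch (not the output of any particular tool): AI-generated code often reads as correct at a glance while failing on edge cases, which is why such outputs always need checking.

```python
# Hypothetical illustration: a plausible-looking implementation of a
# leap-year check, of the kind a GenAI tool might produce, next to a
# correct version.

def is_leap_year_naive(year: int) -> bool:
    # Looks reasonable at a glance, but omits the century rule:
    # 1900 is wrongly reported as a leap year.
    return year % 4 == 0

def is_leap_year(year: int) -> bool:
    # Gregorian rule: divisible by 4, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year_naive(1900))  # True  (incorrect)
print(is_leap_year(1900))        # False (correct)
```

The two functions agree on most years, so a quick spot-check would not reveal the error; only deliberate testing of edge cases does.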
The development of new capabilities for GenAI has been well documented (Harvard, 2024), and using these tools effectively is now seen as a key skill for graduates’ future lives (August et al., 2024).
GenAI is increasingly embedded in everyday software. There are AI tools in Zoom, Microsoft Copilot, and many others, including learning management systems such as Blackboard Ultra or Brightspace. Hence, these assistive tools are available to users without having to be sought out.
Current GenAI tools and their capabilities
In the introduction, different types of content creation using GenAI were mentioned. Table 1 describes these further, adding examples of their capabilities, functionalities and current tools (hyperlinked).
| Scope | Capabilities | Sample Functionalities | Examples of Current Tools |
|---|---|---|---|
| Text Authoring | Context-aware text creation, interaction and language translation | | |
| Audio Production | High-quality, human-like speech, realistic music and sound effects and audio interpretation | | |
| Image Creation | High-resolution and diverse image creation, style adaptation and image editing | | |
| Video Production | Realistic and high-definition video generation, motion prediction | | |
| 3D Modelling | Detailed and complex 3D object creation, realistic texture application | | OpenAI Shap-E |
| Coding | Code creation, synthesis and debugging | | |
| Design | Producing visually attractive and user-friendly interfaces and products; assisting with the ideation process; personalisation | | |
Table 1: Functionality, Capabilities, Current Research and Current Tools.
The initial table was created using ChatGPT 4o and then substantially modified, adapted and expanded (May 2024) with supporting articles and sources, and to align with the GenAI Prism wheel. Some of the tools listed in this table have not been tested and are provided for illustrative purposes only.
Limitations and risks of GenAI
There are some important implications for the use of GenAI in higher education. Guidance and policies around GenAI’s use are required to ensure that staff and students understand the limitations of GenAI tools and, in particular, how to overcome any vulnerabilities associated with assessment.
Limitations
Despite their advanced functionalities, GenAI tools also have many limitations at this stage of their development.
For example, GenAI:
- cannot reason, evaluate, or make judgements;
- does not have consciousness or emotions;
- cannot execute any code;
- has limited scope, which can result in biased outputs;
- cannot identify and clarify ambiguities;
- struggles to produce outputs which demonstrate contextual understanding;
- produces outputs that are solely based on the data that the model was trained on.
Risks
GenAI can produce inaccurate outputs, false data, and misinformation – often referred to as ‘hallucinations.’ It can also falsify citations, including fictional articles incorrectly attributed to well-known authors, and it can plagiarise sources. These types of outputs are potentially harmful. Therefore, we can identify some risks associated with GenAI:
- Bias
Due to the imperfect nature of the training data, GenAI outputs often reproduce common societal prejudices.
- The risk of spreading misinformation
As GenAI tools are not able to assess the validity of content (Lubowitz, 2023), they can be used to spread false information deliberately – for example, through the mass creation and spread of deepfakes (Ferrara, 2024).
- The risk of increasing inequity (access)
GenAI tools and their services are created by private companies. Access to different levels of service is based on a business model that separates users into tiers, with enhanced features and performance requiring higher levels of payment. This creates inequities, as not all students can afford subscriptions to these services.
- Cheating and integrity violations
The lack of definition, guidance and clarity on how to use GenAI to support learning can lead to academic misconduct and integrity violations. There is still confusion, concern and critical discussion among academic staff and students about where the boundaries lie between using GenAI with integrity and using it to cheat (Gulumbe et al., 2024).
- Undermining critical thinking
Indiscriminate use of GenAI without reflective and critical analysis of its outputs can significantly impair both students’ and academic staff’s ability to critically analyse information.
- Undermining sustainability of our teaching and learning practices
These tools rely on the use (and further development) of massive data centres, which increases energy consumption and the associated carbon footprint.
How to mitigate the risks of GenAI
Regulation
In May 2024, the European Union gave final approval to its Artificial Intelligence Act, which establishes obligations for AI providers and users depending on the level of risk posed by an AI system.
According to this Act, AI systems that negatively affect safety or fundamental rights are considered high risk: these include education, vocational training and employment sectors.
A brief summary of this Act and its implications for the higher education sector is available online.
Institutional Strategies
It is recommended that higher education institutions (HEIs) prepare guidelines and policies that reflect the needs of staff and students and that adhere to relevant regulations. To achieve this, consultation with academic experts and stakeholders from across the disciplines and functions within the institution is key.
To inform the ethical and effective use of GenAI, institutions are encouraged to support activities that develop our AI literacy and our ability to use GenAI technologies ethically and effectively to support teaching and learning: for example, exploring potential uses within the disciplines; developing the empirical base regarding risks and challenges when using GenAI tools; and developing evidence-based, longitudinal research about the impact of these technologies on teaching and learning.
The EU AI Act will have a significant impact on institutional strategy and policy relating to GenAI. In addition, institutional strategies and policies should ensure equity of access to GenAI technologies in order to provide an equal and inclusive learning experience.
Institutional strategies supporting staff and students to develop their AI literacy are fundamental in mitigating the risks of using GenAI in teaching and learning.
Key Takeaways
- GenAI can generate high-standard, grammatically correct outputs in many formats (text, video, sound, images, and code).
- GenAI is increasingly embedded in everyday software, such as AI tools in Zoom, Microsoft Copilot, and many others, including LMSs such as Blackboard Ultra or Brightspace.
- GenAI cannot reason, evaluate, or make judgements; does not have consciousness or emotions; cannot execute code independently; and cannot identify bias or clarify ambiguities (among other limitations).
- GenAI risks include producing hallucinations and biased outputs, replicating misinformation, violating privacy and increasing inequities.
- Regulations and institutional policies are needed in order to mitigate the risks of using GenAI.
Resources
- August, E. T., Anderson, O. S., & Laubepin, F. A. (2024). Brave new words: A framework and process for developing technology-use guidelines for student writing. Pedagogy in Health Promotion, 0(0).
- Chiu, T. K., Ahmad, Z., Ismailov, M., & Sanusi, I. T. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Computers and Education Open, 6, 100171.
- EU AI Act. (2024)
- Ferrara, E. (2024). GenAI against humanity: Nefarious applications of generative artificial intelligence and large language models. Journal of Computational Social Science, 1-21.
- Gulumbe, B. H., Audu, S. M., & Hashim, A. M. (2024). Balancing AI and academic integrity: What are the positions of academic publishers and universities? AI & Society, 1-10.
- Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy. IEEE Access.
- Harvard (2024). How Generative AI is reshaping Higher Education.
- Lubowitz, J. H. (2023). ChatGPT, an artificial intelligence chatbot, is impacting medical literature. Arthroscopy, 39(5), 1121-1122.