Using AI for Higher Ed Marketing? Beware the AI Bias

By: Sarah Russell Jun 01, 2023

Higher education marketing is built on data and content. Video clips, blog posts, and ad copy are all essential elements, as is the big data that guides content creation and ad placement. The rise of AI tools and chatbots like ChatGPT promises to make the process of planning, creating, and placing content more efficient. While the technology can certainly work faster, there are some potential pitfalls that marketers can’t ignore, most notably the risk of AI bias.

What is AI Bias?

AI bias occurs when the data or content delivered by an AI system shows prejudice for or against a certain group of people. For example, if a user asked an AI image generator to produce pictures of college students and every portrait showed a white woman, the output would show a bias toward Caucasian faces.

Algorithmic bias grows out of societal biases, and it most commonly happens when the AI is trained on biased data. For example, if an AI were trained only on paintings from 17th-century Flemish artists, that limited pool would likely create a bias toward the people most likely to appear in those paintings: wealthy Europeans.

Larger sets of training data can help but may not solve the problem. An AI trained on all of the content of the internet since its inception is likely to come across both fact and fiction. Not everyone who posts content online is concerned with being fair, accurate, or respectful of others.
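To see how directly that skew carries through, here is a minimal Python sketch with made-up subjects and proportions. It is not how any real image generator works, just an illustration of a generator that reproduces whatever mix it was trained on:

```python
import random
from collections import Counter

# Hypothetical training pool: 90% of the example images show one group,
# echoing the 17th-century paintings example above.
training_subjects = ["wealthy European sitter"] * 90 + ["anyone else"] * 10

def generate_portrait_subject():
    # A naive generator simply reproduces the rates it saw during training.
    return random.choice(training_subjects)

outputs = Counter(generate_portrait_subject() for _ in range(1000))
print(outputs)  # roughly 900 vs. 100: the training skew carries straight through
```

The point of the toy example is that nothing in the generation step corrects the imbalance; whatever the training pool over-represents, the outputs will too.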

Data processing is perhaps one of the most valuable ways AI can be used. Artificial intelligence can process large volumes of data much more quickly than a human could manage. Using AI with big data can reveal patterns and insights that would otherwise be difficult to spot.

This may seem like a straightforward application with little room for bias, but that may not be the case. Biases may exist in the source data, in the algorithms used to process it, or in both. Anyone using AI should be alert to the potential challenges.
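One practical way to stay alert is to profile the source data before any AI processing happens. The sketch below is a minimal Python example; the file name and column are hypothetical placeholders for whatever export your team actually works with:

```python
import csv
from collections import Counter

def audit_column(path, column="audience_segment"):
    """Tally how often each group appears in a CSV before handing it to an AI pipeline."""
    with open(path, newline="", encoding="utf-8") as f:
        counts = Counter(row[column] for row in csv.DictReader(f))
    total = sum(counts.values())
    for group, count in counts.most_common():
        print(f"{group}: {count} rows ({count / total:.1%})")

# Hypothetical file and column names; swap in your own data export.
audit_column("prospective_students.csv")
```

If one group dominates the counts, any patterns an AI surfaces from that data will reflect the imbalance, so the skew is worth documenting before the insights are acted on.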

Challenges and Limitations of AI

Artificial intelligence can make content creation and data processing much easier, but it has challenges and limitations like any other tool. Some of the challenges of AI include:

  • AI is only as “smart” as its training material. If the inputs are biased or reflect outdated attitudes, the outputs probably will too.
  • Data used to train AI may be older and, therefore, out of date. For example, ChatGPT was trained on information from September 2021 and earlier.
  • AI can sometimes “hallucinate,” the term commonly used when an AI tool invents information to fill a gap.

That last one needs some unpacking. When a person hallucinates, they see things that aren’t there. When an AI hallucinates, it makes a projection based on known information and presents false or misleading output as fact. As the New York Times noted, if you ask an AI chatbot like ChatGPT when two famous people met, the AI might describe the circumstances, even if there’s no evidence the meeting ever happened.

Other challenges caused by AI may include:

  • Making fact-checking more difficult. AI tools don’t usually cite their sources, which makes it harder to verify claims or link back to the original material.
  • Potential copyright infringement. Information without citations may come from copyrighted sources.

How to Guard Against AI Bias in Higher Ed Marketing Content

Higher ed marketers may wonder what they can do to avoid AI bias. After all, they are not the ones designing and training the software. Bias is already baked in before they start using it. The solution is not necessarily to completely avoid the use of AI—as the technology develops, that may become next to impossible. However, higher ed marketers can stay alert to instances of bias.

Marketers can use their knowledge of real-life biases to guess where technology might go astray. Racism, sexism, and ageism are all known problems in society, which means they’re likely to appear as algorithmic biases. Assumptions about gender, sexual orientation, beauty, health and religion are all likely areas for concern as well.

Just as a thoughtful writer shares their work with people who have different backgrounds and perspectives before publishing an article, marketers should also check the work of AI content creation tools.
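That check can be as simple as tallying what human reviewers see in a sample of AI-generated assets. The sketch below is a minimal Python example with made-up reviewer notes; it flags any group that appears in more than half of the sample:

```python
from collections import Counter

# Hypothetical reviewer notes: who appears in a sample of AI-generated campaign images.
reviewed_outputs = [
    "white woman", "white woman", "white man", "white woman",
    "Black man", "white woman", "Asian woman", "white woman",
]

counts = Counter(reviewed_outputs)
sample_size = len(reviewed_outputs)
for group, count in counts.most_common():
    share = count / sample_size
    flag = "  <- overrepresented; revisit prompts or tool settings" if share > 0.5 else ""
    print(f"{group}: {count}/{sample_size} ({share:.0%}){flag}")
```

A spreadsheet works just as well; the value is in making the review a routine step rather than relying on a gut impression of the outputs.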

Diverse internal teams and partners can also help address this problem by providing human insight into AI-developed content and datasets. Speaking up about biases in AI also helps by letting developers know there is still work to be done. The good news is that AI bias is a known problem. Many developers are working to reduce bias in their programs.

At Education Dynamics, we are data-driven but human-focused. We use AI tools to support UX and Conversion Rate Optimization, but we always apply human expertise to guide those tools. Download Leveraging AI to Supercharge Your Enrollment Marketing, A Playbook for Higher Education Marketers for more ways to guard against AI bias and enhance your content generation strategy, or contact us today to find out how our education marketing experts can put people-powered AI to work for your college or university.