Navigating AI at George Mason

George Mason alum distance learning from home. Photo by Charlotte Cantwell/Creative Services

The idea of Artificial Intelligence (AI) may have started as science fiction, but its development has been in the works for decades. AI is software designed to think and learn like people, using algorithms and computer programs to perform tasks that usually require human intelligence. It simulates the mind’s ability to reason and can analyze information faster than humans.

There is still much to learn about AI and its capabilities, but AI is undoubtedly the future, and many people already use it in their day-to-day activities (potentially without even knowing it!). That’s why Information Technology Services (ITS) is here to help the George Mason University community better understand AI and how best to use it.

As technology advances, so do the associated risks. It is imperative to understand the implications of using AI while working at George Mason and handling protected data. To address this, we consulted our ITS security experts for their insights on this emerging technology.

Learning AI

“There is traditional AI, which recognizes patterns, and Generative AI, which creates patterns,” said Noor Aarohi, director of IT Risk and Compliance.

An example of Traditional AI is IBM’s Deep Blue, a chess-playing system that defeated the reigning world chess champion in 1997. Other examples include self-driving cars, smart home devices, and virtual assistants such as Siri and Alexa. “The earliest form of Traditional AI is reactive, which lacks memory,” shared Curtis McNay, former director of IT Security.

Generative AI, also known as Limited Memory AI, is reactive and adds memory-based learning, McNay said. The category encompasses the large language models (LLMs) responsible for generating poetry, visual art, music, fake images, and more.

When using Generative AI, it is important to recognize its limitations. It can provide inaccurate or biased information, struggle with context, and lack common sense. It can also misunderstand a user’s intent and provide ambiguous responses when prompts are unclear.

George Mason AI Tools

All George Mason users have access to Microsoft (MS) Copilot for the web as well as Zoom AI Companion.

Copilot can assist George Mason faculty, staff, and students with their daily work, providing efficiency, creativity, and support. It can help with generating content, analyzing or comparing data, summarizing documents, learning new skills, drafting emails, generating images, and answering complex questions.

Zoom AI Companion is a generative AI digital assistant delivering powerful, real-time capabilities to help users improve productivity and work together more effectively. The AI Companion currently works with Zoom Meetings, Recordings, and Whiteboards.

When using AI, it is important to make your prompts clear and specific. When possible, give concise examples of the style or format you need, and provide context to focus the response. AI can also be helpful in other ways, including assisting with conversations, enhancing web searches, and aiding with scheduling, reminders, and to-do lists.
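As an illustration, consider the difference between a vague prompt and a specific one (the example below is hypothetical, not an official George Mason prompt):

  Vague: “Tell me about budgets.”
  Specific: “Summarize this draft department budget in five plain-language bullet points for a non-financial audience, and flag any line item that changed by more than 10% from last year.”

The second version states the task, the format, the audience, and the scope, which gives the tool the context it needs to return a focused, useful response.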

AI Risks & Safeguards

The ability of AI to produce realistic content highlights the increased risks that come with its use. Scammers leverage AI to develop phishing emails that seem authentic, impersonate people’s identities, and generate credible-looking disinformation for social media posts. People who fall victim to phony AI offerings may unknowingly provide personal information, which can lead to financial loss or identity theft. These implications can extend beyond the individual, potentially affecting organizations and their communities.

When logged into your George Mason Microsoft account, you can chat with an AI agent that answers questions, generates content, or helps with tasks using publicly available online data. Before using Copilot Chat, however, make sure a green shield with a checkmark appears in the top right corner of your window. When you hover over the shield, a pop-up with data protection information should appear. Once confirmed, you can get started learning how to make the most of Copilot.

At George Mason, AI software that uses Protected data must be integrated with our systems or have a user interface that is reviewed by the Architecture Standards and Review Board (ASRB). The ASRB ensures all software complies with university policies and security requirements. Students, faculty, and staff using AI must follow guidelines that have been outlined by ITS and the Commonwealth of Virginia:

  • AI software must be approved by the ASRB
  • Highly Sensitive and Restricted data should not be entered into Generative AI or LLM tools for prompts, queries, information gathering, or any other use
  • Departments using AI tools, such as ChatGPT, must have internal procedures for managing accounts, including who owns each account, who can access it, and how access is revoked when it is no longer needed
  • Outputs must be monitored for factual errors, bias, and inappropriate information or statements
  • AI tools must comply with data protection regulations and have the proper controls in place to protect sensitive information
  • Risk assessments must be conducted to identify vulnerabilities and privacy risks; this is performed through the ASRB evaluation process

For more details on administrative AI guidelines and policies at George Mason, visit ITS Guidance on Using AI and University AI Guidelines.

If you have questions or concerns about AI, contact the IT Security Office at [email protected] or IT Risk and Compliance at [email protected].