
ChatGPT, Bard, Copilot, Alexa, and Siri are just a few examples of how artificial intelligence (AI) is quickly becoming part of our daily lives, both personally and professionally. While these tools are great at making our lives more efficient, it's essential that we stay informed and aware of best practices for using them.
Pub Info is working with IT to break down the new AI section of the Acceptable Use Policy over a series of Spotlight articles. Part 1 in the March/April edition discussed the Confidentiality and Labeling sections (9.d and 9.f). In Part 2, we cover the Risks section (9.g), which itself has four subsections.
AI Policy Dissected - Risks
We asked IT to help us find real-world examples to illustrate each part of the AI Risks policy. "AILM" in the policy text below stands for "artificial intelligence language model."
9.g Risks: The use of AILM has inherent risks that users should be aware of. These risks include, but are not limited to:
i. Accuracy: There is a risk that AILM may generate inaccurate or unreliable information. Users should exercise caution when relying on AILM-generated content and should always review and edit responses for accuracy before utilizing the content.
Example - In February 2025, lawyers from Morgan & Morgan, a national law firm, were sanctioned by a judge for citing eight nonexistent cases, at least some of which were generated by AI. When AI invents fake or misleading information like this, it is known as a "hallucination."
ii. Confidentiality: Information entered into AILM may enter the public domain. This can expose non-public information, breach regulatory requirements or customer and vendor contracts, or compromise trade secrets.
Example - In 2023, Samsung allowed engineers at its semiconductor group to use AI to help find and fix issues in their code. In doing so, however, the engineers entered confidential and proprietary information into ChatGPT, potentially making that information available to others outside Samsung.
iii. Bias: AILM may produce biased, discriminatory, or offensive content. Users should use AILM responsibly and ethically, in compliance with company policies and applicable laws and regulations.
Example - When Amazon experimented with AI in its recruiting tool, it found the tool was biased against women. This is because the historical data used to train the AI to screen candidates for technical positions came disproportionately from men. The AI tool went so far as to rank applicants who attended all-women's universities lower than those who attended other universities.
iv. Security: AILM may store sensitive data and information, which could be at risk of being breached or hacked.
Example - In 2024, researchers testing an AI assistant built into Slack, a popular workplace communication platform, found it could be manipulated into sharing data from private channels and conversations. In other examples, involving Air Canada and a Chevy dealership in California, customers got AI chatbots to commit to deals the companies never intended. Anyone looking for a $1 Chevy Tahoe or a cut-rate flight to Vancouver?
Bottom line: if you are using AI for work-related purposes, or thinking about it, please contact the IT Department for assistance and guidance.