April 9, 2024
Good morning, everyone!
This week’s critical vulnerabilities: Patch All the Things!
Word of the Week: Confabulation (and why it's a problem in AI)
When machine-learning researchers use the term hallucinate (which they still do, frequently, judging by research papers), they typically understand an LLM's limitations—for example, that the AI model is not alive or "conscious" by human standards—but the general public may not. So in a feature exploring hallucinations in-depth earlier this year, we suggested an alternative term, "confabulation," that perhaps more accurately describes the creative gap-filling principle of AI models at work without the perception baggage. In human psychology, a "confabulation" occurs when someone's memory has a gap and the brain convincingly fills in the rest without intending to deceive others. (“Hallucinating” AI models help coin Cambridge Dictionary’s word of the year)
For some reason, everyone wanted to talk to me about AI this week. So guess what? You get to read about AI this week!
I am very concerned about AI for two primary reasons, both of which are causing real problems right now:
(1) Release of proprietary, confidential, or otherwise sensitive information. I've written before [1] about the dangers of employees feeding sensitive company information to ChatGPT, along with individuals divulging private health information, just to name two examples. You need to understand how these generative AI services work:
Everything you type into a service like ChatGPT can be stored and used to train the model, which means it may help shape the answer to someone else's question in the future. (I've included a small sketch of one simple precaution after point (2) below.)
This past week, I read that the U.S. House of Representatives is removing and blocking Microsoft Copilot (Microsoft's generative AI assistant) from “all House Windows Devices” after the Office of Cybersecurity determined that it risked “leaking House data to non-House approved cloud services.” [2]
(2) They make stuff up. Generative AI has learned how to confabulate! I've read numerous examples over the past year, like the two attorneys in New York who were sanctioned by the Court for submitting a brief, generated in part by ChatGPT, that cited six non-existent cases. [3]
Most recently, I read that the "MyCity" chatbot, developed by New York City to answer residents' questions about city laws and regulations, is giving out false information [4]:
But a new report from The Markup and local nonprofit news site The City found the MyCity chatbot giving dangerously wrong information about some pretty basic city policies. To cite just one example, the bot said that NYC buildings "are not required to accept Section 8 vouchers," when an NYC government info page says clearly that Section 8 housing subsidies are one of many lawful sources of income that landlords are required to accept without discrimination. The Markup also received incorrect information in response to chatbot queries regarding worker pay and work hour regulations, as well as industry-specific information like funeral home pricing.
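Back to point (1) for a moment: one small, practical idea is to put a filter between your people and these services, so obviously sensitive strings never leave your network in the first place. The little Python sketch below is purely illustrative (the patterns and the sample prompt are made up for this newsletter, not taken from any product we use or sell); it just shows the general shape of such a filter.

```python
import re

# A minimal, hypothetical sketch: scrub obviously sensitive strings
# from text BEFORE it gets pasted into or sent to an outside AI service.
# A real deployment would use a much more complete pattern list
# (or a dedicated data-loss-prevention tool).
PATTERNS = {
    "EMAIL":   re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":     re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarize this note from jane.doe@example.com. "
              "Her SSN is 123-45-6789 and our service key is sk-abc123def456ghi789jkl.")
    print(redact(prompt))
    # Prints: Summarize this note from [REDACTED EMAIL]. Her SSN is
    # [REDACTED SSN] and our service key is [REDACTED API_KEY].
```

A filter like this only catches the obvious stuff, of course. The real protection is policy and training: if it's confidential, it doesn't go into the chatbot.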
I'm not saying you shouldn't use generative AI services at all. (Although, I don't. At all. Just saying.) Be sure you understand how they work, and the risks you are taking.
Stay cyber safe this week!
Glenda R. Snodgrass
grs@theneteffect.com
(251) 433-0196 x107
https://www.theneteffect.com
For information security news & tips, follow me!
Security Awareness Training Available Here, There, Everywhere!
Thanks to COVID-19, lots of things went virtual, including my employee Security Awareness Training. Live training made a comeback a few months ago, but many organizations are retreating. No worries. Wherever you and your employees may be, I can deliver an interesting and informative training session in whatever format you prefer.
Contact me to schedule your employee training sessions. They're fun! ☺
TNE. Cybersecurity. Possible.