The security threat of AI/GenAI is very real, and in so many ways it may eventually prove unsurvivable. I received a client inquiry for my photography that seemed completely genuine (and still could be). But instead of coming through my contact form, it arrived directly at my DandyHeadshots email. Considering I'd never interacted with this person or their firm in my inbox before, and they didn't mention being referred to me, this was odd.
Their address was in Indy and they seemed real; nothing about the interaction stood out (other than their wanting to see "client galleries," which are private, reserved for clients, and not something prospects commonly ask to see).
I replied with the basic info they wanted, and she asked about scheduling a phone call. We set a time, and I provided a phone number (already in my signature). I arranged my schedule around the call and… nothing.
I waited, then sent an email (she had given me no phone number) saying I was ready to take her call. Twenty-four hours later, still a no-show.
I looked at her email signature and ran a Sucuri site check on the domain, which returned "Unable to scan your site. Host not found." I then looked up the firm on Google, and while the firm clearly exists, its TLD (top-level domain) is ".com," whereas the link I was given ended in ".us." That could be a legitimate redirect set up for marketing purposes, but it's also a red flag.
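For what it's worth, the two checks I ran, whether the domain actually resolves and whether the TLD matches the firm's real site, can be sketched in a few lines of Python. This is just an illustrative sketch; the domain names below are placeholders, not the actual firm's:

```python
import socket

def resolves(hostname):
    """Return True if the hostname resolves in DNS. A scanner's
    'Host not found' result often just means this lookup fails."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

def compare_domains(a, b):
    """Crude comparison of two hostnames: do they share the same
    second-level name, and do their TLDs match? A same-name,
    different-TLD pair (examplefirm.com vs examplefirm.us) is the
    kind of mismatch worth a closer look."""
    name_a, _, tld_a = a.rpartition(".")
    name_b, _, tld_b = b.rpartition(".")
    same_name = name_a.split(".")[-1] == name_b.split(".")[-1]
    return same_name, tld_a == tld_b

# Placeholder domains for illustration only.
same_name, same_tld = compare_domains("examplefirm.com", "examplefirm.us")
print(same_name, same_tld)  # True False: same name, mismatched TLD
```

None of this proves a scam on its own (redirects and alternate TLDs have legitimate uses), but a link whose host won't even resolve is a cheap first thing to check.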
I still don't know what happened, but this and trickier interactions are what we can expect from the scams to come. The point is, this could very well have been a scam to gather data from me, namely my phone number (though there are easier ways to get it). It takes no effort at all to devise an AI bot for data mining that can carry on a conversation and sucker people in at alarming rates. I'm getting them all the time at my Etsy shop now. Nothing and no one, no matter how savvy, will stop them. Is this really what we want from technology? A growing and unstoppable security threat that will eventually penetrate anything with ease and breed paranoia in everyone (we've already got a lot of that)? Every interaction will become highly suspect, even conversations with your best friends (is this really my buddy?).
Cybersecurity has been a cat-and-mouse game from the start. Eventually, the cat wins. With biometrics (a HORRIBLE authentication method that "experts" devised) being the last known frontier of authenticating real identity, will we eventually be forced offline, or leave voluntarily, because of the chaos and lack of security? With every new technology, we must weigh the pros and cons. Are we better off with or without it? And BTW, REAL experts would never have devised such a fraudulent and obviously dangerous authentication method built on fingerprints, retina scans and the rest: inherent, deeply personal traits we can't change out like a password. That goes to another article, for another time, about the false cult of expertise.
We must be careful how much power we grant ourselves, particularly regarding the power and accessibility of our technology. It used to take skill to be a hacker; with the blessings of GenAI, basic criminals now stand a chance of breaching the most protected databases. Of course, once AI's various applications are out there, they're not going back for those who'll use them nefariously. But for commercial use? The majority of AI use can and should be buried by public outcry and demand, and I'd encourage people to be proactive in speaking out against its use for deepfakes, deceptive practices, and the replacement of human talent. It's a race to the bottom that benefits ONLY a few at the top, while permanently stripping most people of security, trust, and the ability to make a living. Do your research, but don't fall for the trap.