ChatGPT is already causing problems.
The text generator from OpenAI doesn't always produce factually accurate or interesting writing, but it can create relatively appropriate text for just about any prompt in no time at all. That's remarkable. And even with a bevy of built-in filters, it can be pretty dangerous as well.
Perhaps unsurprisingly, people have already found uses for the tool that are less than ideal. Anything that can create content out of whole cloth can likely create something dangerous. It's just a matter of time.
We've collected six of the scarier — or at least questionable — uses that folks have already found for the app. And, keep in mind, this is all before the app goes fully mainstream and while it's still in its infancy.
1. Creating malware
ChatGPT creating malware is legit scary. Not really because malware is anything new, but because ChatGPT can do it endlessly. AIs don't sleep. Infosecurity Magazine wrote that "cybersecurity researchers were able to create a polymorphic program that is highly elusive and difficult to detect." Now, to be clear, I am no cybersecurity expert. But basically, the researchers used the app to generate code for malware, then used it again to create variations on that code, making the malware hard to detect or stop.
"In other words, we can mutate the output on a whim, making it unique every time," researchers told Infosecurity Magazine.
2. Cheating in school
OK, this is less scary, but more predictable. A tool that can make text about anything... literally perfect for a kid trying to cheat in school. And kids love cheating in school. Professors have already said they've caught students in the act, while school districts across the country have banned the app. It's hard to see this trend slowing, however, and AI will likely just become another tool kids are allowed to use to learn.
3. Using it to spam on dating apps
OK, maybe spam isn't the perfect word, but people are using ChatGPT to chat with matches on Tinder, letting AI take over the legwork of a conversation. It's not necessarily frightening, but it's pretty unsettling to think you could be chatting with an app instead of a potential partner. It's dating devolved into growth hacking.
4. Taking writers' jobs
This is just one example, but uhhhh, should I be worried about my job?
(The tweet embedded here appears to have been deleted.)
5. Phishing and scamming
It's tougher to prove that this one has already happened, but it stands to reason that ChatGPT, or similar AI writing tools, would be perfect for phishing. Phishing messages are often easy to spot because of their broken language, but with ChatGPT that can all be fixed. Experts have said this is a pretty clear use case for the app.
To see if this was actually plausible, we at Mashable asked it to fix the shaky English in a real scam email. Not only did it perform a nice cleanup in seconds, but in one of the outputs it also started blackmailing the fictional recipient without being asked to.
6. Fooling job recruiters
Everyone knows the struggle of applying to jobs. It's a seemingly endless, often demoralizing process that hopefully ends with the reward of a good job. But if you're out there applying, you might be losing your dream gig to an AI app. A consultancy firm reportedly found that applications written by ChatGPT beat out 80 percent of humans. It's easy to see how ChatGPT would nail all the keywords that catch recruiters' eyes or, more likely, pass through the filters set out by HR software.
via Tech News Digest