19 January 2023

‘Please write this code’ and other problematic prompts for chatbots


Billy Hurley

In about 30 seconds, ChatGPT unfurled for security researcher David Maynor a Python 3 command-and-control (“C2”) server, a setup often used by hackers to control a network of compromised machines.

“I’ve been doing this for 22 years, right? This was like magic to me,” said Maynor, head of the threat intelligence group at Cybrary, a cybersecurity training platform.
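At its core, the pattern ChatGPT reproduced is simple: a C2 server is a listener that keeps track of machines that have “checked in” and relays an operator’s commands to them. The Python sketch below was written for this story as an illustration, not taken from ChatGPT’s output; it only forwards text to connected clients and executes nothing.

import socket
import threading

HOST, PORT = "127.0.0.1", 9001  # hypothetical listen address, for illustration only

clients = {}  # address -> open socket for each checked-in client
lock = threading.Lock()

def operator_loop():
    """Read operator commands and relay them to every checked-in client."""
    while True:
        command = input("c2> ")
        with lock:
            for conn in clients.values():
                conn.sendall(command.encode())  # sends text only; nothing is executed

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((HOST, PORT))
    server.listen()
    threading.Thread(target=operator_loop, daemon=True).start()
    while True:
        conn, addr = server.accept()  # a client "checks in"
        with lock:
            clients[addr] = conn
        print(f"[+] client checked in: {addr}")

if __name__ == "__main__":
    main()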

ChatGPT, and generative, text-based AI more broadly, has been having a moment, turning user prompts into everything from a biblical-style story about removing a peanut butter sandwich from a VCR to a “quippy” essay about itself.

Like any new, enchanting tech, however, the AI tool creates risks that organizations must watch out for, especially as they purchase software from third parties.

“If attacks against you are easier, attacks against your partners and your customers and your ecosystem, your value chain…is also easier,” said Jeff Pollard, VP and principal analyst at the consultancy Forrester.

What is ChatGPT? Why, it can tell you itself!

“As an AI trained by OpenAI, my primary goal is to provide accurate and detailed information in response to user questions,” the bot replied when we asked in December.

The tool from the San Francisco-based research company OpenAI sits alongside other products and features meant to quickly generate answers: Adobe recently announced generative-AI offerings for Photoshop, and GitHub Copilot, a programming tool that uses OpenAI’s Codex AI system, translates human language into code.
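To picture the comment-to-code pattern tools like Copilot follow, consider this hand-written illustration (not actual Copilot output): the developer types a natural-language comment, and the assistant proposes the function beneath it.

# Developer's prompt, written as a comment:
# "return the n largest files under a directory, sorted by size"

import os

def largest_files(directory: str, n: int) -> list[tuple[str, int]]:
    """Walk the directory tree and return (path, size) for the n biggest files."""
    sizes = []
    for root, _dirs, names in os.walk(directory):
        for name in names:
            path = os.path.join(root, name)
            if os.path.isfile(path):  # skip broken symlinks and the like
                sizes.append((path, os.path.getsize(path)))
    return sorted(sizes, key=lambda item: item[1], reverse=True)[:n]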

AI generates what hackers need, too—and that includes command-and-control servers and convincing phishing emails.

The next generation. Enterprises may find uses for generative AI, particularly in copywriting and marketing.

“We’re not talking about using these text-generation tools to create production-ready content, without any human oversight. What we’re talking about is using these text generation tools to, say, help a marketer be able to generate 10,000 product descriptions that have to have a specific personality for the company, without actually having to write all of them,” Forrester Analyst Rowan Curran told IT Brew.
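In practice, the workflow Curran describes is a loop over a product catalog with a brand-voice prompt template and a human review queue at the end. The sketch below assumes OpenAI’s current Python SDK and a hypothetical gpt-3.5-turbo deployment; the product data and field names are invented for illustration.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BRAND_VOICE = "playful, concise, never more than two sentences"
products = [
    {"name": "Trailhead Mug", "features": "16 oz, double-walled steel"},
    {"name": "Summit Socks", "features": "merino wool, reinforced heel"},
]

def draft_description(product: dict) -> str:
    """Ask the model for one on-brand draft; a human still reviews every draft."""
    prompt = (
        f"Write a product description in a {BRAND_VOICE} voice.\n"
        f"Product: {product['name']}\n"
        f"Features: {product['features']}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

drafts = [draft_description(p) for p in products]  # queued for human review, not publication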

And the average marketing team is likely not building a conversational AI, but buying one—a potentially more difficult situation for security teams.

“If you have a very high level of security maturity and you’re a highly effective security organization, that’s awesome. What do you do if your third parties aren’t? And now it’s gotten easier for adversaries to not just attack you, but to attack everyone you do business with?” Pollard said.

As enterprises consider bringing chatbots into the marketing department, they may also have to vet the third-party providers supplying them.

In a recent Forrester post, Curran recommended asking questions like:
- Did the training data come from a credible source?
- How can data sources be audited to spot bias?
- Are answers user-specific?
- Does the product have an audit trail? (One way to make this concrete is sketched below.)
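One concrete reading of that last question: log every prompt/response pair with a user ID and timestamp so generations can be traced after the fact. The sketch below is a generic pattern with invented field names, not any vendor’s actual product.

import json
import time
import uuid

def log_generation(user_id: str, prompt: str, response: str,
                   path: str = "genai_audit.jsonl") -> str:
    """Append one generation event to a JSON-lines audit log and return its ID."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,
        "prompt": prompt,
        "response": response,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]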

The checklist is still unwritten—and even GPT may not have all the answers.

“We are discovering the capabilities of the system itself, as well as the ways to exploit said system, in real time,” said Pollard.—BH
