Fake IT resumes, misinfo, and more • The Register

Fake IT workers possibly linked to North Korea, Beijing-backed cyber operatives, and Russian malware slingers are among the baddies using ChatGPT for evil, according to OpenAI’s latest threat report.
The AI giant said it quashed 10 operations using its chatbot to conduct social engineering and cyber snooping campaigns, generate spammy social media content, and even develop a multi-stage malware campaign targeting people and organizations around the globe. Four of the 10 campaigns were likely of Chinese origin, and OpenAI banned all of the ChatGPT accounts associated with the malicious activities.
These included accounts linked to “multiple” fake IT worker campaigns, which used the language models to craft application materials for software engineering and other remote jobs.
“While we cannot determine the locations or nationalities of the threat actors, their behaviors were consistent with activity publicly attributed to IT worker schemes connected to North Korea (DPRK),” the report said [PDF]. “Some of the actors linked to these recent campaigns may have been employed as contractors by the core group of potential DPRK-linked threat actors to perform application tasks and operate hardware, including within the US.”
In addition to using AI to create fake, US-based personas with fabricated employment histories (as has been previously documented by OpenAI and other researchers), some of the newer campaigns attempted to auto-generate resumes.
Plus, OpenAI detected indicators of operators in Africa posing as job applicants, as well as efforts to recruit people in North America to run laptop farms — along the lines of an Arizona woman who was busted for her role in raking in millions for North Korea while allegedly scamming more than 300 US companies.
Russian trolls, malware devs
Other banned accounts originated in Russia, and the AI company’s threat hunters caught them doing the usual election trolling, in this case using the chatbot to generate German-language content about Germany’s 2025 federal election. The spammers used a Telegram channel with 1,755 subscribers and an X account with more than 27,000 followers to distribute their content, in one instance xeeting: “We urgently need a ‘DOGE ministry’ when the AfD finally takes office,” referring to the Alternative für Deutschland (AfD) party.
The Telegram channel regularly reported fake news stories and commentary lifted straight from a website that the French government linked to a Russian propaganda network called “Portal Kombat.”
In one of the more interesting operations, OpenAI banned a cluster of accounts operated by a Russian-speaking individual who used ChatGPT to develop Windows malware dubbed ScopeCreep and to set up command-and-control infrastructure.
The criminal then distributed the ScopeCreep malware via a publicly available code repository that spoofed a legitimate crosshair overlay tool (Crosshair X) for video games.
The malware itself, developed by repeatedly prompting ChatGPT to implement specific features, included a number of notable capabilities. It’s written in Go, and uses several tricks to avoid being detected by antivirus and other malware-stopping tools.
After the unsuspecting gamer runs the malware, it’s designed to escalate privileges, harvest browser-stored credentials, tokens, and cookies, and exfiltrate them to attacker-controlled infrastructure.
Despite the attacker’s success in using the LLM to help develop the malware, the info-stealing campaign itself didn’t get very far, we’re told. “Although this malware was likely active in the wild, with some samples appearing on VirusTotal, we did not see evidence of any widespread interest or distribution,” OpenAI wrote.
Chinese APTs abusing ChatGPT
Perhaps unsurprisingly, nearly half of the malicious operations likely originated in China.
The bulk of these used the AI models to generate a ton of social media posts and profile images across TikTok, X, Bluesky, Reddit, Facebook, and other websites. The content was written primarily in English and Chinese, with a focus on Taiwan, American tariffs and politics, and pro-Chinese Communist Party narratives, according to the report.
This time around, however, Chinese government-backed operators used ChatGPT to support open-source research, script tweaking, system troubleshooting, and software development. OpenAI noted that while this activity aligned with known APT infrastructure, the models didn’t provide capabilities beyond what’s available through public resources.
All of these now-banned accounts were associated with “multiple” unnamed PRC-backed hacking crews and used infrastructure operated by Keyhole Panda (aka APT5) and Vixen Panda (aka APT15).
In some of the more technical queries, the prompts “included mention of reNgine, an automated reconnaissance framework for web applications, and Selenium automation, designed to bypass login mechanisms and capture authorization tokens,” the research noted.
The ChatGPT interactions related to software development “included web and Android app development, and both C-language and Golang software. Infrastructure setup included configuring VPNs, software installation, Docker container deployments, and local LLM deployments such as DeepSeek.” ®