Network of ghost GitHub accounts successfully distributes malware

Check Point researchers have unearthed an extensive network of GitHub accounts that they believe provides malware and phishing link Distribution-as-a-Service.

Set up and operated by a threat group the researchers dubbed Stargazer Goblin, the “Stargazers Ghost Network” is estimated to encompass over 3,000 active accounts, some created by the group and others hijacked.

“The network distributed all sorts of malware families, including Atlantida Stealer, Rhadamanthys, RisePro, Lumma Stealer, and RedLine,” they found.

The set-up

Threat actors are always coming up with new ways to deliver malware without being detected by victims, security software, and the organizations whose offerings and assets they are (mis)using.

“Previously, GitHub was used to distribute malicious software directly, with a malicious script downloading either raw encrypted scripting code or malicious executables,” Check Point researcher Antonis Terefos explained.

“Threat actors now operate a network of ‘Ghost’ accounts that distribute malware via malicious links on their repositories and encrypted archives as releases.”

How are they keeping the network running despite GitHub’s efforts to flag and suspend offending accounts and delete malicious repositories?

The threat actors are using a variety of tricks. As mentioned before, malicious files or archives are password-protected to stymie scanning solutions.
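Defenders who obtain one of these archives can at least confirm the protection programmatically. The minimal Python sketch below (the file path and function name are illustrative, not from the report) lists which entries of a ZIP file are encrypted by checking the encryption bit in each entry's general-purpose flags:

```python
import zipfile


def encrypted_members(path: str) -> list[str]:
    """Return the names of ZIP entries that are password-protected."""
    with zipfile.ZipFile(path) as zf:
        # Bit 0 of the general-purpose flag field marks an encrypted entry.
        return [info.filename for info in zf.infolist() if info.flag_bits & 0x1]


if __name__ == "__main__":
    # Hypothetical sample pulled from a suspicious GitHub release.
    print(encrypted_members("suspicious_release.zip"))
```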

Another trick is to divide “responsibilities” between various accounts: some accounts serve phishing templates with malicious download links to external websites or malicious repositories, others provide the image for the phishing template, and others still serve the malware (as a password-protected archive in a Release).


Accounts in the Stargazers Ghost Network fill various roles (Source: Check Point Research)

This makes it easier for the threat actors to get back to business as usual when that third category of accounts gets banned: they simply update the link in the first category of accounts to point to a new download site or a new active malicious release.
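Because the payloads ultimately land in repository Releases, one way to keep an eye on a suspect repository is to enumerate its release assets through the public GitHub REST API and flag archive files for closer inspection. The sketch below is a hedged illustration, not Check Point's tooling; pagination and rate limits are ignored, and the function name is made up:

```python
import requests


def archive_release_assets(owner: str, repo: str, token: str | None = None):
    """List release assets that look like archives and deserve a closer look."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    url = f"https://api.github.com/repos/{owner}/{repo}/releases"
    flagged = []
    for release in requests.get(url, headers=headers, timeout=30).json():
        for asset in release.get("assets", []):
            if asset["name"].lower().endswith((".zip", ".rar", ".7z")):
                flagged.append(
                    (release["tag_name"], asset["name"], asset["browser_download_url"])
                )
    return flagged
```

In practice a defender would feed the flagged download URLs into a sandbox or archive-inspection step rather than opening them directly.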

Finally, some ghost accounts perform a variety of other actions – such as starring, forking, and subscribing to malicious repositories – to make the other accounts appear legitimate to potential victims and to GitHub. These activities seem to be automated.
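The automated starring and forking suggests a simple defensive heuristic: look at who is starring a repository and flag accounts that are very new, have no original repositories, and have no followers. The Python sketch below is an assumption-laden illustration using the public GitHub API (the thresholds and helper name are arbitrary), not a method described in the report:

```python
import datetime as dt

import requests

API = "https://api.github.com"


def suspicious_stargazers(owner: str, repo: str, token: str | None = None,
                          max_age_days: int = 90) -> list[str]:
    """Flag stargazers that look like throwaway accounts: recently created,
    with no public repositories and no followers."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    stargazers = requests.get(f"{API}/repos/{owner}/{repo}/stargazers",
                              headers=headers, timeout=30).json()
    flagged = []
    for user in stargazers:
        profile = requests.get(f"{API}/users/{user['login']}",
                               headers=headers, timeout=30).json()
        created = dt.datetime.strptime(profile["created_at"], "%Y-%m-%dT%H:%M:%SZ")
        age_days = (dt.datetime.utcnow() - created).days
        if (age_days < max_age_days and profile["public_repos"] == 0
                and profile["followers"] == 0):
            flagged.append(user["login"])
    return flagged
```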

Malware distribution via 3,000 GitHub accounts

“In a short period of monitoring, we discovered more than 2,200 malicious repositories where ‘Ghost’ activities were occurring,” Terefos shared.

During four days in January 2024, the Stargazers Ghost Network distributed the Atlantida stealer to more than 1,300 victims.

“The malicious links to the GitHub repositories were possibly distributed via Discord channels. The repositories targeted various types of victims who wanted to increase their followers on YouTube, Twitch, and Instagram and also contained phishing templates for cracked software and other crypto-related activities,” he added. (The lures are always something that many users look for.)

Based on advertisements for the service found on dark web forums, the network has been up and running since July 2023, and possibly even earlier on a smaller scale.

“We believe that Stargazer Goblin created a universe of Ghost accounts operating across various platforms such as GitHub, Twitter, YouTube, Discord, Instagram, Facebook, and many others. Similar to GitHub, other platforms can be utilized to legitimize malicious phishing and distribute links and malware to victims through posts, repositories, videos, tweets, and channels, depending on the features each platform offers,” Terefos noted.

GitHub has already taken down over 1,500 repositories and related GitHub accounts, but in June 2024 there were still over 200 unique repositories pushing malicious links.

“Future Ghost accounts could potentially utilize Artificial Intelligence (AI) models to generate more targeted and diverse content, from text to images and videos. By considering targeted users’ replies, these AI-driven accounts could promote phishing material not only through standardized templates but also through customized responses tailored to real users’ needs and interactions,” he concluded.
