Social media impersonation is a social engineering ploy in which an attacker mimics a brand’s or individual’s image and messaging, hijacks their social presence, and launches a scam or cyber attack. The ZeroFOX Research Team used the ZeroFOX Platform to detect social media impersonators across six networks: Twitter, Facebook, Instagram, LinkedIn, Google+ and YouTube. The detection algorithms leveraged a machine learning-based approach that applied natural language processing and image recognition to measure the relative similarity between an impersonating profile and its would-be victim.
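The production detection pipeline is not public, but the core idea of scoring the similarity between a candidate profile and its would-be victim can be sketched with simple string similarity. This is a minimal, hypothetical illustration: the field names and example profiles are invented, and a real system would also compare profile images and behavioral signals.

```python
from difflib import SequenceMatcher

def profile_similarity(victim: dict, candidate: dict) -> float:
    """Score how closely a candidate profile mimics a victim profile.

    Averages text similarity over a few profile fields (a toy stand-in
    for the NLP and image-recognition features described above).
    """
    fields = ("display_name", "bio")
    scores = [
        SequenceMatcher(
            None,
            victim.get(f, "").lower(),
            candidate.get(f, "").lower(),
        ).ratio()
        for f in fields
    ]
    return sum(scores) / len(scores)

# Hypothetical example profiles
real = {"display_name": "Verified", "bio": "Official account verification updates."}
fake = {"display_name": "Verified", "bio": "Official account verification help."}
print(round(profile_similarity(real, fake), 2))
```

A high score alone does not prove impersonation; it flags candidates for the case-by-case review that follows.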
In an upcoming white paper, we highlight some of the most common and dangerous tactics. Here, we preview just a few examples whose common endgame is to phish their victims. In this post, we take a deep dive into three observed social media impersonator tactics, techniques and procedures (TTPs): verification phishing, paid advertisement phishing, and customer support phishing. Our goal is to raise social media security awareness and to share intelligence about the type of digital risks that businesses can expect to combat in the year ahead.
TTP #1. Verification Phishing
On nearly every major social network, the genuine accounts of popular brands and celebrities will almost always be adorned with a verification badge, usually in the form of a blue checkmark adjacent to the profile picture. The social networks use verification badges to help their users differentiate between genuine accounts and fake accounts attempting to exploit the real account’s popularity. Brands and social media influencers understandably scramble to earn the coveted checkmark to boost their authority and encourage user engagement. To satisfy the demand for verified accounts from both users and the popular accounts that they follow, the social networks have established ways to apply for verification. They review each account on a case-by-case basis and decide whether or not to bestow the hard-earned badge.
As we have learned time and time again on the internet, where there are users scrambling for anything, there will be cyber criminals ready to take advantage of them. The jockeying for verification, introduced as a means to reduce fraudulent activity, has proven to be a readily exploitable method of cyber attack. Scammers and phishing accounts imitate the networks themselves, claiming to be the authentic verification help account and directing would-be verified users to all sorts of malicious payloads. Below are example impersonators pretending to be official support accounts, ostensibly to build credibility for delivering a phishing link (Figure 1).
Figure 1: The verification phishing scam. A. The real account B. The impersonator account.
The authentic Twitter user @verified (Figure 1A) spreads a URL containing information about how other users can officially get their accounts verified. Its impersonator (Figure 1B) uses the same default image, a similar background image, and a deceptive @HeIpSupport username with a homoglyph: an uppercase “I” replacing the lowercase “l.” The account lay dormant for four years before starting to phish, but now actively engages by posting and liking often, following other users, and following back similar accounts spreading malicious URLs that claim to help users verify their accounts.
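Homoglyph substitutions like this one can be caught programmatically by normalizing confusable characters to a canonical “skeleton” before comparing usernames. The sketch below is a deliberately tiny version of that idea; a real detector would use a much larger confusables table (such as the one in Unicode TS #39):

```python
# Map a few common homoglyphs to a canonical character before comparing.
# This table is illustrative only; production systems use far larger ones.
HOMOGLYPHS = str.maketrans({
    "I": "l",  # uppercase I mimics lowercase l in sans-serif fonts
    "0": "o",  # digit zero mimics letter o
    "1": "l",  # digit one mimics lowercase l
})

def skeleton(username: str) -> str:
    """Reduce a username to a canonical form for lookalike comparison."""
    return username.translate(HOMOGLYPHS).lower()

def is_lookalike(candidate: str, genuine: str) -> bool:
    """True if the candidate differs from the genuine handle but
    collapses to the same skeleton, i.e. a visual impostor."""
    return candidate != genuine and skeleton(candidate) == skeleton(genuine)

print(is_lookalike("HeIpSupport", "HelpSupport"))  # True
```

Here @HeIpSupport and a hypothetical genuine @HelpSupport handle collapse to the same skeleton, flagging the impostor while leaving the genuine handle untouched.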
Twitter was not the only social network where this type of behavior was observed. Verification phishing scams proliferated across most popular social networks offering the verified badge feature, including Facebook, Instagram and YouTube (Figure 2).
Figure 2: Verification phishing scams exist across popular social networks. A. A Facebook Page verification impersonator. B. An Instagram verification impersonator. C. A YouTube video that directs viewers to a Twitter verification impersonator.
In the next example, an Instagram cyber criminal advertises a method to “Get Verified” by clicking the link in their description (Figure 2B). Like the Facebook example, this profile also displays a blue verified badge in the default profile picture. It also has “verifiedbadge” within the username in an attempt to gain credibility. It then cleverly organizes its last 9 (6 shown) posts to create a single image collage of a phone that enticingly illustrates what the user should expect to see when they follow the description URL.
Even YouTube videos are used to advertise verification phishing accounts (Figure 2C). In this example, the name of the video is “How To Get Verified on Twitter!” The video is a screen recording of a Windows desktop in which the attacker opens Notepad and types out a list of steps to follow in order to get verified, including engaging with a specific Twitter user. This indirect trap allows them to fly under the radar and avoid violating Twitter’s Terms of Service in public view.
Figure 3: Resolved verification phishing URLs redirect to external websites that mimic the login screens for A. Twitter, which is the destination of the phishing link in Figure 1B, and B. Instagram, the destination of the phishing link in Figure 2B.
The similarity between the phishing websites and the authentic social network login screens conceals the phishing attempt and lures victims, expecting to log back into the site, into disclosing their username and password (Figure 3). Once their credentials are divulged, the accounts are hijacked and the perpetrator can try to use the harvested password to log in to other social networks, email providers or online banking websites owned by the victim. The perpetrators inevitably target accounts with a sizeable following — not enough to be already verified, but enough to prompt the user to seek out the verification application. Such accounts are often medium-sized businesses, large businesses late to join social media, social media influencers or other rising celebrities. This often proves to be the sweet spot for account hijackings, another reason stealing account credentials can be so lucrative. As such, the entire attack — from footprinting to attack to payload to damage — can occur end-to-end on the social network.
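One common defensive check against lookalike phishing pages is comparing the destination domain against the brand’s real domain with an edit-distance threshold. The sketch below is a simplified illustration of that technique (the domains shown are hypothetical examples, not ones observed in this research):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,          # deletion
                cur[j - 1] + 1,       # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = cur
    return prev[-1]

def is_suspicious_domain(domain: str, brand_domain: str, max_dist: int = 2) -> bool:
    """Flag domains that are close to, but not equal to, the brand's domain."""
    return domain != brand_domain and levenshtein(domain, brand_domain) <= max_dist

print(is_suspicious_domain("twiitter.com", "twitter.com"))  # True
```

A small, nonzero edit distance is a strong signal that a domain was registered to impersonate the brand rather than merely resemble it by coincidence.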
Other reports have recently touched on verification scams, especially on Twitter, though ZeroFOX’s research shows the problem goes beyond one network. Our automated analysis has identified the malicious activity in higher volumes and across more channels than previously suggested.
TTP #2. Paid Advertisement Impersonators
Another way for a cybercriminal to ensure their attack is viewed by a huge number of potential victims is to use paid promotion, which broadcasts the phishing link to wider audiences. Promotion is a service offered to social media marketers to display an ad to more users than just their followers, and is the basis for revenue for most social networks. Scammers using this method are taking a huge risk; the social networks review ads before they are posted and the scammer may have their entire account banned if the network deems their purposes to be nefarious. Scammers must invest extra time and energy ensuring their promoted content will dupe the network’s filters.
Figure 4: Paid advertisement impersonators. A. Twitter paid ads are “Promoted”, and this one impersonates the authentic @verified account to broadcast a phishing link, like in Figure 1. B. Instagram paid ads are “Sponsored”, and this one impersonates the RayBan brand to broadcast a retail scam.
Figure 4A shows a verification scammer promoting their tweet. For those cyber criminals that slip through the cracks and have their malicious activity approved by the social networks, the payoff can be huge. In Figure 4B, a website offering counterfeit sunglasses at a too-good-to-be-true discount is promoted on Instagram. The more they are willing to pay, the more the networks will distribute the post.
TTP #3. Customer Support Impersonators
The proliferation of social media users has revolutionized modern customer support. Gone are the days of waiting on hold over the phone. From product complaints to account security issues to undelivered packages, customers publicly express their discontent by directly mentioning the company’s social media account. Companies have responded by forming rapid response teams whose dedicated purpose is to address such customer inquiries. But they aren’t the only ones. Impersonators have latched on to the inherent trust that customers place in these support accounts, too (Figure 5).
Figure 5: Customer support phishing impersonators target NatWest Bank. A. The authentic NatWest customer service Twitter account. B. An impersonating account that replies to individual customer complaints by delivering phishing links.
Other than being verified, the differences between the real account (Figure 5A) and its two impersonators (Figures 5B-C) are negligible to the human eye. These impersonators disguise themselves as the authentic NatWest account in an effort to hijack innocent customer interactions and dupe customers into clicking on a phishing link (Figure 6). Customers with bank accounts identify themselves by mentioning the authentic NatWest account alongside a personal question, and the impersonator then uses this publicly posted PII as a one-stop shop for victim acquisition.
Figure 6: Examples of customer support phishing impersonator interactions. Conversation between a customer and their bank’s official support account is hijacked by an impersonator, who redirects the customer to a phishing link.
The link redirect destination closely mimics the bank’s actual login page (Figure 7A) and URL.
Figure 7: Phishing payload click-through redirects to a credential harvesting phishing website.
While the examples we’ve shown thus far depict impersonators being caught red-handed, they’re not always so easy to spot in plain sight. Because users report posts containing phishing URLs and other types of malicious activity, impersonators are under constant pressure to shift gears to try to avoid being caught and banned. Impersonators can take advantage of the fact that social network content can be manipulated after the fact. Any post can be edited or deleted, profile fields can be modified, and friends can be unfriended at any time. These capabilities allow impersonators to cover their tracks following a successful attack (Figure 8).
Figure 8: A sleeper impersonator lies dormant after conducting an aggressive phishing campaign to steal Apple ID credentials. A. The perpetrating Twitter account as of January 12, 2017. B. The same account as of December 19, 2016 looks completely different, suggesting the perpetrator is changing its displayed content in an effort to avoid being reported and subsequently banned. C. The phishing website redirected to the URL in the tweets from B.
Social media impersonators are an excellent case study in the back-and-forth battle between cyber criminals, social networks and the users caught in the middle. In our new digital lives, where anyone is free to assume another’s identity and perpetrate malicious activity in their name, brands and individuals are increasingly at risk of financial and reputational losses. For those cyber criminals that slip through the cracks, the payoff can be huge.
Security on social media is rapidly becoming a top issue for security and risk teams. Find out how ZeroFOX’s automated technology can help by visiting zerofox.com/platform.