Q2 OSINT News: the DPRK’s Crypto Rackets, a Deepfake Tidal Wave, Florida Man Scammed, and More…

A warm welcome back to our OSINT digest! This is where we bring together all the stories concerning open-source intelligence and cybersecurity that have caught our eye and rework them in an engaging and accessible style. And the second quarter of 2023 has been eventful, to say the least!

In this issue, we discuss North Korea's latest schemes for laundering the cryptocurrency gains from its colossal cyberattack operations, rising concerns over the deceptively persuasive power of deepfakes and AI-generated media, the story of a Florida man who lost $480k to a pig-butchering scam, and more!

Inside North Korea’s Crypto Rackets

As a rogue state, North Korea faces huge problems obtaining the foreign currency it needs to finance its shadow imports and many other international dealings. The solution? Large-scale crypto heists conducted via cyberattacks, which, according to Infosecurity Magazine, account for around 50% of the DPRK's foreign currency income.

North Korea's advanced persistent threat (APT) groups are reported to number some 10,000 operatives, all conducting cyberattacks for financial gain. And we're talking big sums. Hackers in state employ have taken as much as $618M in a single heist, while the DPRK's cumulative theft proceeds for 2019 alone amounted to $2B, according to the UN.

The DPRK has directed much of its hacking attention at its unfortunate neighbor, South Korea, from which it has lifted $1.2B to date. While the hacking tactics are legion, a great deal of phishing has been carried out through an 'evil twin' of South Korea's most popular search engine and portal, Naver, a copy practically indistinguishable from the original.

Naver received more than double the users of Google

While North Korea has proved itself to be very effective at stealing crypto, this has in itself presented a challenge. Since blockchains consist of open data, blockchain analytics firms such as Chainalysis are continually tracing stolen assets. And so, the DPRK faces the possibility of having its funds frozen if it doesn't find a way to legitimize them.

This is where a new chapter in this story begins. It seems that the DPRK has indeed found—and is implementing—a workaround for this situation. In a sort of digital analog of Al Capone’s laundromats, North Korean APTs are instead using crypto mining services as a way to legitimize their illicit proceeds.

The idea behind this is that each 'old' digital coin has its history publicly recorded on a blockchain, making it difficult to hide stolen crypto. But a brand-new coin is a blank slate, and few mining services check the source of the crypto they receive in payment. So, by using their stolen crypto to pay for mining services, these APTs can effectively exchange 'dirty' coins for freshly mined, 'clean' ones.
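
To make the mechanism concrete, here is a minimal, purely illustrative Python sketch. The coins, addresses, and 'taint' check are all invented for this example; real chain analysis works over actual transactions and outputs, but the principle is the same: a stolen coin carries its past with it, while a freshly mined one does not.

```python
# Illustrative model of coin provenance. Everything here (coins, addresses,
# the taint check) is invented for this sketch; it is not real chain data.

FLAGGED = {"hacked_exchange"}  # addresses publicly known as theft victims


class Coin:
    """A coin that carries its full, publicly visible ownership history."""

    def __init__(self, history):
        self.history = list(history)

    def transfer(self, new_holder):
        # Every transfer is appended to the coin's public record.
        return Coin(self.history + [new_holder])


def is_tainted(coin):
    # A coin is 'dirty' if any past holder is a known theft victim.
    return any(holder in FLAGGED for holder in coin.history)


# A stolen coin: its recorded history starts at the victim exchange.
stolen = Coin(["hacked_exchange", "apt_wallet"])

# The APT pays a mining service with the stolen coin...
payment = stolen.transfer("mining_service")

# ...and receives a payout in a freshly mined coin with a blank history.
freshly_mined = Coin(["mining_service"]).transfer("apt_fresh_wallet")

print(is_tainted(payment))        # True: the theft is visible in the coin's record
print(is_tainted(freshly_mined))  # False: the new coin carries no link to the theft
```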

A Brave New Deepfake World

While generative AI is still basically in its infancy, security specialists are becoming increasingly concerned about the public’s exposure to a digital realm flooded with authentic-looking and hard-to-disprove fake imagery.

Up to now, fake news items have typically been straightforward textual constructs containing information of dubious reliability or logic. But thanks to AI, regular citizens, political activists, and criminals alike can harness models such as DALL-E, Midjourney, ChatGPT, and ElevenLabs to create ever more convincing fakes, destabilizing the veracity of the information space.

In particular, WIRED magazine has expressed concerns over the upcoming 2024 U.S. presidential elections. With rival candidates pulling out all the stops to get an edge over the competition, we could see a flood of political deepfakes designed to discredit particular candidates or sabotage their campaigns. The Verge has already reported that the Republican Party has launched an AI-generated attack ad against Biden.

While a discerning eye can in most cases distinguish between a real and constructed image, the cursory style of modern media consumption means the overall effect on opinion could still be significant. Moreover, an influx of fake media could also undermine the faith people have in more reliable outlets.

An AI-generated depiction of Trump’s arrest that went viral on Twitter

So, what can be done? Well, authentication marks such as C2PA cryptographic signatures, or more conventional digital watermarks, could help preserve authenticity by making copying and doctoring detectable. We can also fight AI with AI: special algorithms can be created to sweep the information space and flag fake or composite content.
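
To illustrate the signing idea, here is a bare-bones Python sketch using an Ed25519 signature over raw image bytes via the third-party cryptography package. It is a simplified stand-in for, not an implementation of, the actual C2PA specification, which embeds signed provenance manifests in the file's metadata.

```python
# Minimal sketch of cryptographic content authentication: the publisher signs
# the image bytes at creation time, and any later alteration breaks the check.
# This is a simplified stand-in, not the real C2PA manifest format.
# Requires the third-party 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a key pair and sign the original image bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

original_image = b"...raw bytes of the published image..."
signature = private_key.sign(original_image)


# Consumer side: verify the bytes against the publisher's public key.
def is_authentic(image_bytes: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, image_bytes)
        return True
    except InvalidSignature:
        return False


print(is_authentic(original_image, signature))                 # True
print(is_authentic(original_image + b"doctored", signature))   # False: any edit breaks the check
```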

We also shouldn't forget that AI-generated media can go beyond disrupting the informational landscape; it can be used in a more focused way to defraud companies and ordinary people. AI-generated voices can now replicate that of a CEO to extract funds from an organization, or that of a family member to extract bail-outs from relatives. A voice clone generated with ElevenLabs has even reportedly been used to get past a bank's voice-recognition security.

💡
So, how can we avoid a generative-AI dystopia? Open data has huge potential for the detection of fake media. The future of OSINT products may not only lie in the number of sources covered, but in the ability to distinguish human-made from AI-generated content. In fact, open-source intelligence tools and techniques have already proven effective against ChatGPT-involved scams, fakes, and attacks.

Florida Man Loses $480k to Pig-Butchering Scam on OkCupid

The ‘Florida Man’ has become a meme in the US. The term is synonymous with an individual caught up in some bizarre incident which surfaces on the local news. Well, the Florida man has now made his debut appearance in the Social Links digest.

We’ve already discussed pig-butchering scams in a previous digest. And this one really is a prime example of just how costly they can be. The crooks gained the Florida man’s trust after several months of messaging via the dating app OkCupid, and then offered him an ‘investment opportunity’ via a fake investment website. Eventually, our credulous Florida man sent over $480k before realizing he couldn’t claim his ‘profit.’

Cybernews was able to deconstruct the crypto flow of the money squeezed out of our unfortunate Florida man. Surprisingly, the funds ended up in a Binance account, even though they were laundered through a classic obfuscation scheme and the crypto exchange boasts a strict AML policy.

Victim to Binance in six small steps

Further Reading

Relatives of Shooting Victims Sue Major Social Media Platforms

Following the May 2022 shooting in Buffalo, New York, relatives of the victims have filed a lawsuit against major social media companies, including Facebook, for failing to hinder the spread of white supremacist ideology. They claim that popular networks such as Facebook and Reddit have become breeding grounds for radicalization, and are demanding stricter control over extremist content.

WIRED Releases Data Breach Guide for 2023

The people over at WIRED have put together a great guide to data breaches. It includes a history of the issue, its contemporary characteristics, and a fascinating projection of how this global cybersecurity problem may develop.

TechCrunch Champions OSINT for Due Diligence

OSINT has just received a serious plug from the well-known high-tech media outlet TechCrunch. In an article dedicated to how due diligence processes are falling behind the pace in an accelerating M&A market, the publication discusses the importance of OSINT. Key benefits considered include the additional flexibility and cost-effectiveness the discipline can bring to the due diligence process.

Mammoth Impersonation Campaign Wreaks Havoc on over 100 Brands

A colossal brand impersonation campaign, originating in June 2022, has recently come to light. In total, the operation boasted over 3000 web domains and 6000 sites, whose primary purpose was to lure shoppers into entering their card details to buy a pair of trainers or other apparel from their favorite brands. The sites were classic copycats, which looked official enough to trick consumers.

New Technology for Automated Dark Web Mining Unveiled

Researchers from the Korea Advanced Institute of Science and Technology (KAIST) and the data intelligence company S2W have joined forces to deliver DarkBERT. Based on natural language processing (NLP), this new model has been pre-trained on Dark Web data and may prove an indispensable tool for cybersecurity professionals who need to plumb the depths of the internet to extract cyber threat intelligence (CTI).

Term of the Quarter: Evil Twin

The concept of the 'evil twin' has long been established in popular culture, but it has also taken on a specific meaning in the parlance of cybersecurity, where it denotes any ploy in which hackers create a highly convincing copy of a popular resource through which they can steal personal data.

Once users log in or connect, the hackers can extract all kinds of data or take over a device, while the victim isn’t even aware that they’ve hit something unofficial and malicious. It all started with a trick centered around fake public Wi-Fi access points that copied the authentic ones. When victims connected, the doors were left wide open for the hackers.
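
As a purely defensive illustration, the sketch below flags one classic warning sign: the same network name (SSID) broadcast by more than one access point with conflicting security settings. The scan results are an invented data structure for this example, not the output of any particular tool; in practice they would come from your platform's Wi-Fi scanner.

```python
# Heuristic sketch for spotting a possible Wi-Fi 'evil twin': the same SSID
# advertised by access points with conflicting security settings. The scan
# results below are invented for illustration.
from collections import defaultdict

scan_results = [
    {"ssid": "Airport_Free_WiFi", "bssid": "aa:bb:cc:11:22:33", "security": "WPA2"},
    {"ssid": "Airport_Free_WiFi", "bssid": "de:ad:be:ef:00:01", "security": "OPEN"},
    {"ssid": "CoffeeShop_Guest",  "bssid": "aa:bb:cc:44:55:66", "security": "WPA2"},
]


def possible_evil_twins(results):
    """Group access points by SSID and flag any name that is broadcast with
    more than one security configuration, a classic sign of a rogue copy."""
    by_ssid = defaultdict(set)
    for ap in results:
        by_ssid[ap["ssid"]].add((ap["bssid"], ap["security"]))
    return {
        ssid: aps
        for ssid, aps in by_ssid.items()
        if len({security for _, security in aps}) > 1
    }


print(possible_evil_twins(scan_results))
# {'Airport_Free_WiFi': {('aa:bb:cc:11:22:33', 'WPA2'), ('de:ad:be:ef:00:01', 'OPEN')}}
```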

Fact of the Quarter

Responsible for $3.3B in losses in 2022 alone, pig-butchering scams have hit an all-time high. In just a few years, this new breed of con has already overtaken classic business email compromise scams.

Clash of the titan scams

And that rounds off our digest for the second quarter of 2023. We hope you enjoyed it. Keep your ear to the ground for all things OSINT by subscribing to our blog.

💡
With scams and cybercrime going through the roof, you need to stay protected. Book a free demo with us and find out how SL Professional can help safeguard you and your organization against costly data breaches.