Monday, July 4, 2022

Scams and cryptocurrency can go hand in hand – here’s how they work and what to watch out for

Yaniv Hanoch, University of Southampton and Stacey Wood, Scripps College

When one of our students told us they were going to drop out of college in August 2021, it wasn’t the first time we’d heard of someone ending their studies prematurely.

What was new, though, was the reason. The student had become a victim of a cryptocurrency scam and had lost all their money – including a bank loan – leaving them not just broke, but in debt. The experience was financially and psychologically traumatic, to say the least.

This student, unfortunately, is not alone. Currently there are hundreds of millions of cryptocurrency owners, with estimates predicting further rapid growth. As the number of people owning cryptocurrencies has increased, so has the number of scam victims.

We study behavioral economics and psychology – and recently published a book about the rising problem of fraud, scams and financial abuse. There are reasons why cryptocurrency scams are so prevalent. And there are steps you can take to reduce your chances of becoming a victim.

Crypto takes off

Scams are not a recent phenomenon, with stories about them dating back to biblical times. What has fundamentally changed is the ease with which scammers can reach millions, if not billions, of individuals at the press of a button. The internet and other technologies have simply changed the rules of the game, with cryptocurrencies coming to epitomize the leading edge of these new cybercrime opportunities.

Cryptocurrencies – which are decentralized, digital currencies that use cryptography to create anonymous transactions – were originally driven by “cypherpunks,” individuals concerned with privacy. But they have expanded to capture the minds and pockets of everyday people and criminals alike, especially during the COVID-19 pandemic, when the price of various cryptocurrencies shot up and cryptocurrencies became more mainstream. Scammers capitalized on their popularity. The pandemic also caused a disruption to mainstream business, leading to greater reliance on alternatives such as cryptocurrencies.

A January 2022 report by Chainalysis, a blockchain data platform, suggests that in 2021 close to US$14 billion was scammed from investors using cryptocurrencies.

For example, in 2021, two brothers from South Africa managed to defraud investors of $3.6 billion through a cryptocurrency investment platform. In February 2022, the FBI announced it had arrested a couple who used a fake cryptocurrency platform to defraud investors of another $3.6 billion.

You might wonder how they did it.

Fake investments

There are two main types of cryptocurrency scams that tend to target different populations.

One targets cryptocurrency investors, who tend to be active traders holding risky portfolios. They are mostly younger investors, under 35, who earn high incomes, are well educated and work in engineering, finance or IT. In these types of frauds, scammers create fake coins or fake exchanges.

A recent example is SQUID, a cryptocurrency coin named after the TV drama “Squid Game.” After the new coin skyrocketed in price, its creators simply disappeared with the money.

A variation on this scam involves enticing investors to be among the first to purchase a new cryptocurrency – a process called an initial coin offering – with promises of large and fast returns. But unlike the SQUID offering, no coins are ever issued, and would-be investors are left empty-handed. In fact, many initial coin offerings turn out to be fake, but because of the complex and evolving nature of these new coins and technologies, even educated, experienced investors can be fooled.

As with all risky financial ventures, anyone considering buying cryptocurrency should follow the age-old advice to thoroughly research the offer. Who is behind the offering? What is known about the company? Is a white paper, an informational document issued by a company outlining the features of its product, available?

In the SQUID case, one warning sign was that investors who had bought the coins were unable to sell them. The SQUID website was also riddled with grammatical errors, which is typical of many scams.

Shakedown payments

The second basic type of cryptocurrency scam simply uses cryptocurrency as the payment method to transfer funds from victims to scammers. All ages and demographics can be targets. These include ransomware cases, romance scams, computer repair scams, sextortion cases, Ponzi schemes and the like. Scammers are simply capitalizing on the anonymous nature of cryptocurrencies to hide their identities and evade consequences.

In the recent past, scammers would request wire transfers or gift cards to receive money – as they are irreversible, anonymous and untraceable. However, such payment methods do require potential victims to leave their homes, where they might encounter a third party who can intervene and possibly stop them. Crypto, on the other hand, can be purchased from anywhere at any time.

Indeed, Bitcoin has become the most common currency demanded in ransomware cases, requested in close to 98% of them. According to the U.K. National Cyber Security Centre, sextortion scams often ask individuals to pay in Bitcoin and other cryptocurrencies. Romance scams targeting younger adults are increasingly using cryptocurrency as part of the scam.

If someone is asking you to transfer money to them via cryptocurrency, you should see a giant red flag.

The Wild West

In the field of financial exploitation, more work has been done to study and educate elderly scam victims, because of the high levels of vulnerability in this group. Research has identified common traits that make someone especially vulnerable to scam solicitations. They include differences in cognitive ability, education, risk-taking and self-control.

Of course, younger adults can also be vulnerable and indeed are becoming victims, too. There is a clear need to broaden education campaigns to include all age groups, including young, educated, well-off investors. We believe authorities need to step up and employ new methods of protection. For example, the regulations that currently apply to financial advice and products could be extended to the cryptocurrency environment. Data scientists also need to better track and trace fraudulent activities.

Cryptocurrency scams are especially painful because the probability of retrieving lost funds is close to zero. For now, cryptocurrencies have no oversight. They are simply the Wild West of the financial world.

Yaniv Hanoch, Associate Professor in Risk Management, University of Southampton and Stacey Wood, Professor of Psychology, Scripps College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tuesday, June 28, 2022

Google’s powerful AI spotlights a human cognitive glitch: Mistaking fluent speech for fluent thought

Kyle Mahowald, The University of Texas at Austin College of Liberal Arts and Anna A. Ivanova, Massachusetts Institute of Technology (MIT)

When you read a sentence like this one, your past experience tells you that it’s written by a thinking, feeling human. And, in this case, there is indeed a human typing these words: [Hi, there!] But these days, some sentences that appear remarkably humanlike are actually generated by artificial intelligence systems trained on massive amounts of human text.

People are so accustomed to assuming that fluent language comes from a thinking, feeling human that evidence to the contrary can be difficult to wrap your head around. How are people likely to navigate this relatively uncharted territory? Because of a persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do.

Thus, it is perhaps unsurprising that a former Google engineer recently claimed that Google’s AI system LaMDA has a sense of self because it can eloquently generate text about its purported feelings. This event and the subsequent media coverage led to a number of rightly skeptical articles and posts about the claim that computational models of human language are sentient, meaning capable of thinking and feeling and experiencing.

The question of what it would mean for an AI model to be sentient is complicated (see, for instance, our colleague’s take), and our goal here is not to settle it. But as language researchers, we can use our work in cognitive science and linguistics to explain why it is all too easy for humans to fall into the cognitive trap of thinking that an entity that can use language fluently is sentient, conscious or intelligent.

Using AI to generate humanlike language

Text generated by models like Google’s LaMDA can be hard to distinguish from text written by humans. This impressive achievement is a result of a decadeslong program to build models that generate grammatical, meaningful language.

Early versions dating back to at least the 1950s, known as n-gram models, simply counted up occurrences of specific phrases and used them to guess what words were likely to occur in particular contexts. For instance, it’s easy to know that “peanut butter and jelly” is a more likely phrase than “peanut butter and pineapples.” If you have enough English text, you will see the phrase “peanut butter and jelly” again and again but might never see the phrase “peanut butter and pineapples.”
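
To make the counting idea concrete, here is a minimal, hypothetical sketch in Python of a trigram model built from a toy corpus; the tiny "corpus" string stands in for the enormous body of real English text such models were trained on.

```python
# Toy trigram model: count which word follows each two-word context,
# then predict the most frequent follower. Illustrative only.
from collections import Counter, defaultdict

corpus = (
    "peanut butter and jelly is a classic . "
    "peanut butter and jelly sandwiches are popular . "
    "she likes peanut butter and honey ."
).split()

# Map each two-word context to a counter of the words seen after it.
following = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    following[(w1, w2)][w3] += 1

def predict(w1, w2):
    """Return the word most often seen after the context (w1, w2)."""
    candidates = following[(w1, w2)]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict("butter", "and"))  # 'jelly' (seen twice, vs. 'honey' once)
```

With enough real text, the same counts would make "jelly" vastly more likely than "pineapples" after "peanut butter and."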

Today’s models, sets of data and rules that approximate human language, differ from these early attempts in several important ways. First, they are trained on essentially the entire internet. Second, they can learn relationships between words that are far apart, not just words that are neighbors. Third, they are tuned by a huge number of internal “knobs” – so many that it is hard for even the engineers who design them to understand why they generate one sequence of words rather than another.

The models’ task, however, remains the same as in the 1950s: determine which word is likely to come next. Today, they are so good at this task that almost all sentences they generate seem fluid and grammatical.
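
Neither LaMDA nor GPT-3 is publicly downloadable, but the next-word task itself can be demonstrated with the openly available GPT-2 model via the Hugging Face transformers library. Treat this as an illustrative sketch, not a description of how Google's or OpenAI's systems are implemented.

```python
# Show a language model's top candidates for the next token after a prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("Peanut butter and", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Turn the scores at the final position into next-token probabilities.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p:.3f}")
```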

Peanut butter and pineapples?

We asked a large language model, GPT-3, to complete the sentence “Peanut butter and pineapples___”. It said: “Peanut butter and pineapples are a great combination. The sweet and savory flavors of peanut butter and pineapple complement each other perfectly.” If a person said this, one might infer that they had tried peanut butter and pineapple together, formed an opinion and shared it with the reader.

But how did GPT-3 come up with this paragraph? By generating a word that fit the context we provided. And then another one. And then another one. The model never saw, touched or tasted pineapples – it just processed all the texts on the internet that mention them. And yet reading this paragraph can lead the human mind – even that of a Google engineer – to imagine GPT-3 as an intelligent being that can reason about peanut butter and pineapple dishes.
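
That word-by-word loop can be sketched directly, again with the openly available GPT-2 standing in for GPT-3, which is only reachable through OpenAI's API. Each pass through the loop asks the model for a distribution over next tokens, samples one, appends it and repeats:

```python
# Autoregressive sampling: generate one token, append it, repeat.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer("Peanut butter and pineapples", return_tensors="pt").input_ids
for _ in range(20):  # extend the prompt by twenty tokens, one at a time
    with torch.no_grad():
        logits = model(ids).logits
    next_id = torch.multinomial(torch.softmax(logits[0, -1], dim=-1), 1)
    ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(ids[0]))
```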

The human brain is hardwired to infer intentions behind words. Every time you engage in conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings and beliefs.

The process of jumping from words to the mental model is seamless, getting triggered every time you receive a fully fledged sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions.

However, in the case of AI systems, it misfires – building a mental model out of thin air.

A little more probing can reveal the severity of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continued: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also smooth and creamy, which helps to offset the feather’s texture.”

The text in this case is as fluent as our example with pineapples, but this time the model is saying something decidedly less sensible. One begins to suspect that GPT-3 has never actually tried peanut butter and feathers.

Ascribing intelligence to machines, denying it to humans

A sad irony is that the same cognitive bias that makes people ascribe humanity to GPT-3 can cause them to treat actual humans in inhumane ways. Sociocultural linguistics – the study of language in its social and cultural context – shows that assuming an overly tight link between fluent expression and fluent thinking can lead to bias against people who speak differently.

For instance, people with a foreign accent are often perceived as less intelligent and are less likely to get the jobs they are qualified for. Similar biases exist against speakers of dialects that are not considered prestigious, such as Southern English in the U.S., against deaf people using sign languages and against people with speech impediments such as stuttering.

These biases are deeply harmful, often lead to racist and sexist assumptions, and have been shown again and again to be unfounded.

Fluent language alone does not imply humanity

Will AI ever become sentient? This question requires deep consideration, and indeed philosophers have pondered it for decades. What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.

Kyle Mahowald, Assistant Professor of Linguistics, The University of Texas at Austin College of Liberal Arts and Anna A. Ivanova, PhD Candidate in Brain and Cognitive Sciences, Massachusetts Institute of Technology (MIT)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Monday, June 20, 2022

Giving away my cyber security books to empower the next generation of professionals

The cyber security industry is struggling with a severe lack of talent right now, and even though this is one of the most exciting fields in which to start a career, many people encounter barriers when trying to gain the initial knowledge or figure out whether cyber security is right for them.

So effective immediately, I'm giving away all three of my cyber security books with any LeanPub Reader Membership.

Three great books, one low price. This bundle includes...

  • "Death by Identity Theft", a guide to protecting you and your family against identity theft;
  • "Hacking of the Free", a guide to digital threats to our elections
  • "Cyber Security: Rules to Live By", an introductory primer to cyber security concepts

As a cyber security professional with over fifteen years' experience, I couldn't be happier that LeanPub has enabled this opportunity for its authors and readers. Authors are still compensated for their work, and the number of books available to readers at an extremely low price point grows as more authors join the cause.

This is an exciting time for the cyber security industry and the tech industry as a whole. LeanPub is helping break down barriers to entry for technology careers, and the timing of this shift is perfect. With LeanPub, we can truly help empower the next generation of cyber security professionals.

Ken is a cyber security professional with over 15 years of experience. All opinions are his own and do not reflect the opinions of his employer or clients.
