Technology has changed nearly everything about the way we live, work, and connect, including how scammers operate.
Once upon a time, scams came in the form of clunky emails riddled with typos or too-good-to-be-true lottery wins. These days? The tactics are smoother, the impersonations more believable, and the emotional pressure far more calculated.
Why the shift?
Two letters: AI.
Artificial intelligence has supercharged scammers' abilities. Now they can use AI to copy voices with just seconds of audio, write messages pretending to be people you trust, and create fake websites that look exactly like real ones.
Even smart, cautious people can be fooled.
But here's the good news: understanding these tactics is your first line of defense. While scammers may have new tools, awareness and education are powerful countermeasures. With a few simple strategies, you can significantly reduce your risk and navigate the digital world with more confidence.
What Do These New Scams Look Like?
Here are a few examples that have become increasingly common:
Voice cloning scams where a loved one calls in a panic, only it's not actually them
Deepfakes that show public figures endorsing products or making false claims
Phishing texts and emails that sound like they're from your bank, Amazon, or even your employer
Fake websites that look exactly like your investment portal or payment processor
Hyper-personalized messages that reference real details from your life, often gathered through social media or public sources
Artificial Intelligence is a tremendous tool that is already being used for many positive purposes, including improved medical diagnostics, customer service, quality control in manufacturing, self-driving cars, inventory management for retail stores, smart grid management in the energy sector, virtual assistants, and more. But like any tool, it can also be used in harmful ways.
In just about every scam currently being perpetrated, criminals are using AI to make their deceptions more effective and convincing.
Phone Calls
AI has created additional opportunities for phone call scams. It can be used to remove foreign accents from callers’ voices, making them sound more like a local representative. AI is also being used to create robocall scripts that enable scammers to hold more persuasive conversations with their targeted victims.
And perhaps most alarmingly, AI voice cloning technology is being used in the family emergency scam. In this age-old con, a grandparent gets a call late at night from someone posing as a family member with a made-up emergency that requires money to be sent immediately. Now, using readily available AI voice cloning technology, a scammer can grab as little as 30 seconds of a person talking on YouTube, TikTok, or Instagram to create an AI version of that person’s voice, which is then used to call the grandparent or other family member.
To avoid this scam, every family should have a code word known only to the members of the family to verify a family member’s identity in the event of an emergency.
One creative tactic used by the Federal Trade Commission (FTC) is their Voice Cloning Challenge, which promotes the development of new strategies to protect people from AI voice cloning technology scams. The FTC is accepting submissions with strategies for preventing or detecting voice cloning by unauthorized users. The winner will receive $25,000. This is the fifth time that the FTC has used this type of challenge with a cash prize to address similar problems. Past challenges included uncovering security vulnerabilities in the Internet of Things and creating a defense against robocalls.
AI is also being used to battle those robocalls. Machine Learning algorithms can learn to recognize patterns in robocalls and then block the calls that match these patterns. In addition, AI Natural Language Processing (NLP) technology can be used to analyze the content of robocalls and block the call. AI also can be used to combat spoofing by analyzing the caller’s true phone number and blocking the call if spoofing is identified.
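To make that pattern-recognition idea concrete, here is a minimal sketch of how a bag-of-words classifier built with scikit-learn could flag likely robocalls from call transcripts. The handful of transcripts below is invented for illustration; a real system would train on thousands of labelled calls and combine the score with caller-ID and network signals.

```python
# Minimal sketch: flagging likely robocalls from call transcripts with a
# bag-of-words classifier (scikit-learn). The tiny training set below is
# purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

transcripts = [
    "your car warranty is about to expire press one to speak to an agent",
    "this is the irs you owe back taxes pay immediately with gift cards",
    "hi mom just calling to confirm dinner on sunday",
    "hello this is the dentist office reminding you of tomorrow's appointment",
]
labels = [1, 1, 0, 0]  # 1 = robocall/scam, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(transcripts, labels)

new_call = ["you have won a free cruise press nine now to claim your prize"]
print(model.predict(new_call))          # [1] -> flag or block the call
print(model.predict_proba(new_call))    # confidence scores
```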
Emails
Socially engineered spear phishing emails are now far worse because of AI. Scammers can create more sophisticated and effective emails that are more likely to convince a targeted victim to provide personal information that can lead to identity theft, click on a link that will download dangerous malware, or fall for a scam. In the past, phishing emails often lacked proper grammar, syntax, or spelling. Now, however, AI has solved that problem, making phishing emails more difficult to recognize.
Fortunately, AI can also be an effective tool in combating these AI-enhanced emails. Machine Learning algorithms can analyze vast amounts of data to identify patterns and trends associated with scams. These algorithms can not only be used to recognize indications of spear phishing, but can also continually learn, adapt, and predict new forms of fraudulent emails.
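As a rough illustration of that "continually learn and adapt" point, the sketch below uses scikit-learn's online-learning tools to fold newly labelled phishing messages into an existing classifier without retraining it from scratch. The example emails are invented for demonstration.

```python
# Minimal sketch of an online phishing-email classifier that is updated
# incrementally with each new batch of labelled messages.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
import numpy as np

vectorizer = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf = SGDClassifier(loss="log_loss")   # logistic regression trained online
classes = np.array([0, 1])             # 0 = legitimate, 1 = phishing

def update(emails, labels):
    """Fold a new batch of labelled emails into the model."""
    X = vectorizer.transform(emails)
    clf.partial_fit(X, labels, classes=classes)

# Initial batch...
update(
    ["Your account is locked, verify your password here",
     "Quarterly report attached, see section 3"],
    [1, 0],
)
# ...and a later batch reflecting a newly observed phishing style.
update(
    ["Urgent: wire transfer needed before 5pm, CEO travelling",
     "Lunch on Thursday?"],
    [1, 0],
)

suspect = vectorizer.transform(["Please confirm your login details immediately"])
print(clf.predict(suspect))  # [1] -> route to quarantine for review
```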
Social Media
Scammers have always used social media as a trusted delivery system for scams, mining posts for personal information that they could leverage against their victims. But with today’s advanced tools, criminals can use AI to set up social media bots, which are automated software applications programmed to appear to be real people. In the past, the lack of sophistication in some bots made them easy to identify, but now AI has enabled scammers to create large numbers of believable bots used to promote numerous scams, particularly involving cryptocurrencies. In the past, the gathering of personal information through social media could be a time-consuming effort, but now through AI, vast amounts of information can be harvested quickly and simply, and then used to craft effective messages.
Dating Apps
Finally, crooks can use AI to create fake profiles on multiple dating platforms and write effective and grammatically correct biographies. The scammers can also use AI to create photographs or deepfakes.
Much work remains to be done to strengthen defenses against AI-enhanced scams, but just as AI may have made scams easier to perpetrate, it also holds promise for helping us avoid them.
Deepfakes, voice cloning, AI-generated text, social media manipulation and multi-pronged attacks
Cyber crime is a very real and growing threat.
Statista estimates that the global cost of cyber crime will surge to $13.82 trillion (about €12.6 trillion) by 2028.
Much of that growth we can attribute to the rise of scammers harnessing the power of AI. In fact, AI-powered scams are becoming an issue that’s completely reshaping the landscape of online fraud.
We sat down with Camden Woollven, our head of AI, to find out more.
How are scammers using AI?
A real-life example involved three Canadian men tricked by deepfake videos of Justin Trudeau and Elon Musk. These men truly believed those videos to be real, and invested – and lost – $373,000 [€342,000].
More recently, as another example, Santander created deepfake videos with lower-profile people to raise awareness around how realistic these videos can be.
These types of deepfakes are extremely sophisticated – they’re not amateur attempts. And these aren’t one-off incidents, but part of a growing wave of AI-enhanced scams, which are causing real harm and eroding our trust in what we see and hear.
We’ve even seen the rise of ‘deepfake as a service’!
How might this affect organisations?
First, I want to be clear that the proliferation of deepfakes is affecting organisations too – not just individuals.
We’re a far cry away now from the Nigerian prince emails sent at random to countless people. Those ‘traditional’ scams were about casting your net wide and hoping for a few ‘bites’ from unsuspecting victims.
Now, you can receive an email that looks exactly like it’s from your boss, asking you to urgently transfer funds for a new project. The written language matches their style. It even contains a voicemail attachment that sounds just like them.
This is what we are dealing with in the new world of AI scams. It’s like criminals have traded in their fishing nets for an ultra-high-tech speargun.
Indiscriminate mass emails are being replaced by hyper-personalised, convincing fakes. Almost perfect fakes, in fact.
Can you describe in more detail what these fakes might look like, and how they might be delivered to the victim?
Deepfakes aren’t just face swaps – they’re videos that can make anyone say anything, with lip movements and facial expressions exactly like whoever’s being impersonated.
We also get voice cloning, which can be incredibly convincing. Back in 2019, the CEO of a UK energy firm believed he was on the phone to the chief executive of the parent company – so much so that he authorised a €220,000 transfer.
And this technology is only getting better. Now, you just need a two-minute recording to imitate someone, though the longer the clip, the more convincing the end result.
AI-generated text and chatbots are also getting scarily good. They can craft phishing emails that sound just like your colleague. They can impersonate customer service representatives.
Then there’s social media manipulation. You can create an army of accounts, with each and every one of them seemingly belonging to a real person, with a distinct personality. Cyber criminals can then use them to add credibility to their scams, sharing and engaging with posts intended to cause harm.
But it’s when organised criminals combine these techniques that they take the credibility of their scams to the next level.
How might criminals combine such tools or techniques?
Imagine the following scenario.
You get an email from your CEO. The text is AI-generated, about an urgent investment opportunity. It includes a link to a deepfake video announcement. It also includes a voicemail attachment, created through voice cloning, confirming the details of this ‘opportunity’.
Then, when you check social media, you see hundreds of posts – pure AI-driven social media manipulation – all talking about this amazing opportunity.
A multi-pronged attack like this uses every trick in the AI scammer’s book, making it incredibly hard to spot a fake. They’re not just picking the lock on your front door; they’re simultaneously coming through the windows, down the chimney and tunnelling under your house.
As these tools become more advanced and easily accessible, staying alert and informed is more crucial than ever.
How can we identify an AI scam?
Let’s break that down into four key areas:
1. Visual cues
With videos, look for things like slight inconsistencies – in facial movements, for example, or unnatural lighting or shadows. Also be alert to unusual blinking patterns or eye movements.
And if you’re ever unsure about a profile picture, using reverse image searches is always a good idea.
2. Audio cues
Listen for unnatural pauses or odd speech rhythms. Pay attention to any strange background noises, and inconsistencies in tone or emotion. Finally, always be wary of voices that sound slightly robotic or processed.
3. Behavioural cues
These aren’t too different from non-AI scams.
Always be cautious of unexpected messages or calls, especially those that create a sense of urgency. Question offers that seem too good to be true – always be sceptical of emotional manipulation or pressure to act quickly.
Finally, always double-check information through official channels.
4. Tools and technologies
AI detection software exists but – like any security measure – isn’t foolproof.
You can also use ‘regular’ technological solutions, like blockchain-based verification for important communications and MFA [multifactor authentication] for sensitive accounts.
What’s the best defence against AI-generated scams?
In my opinion, it isn’t technological, but critical thinking and a healthy dose of scepticism. I think staff awareness – and awareness among the general public – is our strongest weapon.
Particularly when dealing with an emotionally charged message, or when put under pressure to act quickly, question things. Verify stuff independently.
Why would that celebrity endorse this unusual thing? Do their official social media accounts verify it?
Trust your instincts. If something feels off, it probably is off. So, slow down. Take your time to evaluate things, and seek second opinions where you’re unsure – discuss your suspicions with a colleague, for example.
And, like with any social engineering attack, if you think it’s a phishing scam, report it. Escalate it through the proper channels.
In an era of rapid technological advancement, fraudsters are leveraging more sophisticated approaches. Now that Artificial Intelligence (AI) programs like ChatGPT are easily accessible, the FTC and other government security agencies have seen a rapid spike in successful cybercrime that uses this technology for fraudulent gain.
These attacks rose 135% in the first two months of 2023, a surge that coincided with the rising adoption of ChatGPT. Cybersecurity scams have evolved from simple email phishing attempts to complex AI-assisted attacks that deceive even the most vigilant individuals. By better understanding AI's role in these new tactics, businesses and individuals can stay one step ahead.
Voice-Based Deception with "Vishing"
While phishing as we know it tends to exist primarily in text-based scams, vishing (voice-phishing) takes advantage of voice-based interactions to deceive targets. Using voice synthesis technologies powered by AI, scammers can mimic the voices of trusted individuals or organizations over phone calls. Vishing scams play on the assumption that victims won’t second guess a request for money that (at first glance) seems to come from a reputable source, no matter how large the sum or how strange the request. Protect yourself by always calling that trusted person directly to confirm the request.
The Risk of Engaging With Spam Calls
While it may seem harmless to answer an unknown call, there are several reasons not to knowingly pick up spam phone calls just to toy with the fraudsters:
Personal safety: Interacting with scammers can potentially expose you to further risks, as they may have access to personal information or employ aggressive tactics. Even seemingly harmless information, like your name or address, can be valuable to scammers. Engaging with spam calls can inadvertently provide them with the personal details they need to perpetrate fraud.
Validation of phone numbers: Answering spam calls, even with the intention of toying with the callers, signals that your number is active and can lead to an increase in unwanted calls.
Identity Theft: In some cases, scammers might use the information they obtain to commit identity theft, causing potential long-term harm and distress.
Potential for Voice Recording: In a newer scam known as the "Can you hear me?" scam, the caller, typically an AI, asks the target if they can be heard. When the target responds affirmatively, the AI records the "Yes" response. This recorded confirmation can then be manipulated and used out of context, seemingly providing consent for unwanted services or fraudulent transactions.
In light of the "Can You Hear Me?" scam, it's clear that the advancements in AI technology present both immense opportunities and potential threats. This method underscores the power and potential misuse of AI in voice synthesis technology. By understanding the risks and adopting safe practices, we can protect ourselves and our businesses from the evolving threat of AI-augmented fraud.
Malware: the silent intruder
Malware, short for malicious software, is a wide range of harmful programs that infiltrate systems to cause damage or gain unauthorized access. In the past, malware was spread through infected attachments or compromised websites. However, cybercriminals now use AI-powered automation to deploy malware on a large scale and at high speed. The rise of fileless attacks makes it difficult to detect malware using traditional antivirus solutions.
Combating AI-assisted fraud: a proactive approach
Implementing a comprehensive approach is crucial when it comes to mitigating AI-assisted fraud:
Integrate AI-based fraud detection and antivirus solutions into your security infrastructure. Using machine learning techniques, these systems can identify unusual patterns and behaviors, allowing you to neutralize threats before they cause significant damage.
Conduct regular training sessions to educate your staff about the latest fraud tactics and prevention measures. Keeping your team informed and aware is crucial in staying one step ahead of fraudsters.
Utilize strong access controls and multi-factor authentication. By adding an extra layer of protection, you can ensure that only authorized individuals have access to sensitive information and resources (a minimal sketch of a one-time-password check follows this list).
Stay updated on the latest cybersecurity standards and comply with them diligently. Adhering to industry best practices and regulations is essential in safeguarding your business against AI-assisted fraud.
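To illustrate the multi-factor authentication point above, here is a minimal sketch of a time-based one-time password (TOTP) check, the mechanism behind most authenticator apps, using the pyotp library. In practice the shared secret is provisioned once per user and stored securely on both sides; here it is generated on the fly purely for demonstration.

```python
# Minimal sketch of time-based one-time passwords (TOTP), the most common
# second factor, using the pyotp library.
import pyotp

secret = pyotp.random_base32()          # shared once with the user's authenticator app
totp = pyotp.TOTP(secret)

print("Current code:", totp.now())      # what the authenticator app would display

# Server-side check of a code submitted by the user:
submitted = totp.now()                  # stand-in for user input
print("Accepted:", totp.verify(submitted))
```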
By understanding the evolving techniques employed by scammers in areas such as phishing, vishing and malware attacks, individuals and businesses can take proactive measures to protect themselves. Through a combination of vigilance, education and healthy skepticism we can navigate the ever-changing landscape of fraud prevention with confidence.
The rapid emergence of artificial intelligence models such as ChatGPT, which are based on natural language processing, is shaking up the landscape of online scams. What promise do these new tools hold? And what are the risks to users if they wind up in the hands of a scammer? We take a look at the possibilities offered by artificial intelligence in the field of online scams.
Are chatbots the new weapon of online scammers?
Although ChatGPT is currently programmed to avoid any direct involvement in malicious online scams, artificial intelligence tools could nonetheless revolutionise scammers’ practices.
As Johanne Ulloa, Director of Solutions Consulting at LexisNexis Risk Solutions, points out: ‘Chatbots can still be used in this way to improve the effectiveness of phishing e-mails, as there is nothing in place to stop texts being generated that ask a customer to log in to an online account for security reasons.’
Thanks to text-to-speech tools, artificial intelligence (AI) can also be used to bypass voice authentication systems. ‘These AI systems can “read” a text and reproduce a sampled voice. Some people have received messages from “relatives” whose voices had been spoofed using the same principle,’ emphasises Johanne Ulloa. This modus operandi is similar to deepfakes.
The widespread presence of chatbot applications in mobile application stores could also pose a threat by encouraging the spread of malicious applications.
Social engineering scams: the risk of scaling up
The widespread use of strong authentication systems is prompting scammers to change their practices and redouble their efforts to pull off social engineering scams.
This is a broad term that refers to the ‘scams used by criminals to exploit a person’s trust in order to obtain money directly or obtain confidential information to enable a subsequent crime’ (source: Interpol). In these schemes, the victims themselves carry out the strong authentication under the influence of the scammer over the telephone.
And this is not something that applies to ordinary users alone. ‘The methods used by some scammers are highly sophisticated, and can even be used to deceive high-level professionals such as financial directors,’ says Johanne Ulloa.
With this in mind, the use of artificial intelligence by scammers poses a real danger for companies and users alike. While scams are currently carried out on a peer-to-peer basis (one scammer, one victim), focusing the scammer’s attention on one target at a time, the use of chatbots and text-to-speech tools could well change the game by enabling mass deployment of these techniques. ‘Scam call centre platforms are likely to be replaced by conversational agents, which could enable scammers to scale things up,’ says Johanne Ulloa.
Here is a possible scenario:
The scammer sends out a phishing e-mail.
The victim fills in the form with their personal details, login and password, telephone number and the name of their bank advisor.
A bot calls the bank and records the advisor’s voice (it only takes a few seconds to reproduce the voice).
All the chatbot has to do then is call the victim using the bank advisor’s voice to convince them to authenticate something or make a bank transfer.
The conundrum of data confidentiality
But for Johanne Ulloa, ‘AI use in scams is still marginal, and the biggest risk with this type of tool at the moment is linked to data confidentiality.’ Indeed, the use of conversational agents may seem innocuous, and some users who are unaware of confidentiality issues may be inclined to communicate sensitive information.
How this information is processed by language models is still shrouded in mystery. What we do know is that this information is likely to be reused by the chatbot, as recently demonstrated by the case involving the developers of a major mobile phone company, or the case of a conversational agent vendor that had to temporarily disable its model due to a similar problem.
Between the internal limitations of the language model, users’ ability to identify sensitive data, the confidentiality of that data, and the education of individuals and companies, there is still much to be done in terms of prevention.
The use of AI to combat scams
But if this picture seems bleak, then rest assured that artificial intelligence tools also have their uses in the fight against online scams.
Though not in the form of conversational agents, AI is already being widely used to help detect and prevent scams through machine learning models. The great flexibility of these AI algorithms also enables them to adapt effectively to the evolution of online scamming methods. By allowing massive volumes of data to be analysed in real time and helping to identify patterns that could indicate fraudulent activity, ‘chatbots will be able to help improve the investigations carried out by analysts,’ states Johanne Ulloa.
Artificial intelligence technology can be misused for fraud, but it is also a tool accountants can use to detect fraud. Find out how to manage AI-generated fraud risks.
Artificial intelligence (AI), like any technology, can be misused. Malicious actors intent on committing fraud can use AI to create more realistic deceptions at a much faster clip.
How has the threat landscape changed?
Consider executive impersonation scams. The initial attacks involved, for example, an employee receiving an emergency email from their CEO requesting the immediate transfer of funds for an urgent transaction. The employee would be instructed to transfer the funds to an account controlled by criminals. In response, companies adapted their policies and trained employees to recognize the scam’s red flags.
With AI, criminals can generate deepfake voicemail or video messages from the CEO — or even live deepfakes of the CEO’s voice or video. That’s a next-level threat, but new types of fraud may not even be the biggest risk for CPA firms and other potential targets. The fear is that AI can help criminals automate old fraud schemes, increasing the speed, efficiency, and persistence of the attacks. In fact, it’s already happening. The FBI, through its Internet Crime Complaint Center, has since at least 2020 alerted the public to the risk of AI-driven scams.
HOW AI IS USED TO CREATE FRAUD SCHEMES
AI can be used to assist in perpetrating fraud schemes by:
Generating convincing false or misleading documents and data
Traditionally, fraudsters have created false documents, reports, data, and deceptive emails in support of their fraud schemes. These falsified documents and data often contained certain deficiencies, such as mathematical mistakes, fuzzy logos, nonsensical invoice numbers, and formatting differences, that could tip off the recipient.
Resourceful fraudsters can now use AI to create convincingly realistic documents and data such as invoices, contracts, reports, spreadsheets, and bank statements to support a fraud scheme. The more examples of legitimate documents available for an AI system to evaluate, the higher-quality fake the AI can generate. AI’s capabilities of generating documents — whether for fraudulent or legitimate purposes — are ever-increasing and now represent a dangerous tool in a fraudster’s arsenal.
Increasing the sophistication of traditional attacks
AI can be used to analyze large sets of publicly available information to make attacks more targeted and personal in nature.
Take, for example, a traditional phishing attack. It may be rare for someone to send funds or personal information to a “Nigerian prince” today. But what about a message that appears to come from a distressed family member seeking funds, complete with name, address, phone number, and other personal information? If there is enough publicly available information on social media or other sources, the attacks could be bolstered with accompanying photographs, videos, and even a voice mimicking the family member.
Increasing the speed and persistence of schemes
AI’s ability to process a large volume of data and perform tasks with incredible efficiency can make it a formidable problem for businesses and the public at large. Fraudsters understand that phishing schemes, spear-phishing, robocalls, and ransomware are a numbers game. The more attempts, the greater the likelihood of success.
Historically, these types of schemes required some human intervention or at least hours of programming and planning. AI can perform them with astonishing speed. Further, AI does not get bored or distracted, nor does it need to take breaks to eat or sleep. AI (at least in its current state) does not have a conscience and can carry out attacks without getting discouraged or feeling guilt.
Decreasing detectability
The proliferation of cybercrime is primarily due to two factors. First, it is profitable. By 2025, cybercrime is projected to cost more than $10 trillion worldwide. Second, cybercrime is a low-risk bargain compared with other crimes. Tracing cybercrimes back to a human perpetrator is much more difficult than catching a burglar who is physically present in the act of stealing.
AI ups the ante with automated schemes that leave almost no trace leading to a human perpetrator. AI can use publicly available programs designed to evade detection. It can also be designed to “think” for itself, learning from detection countermeasures and altering itself to avoid any successful detection techniques.
Use of generative adversarial networks
In a recent development, criminals have turned to generative adversarial networks (GANs), which use two neural networks; i.e., they are basically two AI systems working in conjunction. The criminals train one of the networks to generate false information, while the other is designed to detect the false information. They are used to train each other, continually creating better means of evading detection.
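For readers unfamiliar with the mechanics, the toy sketch below shows the generic adversarial training loop that defines a GAN, using PyTorch and a simple one-dimensional dataset. It illustrates only the generator-versus-discriminator principle described above; nothing in it is specific to fraud, and the architecture and data are invented for demonstration.

```python
# Toy illustration of adversarial training: a generator learns to produce
# samples that a discriminator cannot tell apart from "real" data
# (here, numbers drawn from a normal distribution centred at 4).
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    return torch.randn(n, 1) + 4.0  # "real" data

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # 1) Train the discriminator to separate real from generated samples.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should drift toward the real mean of 4.0.
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```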
A GROWING THREAT OUTLOOK
While these scenarios may sound dystopian, the techniques have already been successfully employed. According to court records, AI was allegedly used in January 2020 to mimic a United Arab Emirates company director’s voice to steal $35 million. Even prior to that, in the UK, fraudsters allegedly used convincing deepfakes to impersonate the CEO of an energy firm, resulting in a fraudulent transfer of $243,000.
These attacks didn’t use the latest technology. As AI technology improves, the sophistication of these types of attacks will only increase. In past months, for example, computer-manipulated images and audio of celebrities have appeared on social media selling phony services.
HOW TO USE AI TO DETECT AND PREVENT FRAUD
While AI has the potential to support fraudsters, advancements in AI technology also present opportunities for those in fraud prevention and detection professions, such as accountants and finance professionals.
Ways AI can be used to assist in detecting and preventing fraud include:
Pattern recognition
Data analytics have long been used to detect anomalies, or fraud indicators, in large datasets. AI has the potential to speed up and improve pattern recognition by analyzing massive sets of data quickly. AI and machine learning increase the ability for firms and finance departments to efficiently and effectively detect anomalies.
AI has the ability to self-learn, so anomalies determined to be false positives allow the system to train itself to place less emphasis on anomalies with similar attributes. Conversely, anomalies determined to be valid can help the system learn to place a greater emphasis on transactions or data with similar attributes. Banks and financial institutions have led the charge in this area, using machine learning capabilities to detect anomalous transactions and quickly avoid any potential additional fraudulent charges.
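As a minimal illustration of this kind of anomaly detection, the sketch below uses scikit-learn's IsolationForest on a handful of invented transactions. A production model would use many more features and far more history before flagging anything for review.

```python
# Minimal sketch of ML-based anomaly detection on transaction data.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical transactions: [amount in dollars, hour of day]
history = np.array([
    [120, 10], [85, 14], [230, 11], [60, 16], [150, 9],
    [95, 13], [310, 15], [75, 10], [180, 12], [140, 17],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

new_transactions = np.array([
    [135, 11],      # in line with history
    [9500, 3],      # large amount at 3 a.m. -> likely anomaly
])
print(model.predict(new_transactions))        # 1 = normal, -1 = anomaly
print(model.score_samples(new_transactions))  # lower scores = more anomalous
```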
Risk assessment
AI can be used to evaluate system and process security to identify potential gaps in internal controls. This can be done through a single factor analysis or a multifactor scoring model, which can locate blind spots in less than a second.
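A multifactor scoring model can be as simple as a weighted combination of control findings. The sketch below is purely illustrative; the factors, weights, and threshold are invented and would need to be calibrated against an organization's own control and incident data.

```python
# Illustrative multifactor risk-scoring model: combine per-control findings
# (0.0 = fine, 1.0 = fully deficient) into one weighted score.
WEIGHTS = {
    "missing_mfa": 0.30,
    "shared_admin_accounts": 0.25,
    "no_payment_callback_policy": 0.25,
    "stale_access_reviews": 0.20,
}

def risk_score(findings: dict) -> float:
    return sum(WEIGHTS[name] * findings.get(name, 0.0) for name in WEIGHTS)

findings = {
    "missing_mfa": 1.0,                # MFA not enforced on finance systems
    "no_payment_callback_policy": 0.5, # callback policy exists but unevenly applied
}
score = risk_score(findings)
print(f"Risk score: {score:.2f}", "-> investigate" if score >= 0.4 else "-> acceptable")
```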
Threat detection
AI can be used to detect and eliminate threats such as malicious code, sometimes even malicious AI code. Unfortunately, as is usually the case in the fraud space, the ability to detect fraud tends to lag behind the creativity of fraudsters, as it is difficult to detect a scam that has not yet been created. Still, AI tools available should be used to attempt to thwart bad actors.
Automation
Automation techniques similar to those fraudsters use to scale their scams can be employed in fraud detection by implementing AI-driven software. In the past, running real-time data analytics was practically and economically impossible. With emerging AI capabilities, these measures can be automated and run in a matter of seconds with little or no supervision or human intervention.
Accountants will face challenges in adjusting to increasingly sophisticated schemes that use AI technology. But it would be irresponsible to simply ignore or shun AI technology. Rather, accountants who embrace the emerging technology will enjoy advantages over competitors that do not.
WAYS TO MITIGATE THE RISK OF AI MISUSE
Establishing safeguards that address AI risks is the most important countermeasure against misuse.
Learn as much as possible about AI technology and its capabilities, both now and what may come in the future. Familiarity will increase your ability to manage AI-aided fraud risks.
Embrace technology that is useful in combatting fraud and performing forensic analyses. Accountants at one time had no choice but to manually enter data from various sources, but the advent of optical character recognition increased the speed and efficiency of data entry exponentially. AI has similar potential. (See the sidebar “Ways to Train Specialized AI Models.”)
Verify results that AI produces. The technology isn’t perfect. Despite constant improvements, it can get things wrong, particularly when it relies on data that is inherently false or misleading. If AI is used in an official capacity, statements relying on AI results need to be vetted for veracity.
Limit and/or control internal company data. AI relies on available data to perform its analysis. Data that cannot be obtained cannot be used against you. Also, limit and/or control who can see publicly posted data, such as social media posts. The more publicly available images, video, and voice recordings AI can turn to, the more convincing a deepfake it can produce.
Obtain data and supporting documentation from a reliable, third-party source. For example, bank statements obtained directly from the bank are far less likely to be altered using AI.
Establish company- and firm-specific standards of use and development of AI as soon as possible. Principles developed by the United Nations AI Advisory Body or NGOs, such as the Center for AI and Digital Policy’s Universal Guidelines for AI, can be leveraged in constructing corporate standards of use and development of AI.
The AI landscape is a complex and evolving field, and one that accounting professionals, particularly forensic accountants, should be conscious of. It is incumbent on those in accounting to be aware of the evolving landscape and to employ this technology in an ethical manner.
Ways to train specialized AI models
Two main methods exist today to create a special AI model solution for customized usage such as identifying fraudulent transactions, scams, and phishing attempts.
The most popular method is retrieval augmented generation (RAG), commonly referred to as the embedding method. The method first provides additional data to the AI model (after converting the data to vectors that computers can understand) and subsequently asks the model to search and respond based on the additional data provided. This technique does not require expensive hardware to retrain an existing AI model because the additional data still lives outside of the model itself.
The method is very effective in helping an existing AI model learn specialized data and knowledge supplied by a user. For example, accountants can feed additional datasets with fraud patterns and features to an open-source AI model via RAG to turn the model into a tireless fraud fighter in identifying fraudulent transactions.
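As a rough sketch of the retrieval step in RAG, the example below converts a few invented "fraud pattern" snippets to vectors, finds those most similar to a question, and builds the augmented prompt that would be sent to a language model. TF-IDF stands in here for the neural embedding model a production system would use.

```python
# Minimal sketch of retrieval augmented generation (RAG): vectorize reference
# snippets, retrieve the most relevant ones, and prepend them to the prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Invoices from new vendors with rounded amounts just under approval limits",
    "Wire transfer requests sent outside business hours marked extremely urgent",
    "Duplicate expense claims submitted days apart with identical amounts",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(knowledge_base)

question = "Is an urgent after-hours wire request for $9,900 suspicious?"
q_vector = vectorizer.transform([question])

# Retrieve the most relevant snippets...
scores = cosine_similarity(q_vector, doc_vectors)[0]
top = scores.argsort()[::-1][:2]
context = "\n".join(knowledge_base[i] for i in top)

# ...and build the prompt that would be sent to the language model.
prompt = f"Using only this context:\n{context}\n\nAnswer: {question}"
print(prompt)
```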
Fine-tuning is another way to retrain an existing model with additional data. Tech-savvy companies and accounting firms can use their special data to retrain an existing commercial-ready AI model to create a new model that can produce more accurate results by (1) adding the special domain knowledge into the AI model itself, and (2) narrowing the model’s responses and thus making the model response more concise. Fine-tuning requires a certain degree of AI development skills from developers and a fair amount of capital investment in hardware such as a graphics processing unit (GPU).