Banking Giants Warn That AI Could Fuel Cybercrime and Diminish Employee Morale

Wall Street banks are increasingly warning investors about the potential dangers tied to artificial intelligence as financial companies work through the challenges of adopting this technology.

In a report by Bloomberg News, several concerns were raised, including AI “hallucinations,” the use of AI by cybercriminals for malicious purposes, and the possible negative effects on employee morale. JPMorgan Chase, in a recent regulatory filing, explained that using AI could lead to “workforce displacement,” which might hurt employee morale and retention while also increasing competition for highly skilled workers.

Although banks have mentioned AI-related risks in past reports, new issues are arising as the financial industry continues to incorporate AI into its day-to-day operations. Balancing AI's potential to improve customer service against its potential to enable cybercrime is becoming more complex and requires careful handling.

“Having those right governing mechanisms in place to ensure that AI is being deployed in a way that’s safe, fair, and secure — that simply cannot be overlooked,” said Ben Shorten, finance, risk and compliance lead for banking and capital markets in North America at Accenture. “This is not a plug-and-play technology.”

An artificial intelligence (AI) sign sits illuminated at Mobile World Congress 2025. Photo by Cesc Maymo/Getty Images

Bloomberg’s report suggests that banks might be using technologies that depend on outdated or biased financial data. For instance, Citigroup has acknowledged the difficulties of using generative AI, warning that analysts might have to deal with “ineffective, inadequate, or faulty” results. In its 2024 annual report, the bank stated that these data issues could damage its reputation and negatively impact customers and financial performance.

PYMNTS recently discussed the growing connection between AI and cybercrime, pointing out that the use of AI technologies has led to more advanced cyberattacks in 2024. Some of these threats include ransomware, zero-day exploits, and supply chain attacks.

“It is essentially an adversarial game; criminals are out to make money and the [business] community needs to curtail that activity. What’s different now is that both sides are armed with some really impressive technology,” said Michael Shearer, chief solutions officer at Hawk, in an interview with PYMNTS.

In another report, PYMNTS looked at Amazon Web Services’ recent efforts to tackle AI hallucinations using automated reasoning, a method based on traditional logic principles. AWS Director of Product Management Mike Miller stressed that this development is especially important for heavily regulated industries like finance and healthcare, as it aims to make AI outputs more reliable.

As major financial institutions like JPMorgan and Citigroup continue to integrate AI into their operations, the demand for thorough risk management strategies is growing. The changing landscape requires careful oversight to capture the technology’s benefits while mitigating its most significant risks.

Sources: Bloomberg, JPMorgan Chase, PYMNTS, PYMNTS (2)
