Sears AI Chatbot Exposes Sensitive Customer Data Online

An investigation reveals how Sears' AI chatbot leaked customer contact info and personal details, putting users at risk of phishing and fraud.
A serious security flaw has been discovered in Sears' AI-powered customer service chatbot, exposing sensitive information about users who interacted with the system. According to a recent investigation, the chatbot's conversations with customers, including their contact details and other personal data, were publicly accessible on the web, putting those individuals at risk of phishing attacks and fraud.
Chatbot Conversations Leaked Online
The issue came to light when security researchers found that Sears' AI chatbot was storing full transcripts of its customer interactions without access controls, leaving them viewable to anyone on the internet. These conversations often contained users' phone numbers, email addresses, and other personal information that bad actors could easily exploit.
Threat of Phishing and Fraud
The exposed data poses a major threat: scammers can use details gleaned from the chatbot conversations to launch highly targeted phishing campaigns or attempt other forms of fraud. Victims may receive emails, calls, or messages that appear legitimate, tricking them into revealing even more sensitive information or making fraudulent payments.
Source: Wired