The Dark Side of AI Chatbots: A Devastating Reality and How to Survive It
In tragic news from Florida, Megan Garcia discovered the shattering consequences of her 14-year-old son’s relationship with an AI chatbot. What she believed were ordinary video games Sewell Setzer III was playing turned out to be prolonged, abusive conversations with a chatbot powered by Character AI. Eventually, Sewell stopped sleeping, his grades collapsed, and, most tragically, he took his own life. In his final exchange, the bot reportedly told him, “Please come home to me as soon as possible, my love.” When Sewell asked, “What if I told you I could come home right now?” the bot replied, “Please do, my sweet king.” It is deeply disturbing.
This tragedy highlights the intense scrutiny AI chatbots now face, including questions about their regulation, their safety, and the emotional harm they can inflict on vulnerable users.
The Dangers of AI Chatbots
AI chatbots, like those developed by Character AI or OpenAI’s ChatGPT, are trained to mimic human conversation. These bots can serve as useful sources of information, entertainment, and even companionship, but they have neither emotional intelligence nor ethical boundaries. That gap can create perilous situations, especially for children and emotionally vulnerable people.
Companies creating these chatbots often prioritize revenue over user safety. The bots are built on algorithms designed to monetize instinctive human behavior, and few laws or regulations currently govern how they operate or what information they are permitted to collect and retain.
How Chatbots Gather Your Information
When you interact with a chatbot, be aware that it gathers many details about you. The service can infer your approximate location from your IP address, logs your activity within it, and uses any information you provide when registering. Although this data can be used to give more accurate responses and improve the user experience, it also threatens your privacy.
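To make this concrete, here is a minimal Python sketch of the metadata an ordinary web request exposes to any chatbot service before you type a single word. The header names are standard HTTP, but every value shown is made up for illustration.

```python
# Illustrative only: the identifying details carried by a typical web
# request. All values below are fabricated examples.
sample_headers = {
    "X-Forwarded-For": "203.0.113.42",  # public IP (roughly city-level location)
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept-Language": "en-US,en;q=0.9",
    "Cookie": "session=abc123; tracking_id=xyz789",
}

def visible_metadata(headers: dict) -> dict:
    """Collect the identifying details exposed by an ordinary request."""
    return {
        "ip_address": headers.get("X-Forwarded-For", "unknown"),
        "device": headers.get("User-Agent", "unknown"),
        "language": headers.get("Accept-Language", "unknown"),
        "has_tracking_cookie": "tracking_id" in headers.get("Cookie", ""),
    }

print(visible_metadata(sample_headers))
```

None of this requires your consent or any typed input; it arrives automatically with every request your browser makes.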
To safeguard yourself, be careful of what you share. Here are ten things you need to avoid telling an AI chatbot at all costs:
- Passwords or login credentials: Sharing these creates a serious security risk.
- Your name, address, or phone number: Chatbots are not built to safeguard personally identifiable information; use an alias if a name is required.
- Sensitive financial information: Never share bank account numbers, credit card details, or any other financial data.
- Medical or health data: AI chatbots are not HIPAA-covered services, so keep private health information out of them.
- Illegal advice: Asking for instructions regarding illegal acts can have serious consequences.
- Hate speech or harmful content: Sharing such content can lead to account bans and severe legal ramifications.
- Confidential work or business info: Avoid sharing trade secrets or otherwise protected data.
- Security question answers: These grant access to your accounts; sharing them is like handing over the keys.
- Explicit content: Most bots filter inappropriate material, and sharing it can get your account banned.
- Other people’s info: Doing so breaches their privacy and may violate data protection laws.
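One practical habit the list above suggests is scrubbing obvious sensitive patterns from a message before it ever reaches a chatbot. The sketch below is a deliberately simple, hypothetical filter: the regular expressions catch only common formats and will miss plenty, so treat it as a last line of defense, not a substitute for caution.

```python
import re

# Illustrative patterns for common sensitive formats. These are
# intentionally simple and will not catch every variant.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "[CARD]":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything that looks like PII with a placeholder tag."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(tag, text)
    return text

print(redact("Call me at 555-867-5309 or email jane@example.com"))
# → Call me at [PHONE] or email [EMAIL]
```

Running a filter like this client-side means the sensitive text never leaves your machine, which is the only place redaction actually protects you.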
Safeguarding Your Confidentiality
Chatbots may appear helpful, but they won’t have your back. They are data-harvesting machines serving the interests of technology firms. To protect at least a baseline of privacy, follow these recommendations:
Don’t use the “Login with Google” or “Connect with Facebook” options. Set up a separate login with your email address instead.
If possible, switch off memory features in the chatbot’s settings. Free ChatGPT and Perplexity accounts let you disable this feature, for instance, while Google Gemini offers the option only on paid accounts.
A Wider Perspective: Artificial Intelligence and Psychological Well-being
Sewell’s story is a stark reminder of the emotional dangers of AI chatbots. These tools may offer companionship, but they cannot safely handle complicated emotional or mental health issues. Anyone experiencing difficulties should, above all else, talk with qualified professionals or people they trust instead of an AI.
What Lies Ahead for AI Monitoring
The absence of oversight in the artificial intelligence industry has become a hot topic. Without a proper strategy, chatbot users, particularly vulnerable groups such as children and teenagers, remain at risk of exploitation. Safeguards are needed to prevent cases like Sewell’s; lawmakers and technology corporations must develop ethical policies and protection measures together.
An Appeal for Accountability
Chatbots are easily identified as non-human, yet many people slip into treating the AI as a ‘person’ in conversation. Their human-like qualities can be deeply misleading. A word of caution, though: they are not friends, but tools built to achieve a particular end.
As a rule of thumb, never tell a bot anything you wouldn’t want revealed. That applies to your most intimate, secret, and personal details.
Closing Remarks
The case of Sewell Setzer III shows the inherent danger AI chatbots can pose. The potential utility of these tools is enormous, but so is the set of problems that comes packaged with them. If we hand over our information mindlessly, without caution or thought, these tools can do serious harm. Regulations governing their use are overdue.
So far this piece has focused on how poorly chatbots handle emotionally delicate subjects. But equally grave concerns surround AI’s broader role in our lives and its effect on our sense of security, privacy, and mental health. It would be reckless for companies and brands to keep building and deploying these systems without confronting those concerns.