Chatbots and Virtual Assistants to Tighten Up Data Privacy

Date: 08/14/2019

If you are one of the millions of consumers who use a voice-activated assistant in your home or through your smartphone, your personal data and activity may become more secure thanks to new data privacy regulations like the European Union’s GDPR and other recent privacy legislation. Virtual assistants and chatbot tools will now have to tighten up their security to protect your information.

Siri, Alexa and Google Home are just a few of the artificial intelligence tools that interact with live people every day. We rely on these devices for everything from looking up a phone number or a favorite song to controlling the utilities that power our homes. That makes them fertile ground for hackers who are looking for private information or for a picture of our day-to-day activities. The sheer amount of use these tools get is another reason AI data privacy is so important.

Even if you do not own or use a voice-activated virtual assistant, you have probably interacted with a chatbot online, perhaps without even knowing it. These tools use artificial intelligence to provide customer support for businesses. You may have visited a retailer’s website and found a “live chat” button to click, or had a pop-up box open with the words “Hi! How can I help you today?” on the screen. While some businesses still use human customer service reps to provide support, a growing number of companies rely on computers to carry on the conversation and solve problems.

Some experts are already at work helping developers create privacy-compliant AI tools that still have enough room to be useful. If your virtual assistant cannot store your shopping or search history, for example, how will it help you find that great brand of coffee you tried? How will it know what songs or movies to recommend when you tell it to play something “upbeat”? This kind of data collection is what makes AI-driven tools useful and easy to operate, rather than forcing human users to repeat themselves with every interaction.

The first step for developers is to draft a clear policy on what information is collected from users. From there, it is important to store that information securely. Some states already require chatbots to disclose that they are not actual people and to request permission to record or save the chat conversation. It is a good idea for businesses in every state to start working in that direction, since these data privacy laws are already being put in place. (Two brief sketches of what those steps can look like in code appear at the end of this post.)

On a more personal note, it is important that companies develop AI tools that can respond appropriately when a minor initiates the interaction. This can prevent a toddler from renting a movie on Amazon or a teenager from seeking critical medical advice from a robot.

The most important step is to remember that technology and innovation are fluid. There is no such thing as a one-and-done law or regulation where privacy and tech intersect. Any data privacy policies or upgrades, especially where AI and chatbots are concerned, must be revisited frequently to ensure they still comply with the law and protect the public.

If you are a victim of identity theft in need of assistance, you can receive free remediation services from the ITRC. Call one of our expert advisors toll-free at 888.400.5530 or LiveChat with us. For on-the-go assistance, check out the free ID Theft Help App from the ITRC.
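For developers wondering what the disclosure-and-consent step might look like in practice, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the function names, the greeting text and the age threshold do not come from any particular chatbot framework or state law.

```python
# Illustrative sketch: a chatbot greeting that discloses the bot is not
# human, asks permission before saving the transcript, and routes
# self-identified minors to a human agent instead of the bot.

MINIMUM_AGE = 13  # assumption: a COPPA-style age threshold


def start_session() -> dict:
    """Open a chat session only after disclosure and explicit consent."""
    print("Hi! I'm an automated assistant, not a live person.")  # bot disclosure
    answer = input("May we save this conversation to improve support? (yes/no) ")
    return {"save_transcript": answer.strip().lower() == "yes", "messages": []}


def log_message(session: dict, text: str) -> None:
    """Record a message only if the user consented to saving the chat."""
    if session["save_transcript"]:
        session["messages"].append(text)


def age_gate(stated_age: int) -> bool:
    """Return True if the user may proceed; otherwise hand off to a human."""
    if stated_age < MINIMUM_AGE:
        print("Please ask a parent or guardian, or hold for a human agent.")
        return False
    return True
```

The key design choice is that nothing is recorded until the user has explicitly agreed, and a minor never gets an automated answer to a sensitive question.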
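Once a user has consented to having the conversation saved, storing it securely is the other half of the job. This sketch assumes the widely used third-party cryptography package for Python (installed with pip install cryptography); a real deployment would also need key management, access controls and the retention limits spelled out in the privacy policy.

```python
# Illustrative sketch: encrypting a saved chat transcript at rest.
# In production the key would live in a secrets manager, never in the script.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # assumption: key management is handled elsewhere
cipher = Fernet(key)

transcript = "user: Where is my order?\nbot: Let me check that for you."
encrypted = cipher.encrypt(transcript.encode("utf-8"))  # store this, not the plain text

# Later, an authorized process can recover the conversation:
restored = cipher.decrypt(encrypted).decode("utf-8")
assert restored == transcript
```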



