Data Privacy Day is held annually on Jan. 28 to create awareness about the importance of respecting privacy, safeguarding data and enabling trust. Here are a few resources to help you be more #PrivacyAware from the National Cyber Security Alliance – plus, learn how you can get involved this Data Privacy Day.

Established in North America in 2008 as an extension of a similar event in Europe, it’s held each year on January 28th in honor of the signing of Convention 108, the “first legally binding international treaty dealing with privacy and data protection.”

The National Cyber Security Alliance (NCSA) oversees this annual event, and as such the organization plays host to a number of important community awareness-raising activities. While personal data protection and privacy are year-round causes, Data Privacy Day serves as a great way to kick off a twelve-month commitment to security.

The theme for this year’s observance is “Respecting Privacy, Safeguarding Data and Enabling Trust,” all three of which are critical needs for citizens and businesses alike. StaySafeOnline.org contains a wealth of information on protecting yourself, and its Data Privacy Day resources include ways to get involved at home, at work, and in your community. There are some simple measures you can take, like changing your profile picture on your social media accounts to get the conversation started, as well as some far more involved activities, like volunteering to take the message to schools, community centers, churches, and more.

One event you don’t want to miss is the 2017 Data Privacy Day Event Live From Twitter HQ. Register now for exciting TED-style talks and segments including “Scams, ID Theft and Fraud, Oh My – And Ways to Fight Back” with ITRC CEO, Eva Velasquez.

To find out more about the many ways to get involved this year, check out these resources and make plans to attend the #ChatSTC Twitter chat in order to get valuable privacy tips. More importantly, use this time to plan how you will incorporate data privacy into your everyday life, and how you will make it a lifelong good habit.

How much information are you putting out there? It’s probably too much. We are here to help you stop sharing Too Much Information. Sign up for the TMI Weekly.

Some of the hottest techno gadgets of the holiday season have now been opened and are positioned somewhere in your home just waiting for further instructions. If you’re one of the many shoppers who tried to purchase one of this year’s hot-ticket gift items only to find out they’re out of stock until after the New Year, that might not be the worst news to some privacy-savvy consumers.

Several companies have released home models of their virtual assistants (VAs), as well as third-party accessories to go with them. Amazon’s Echo and Google’s Home are both compatible with their own lines of smartphone-driven, Wi-Fi-enabled outlets and appliances. With the right setup, you can tell your VA to turn on the lights in the living room, open or close the garage door, play your favorite song, or look up the showtimes for a newly released movie. There are literally hundreds of functions that the devices can perform, depending on the model and the accessories you’ve chosen.

How do these devices work so well? They rely on a complicated artificial intelligence (AI) interface, but there’s an even more mundane mechanism: they’re recording and analyzing everything you say.

In order to understand your preferences and commands, these mini audio sponges soak up your requests and send them to servers where engineers can tweak the devices’ capabilities based on your voice patterns. They can also look at whether or not your command was successful—as in, “Alexa, play The Nutcracker Suite by Tchaikovsky”—and help the device learn from its mistakes. If your Amazon Echo played the Pentatonix version and you had to correct it, the device can “learn” which one you really wanted for the future.

Here’s an actual interaction with an Amazon Echo device from December 23, 2016:

  • “Alexa, play Dance of the Toy Soldiers by Pentatonix.”
  • “I can’t find dance songs by Pentatonix.”
  • “Alexa, play March of the Toy Soldiers by Pentatonix.”
  • “I can’t find the song March of the Toy Soldiers by Pentatonix.”
  • “Alexa, play The Nutcracker by Pentatonix.”
  • “I can’t find the album The Nutcracker by Pentatonix.”
  • “Alexa, play Waltz of the Sugar Plum Fairy by Pentatonix.”
  • “Dance of the Sugar Plum Fairy by Pentatonix.” And the music begins.

There are several shifts in the dynamic during that “conversation.” The device knew to look for a specific song by calling up previous information stored on its servers. When the command switched to “Nutcracker,” the device knew that the word referred to an album instead of a song; unfortunately, the group didn’t release an entire album called The Nutcracker, only one song.

However, when the command was to play the semi-accurate song title—in this case, “Waltz of the Sugar Plum Fairy” instead of the correct title, “Dance of the Sugar Plum Fairy”—the device was able to make that adjustment without further input from the user. How did it learn to do that? Through its AI machine learning, something that is improved every single time any user around the world issues a command.
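Alexa’s actual matching pipeline is proprietary, but the kind of fuzzy title matching described above can be sketched with Python’s standard-library difflib module (the catalog below is a hypothetical song list, not Amazon’s):

```python
import difflib

# Hypothetical catalog of Pentatonix tracks the assistant could search
catalog = [
    "Dance of the Sugar Plum Fairy",
    "Carol of the Bells",
    "That's Christmas to Me",
]

# The user asks for a semi-accurate title...
request = "Waltz of the Sugar Plum Fairy"

# ...and close-match scoring picks the nearest real title anyway
match = difflib.get_close_matches(request, catalog, n=1, cutoff=0.6)
print(match)  # ['Dance of the Sugar Plum Fairy']
```

Real assistants layer speech recognition and learned ranking on top of this, but the principle is the same: score each candidate against the request and play the best fit.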

This listening and recording has some privacy experts on edge, mostly due to the potential for as-yet-unknown ramifications. Are servers storing our voice patterns and connecting those voices to our user accounts? Absolutely; it’s how these fairly expensive devices are improved upon. Are hackers or the government using our vocal patterns against us? No. Could they ever do such a thing? Well, that we’re less sure of.

If you take issue with having your voice stored and analyzed, your only current course of action is to not purchase one of these home devices. If you do opt for a virtual assistant, it’s also very important that you read the fine print and make sure you’re comfortable with the terms and agreements before you install it.

Questions about identity theft? Contact the ITRC toll-free at (888) 400-5530 or on-the-go with the new IDTheftHelp app for iOS and Android.

In a follow-up to our recent ITRC blog “Did you get a snoop for Christmas?”, I wanted to share a personal story that many of you may relate to.

As someone who is very privacy-centric, I love exploring and experiencing many of the gadgets and goodies that are available to make our lives easier and more fun.  My husband enjoys this as well, so at the last minute I decided to get him an Echo as a gift.

I had already done my homework on how the device works, including what data it gathers and stores.  I read up on perspectives from both privacy advocates and technology fans, so I felt equipped to handle this responsibility.  I had been forewarned that I was about to conduct an experiment with my privacy expectations in my own home.  But as a seasoned professional, I figured I could weather the storm knowing that I had invited this stranger into my home.

Ronald Arkin, robot ethicist and director of the Mobile Robot Laboratory at the Georgia Institute of Technology, once said about the pros and cons of technology, “You can choose to stay out, paddle, or plunge in.”  While I’m not usually one to plunge in, staying out isn’t the right choice for me either.  How can I provide authentic opinions if I don’t have my own personal experiences?

I knew ahead of time that Alexa would be recording everything that we asked her.  I knew that the data would be stored and crunched, then used for the purposes disclosed in the privacy statements and T&Cs (Terms & Conditions). I also knew that there were definitely future uses that I couldn’t even fathom yet.  But I told myself I wouldn’t be compromising my comfort or security by asking for music or a weather report or how many grams are in an ounce.  It did not escape me that this is still data about me that is being collected, but I decided that this was all in keeping with my “paddle in” approach.  I was still cautious in advance, knowing that the full ramifications are yet to be understood.

We decided to put Alexa in our upstairs office.  Time spent in that room is largely silent anyway, since we are working online, reading, etc.  Of course, for the occasional call, we could always remember to mute the Alexa microphone just to be certain.  Since it always has to be “on” in order to hear the wake word “Alexa,” muting it was something I would need to do whenever I was in my office.  Knowing me, I would be just as likely to cut the power if need be.

This is all good in theory.  Now for the practical experience part: fast forward to December 26 when I was sitting downstairs in my living room talking with my son.  We needed batteries for the remote and I asked him if he could pick some up at the store.  Then I casually said, “Or I could ask Alexa to buy batteries.” We both laughed until I could hear her—from UPSTAIRS, remember—rattling off all of the different choices of battery types.  We stopped laughing and looked at each other.  It was like being caught complaining about an elderly relative that you thought was out of earshot.  It was creepy.

To be sure, I did say the wake word Alexa, and it was a request that would be in keeping with what she was designed for, but I wasn’t talking to her, I was talking about her.  And she was listening, FROM UPSTAIRS, for Pete’s sake.  In less than 24 hours, I had the real experience I was seeking.  And it taught me a lot.

Returning her may not be an option as I’m already guilty of personifying her, and she was a gift after all.  But after the visceral experience of having an eavesdropper lurking upstairs, she may be relegated to the garage, where the only thing she will hear is me complaining about doing the laundry.


Eva Velasquez is the president and chief executive officer of the Identity Theft Resource Center.

Follow Eva on Twitter @ITRCCEO 

The ride-share company Uber has faced no shortage of controversy over the past few years.

From the earliest days of the company’s launch, there have been more than a few complaints about privacy and safety. For those unfamiliar with Uber, here’s a rundown: customers install the app on their smartphones and register for an account. They simply tap the app and let it find their location, then an Uber driver comes to pick them up and take them where they need to go. On the plus side, it’s a convenient transportation method that lets customers pay through a stored payment method; the downside is that the drivers are not actually Uber employees.

Early on, Uber came under fire for tracking its customers’ locations via the app, including while they were en route using the service. Another tracking feature lets customers track friends and family members, with their permission, and pay for rides for those in their Family Profile. You might argue that both of these features exist for the riders’ safety. But a new policy has left customers and security experts alike scratching their heads, wondering what justification Uber finds for tracking its customers… after they’re dropped off.

According to the company’s new policy, riders will be tracked for five minutes after their rides have ended. Customers will exit the vehicle and continue on their way, while their apps continue to track their locations and report that information back to Uber. Without any logical safety reason for this extended tracking, critics have been left to wonder what its purpose could be.

One of the chief complaints is that this is an all-or-nothing feature. The latest update to the Uber app gives users only two choices: be tracked all the time (even though the company says it only plans to track for those additional five minutes) or never be tracked, which renders the app useless since drivers won’t know your pickup location.

Whether you live in an area serviced by Uber or not, this still affects you. How? Because security advocates see this as another “baby step” toward eroding personal privacy. If you’re not reading the terms and conditions when you create a new account of any kind, you may be opening yourself up to this kind of business practice. If a company has openly stated its intention to track you or sell your data but you didn’t read the terms, then it’s not actually doing anything “wrong,” at least not from a legal standpoint.

Fortunately, there is a workaround for this kind of tracking, and it’s to disable the apps on your smartphone so that they’re not refreshing in the background, not using your location settings, and not tagging your location. It does make your smartphone a little harder to use—meaning you have to type in your location instead of having it automatically found, for example—but if you’re concerned about your safety and security, it’s a small price to pay.

As always, anyone who believes their identity has been stolen or their personal data has been compromised is invited to connect with the ITRC through our toll-free call center at (888) 400-5530, or on-the-go with the new IDTheftHelp app for iOS and Android.

It’s been called “the biggest lie on the internet,” or rather, the one that most of us tell when we click to agree that we’ve read the entire terms and conditions. The reality about those terms, though, is that you may be handing over a lot of your privacy when you check that little box.

Just for fun, an IT company ran a social experiment surrounding terms and conditions. It set up an online sweepstakes that claimed to give away a brand-new computer, with the terms and conditions spelled out. The company found that 100% of the entrants had not read the terms, despite checking the box that they had. How did the company know? One of the clauses stated that in order to be eligible, entrants had to submit a photograph of their shoes. No one submitted the photo, yet they agreed and entered anyway.

It’s important to know what many terms and conditions can include, especially if you’ll be activating new accounts in the coming weeks to go along with any holiday gifts. It’s also important to learn how to find those terms at a later date, just in case you went ahead and checked the box without reading them thoroughly.

1. Photographs and videos

If you upload your holiday photos to social media sites, you might have granted permission for the site to use them. You would still own the photos, but you would no longer control what the site does with them, and you would not make any money if it chose to use your photo for advertising purposes.

Be aware of a very common Facebook hoax: Copying and pasting a status onto your wall that states Facebook cannot use your photos is not a valid demand. You opted to use Facebook when you signed up for a free account… you don’t get to tell them they can’t share your photos.

Remember, even if the company itself doesn’t want to use your pictures, that doesn’t mean other social media users won’t copy and paste them for their own purposes. Even changing your settings to Private only means there’s no “share” button; it doesn’t mean the image can’t be copied.

2. Location-based monitoring

Geotagging has gotten a lot of coverage in recent years, and technology manufacturers have responded by giving their customers the option to turn off the geotagging feature. But if you have certain options turned on, like the option to find your device if it’s lost or stolen, you’ve just agreed that the monitoring takes place anytime the phone is turned on. Presumably, that means that your phone in your pocket or purse is transmitting your location to the server that enables the find feature.

Other services, like in-car navigation and entertainment systems, also track your location when activated. This is how the service can pinpoint your location and give you directions, or send the police in the event of an accident.

3. Automatic renewal

Some companies offer free trials to let consumers experience the service before they pay, but be careful. Some terms and conditions not only state that your service will renew automatically for a fee if you don’t cancel before the cutoff date, but also that you’ll be billed annually, meaning you just signed up for a year’s service. You can still choose to cancel the service, but whether you receive a refund depends on the terms and conditions.

4. Selling your information

Many companies reserve the right to use or sell your base-level information—things like your name, mailing address, or email address—and this isn’t necessarily a bad thing. After all, if you opened an account with a website, logically you might be interested in similar offers from other websites. At the same time, if the account you created was free, someone has to pay the bills. Selling your information to advertisers is one way that small companies can keep the lights on without having to charge you a lot of money for an account.

Typically, you’ll be offered the chance to opt out of newsletters, promotional mailings, or outside offers. If not, you might have granted them permission to sell your information when you agreed to the terms.

5. These terms may change

If you are one of the tech users who meticulously reads the service agreement before checking the box, great! But you still might find yourself surprised by sudden changes. That’s because many service agreements leave the door open for changes down the road; after all, if they discover that a facet of their company isn’t working out, they need to be able to fix it.

That’s why a lot of companies will send out emailed updates to their terms and conditions if they make any changes. You may find that you have to check the box again the next time you use that site or software, but you may also discover that the changes took effect whether you read the email or not. It all depends on the change and how the notification went out.

 

Connect with the ITRC through our toll-free call center at (888) 400-5530, or on-the-go with the new IDTheftHelp app for iOS and Android.

ID-theft criminals may be the ultimate Grinch this holiday season as kids’ smart toys create vulnerabilities for hacking, data theft and cyberattacks.

While the Black Friday and Cyber Monday cyberthreats are behind us, we cannot let our guard down, as ID-theft criminals continue to target new access points through the Internet of Things — for example, children’s toys. Mattel’s Barbie, the iconic doll series coveted by millions of children, has now become “smart,” and that sadly means there’s a dark side. The cybervulnerability of smart toys is all too real.

Smart toys, similar to other smart devices and appliances, connect to your home’s Wi-Fi network. This means that if compromised, criminals have a conduit into all activities on your home network. ID-theft criminals may then attempt to garner your personally identifiable information, access your home security system or listen to personal conversations through baby monitors or even the new Hello Barbie doll.

According to the Huffington Post, the new Hello Barbie doll, which connects to the Web to provide answers to your children’s questions, “uses a microphone, voice recognition software and artificial intelligence to enable a call-and-response function similar to Siri or Google Now. A free smartphone app that connects the toy to a user’s Wi-Fi network brings this Barbie into a class of technology often referred to as the Internet of Things, or IoT.”

To its credit, Mattel, the company that markets the Barbie brand, has partnered with entertainment company ToyTalk to develop the doll’s information-security technology to minimize potential security issues and protect consumers. However, it’s not just smart toys that create an opportunity for cybercriminals to steal our children’s information, such as names, ages and even photographs. It also happens through direct attacks on organizations where parents register their children’s information, such as VTech, a recent data-breach victim with millions of records compromised.

The VTech website advises that “4.8 million customer (parent) accounts and 6.3 million related kid profiles worldwide are affected, which includes approximately 1.2 million Kid Connect parent accounts. In addition, there are 235,000 parent and 227,000 kids accounts in PlanetVTech. Kid profiles, unlike account profiles, only include name, gender and birthdate.”

Understand that anytime you create accounts for your children for educational products or services, both your and your children’s information is a target for hackers. This is because hackers are looking for information such as your e-mail address or passwords. Simply obtaining your e-mail address allows hackers to engage in spear-phishing attacks, which have proven incredibly effective. Hackers also realize that people oftentimes use the same passwords for multiple sites, and they can take a stolen password and try to drain your bank accounts.

Mark’s most important: Don’t let cybercriminals steal your happy holidays by using strong and up-to-date Wi-Fi security along with strong password management.

Mark Pribish is vice president and ID-theft practice leader at Merchants Information Solutions Inc., an ID theft-background screening company based in Phoenix. Contact him at markpribish@merchantsinfo.com.

This article was originally published on AZcentral.com and republished with the author’s permission.

There’s disturbing news for anyone who relies on a vehicle to get around: the National Safety Council has reported that motor vehicle deaths increased by 8% in 2015 over the previous year, marking the largest single-year increase in 50 years.

“Over the last year at the state level, the NSC estimates Oregon (27%), Georgia (22%), Florida (18%), and South Carolina (16%) all experienced increases in fatalities, while only 13 states showed improvement.”

One of the chief culprits that experts blame for the traffic deaths is distracted driving, which encompasses everything from texting and updating social media to attempting to post on Snapchat while driving, as in the case of one fatality involving the filter that displays the miles per hour the person was traveling when the image was taken. Several states have already enacted legislation that bans certain behaviors while driving in order to combat this epidemic.

Law enforcement may have a new weapon in the fight against distracted driving, but it has privacy experts taking a somewhat cautious stance. Called a Textalyzer after the word “breathalyzer,” and already introduced in a bill before the New York state legislature, it’s a device that allows officers to scan drivers’ phones to see if they were in use prior to a crash.

The issue of law enforcement interacting with citizens’ phones has already been a hotly contested topic, one that was heard by the Supreme Court back in 2014. The Court ruled that citizens’ smartphones contain just as much personal information, photos, and correspondence as their homes, and therefore require a warrant before they can be searched. The Textalyzer, however, doesn’t look at the contents of the activity but instead is only supposed to report whether the phone was being used in violation of the law.

There’s another privacy consideration, though: the potential for hacking. As with any new technology, the full scope of the potential for identity theft has to be considered before it can be unleashed in the public sector. The Internet of Things has already taught us that the “unknowns” behind new technology can have serious ramifications for privacy and cybercrime.

Anyone who believes their identity has been stolen or their personal data has been compromised is invited to connect with the ITRC through our toll-free call center at (888) 400-5530, or on-the-go with the new IDTheftHelp app for iOS and Android.

As parents, you may have to pull off an especially tricky balancing act around the holidays. It can be hard to navigate the commercials that entice kids with the hottest holiday toys while still trying to stick to a budget, and no one wants to imagine the dilemma of not being able to find that one present a child has his heart set on. However, while you focus this year on maintaining some level of sanity in your holiday shopping, there’s another important factor to keep in mind.

Safety is always a consideration when you’re buying any toy for a child. Is it a choking hazard? Does it contain harmful chemicals in the plastic? Is it age-appropriate and will it encourage healthy play? Those are all important points to keep in mind, but it’s absolutely vital in the digital age to make cybersecurity a part of your shopping list as well.

Last year, holiday headlines went haywire with news of a Mattel product that posed a security risk to children. The company’s Hello Barbie interactive doll recorded children’s conversations with the doll and transmitted them over Wi-Fi to a third party. The goal of the data gathering was to make the doll more personal to its owner and more intuitive in its responses; most artificial intelligence relies on “machine learning,” after all, to become more useful and accurate. But many parents and security advocates balked at the notion that a third party—and potentially any hackers who worked their way in—were listening in and recording underage children.

Also around this time last year, educational technology toy manufacturer VTech suffered a data breach in which hackers stole the user profiles of millions of consumers’ Learning Lodge accounts. The information included adult account holders’ names, email addresses and passwords, secret questions and answers, IP addresses, mailing addresses, and more. Even more alarming, the hacker also stole the names, genders, birth dates, and even photographs of the users’ children.

So this holiday shopping season, it’s important for parents to understand all the potential security risks and mechanisms that drive the toys they plan to buy. We tend to discover a security vulnerability after the fact, but there are some common sense questions you can ask before you make that purchase:

1. Does the toy require a wifi connection, Bluetooth connection, or downloaded app to make it work?

2. Does the toy require you to make an account in order to use it?

3. Does it record, store, or share any information about you or your child?

4. If it’s installed on your computer or mobile device, what permissions does it require, like access to your camera and microphone?

5. If it’s installed, are there optional permissions it wants, like access to your contacts list or photo albums?

It’s important to find these things out before you buy so that you can make a determination about the product’s potential for harm. If you’re confident that your child is old enough to understand the security requirements and can follow your rules for safe use, then you’ll feel better knowing the risks and knowing that you’ve addressed them.

Questions about identity theft? Connect with the ITRC through our toll-free call center at (888) 400-5530, live chat feature or on-the-go through our IDTheftHelp app for iOS and Android.

The latest tech craze in the realm of GPS mapping might be more of a timesaver than a world changer, but that hasn’t stopped customers from hurrying to jump on board. And anyone who’s been late getting out the door due to some misplaced car keys won’t have any trouble seeing the allure.

There’s a category of new devices on the market from a growing number of providers, and they make everyday life a little easier. These small tags come in different shapes, colors, and sizes depending on the manufacturer, but they all let you attach them to a typical object like your car keys, then track that object on your smartphone. Once you begin the search for the keys or the TV remote or any other small item, the tag may emit a small alarm (depending on the company) and will provide its location on the accompanying app through your mobile device.

Given their typical size, they can be placed on practically anything that gets misplaced easily. But some researchers are more afraid of the potential for harm and the loss of consumers’ privacy than the inconvenience of your child misplacing his lunchbox. First, there were the concerns over the lax security that some of the apps had. More importantly, as these tags rely on GPS coordinates for their location and sync to your device over Bluetooth to provide that data, it doesn’t take a criminal mastermind to think up some possible—although currently unlikely—scenarios in which your tracking tag can turn on you.

One of the chief complaints from researchers, regardless of the manufacturer, has been the open pairing with Bluetooth. Your smartphone might be paired with the tag, but what’s to stop someone in your vicinity (such as at the mall) from searching for Bluetooth devices on his phone, “forgetting” or removing your device, then pairing your car keys with his phone in order to track you? We’d have to ask ourselves why someone would want to do that, but we don’t have to wonder if they actually can do it, because the answer is yes.

Researchers found other reasons for concern depending on what type of device was being investigated, but the real takeaway is this: consumers have to be cautious about what can be done with any new technology before they sign on to use it. If consumers are aware that their objects can be tracked and are comfortable with any plausible or implausible risks, then they’re fine. What we have to constantly safeguard against, though, is the not knowing. We cannot come to rely on a new service, technology, or concept without educating ourselves on its functionality, assessing any potential dangers, and determining our comfort level with the possibility of harm.

Interested in more cyber news? Check out the ITRC blog to keep you updated and aware of the latest topics and events.

Short Answer: We Don’t Think So

Privacy is a hot commodity in the current climate of technology and connectivity. It can be hard to balance the need for security with the enticing functionality of the latest apps and gadgets. When we factor in the additional need for public safety and effective law enforcement, it can feel like our privacy gets tossed around like a beach ball.

One of the ways tech companies are working to ensure protection and privacy for their customers is by developing better security protocols, such as end-to-end encryption. This type of encryption, now being put into use for things like messaging apps, means that a message is encrypted on the sender’s device and can only be decrypted on the recipient’s device. It also means the company that created the app can never see the content of the messages, share them, or have them fall into the wrong hands if its servers are hacked.
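As a rough illustration, and not any real messaging protocol (apps like Signal use far more sophisticated key exchange), here is a toy one-time-pad sketch in Python showing the core idea: the relaying server only ever handles scrambled ciphertext, and only the key holders can read the message.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the key; applying it twice restores the original
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at the usual place"
key = secrets.token_bytes(len(message))  # known only to sender and recipient

ciphertext = xor_bytes(message, key)     # this is all the server ever relays
assert ciphertext != message             # the server can't read the content

decrypted = xor_bytes(ciphertext, key)   # only a key holder can decrypt
assert decrypted == message
```

The point of the sketch is where the key lives: since it never touches the server, neither the company nor anyone who breaches it can recover the plaintext.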

One company in particular has taken the time to weigh the pros and cons and decided the best approach was to skip end-to-end encryption on one of its apps. Google’s new Allo messaging app was expected to employ this level of security, but the company has decided it is not compatible with its ongoing efforts at machine learning and artificial intelligence. This has some security experts and privacy advocates up in arms, as users can well imagine.

Yes, this decision does leave the door open for your messages to be nabbed in a data breach. It also means law enforcement can seek a warrant for those messages if they have reasonable proof that you’ve committed a crime and the content of those messages is involved. But for most app users, neither of those concerns registers high on their list of priorities because they’re not sending sensitive information through the app.

The important takeaway is that Google did not make this decision through a lack of effort or through empty promises of security; the company isn’t leaving your texts vulnerable due to oversight or a lack of protocols, as is too often the case in data breaches. This was an intentional decision, because Google relies on user activity to “educate” its artificial intelligence and to help you with better autocorrect options, for example.

When a company makes a conscious decision about its security measures and then makes consumers aware of its decision and the reasons behind it, the company is being transparent. This empowers the consumer to make their own decisions based on the facts. Therefore, if users are concerned about the lack of end-to-end encryption, there are plenty of apps that do offer it. The more concerning security risk comes when a company assures its customers that their data is safe and locked up tight, then fails to put into place the adequate protections they promised. Basically, if you don’t like the methodology behind Allo, don’t use it. Make sure before you use any app that you understand its security and how it impacts your privacy.

Anyone can be a victim of identity theft, anyone can use our services, and anyone can help us help others. If you found this information useful, please consider donating to the Identity Theft Resource Center to help us keep our services free to the public.