It takes only one infected app to take down an entire network.

Personal cell phones in the workplace have become a controversial topic in recent years. Early on, when businesses banned personal devices outright, the result was the astounding cost of issuing phones and tablets to employees. With the adoption of bring your own device (BYOD) policies, costs went down but the dangers went up. Apart from the mundane concerns of employees checking their phones constantly or using them to access social media on company time, there are very real risks associated with them.

One study of the Android operating system found that it would take only a single infected app to take down the entire US 911 emergency call system with a DDoS attack. How could something as simple as a game or a movie-streaming app disrupt an entire nation? The Android platform is built on open-source code and lets developers build their own apps and sell them to the public, including through third-party app stores. Since it’s difficult for hackers to break into an app like Netflix, they create a fake version that looks and acts like the real deal. Users download the “wrong” version by mistake and never know that it’s full of malicious code.

When you go to work and connect your phone to your company’s network, that malware can spread to the entire network. This is just one example of why many employers feel they should have the right to inspect any phone that connects to the network, whether it’s company owned or personal.

According to privacy and employment practices experts, that may just be legally allowable. Without clear federal legislation about how this kind of technology impacts citizens, the rules are usually left up to the individual workplace. That means depending on the company—and in some cases, even depending on the location—the rights to privacy you think you had at one job are no longer in place at another job.

This is just one more reason why it’s a smart move to have an employee handbook that addresses technology and internet use at work. This type of issue can readily be explained before anyone carries any device into the facility and attempts to connect to the network. Any surprises can result in hurt feelings at the very least, and termination or lawsuits at worst. Of course, a solid technology handbook is good for all employees and would encompass other aspects of network security and data breach prevention as well.

How much information are you putting out there? It’s probably too much. We are here to help you stop sharing Too Much Information. Sign up for the TMI Weekly.

It’s tough to be famous. Having A-lister celebrity status and the wealth that goes along with it might seem glamorous from the outside, but recent cybersecurity issues have proven that it’s not all private jets and red carpets.

A major headline-making data breach in 2014 affected the private email accounts of more than 300 well-known people, including actresses like Jennifer Lawrence and Emma Watson. The hacker was caught and sentenced last month to only nine months in jail after leaking nude pictures of his victims online.

How did the hacker, Edward Majerczyk of Chicago, pull it off? A simple phishing attack. He emailed his victims with what appeared to be a letter from their internet service providers, informing them of an issue with their accounts. The celebrities—or quite possibly, their staff members—turned over their usernames and passwords.

British soccer star and model David Beckham recently suffered a ransom attack when his sports management agency was breached by hackers who then demanded a hefty ransom payment in exchange for not releasing the contents of his email online. Beckham, whose email was handled by the agency on his behalf, was not the only victim. The agency handles accounts for other top-notch athletes like Usain Bolt and Xavi Hernandez, and more than 18.6 million emails were held hostage.

The agency refused to pay the ransom and the emails were leaked. While they did contain a few embarrassing rants, there was apparently nothing genuinely career-ending in any of them.

Reports have surfaced of another well-known actress, Emily Ratajkowski, whose iCloud account was breached by a hacker. The link to the exposed account was sent to an online tabloid reporter with instructions to publish it, apparently for no fee whatsoever. The hacker seemed simply to want to expose the actress’s very private photos and personal emails. Ratajkowski was also one of the victims of the 2014 celebrity hacking.

Why are celebrities such hot targets for this kind of thing? Mostly because there’s an audience for it. Even people who would never think it’s okay to steal someone’s identity, or break into their email accounts, might be tempted to click on the photos; after all, they weren’t the ones who hacked it, so they didn’t do anything wrong. But that’s actually not the case. Remember back to high school: the kid who stole a copy of the answer key from the teacher’s desk got in trouble, but so did everyone who looked at it in order to get the answers to the test. Viewing stolen content is still wrong, even if you’re not the one who originally had a hand in the theft.

It’s small comfort that this kind of celebrity attack is nothing new. For decades, paparazzi have stood waiting to snap famous people’s pictures and “gossip rag” reporters have dug through their trash cans for some dirt. Unfortunately, the digital age has just made the work of exposing people’s private lives easier and more effective. That trash can might have held a handful of pictures even just a few years ago, but an actor’s cloud storage account today can hold thousands or even tens of thousands of images and files.

What can the average citizen do in these cases? Refuse to play the game. For some hackers, the “street cred” of pulling off a major attack is all the compensation they want, but for many others, there’s big money to be made off of leaked photos or emails. When we peruse those stolen files, we’re making hacking both lucrative and more widespread. Don’t play along, and don’t support it. After all, today it might be a big celebrity’s personal account, but tomorrow it could be yours.

Wave-of-the-future gadgets might once have been the domain of EPCOT Center and “The Jetsons,” but many of those what-if devices are now a reality. More than that, many of them are now in our homes, our pockets, and our everyday lives.

A lot of our newfangled technology is “smart,” or connected, meaning it has functions that require internet access. It might be storing your favorite TV shows in “the cloud,” or connecting your hallway lamp to the internet so that it knows to come on when it senses you—or rather, your smartphone, since it’s always with you—coming within range. Either way, for most of our favorite new devices to work, we’ve been forced to share a little bit of our privacy.

One such device is the Amazon Echo. This home-based virtual assistant already has a number of competitors on the market, but recent information about how these devices work has made people question the privacy we take for granted. Basically, all of these devices are “always on,” listening for their wake words, or names, to be called. If Amazon’s device hears one of the three names you can choose at setup, it begins recording the interaction for customization and better functionality. While the need to listen and record has some concerned consumers rethinking their privacy, this feature is clearly disclosed by the developer, along with instructions for deleting the recordings whenever you like.
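The “always on, but only captured after the wake word” behavior described above can be illustrated with a toy simulation. This is purely illustrative logic, not Amazon’s actual implementation; the wake-word list and transcript below are invented:

```python
# Toy simulation of wake-word gating: speech is continuously "heard,"
# but only utterances that follow a wake word are captured and kept.
WAKE_WORDS = {"alexa", "amazon", "echo"}  # hypothetical configurable names

def capture_interactions(utterances):
    """Return only the commands spoken after a wake word."""
    recorded = []
    for utterance in utterances:
        words = utterance.lower().split()
        if words and words[0].rstrip(",") in WAKE_WORDS:
            # Everything after the wake word is treated as the command
            recorded.append(" ".join(words[1:]))
        # Utterances without a wake word are discarded, not stored
    return recorded

stream = [
    "what should we have for dinner",   # ambient speech: ignored
    "alexa, play some jazz",            # wake word: recorded
    "that sounds nice",                 # ambient speech: ignored
    "echo, what's the weather",         # wake word: recorded
]
print(capture_interactions(stream))
# -> ['play some jazz', "what's the weather"]
```

The point of the sketch is the gating: the microphone hears everything, but only wake-word-prefixed speech crosses the line from “heard” to “recorded.”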

Another device manufacturer has come under fire for recording and storing consumers’ information, but this time without their knowledge or permission. A home electronics manufacturer has been ordered to pay more than $2 million to the Federal Trade Commission and the State of New Jersey because it told consumers that its televisions could suggest content to watch, but didn’t disclose that the TV knew what you might like because it was monitoring your viewing habits. The company must also change its disclosure notice to let customers know that this feature requires access to some personal information.

While it might seem like this is the stuff of conspiracy theorists, the hard truth is that our privacy is more important than ever before. IoT-connected gadgets are in our homes, our schools, even inside our bodies thanks to medical advances. Without a clear understanding of our rights and full disclosure from the manufacturers, it’s hard to know when we’re giving up a little bit of our personal security. Keeping companies accountable for disclosure is an important step in keeping ourselves safe.

When the internet of things first took off with “connected” devices and home appliances, the resulting reaction was fairly positive.

After all, this was finally the EPCOT Center-style wave of the future we’d long been promised. Our thermostats could adjust themselves based on our usage and our time away, our lights could come on when our smartphones told them to, our refrigerators could order our groceries as we ran out of milk. In short, it was pretty cool.

But then the privacy risks started to come into question. Who else could see our thermostat usage and know whether or not we were home? Which advertisers were able to tap into that refrigerator’s grocery list and target us with products, whether we wanted them or not?

The bigger risk so far has been from privacy issues related to IoT medical implants. From pacemakers to glucose monitors, the internet has provided a better quality of care by letting doctors access their patients’ implants, but who else can see the data?

As it turns out, the police can, if they have a warrant and reason to suspect you of a crime. That’s certainly the case in a very bizarre tale out of Ohio. According to reports, a man set fire to his own house in order to commit insurance fraud. After different parts of his story didn’t line up, the police requested a warrant for the information recorded by the suspect’s pacemaker, which a judge then granted. Experts in the case have already concluded that his medical history and his heart rate readout from the device indicate he was never in any danger, and that the timeline of his heart rate’s increases and decreases couldn’t match his version of the events.

That case and others have privacy experts concerned. If the man didn’t have a pacemaker—or at least didn’t have one that sent recorded readouts to his doctor over the internet—he would never have been forced to cooperate in his own incrimination.

Another headline-grabbing case involved a man who was charged with murder, largely based on recordings from his home virtual assistant, Amazon’s Alexa. Again, other circumstances provided enough cause for the judge to issue the warrant, but if the man had not owned an IoT-connected device, there would have been no recordings from the night of the murder.

These are just two cases in which users’ own technology may turn on them in a court of law, and it’s a trend that has raised some eyebrows among privacy advocates. It’s also certainly something lawmakers will be expected to address in the future, but for now, it may take a few key court rulings in order to set a privacy precedent.

Online advertising is to many tech users what those annoying late-night TV commercials once were. The cheesy sales pitches, the obnoxious fast-talking announcers, and the promises of “But wait! If you act now we’ll double your order!” have become such a part of our pop culture that they’re a joke all on their own.

Some users rely on ad-blocking software to eliminate the popups and flashing sidebar offers, and while it does make our lives easier, there are some pitfalls. The first is that the websites we visit rely on advertising revenue to keep their sites going and to keep the internet relatively inexpensive for most users. The second is that some sites can detect your ad blocker and require you to disable it in order to proceed.

But how do websites actually benefit from the ads? By tracking your visit and your interaction. If you simply “see” the ad on your screen, there’s one level of payoff; if you actually click on it, there’s more revenue; and in some cases, making a purchase after clicking on an ad earns the hosting website even more.
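That escalating payout structure (impression < click < purchase) can be sketched with a few lines of arithmetic. The rates below are invented for illustration; real rates vary widely by ad network and contract:

```python
# Illustrative ad-revenue tiers (rates are made up, not real network rates).
RATES = {
    "impression": 0.002,   # ad merely rendered on the page
    "click": 0.25,         # visitor clicked through to the advertiser
    "conversion": 5.00,    # visitor made a purchase after clicking
}

def site_revenue(events):
    """Total payout to the hosting website for a list of ad events."""
    return round(sum(RATES[event] for event in events), 2)

# 1,000 impressions, 20 clicks, and 1 purchase
events = ["impression"] * 1000 + ["click"] * 20 + ["conversion"]
print(site_revenue(events))  # -> 12.0
```

Even with toy numbers, the shape is clear: a single purchase can be worth thousands of passive impressions, which is why sites care so much about tracking what you do after you see an ad.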

If you’d like to see this in action, here’s a test. Be warned, it will result in altering the way the internet thinks of your shopping needs. Try searching for something completely out of the blue, something that you would never shop for, like a stroller for your dog or a different model of vehicle. Then start paying attention to the ads that appear in your email inbox, your social media pages, and other sites.

You’ll notice that dog stroller or that new pickup truck starting to appear in different ads on different websites. You might even see competitors’ products or brands other than the one you searched for, or related products like ultra-expensive dog food—you obviously take special care of your pet since you’re interested in pushing it in a stroller—after that search.
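Mechanically, this kind of retargeting amounts to building an interest profile from your searches and scoring candidate ads against it. Here is a minimal sketch; the keyword model and ad inventory are invented for illustration, and real ad networks use far richer signals:

```python
# Toy retargeting: score ads by keyword overlap with recent searches.
AD_INVENTORY = {
    "Premium dog stroller, 20% off": {"dog", "stroller", "pet"},
    "Ultra-premium dog food": {"dog", "pet", "food"},
    "New pickup trucks near you": {"truck", "pickup", "vehicle"},
}

def pick_ads(search_history, top_n=2):
    """Rank ads by how many keywords they share with the search profile."""
    profile = {word for query in search_history for word in query.lower().split()}
    scored = sorted(
        AD_INVENTORY.items(),
        key=lambda item: len(item[1] & profile),
        reverse=True,
    )
    # Keep only ads that actually overlap with the profile
    return [ad for ad, keywords in scored[:top_n] if keywords & profile]

searches = ["dog stroller", "best dog stroller reviews"]
print(pick_ads(searches))
# -> ['Premium dog stroller, 20% off', 'Ultra-premium dog food']
```

Note how the second result is a related product (dog food) rather than the thing you searched for; that spillover is exactly the effect described above.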

That’s the sort of tracking that privacy experts are concerned about. If the internet ad industry can keep up with your search history and generate algorithms based on your interests, what’s to stop someone from following your searches and deciding on malicious ways to use that information? Even worse, who already has access to that search information? A search for a brand-new, high-dollar vehicle could mean you’ve got money to spend. A search for Black Friday deals on toys could alert a pedophile that there are children in your home. A search for medical marijuana could potentially alert someone—perhaps even your employer—to an illness you may have or an illegal activity you’re allegedly engaging in, depending on where you live.

Now, backing up: those are all very frightening, and admittedly speculative, scenarios for what could happen to your internet searches. But they are still the underlying reason why security advocates want tighter reins on who can track you for advertising purposes and who can legally access that data. Remember, though: when you signed up for an account with an internet service provider or a cellular service provider, you agreed to certain terms and conditions. You might not have read the fine print, but it’s very likely that you agreed to have your search and shopping behaviors monitored and shared.

As a consumer, you’ve probably heard about the importance of monitoring your credit reports for any signs of suspicious activity, and staying on top of your credit score to make sure your purchasing power is all that it should be.

Your reports and your score are compiled by the three major credit reporting agencies: TransUnion, Experian, and Equifax. They compile information on your credit card accounts, any debt and collections issues, and even inquiries by other agencies into your credit. As a US consumer, you’re entitled to one free copy of each of these agencies’ reports every year, which you can request through the official AnnualCreditReport.com site.

While these might be the “big dogs” of the credit reporting world, there are far more credit agencies than just these three and they have different functions. Known as “specialty credit reporting agencies,” these other entities have specific focuses that pertain to your buying history and consumer behaviors.

There’s an agency dedicated to your banking activity, such as keeping up with how many bounced checks you’ve had. Another deals with real estate, specifically apartment or home rentals, and monitors missed payments to your landlord. Other agencies check up on your history of payments to utility companies or medical facilities, and more. However, not all of these agencies will have information to report on you; if you’ve never rented an apartment, for example, or if you rented from a family friend who never reported a missed rent payment, that agency might not be able to compile a report.

Who gets to access your specialty credit reports? They’re typically requested by lenders in very detailed circumstances. A potential landlord might not care about your Experian report, since that may not include data on what kind of tenant you’ll be when it comes to paying on time. The utility company also doesn’t have a lot of interest in your credit card history, but will certainly want to know from the specialty credit reporting agency that addresses public utilities what your past behavior has been like.

Now for the good news: if you care to know, you’re entitled to copies of these reports, too. Some of them will be free once per year like your major credit reports, while others may charge you a nominal fee for the information. However, if you are ever faced with an issue—like being turned down for an apartment or declined a bank account—based on the information in these reports, you’re entitled to a free copy at that time. You may also receive a free report if your identity has been stolen and used in a way that a specialty credit reporting agency would monitor, such as opening phone service or other utilities in your name.

Taking care of your identity and protecting your privacy might seem like insurmountable tasks, especially in the face of major data breaches, large-scale international hacking, and other “out of my hands” threats.

While those things certainly are problematic, the reality is there are many steps you can take to make yourself less likely to become an identity theft victim and to minimize the damage if your information has already been compromised.

Trying to take every single privacy step all at once is a surefire way to suffer burnout and “data breach fatigue,” a very real phenomenon that can occur when the public is overwhelmed with constant news of identity theft dangers. But experts in a variety of fields know that making real changes in your life starts small, by developing good habits and sticking to them.

Here are seven great privacy habits you can start working on:

Mailbox Monday

Your mailbox contains many of the pieces of your identity puzzle, and recent statistics have shown that mail theft is still a widespread problem. Make Monday the day you stop leaving mail in your mailbox, stop mailing important papers or checks in the corner mailbox, and stop throwing away unshredded documents or credit card offers.

TMI Tuesday

Sign up to receive the Identity Theft Resource Center’s TMI Weekly, delivered to your inbox each week… yes, as in “too much information.” It outlines news items and information on oversharing and other threats to your personal data, and signing up is easy.

Weak Password Wednesday

It’s tempting to use a super-simple password (like “password”…literally) or to make one really good password and use it for all your online accounts. Unfortunately, both of those are great ways to hand your information over to a hacker. Use Wednesday to think about your passwords and to change a few of them on the dozens of online accounts you may have.
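One concrete habit for spotting the “one really good password everywhere” problem is checking for reuse. The sketch below flags accounts that share a password; in practice a password manager does this for you, and you should never store real passwords in plain text the way this toy example does:

```python
from collections import defaultdict

# Toy reuse check: map each password to the accounts that use it.
# (Illustration only -- never keep real passwords in a plain dict.)
def find_reused(accounts):
    by_password = defaultdict(list)
    for site, password in accounts.items():
        by_password[password].append(site)
    # Any password shared by more than one account is a reuse problem
    return {pw: sites for pw, sites in by_password.items() if len(sites) > 1}

accounts = {
    "email": "Tr0ub4dor&3",
    "bank": "Tr0ub4dor&3",     # reused! one breach exposes both accounts
    "forum": "correct-horse-battery-staple",
}
print(find_reused(accounts))
# -> {'Tr0ub4dor&3': ['email', 'bank']}
```

The takeaway matches the advice above: a reused password turns a single breached site into a skeleton key for every account that shares it.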

Twitter Thursday

From live Twitter chats to daily social media updates from privacy experts, Thursday is a great day to spend a little while on Twitter catching up on the latest news. A number of organizations host regular chats throughout the month, like the ITRC’s monthly #IDTheftChat events. Participate in a #ChatSTC with @StaySafeOnline on Wednesday, Jan. 10, where we’ll be talking about “Privacy Matters ‒ Why You Should Care and What You Can Do to Manage Your Privacy.”

Financial Friday

It’s the last day of the work week for a lot of people, and there’s no better day to spend some time on your financial privacy. Monitor your bank accounts and credit card accounts for any signs of unusual activity, and check up on any mobile payment accounts or apps you use (like PayPal and Apple Pay) to make sure there’s nothing out of the ordinary. If your credit card company offers it, log in and take a peek at your credit score; you won’t see your whole report, but if there’s a sudden, dramatic change in your score, that’s a sure sign that you need to order copies of your credit reports.

Share It Saturday

Did you know that you can be harmed if someone you know falls for a scam? Let’s say your mom shares a “forward this to ten people” hoax email or social media post, and you click on a link it contains. Congratulations, you may have just downloaded a virus to your computer. It’s not enough to protect your own privacy, so Saturdays are a great day to share genuine news items about data breaches, scams, and fraud. You’ll protect the people you care about, and you just might be protecting yourself.

Social Media Sunday

A lot of people like to set aside Sundays for a little rest and relaxation before heading into another work week, and that can mean checking up on social media buzz. But are you oversharing online? Do you have your privacy settings in place, and are they set to the most protective level? Stop and think for a while about what you share, where you share it, and how far it can go.

Contact the Identity Theft Resource Center for toll-free, no-cost assistance at (888) 400-5530. For on-the-go assistance, check out the free ID Theft Help App from ITRC.

Data Privacy Day is held annually on Jan. 28 to create awareness about the importance of respecting privacy, safeguarding data and enabling trust. Here are a few resources to help you be more #PrivacyAware from the National Cyber Security Alliance – plus, learn how you can get involved this Data Privacy Day.

Established in North America in 2008 as an extension of a similar event in Europe, it’s held each year on January 28th in honor of the signing of Convention 108, the “first legally binding international treaty dealing with privacy and data protection.”

The National Cyber Security Alliance (NCSA) oversees this annual event, and as such the organization plays host to a number of important community awareness-raising activities. While personal data protection and privacy are certainly year-round causes, Data Privacy Day serves as a great way to kick off a twelve-month commitment to security.

The theme for this year’s observance is “Respecting Privacy, Safeguarding Data and Enabling Trust,” all three of which are critical areas of need for citizens and businesses alike. The NCSA’s website contains a wealth of information on protecting yourself, and its Data Privacy Day resources include ways to get involved at home, at work, and in your community. There are some simple measures you can take, like changing your profile picture on your social media accounts to get the conversation going, as well as far more involved activities, like volunteer opportunities to take the message to schools, community centers, churches, and more.

One event you don’t want to miss is the 2017 Data Privacy Day Event Live From Twitter HQ. Register now for exciting TED-style talks and segments including “Scams, ID Theft and Fraud, Oh My – And Ways to Fight Back” with ITRC CEO, Eva Velasquez.

To find out more about the many ways to get involved this year, check out these resources and make plans to attend the #ChatSTC Twitter chat in order to get valuable privacy tips. More importantly, use this time to plan how you will incorporate data privacy into your everyday life, and how you will make it a lifelong good habit.

Some of the hottest tech gadgets of the holiday season have now been opened and are positioned somewhere in your home, just waiting for further instructions. And if you’re one of the many shoppers who tried to purchase one of this year’s hot-ticket gift items only to find it out of stock until after the New Year, privacy-savvy consumers might tell you that’s not the worst news.

Several companies have released home models of their virtual assistants (VAs), along with third-party accessories to go with them. Amazon’s Echo and Google’s Home are both compatible with their own lines of smartphone-driven, Wi-Fi-enabled outlets and appliances. With the right setup, you can tell your VA to turn on the lights in the living room, open or close the garage door, play your favorite song, or look up showtimes for a newly released movie. There are literally hundreds of functions these devices can perform, depending on the model and the accessories you’ve chosen.

How do these devices work so well? They rely on a lot of complicated artificial intelligence (AI), but there’s an even more mundane mechanism at work: they’re recording and analyzing everything you say to them.

In order to understand your preferences and commands, these mini audio sponges soak up what you say and send it to their servers, where engineers can tweak the devices’ capabilities based on your voice patterns. They can also look at whether or not your command was successful—as in, “Alexa, play The Nutcracker Suite by Tchaikovsky”—and help the device learn from its mistakes. If your Amazon Echo played the Pentatonix version and you had to correct it, the device can “learn” which one you really wanted in the future.

Here’s an actual interaction with an Amazon Echo device from December 23, 2016:

  • “Alexa, play Dance of the Toy Soldiers by Pentatonix.”
  • “I can’t find dance songs by Pentatonix.”
  • “Alexa, play March of the Toy Soldiers by Pentatonix.”
  • “I can’t find the song March of the Toy Soldiers by Pentatonix.”
  • “Alexa, play The Nutcracker by Pentatonix.”
  • “I can’t find the album The Nutcracker by Pentatonix.”
  • “Alexa, play Waltz of the Sugar Plum Fairy by Pentatonix.”
  • “Dance of the Sugar Plum Fairy by Pentatonix.” And the music begins.

There are several shifts in the dynamic during that “conversation.” The device knew to look for a specific song by calling up previous information stored on its servers. When the command switched to “Nutcracker” in hopes that it would be recognized, the device knew that it referred to an album instead of a song; unfortunately, the group didn’t release an entire album called The Nutcracker, only one song.

However, when the command was to play the semi-accurate song title—in this case, “Waltz of the Sugar Plum Fairy” instead of the correct title, “Dance of the Sugar Plum Fairy”—the device was able to make that adjustment without further input from the user. How did it learn to do that? Through its AI machine learning, something that is improved every single time any user around the world issues a command.
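That “semi-accurate title” correction can be reproduced with ordinary fuzzy string matching; Python’s standard-library difflib is enough for a sketch. The catalog below is invented, and Amazon’s real pipeline is far more sophisticated than a single similarity score:

```python
import difflib

# Hypothetical song catalog for the artist
CATALOG = [
    "Dance of the Sugar Plum Fairy",
    "That's Christmas to Me",
    "Hallelujah",
]

def resolve_title(requested):
    """Return the closest catalog title, or None if nothing is close enough."""
    matches = difflib.get_close_matches(requested, CATALOG, n=1, cutoff=0.6)
    return matches[0] if matches else None

# The slightly wrong request still resolves to the right song
print(resolve_title("Waltz of the Sugar Plum Fairy"))
# -> Dance of the Sugar Plum Fairy
```

Because “Waltz of the Sugar Plum Fairy” and “Dance of the Sugar Plum Fairy” share most of their characters, the similarity score clears the cutoff and the request resolves without any further input from the user — the same behavior described above.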

This listening and recording has some privacy experts on edge, mostly due to the potential for as-yet-unknown ramifications. Are servers storing our voice patterns and connecting those voices to our user accounts? Absolutely; it’s how these fairly expensive devices are improved upon. Are hackers or the government using our vocal patterns against us? No. Could they ever do such a thing? That, we’re less sure of.

If you take issue with having your voice stored and analyzed, your only current course of action is not to purchase one of these home devices. And if you do opt for a virtual assistant, it’s very important that you read the fine print and make sure you’re comfortable with the terms and agreements before you install it.

Questions about identity theft? Contact the ITRC toll-free at (888) 400-5530 or on-the-go with the new IDTheftHelp app for iOS and Android.

In a follow-up to our recent ITRC blog “Did you get a snoop for Christmas?” I wanted to share a personal story that many of you may relate to.

As someone who is very privacy-centric, I love exploring and experiencing many of the gadgets and goodies that are available to make our lives easier and more fun.  My husband enjoys this as well, so at the last minute I decided to get him an Echo as a gift.

I had already done my homework on how the device works, like what data they are gathering and storing.  I read up on perspectives from both privacy advocates and technology fans, so I felt equipped to handle this responsibility.  I had been forewarned that I was about to conduct an experiment with my privacy expectations in my own home.  But as a seasoned professional, I figured I could weather the storm knowing that I had invited this stranger into my home.

Ronald Arkin, robot ethicist and director of the Mobile Robot Laboratory at the Georgia Institute of Technology, once said about the pros and cons of technology, “You can choose to stay out, paddle, or plunge in.” While I’m not usually one to plunge in, staying out isn’t the right choice for me either. How can I provide authentic opinions if I don’t have my own personal experiences?

I knew ahead of time that Alexa would be recording everything that we asked her.  I knew that the data would be stored and crunched, then used for the purposes disclosed in the privacy statements and T&Cs (Terms & Conditions). I also knew that there were definitely future uses that I couldn’t even fathom yet.  But I told myself I wouldn’t be compromising my comfort or security by asking for music or a weather report or how many grams are in an ounce.  It did not escape me that this is still data about me that is being collected, but I decided that this was all in keeping with my “paddle in” approach.  I was still cautious in advance, knowing that the full ramifications are yet to be understood.

We decided to put Alexa in our upstairs office. Time spent in that room is largely silent anyway, since we are working online, reading, etc. Of course, for the occasional call, we could always remember to mute the Alexa microphone just to be certain. Since it always has to be “on” in order to hear the wake word “Alexa,” muting it was something I would need to do whenever I was in my office. Knowing me, I would be just as likely to cut the power if need be.

This is all good in theory.  Now for the practical experience part: fast forward to December 26 when I was sitting downstairs in my living room talking with my son.  We needed batteries for the remote and I asked him if he could pick some up at the store.  Then I casually said, “Or I could ask Alexa to buy batteries.” We both laughed until I could hear her—from UPSTAIRS, remember—rattling off all of the different choices of battery types.  We stopped laughing and looked at each other.  It was like being caught complaining about an elderly relative that you thought was out of earshot.  It was creepy.

To be sure, I did say the wake word Alexa, and it was a request that would be in keeping with what she was designed for, but I wasn’t talking to her, I was talking about her.  And she was listening, FROM UPSTAIRS, for Pete’s sake.  In less than 24 hours, I had the real experience I was seeking.  And it taught me a lot.

Returning her may not be an option as I’m already guilty of personifying her, and she was a gift after all.  But after the visceral experience of having an eavesdropper lurking upstairs, she may be relegated to the garage, where the only thing she will hear is me complaining about doing the laundry.

Eva Velasquez is the president and chief executive officer of the Identity Theft Resource Center.

Follow Eva on Twitter @ITRCCEO