We’ve all seen the ads on TV and around the internet for increasingly connected devices and voice-activated bots to help around the home, and we even attribute personalities to the voices: Siri with an attitude. The idea of a coffee machine kicking on when an alarm clock goes off would once have seemed very Jetson-esque, but it is now entirely possible. Recent leaps in technology, data speed, and cloud storage make this, and much more, a reality. Such connectivity is called the Internet of Things (IoT), and its future is bright and full of potential.
In a business setting, this could mean that the conference room you’ve booked has your PowerPoint presentation queued up when you walk into the room. The sign-in device at the receptionist’s desk could alert someone that their next appointment has arrived. Many green-certified buildings turn off lights and adjust thermostats when no one occupies a room in order to save energy. These are fairly small conveniences, but the more connected we get, the more outrageous the possibilities.
Self-driving cars? Already in production. Robots that perform surgeries guided by a doctor 1,200 miles away? Check. Say you’re running late on the morning you’re scheduled to deliver a presentation and you hit a traffic jam. What if your car could re-route your commute through the GPS on your phone to minimize the delay, and also use your phone to message everyone attending the meeting that, based on drive time and current traffic, you’ll be arriving 10 minutes late and there’s no need to rush to the meeting location?
But with every great leap forward for society, there’s a big question: ethics. For today’s purposes, we’ll stick to the intersection of the ethics of marketing and data sharing where it pertains to consumer permissions and privacy.
With all this connectivity comes a great influx of data.
Smart billboards in San Francisco are “seeing” who drives by, tracking passing cell phones to provide advertisers with demographic data about where to place their best billboard ads. Burger King crossed the line with an ad that triggered the Google Home speaker to recite the description of the Whopper from Wikipedia. Google caught on quickly and pushed a hotfix that shut it down, but the idea of a corporation triggering responses in our homes from devices supposedly under our control is an issue that will be whispered about today and argued about on Twitter tomorrow.
The average electronics-owning citizen is not technologically savvy enough to spot whether the factory security settings on a device are adequate, let alone take steps to plug the holes. Ask yourself when you last fully read a Terms of Service agreement before clicking “agree” to install a new app or sign in to a new device. Chances are the answer is an uncomfortable one. Do we know how much of our behavioral patterns are already being recorded by our smartphones and other smart devices for corporate use?
What’s to stop the next merchandiser from deliberately building a marketing strategy into their commercial to commandeer our devices? The idea behind virtual assistants like Siri and Alexa is that they become more useful the more we use them, but the more we use them, the more they learn about us. Are we comfortable giving up a certain level of privacy in order to have a more productive virtual assistant? Is it only Siri or Alexa using what they’ve learned, or are data miners combing through that collection of 1s and 0s and selling the results to corporate interests? In the past, as with the Nielsen TV ratings program, we gave explicit permission for someone to keep an eye on specific behavior patterns for their use, and it went no further.
Now, there are stories about the cameras on baby or security monitors spying on us.
What are our rights to privacy?
In April of this year, FCC rules regulating the sale of personal data by internet service providers were repealed by Congress and signed away by the Trump administration, paving the way for our device usage habits to be sold by companies like Comcast, AT&T, and Verizon. What’s to keep that data from containing our GPS locations, providing corporations with our every move? They’d know how often we eat out, how often we stop for something and which store we prefer, what we browse for online that translates into our spending habits, and so on. Who’s to say that with devices like smartphones able to listen in while awaiting voice commands, ISPs won’t be selling conversations we think are private?
Just because companies can listen in doesn’t mean they will. But without an ethical framework for privacy and security protections, consumers are exposed to real risks from misaligned marketing teams.
Maybe that Facebook ad for weight-loss supplements showed up on our timeline because of that conversation we had with our best friend about maybe joining a gym, or maybe it didn’t.
How can you trust your voice-commanded virtual assistant to help you out without being influenced by the corporation buying the very data that makes the assistant more helpful to you through learned behaviors?
Or worse, how do you know whether your self-driving car changed your drive to work because there’s slow traffic on your usual route, or specifically to engage a marketing opportunity, such as driving past that smart billboard?
As marketers, our job is to blend automated, humane, ethical, and profitable strategies together to serve one person: the customer. Empathy will be needed more than ever as we attempt to personalize and customize each marketing experience without stepping over the line into the intimate world inside the consumer’s mind.
Marketing is designed to influence our behaviors, but in the age of so many connected devices, where’s the line between influence and hijacking?