Mike McGrail, April 2016
Tech industry leaders like Microsoft CEO Satya Nadella and IBM CTO Rob High have opened a new chapter in the long-running discussion of whether humans control technology or technology is starting to control humans. Both are among the growing chorus reacting to recent advances in artificial intelligence and robotics that portend ever-more-human technology and a thicket of moral and economic questions.
There’s a thread of this discussion that extends into communications and brand-building: a growing branch of neuroscience focused on social media. It offers remarkable insight into how humans respond to content delivered over social channels, and the potential to reach people on deeper levels than is possible today.
Researchers are peering into the human brain with ever-growing interest to see how different kinds of social media content engage different parts of the brain and trigger action. Using functional magnetic resonance imaging (fMRI), researchers can watch the brain respond to social media content in real time. Combining that imaging with the observation and quantification of social media behavior can reveal which content or message is likely to cause an eye roll, and which might nudge a person into action. Blogger The Neurocritic wrote an excellent (and partly satirical) piece outlining the potential of this branch of neuroscience, referring to a landmark 2015 study of social media behavior by researchers at Freie Universität in Berlin and Princeton University.
The growing body of knowledge about our brains’ responses to different social media content is a treasure trove for brands and the people who promote them. It’s related to the storytelling response we’ve written about before, but it offers a potentially more detailed and focused blueprint for triggering behaviors.
It’s that potential that raises an ethical question: at what point does pushing biochemical buttons in the brain cross the line from marketing to manipulation? Nadella and High expressed concern about technology replacing human jobs. But what about its potential to reach into our heads and make the connections between sense, thought, community and values that determine what’s relevant to us?
I had to step back from this issue as a communications professional and look at it as a human being to decide what I really think about it. As with so many things in life, the answer lay somewhere in the middle and required a detour into a not-directly-related subject: subliminal advertising.
When subliminal advertising came to light in 1957 with James Vicary’s “eat popcorn” message spliced into movie reels, there was a public backlash. It continued even after it turned out that Vicary faked the results and the subliminal messages didn’t really work. People didn’t like being the targets of psychological sleight of hand, regardless of its efficacy.
For that very reason, Vicary’s stunt had a positive effect. It turned attention to subliminal messages, and subsequent study revealed that they can’t force people to do things they don’t want to do. They can only influence a behavior the person was disposed toward anyway.
Fast forward to today. Artificial intelligence, social media neuroscience, Big Data analytics … they could all be analogous to Vicary’s subliminal advertising if people tried to use them to bypass our conscious decision making. But, like Vicary’s subliminal advertising, they haven’t been proven to work that way – and very possibly never will.
Writing in the New Yorker in 2013, Gary Marcus constructed a persuasive case that analytics can reveal essential data points, but that it still takes human cognition to see the relationships and make the connections that lead a person to embrace something as relevant or act on it. Three years later – an eternity in the fast-moving tech world – there’s no evidence that anything has changed.
Because our cognition still rules, we are still running the machine and not vice versa. That means these developments in social media neuroscience, Big Data and the rest aren’t inherently dehumanizing. As with every technology that has come before them, it’s not what they can do that’s at issue. It’s how we use them.
No one wants to be manipulated. But speaking personally, I do want to be influenced. I want to be influenced to support causes I believe in. I want to be influenced to spend my money with companies whose values reflect mine. If those causes and companies can impassion me by showing specific content through specific channels, if they can use my brain to get to my conscience, empathy and responsibility, then have at it. Heck, I’ll put the electrode in my head myself.
Constantly challenging technology’s role in our lives is just good sense. Technology can disrupt society in hurtful ways, but most of the time it also gives us the means to ease the hurt. Maybe the best advice for pondering the technology questions raised by Nadella and High and others comes from the conclusion of the classic Fritz Lang movie “Metropolis”: Between the mind that plans and the hands that build there must be a Mediator, and this must be the heart.