Here we are at the end of A-Z, so what better way to end it than by discussing one of the most influential social psychologists, Robert Zajonc (pronounced Zah-yuntz).
His contributions to the subfield are numerous, and he is apparently the 35th most cited psychologist of the 20th century. Some even state that he is one of the "creators" of modern social psychology, and his work is certainly partially responsible for the shift to "cognitive" social psychology. One of his best known contributions is the mere exposure effect, but much of his work is linked to this concept, in its focus on how the presence of and exposure to others changes us cognitively and even physically.
The mere exposure effect deals with the attitude change that naturally occurs as something/someone becomes more familiar to us. That is, as we continue to be exposed to a person, place, or object, our attitude naturally becomes more positive. Once again, this simple concept has many important applications, but one of the most striking is using the mere exposure effect to improve relations with groups that are often the targets of prejudice and discrimination. Having friends who are of different races, sexual identities, and so on, reduces your feelings of prejudice toward those groups. This finding offers an important rationale for the need for diversity in schools and workplaces.
Zajonc also studied social facilitation, finding that it occurs not just in humans but in animals, even cockroaches. He and a colleague (Greg Markus) are also known for the Confluence Model, which deals with birth order and intelligence. According to this theory, first-born children are more intelligent because they are born into adult-only environments and, if they eventually have younger siblings, are involved in teaching those children. Last-born children are born into the most mixed adult-child environment and do not have the opportunity to teach younger children, resulting in lower intelligence. However, the magnitude of this impact on intelligence tests is small, about 0.2 standard deviations.
What much of his research has in common is a focus on cognition, but also on how affect (emotion) comes into play. This work culminated in his address for the Distinguished Scientific Contribution Award from the American Psychological Association: Feeling and Thinking: Preferences Need No Inferences. You can read the address/article for free at the link, but in short, Zajonc found that cognition and affect are two separate systems, and that affect is more influential and experienced first. Cognition - the more systematic, rational approach - takes time and mental energy. Emotions are often automatic.
Because we're cognitive misers, we tend to gravitate toward the easy approach when making decisions, so we often let our emotions guide us. As I wrote in the Quick v. Slow post this month, decisions made through this faster, automatic channel are not necessarily wrong, nor are decisions made through the slower, systematic channel necessarily right. They are simply different approaches to a problem, and the proper approach really depends on the situation. However, social psychology as a field had started to focus too heavily on cognition, avoiding affect or emotion, which gave an incomplete understanding of social psychological concepts. Zajonc's work encouraged researchers to once again consider affect in their work.
Hope you've enjoyed A-Z!
Saturday, April 30, 2016
Friday, April 29, 2016
Y is for "You Are Not So Smart"
I was midway through teaching a course in Learning & Behavior when one of my favorite students came up to me and simply said, "You are not so smart."
I must have noticeably paused, wondering where he was going with this, when he got a look of total embarrassment on his face and said, "No, I mean, that's the name of a book I think you would love. You Are Not So Smart. I don't mean you're not smart."
I laughed and thanked him for his recommendation, and immediately added the book to my wishlist. As I was doing some additional research, I learned that not only is You Are Not So Smart a book, it's also a website and a series of podcasts, all about the various cognitive biases humans experience.
So my student was right - I love You Are Not So Smart. And also, I am not so smart. But that's okay, because the same is true for everyone.
The book/website/podcasts are all about the various ways we delude ourselves. This includes, for instance, self-enhancement biases - ways in which we make ourselves seem better than we are. The Dunning-Kruger effect is a great example. It also includes simply skewed perception, such as our tendency to rewrite our memories to make them fit with our current identity.
On the flipside, it can also include self-deprecating biases, like impostor syndrome, and self-handicapping, like learned helplessness. Humans are complex creatures, after all.
The great thing about You Are Not So Smart is that it is incredibly approachable, even for people with little to no knowledge of psychology, and it clearly explains and applies this information, with lots of pop culture references sprinkled in. David McRaney, the man behind You Are Not So Smart, skillfully does what I hope to do with this blog, and he's a great role model for my own writing.
So be sure to check out You Are Not So Smart. If you enjoy it as much as I do, be sure to check out his follow-up book - which I'm embarrassed to say I just learned about - You Are Now Less Dumb!
Thursday, April 28, 2016
X is for Factorial (X) Design
The simplest study has two variables: the independent variable (X), which we manipulate, and the dependent variable (Y), the outcome we measure. The simplest independent variable has two levels: experimental (the intervention) and control (where we don't change anything). We compare these two groups to see if our experimental group is different.
To use a recent example, if we want to study the Von Restorff effect, we would have one group with a simple list (control) and another group with one unusual item added to the list (experimental). We would then measure memory for the list.
But we don't have to stop at just one independent variable. We could have as many as we would like. So let's say we introduced another variable from a previous post: social facilitation. Half of our participants will complete the task alone (no social facilitation) while the other half will compete with other participants (social facilitation).
When we want to measure the effects of two independent variables, we need to have all possible combinations of those two variables. This is a factorial (also known as a crossed) design. We figure out how many groups we need by multiplying the number of groups for the first variable (X) by the number of groups for the second variable (Z).
For the example I just gave, this would be a 2 X 2 design (the X is pronounced "by"). This gives us a total of 4 groups: unique items-no social facilitation, simple items-no social facilitation, unique items-social facilitation, and simple items-social facilitation. Only by having all possible combinations can we separate out the effects of both variables.
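If it helps to see the counting spelled out, here is a minimal Python sketch (the condition labels are just made up for this example) that crosses the two variables and lists every cell of the 2 X 2 design:

```python
from itertools import product

# Levels of each independent variable (labels are hypothetical)
list_type = ["simple list", "unique-item list"]   # X: the Von Restorff manipulation
social = ["alone", "competition"]                 # Z: the social facilitation manipulation

# A factorial (crossed) design includes every possible combination of levels
cells = list(product(list_type, social))

print(f"{len(list_type)} x {len(social)} design = {len(cells)} groups")
for x_level, z_level in cells:
    print(f"  {x_level} / {z_level}")
```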
Not only does this design require us to have more people and often more study materials, it requires us to have more hypotheses: predictions about how the study turns out.
We would have one hypothesis for the first independent variable: lists that contain unique items will be more memorable than simple lists. And another for our second independent variable: participants who compete against others will remember more list items than people who do not compete against others.
But we would also have a hypothesis about how the two variables interact. Since we expect people with unique lists and social facilitation groups to perform better (remember more items), we might expect people with both unique lists and social facilitation to have the best performance. And on the opposite end of the spectrum, we might expect people who receive simple lists with no social facilitation to perform poorest.
We might also think that unique lists alone (no social facilitation) and social facilitation (no unique items) alone will produce about the same performance. So we would hypothesize that these two groups will be about the same. We would then run our statistical analysis to see if we detect this specific pattern.
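To see what those hypotheses look like in numbers, here is a small simulation. All of the means, sample sizes, and scores below are invented purely for illustration - they don't come from any real memory study - but the contrasts it computes are the two main effects and the interaction described above:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 30  # participants per cell (hypothetical)

# Hypothetical population means for number of list items recalled
true_means = {
    ("simple", "alone"):       8,   # expected to perform poorest
    ("simple", "competition"): 10,
    ("unique", "alone"):       10,
    ("unique", "competition"): 13,  # expected to perform best
}

# Simulate recall scores with random within-cell variation
scores = {cell: rng.normal(mu, 2, n) for cell, mu in true_means.items()}
cell_means = {cell: s.mean() for cell, s in scores.items()}

# Main effect of list type: unique vs. simple, averaged over the social factor
unique_avg = (cell_means[("unique", "alone")] + cell_means[("unique", "competition")]) / 2
simple_avg = (cell_means[("simple", "alone")] + cell_means[("simple", "competition")]) / 2
print("Main effect of list type:", round(unique_avg - simple_avg, 2))

# Main effect of social facilitation: competition vs. alone, averaged over list type
comp_avg = (cell_means[("simple", "competition")] + cell_means[("unique", "competition")]) / 2
alone_avg = (cell_means[("simple", "alone")] + cell_means[("unique", "alone")]) / 2
print("Main effect of competition:", round(comp_avg - alone_avg, 2))

# Interaction: is the unique-item boost bigger under competition than alone?
interaction = (
    (cell_means[("unique", "competition")] - cell_means[("simple", "competition")])
    - (cell_means[("unique", "alone")] - cell_means[("simple", "alone")])
)
print("Interaction contrast:", round(interaction, 2))
```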
The great thing about this design is that we don't have to use it with two manipulated variables. We could have one of our variables be a "person" variable: a characteristic about the person we can't manipulate. For example, one variable could be gender. This changes our design from "experimental" to "quasi-experimental."
For my master's thesis, I studied a concept known as stereotype threat, which occurs when a stereotype about a group affects a group member's performance. I looked at how stereotype threat affects women's math performance. So one of my variables was manipulated (stereotype threat) and the other was a person variable (gender). This is a common design for examining gender differences.
Wednesday, April 27, 2016
W is for John Watson
John Watson, an American psychologist, was responsible for the establishment of behaviorism, a school of thought that held a strong grip on the field of psychology for many decades.
This was in part due to the methods available for studying people - the only way to learn what was happening inside a person's head was to ask them how they were feeling, what they were thinking, and so on. Behaviorists determined that because the only thing we could directly observe was behavior, that should be the only thing we measure. Taken to its extreme, this becomes the stance that observable behavior is everything, that thoughts and feelings essentially don't exist, and that the only factors affecting behavior are external to the person. This particular school of thought is often called "radical behaviorism."
I could go on for a while about Watson, thanks to the fact that I was briefly brainwashed by behaviorists in undergrad. My undergraduate program had a very strong behaviorist slant, I had a pet rat (a baby of two of the rats from our rat lab), and I even took an entire course devoted to B.F. Skinner. So I was walking around spouting about a variety of behaviorist concepts, including Watson's "tabula rasa" or "blank slate." Essentially, what this concept means is that, when we are born, our mind is a blank slate - Watson and other radical behaviorists did not believe that babies were born with any semblance of a personality. All behavior is learned from the environment. That is, radical behaviorism is a deterministic model: it denies the existence of free will. In fact, Watson once said:
Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select—doctor, lawyer, artist, merchant-chief and, yes, even beggar-man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors.

In one of Watson's best known studies, he shaped a baby to be afraid of something he had originally loved. This controversial study was known as the "Little Albert" experiment. "Albert" (not his real name) loved white rats; at least, they discovered that he loved white rats when they introduced him to one, along with various objects, to try to decide what to shape the boy to be afraid of (yes, really). That is, Watson was trying to prove that humans were not born with many fears; nearly all fears are shaped by the environment. What humans are naturally afraid of are loud noises. So, Watson decided to use loud noises to condition Albert to be terrified of the white rat.
Every time Albert encountered the white rat, Watson would make sudden loud noises. Over time, Albert began to show fear toward the white rat. But it didn't stop there. Albert was afraid of anything white and furry, including bunnies and a Santa Claus mask. Now, for his next study, Watson was going to undo the fear conditioning he introduced to poor Albert, but Albert moved away before that study could be completed. Though people tried to track down Little Albert later on - to see if there was a man with an irrational fear of white furry things out there - his true identity has not been confirmed.
Tuesday, April 26, 2016
V is for Hedwig von Restorff
Many people going into psychology in recent decades are women. In fact, it is quickly becoming a woman-dominated field. Unfortunately, many of the well-known scholars and researchers in psychology, including social psychology, are men. I've blogged about gender issues before, talking about topics like representation of women in the STEM fields and perceptions about ability based on gender. So for today's topic, I wanted to feature a woman who made an important contribution to the field.
Hedwig von Restorff was a German psychologist, trained in the Gestalt tradition. Gestalt psychology can be summed up as "the whole is greater than the sum of its parts" (though interestingly, the original quote actually translates to "the whole is other than the sum of its parts"). That is, Gestalt psychologists focus on overall constructs, and how people perceive individual pieces as part of a whole. They also study topics like pattern recognition, and even biases in perceiving patterns and order where they do not exist. It goes back to the idea that humans like order, and will perceive the world in such a way as to bring order to chaos.
Unfortunately, we don't know a lot about Restorff's contributions to the field of psychology, because much of her work was never translated into English. However, she is best known for the isolation paradigm, also known as the Von Restorff effect. This occurs when we remember an object or item in a list better because it is unusual.
Though this seems like a very simple concept, it has widespread applications. It is frequently applied to advertising, which is perhaps one reason why advertising can seem very off-the-wall and random. Marketers have to do something different to set themselves apart. As other marketers do the same thing, "weird" becomes the new normal, so marketers have to keep pushing the envelope to make their product, or at least its advertising, seem different.
This concept can also work in concert with other cognitive biases, like the availability heuristic, where something seems common or likely because it is easily remembered. This might explain why people tend to focus on rare outcomes of a behavior instead of more common ones. For instance, among people who are anti-vaccination, this would explain why they focus on the rare serious side effects that can occur from vaccination, as opposed to the more common illness that comes along with not being vaccinated. A serious side effect like Guillain–Barré syndrome is memorable because it is so unusual. But your odds of having that side effect are ridiculously small: about 1 in 1 million.
This is still an important area of research today. Restorff's legacy lives on.
Monday, April 25, 2016
U is for Unobtrusive Measures
These past few posts have focused heavily on bias in methods - or rather, removing bias from the study through specific controls and research methods. One of the big topics for psychologists is finding out what people think about something. Obviously, asking them what they think is one way, but people are eager to please, and may answer in the way they think the researcher wants, rather than how they actually feel. Additionally, when you conduct opinion polls, there are guidelines about sample sizes, if you want your results to be representative of the larger group. For that reason, one of the big contributions in psychology is how to measure how people feel about something through observations or by measuring something related (what we call proxy measures). That is, we can find ways to measure people's opinions unobtrusively.
One of the best books on the topic is the aptly titled Unobtrusive Measures:
This book is full of some really crafty ways to measure what people think of something. My favorite methods described in the book are known as erosion methods. You can find out how much people like a place (or at least how often they stand or walk around in it) by looking at the erosion of flooring, steps, or stone. And the famous example of how this was used involved the Museum of Science and Industry (in Chicago) and a bunch of baby chicks.
This exhibit, known as the Hatchery, is one of the most popular exhibits at MSI. How do they know this? They had to replace the floor tiles around the exhibit every six weeks, whereas tiles in other parts of the museum are often not replaced for years. In fact, they could even rank the popularity of different exhibits based on how frequently the floor tiles are replaced. When combined with observation, they discovered that people spent longer amounts of time at the Hatchery than anywhere else in the museum. These two pieces of information could be obtained from repair records and a researcher standing by exhibits to observe how people behave. Much easier - and cheaper! - than fielding a survey with thousands of people.
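Here's a toy sketch of the kind of analysis those repair records make possible. All of the exhibit names (other than the Hatchery) and all of the numbers are invented for illustration - the point is just that an erosion measure turns maintenance data into a popularity ranking:

```python
# Hypothetical repair records: average weeks between floor-tile replacements.
# Exhibit names (other than the Hatchery) and all numbers are invented.
tile_replacement_weeks = {
    "Hatchery (baby chicks)": 6,
    "Exhibit B": 30,
    "Exhibit C": 45,
    "Exhibit D": 104,
}

# Erosion logic: the faster the tiles wear out, the more foot traffic,
# which we treat as an unobtrusive proxy for popularity.
ranked = sorted(tile_replacement_weeks.items(), key=lambda item: item[1])

print("Exhibits ranked by inferred popularity:")
for rank, (exhibit, weeks) in enumerate(ranked, start=1):
    print(f"  {rank}. {exhibit} (tiles replaced about every {weeks} weeks)")
```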
Saturday, April 23, 2016
T is for Norman Triplett
As I mentioned during my History of Social Psychology post, Norman Triplett is credited with having conducted the first experiment in social psychology, in which he examined an effect known as social facilitation.
Social facilitation occurs when you perform better when others are present than when you are alone. Triplett first noticed the phenomenon when he observed that cyclists rode faster when someone else was riding with them than when they rode alone, and he decided to study the concept in the lab. He recruited 40 children and had them turn fishing reels. He divided them into groups, where they alternated between performing the task individually or in competition with another child:
Group A: (1) alone, (2) competition, (3) alone, (4) competition, (5) alone, (6) competition
Group B: (1) alone, (2) alone, (3) competition, (4) alone, (5) competition, (6) alone
Interestingly, as I was doing some quick reading to refresh my memory, I discovered some new (to me) information about the study. An article by Stroebe (abstract here) included a reanalysis of the data and found that, while there were some differences between alone and competition performance, none of those differences were statistically significant. At the time Triplett conducted this study, he didn't have access to the statistical analyses we have today, so he instead eyeballed the data. What statistical analyses do is remove guesswork and experimenter bias when examining results. We use controls while conducting the research to keep the experimenter from having an impact on participants and from giving them clues about how they are supposed to behave, but those controls are meaningless if we don't also have an unbiased way of analyzing the data.
We interrupt this blog post to give you a crash course in statistics. Statistics allow us to look for patterns in data, by examining differences in scores (which we call variation). Some variation is random, due to things like individual differences, fatigue, and so on; in Triplett's study, that would be the variation within the "alone" conditions or the "competition" conditions. This information tells us how much variation we will expect to see in scores by chance alone. Other variation is systematic, due to the experimental conditions; in Triplett's study, that would be the difference (variation in scores) between "alone" and "competition".
We compare these two types of variation to determine if our condition had an effect. If we see more of a difference between alone and competition (systematic) than we see by chance alone, we conclude that our experimental conditions had an effect. And our analyses give us a probability - that is the probability, if only chance were operating (no experimental effect), that we would see a difference between groups of that size.
Of course, I'm oversimplifying, but this is the basic premise of statistics. It might seem technical and unnecessary to some, but it keeps researchers honest. I mean, if you think you're going to see an effect, you devote time and energy into setting up and conducting the study, and then more time and energy into compiling your data, wouldn't you hope to see the effect you were expecting? To the point that you might see patterns that aren't actually there? As a field, we have agreed that we want as little bias as possible in conducting, analyzing, and reporting results, so we've developed methods, statistical analyses, and reporting standards to do just that.
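For anyone curious what that comparison looks like in practice, here's a toy example with made-up reel-turning scores. It uses scipy's independent-samples t-test, which weighs the difference between the group means against the variation within the groups and returns the probability described above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up performance scores (higher = better); 20 participants per condition
alone = rng.normal(loc=50, scale=10, size=20)        # within-group (chance) variation only
competition = rng.normal(loc=55, scale=10, size=20)  # shifted mean = systematic variation

# Compare the between-group difference to the within-group variation
t_stat, p_value = stats.ttest_ind(competition, alone)

print(f"Mean alone: {alone.mean():.1f}, mean competition: {competition.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value means a difference this large would be unlikely by chance alone.
```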
To be fair to Triplett, he didn't have access to these statistical analyses at the time, so we'll forgive him for simply eyeballing the data. Furthermore, later researchers have studied social facilitation and found support for it. In fact, this later research shows that social facilitation can occur in two ways: co-action (or competition, as Triplett studied) and audience effects.
In addition to social facilitation, Triplett also contributed to the psychology of magic - rather, what occurs in the perceiver watching a magician. This included concepts like misdirection and suggestibility. You can read the full-text of his article on the topic here.
Friday, April 22, 2016
S is for Stanford Prison Experiment
In 1971, a young professor by the name of Philip Zimbardo conducted a study in which he simulated a prison in the basement of a university building. Participants (24 men in total) were recruited through a newspaper ad for a 2-week study, and paid $15 a day. Half were randomized to serve as prison guards, who took orders from a research assistant serving as warden and from Zimbardo, who served as prison superintendent. They were given uniforms consisting of khaki clothing from a military surplus store, mirrored sunglasses, and batons.
The other half were randomized to serve as prisoners. In order to make the study as realistic as possible, Zimbardo got the cooperation of the Palo Alto police department, who picked up each prisoner, arrested them (complete with frisking and handcuffing), and booked them, before finally dropping them off at the prison, blindfolded. They were also given a uniform: a smock, stocking cap, and chain around their ankle.
All participants, whether assigned to be guard or prisoner, were subjected to rigorous psychological testing before being accepted into the study, to ensure they had no mental or physical illnesses, and had no history of crime or drug abuse. So these were all healthy, "normal" young men who had never been inside a prison. Everything they did was a product of their experience in the study, as well as, for the prison guards, imitation of what they'd seen portrayed in the media and instructions from Zimbardo and the RA. The guards were told that they were not allowed to harm the prisoners, or withhold food/water, but were encouraged to create an atmosphere in which the prisoners lost their sense of individuality and identity, and were made to feel powerless.
Unfortunately, the study quickly got out of hand. The guards began using psychological control tactics on prisoners, and one prisoner began (in the words of Zimbardo) to "act crazy" - and even Zimbardo got carried away, at first chiding the prisoner about being weak, before finally realizing the participant was truly suffering and releasing him. However, even this momentary realization did not change the way he and the others acted toward the remaining prisoners.
It took an outside observer, a graduate student named Christina Maslach, whom Zimbardo was dating (and later married), for Zimbardo and others to see that things had gotten out of hand. She told Zimbardo that what he was doing was unethical, and in fact, threatened to break up with him if he didn't call off the study. What was supposed to be a two-week study ended after 6 days.
This study, and others I've blogged about this month, resulted in widespread changes to ethical standards for research. However, this study also inspired Zimbardo's research on how situations turn people bad. That is, rather than having bad apples, Zimbardo argues for the "bad barrel" - good people can be made to do terrible things in the right situation. He wrote about this in his book, The Lucifer Effect:
A documentary (Quiet Rage) was made shortly after the study was completed, and some of the participants were reunited to discuss their experiences. What is striking is that, among the prisoners, there was still a great deal of resentment for how they were treated by the guards.
And as with the Milgram study, a movie was very recently released about the study:
Thursday, April 21, 2016
R is for (the Theory of) Reasoned Action
I've talked a lot this month about groups, how they are formed, and how they influence us. But a big part of social psychology, especially its current cognitive focus, is attitudes and how they influence us. And as good social psychologists, we recognize that the formation and influence of attitudes are shaped by others and by our perceptions of what they expect from us.
Attitudes are tricky, though. They alone do not shape what we do. In fact, there is a great deal of research on how attitudes are a poor predictor of behavior, known sometimes as the attitude-behavior gap or value-action gap. There are other factors that influence us, that may interact with or even counteract our attitudes. Instead, various forces including attitudes shape what is known as behavioral intention - what we intend to do in certain situations. This intention is then used to predict the behavior itself, recognizing that situational forces may exert their influence between intention and behavior.
Two social psychologists, Fishbein and Ajzen (pronounced Ay-zen), developed the Theory of Reasoned Action to predict behavioral intention, and in turn behavior, with two factors: attitudes and norms. Attitudes can vary in strength - from very important to not important - and evaluation - positive to negative. Norms can also range from very broad, such as societal norms, to more specific, such as norms within your social group. Within that norm factor, there are two subconcepts: normative beliefs (what we think others expect of us) and motivation to comply (that is, do we want to conform or be different?). If we draw this model, it would look something like this:
Not long after publishing on this model, Ajzen decided to build on this theory to improve its predictive power. Thus, the Theory of Planned Behavior was born. This new theory adds one additional component to the old theory: perceived behavioral control. This concept was influenced by self-efficacy theory, and represents a person's confidence in his/her ability to engage in the behavior in question. Perceived behavioral control is influenced by control beliefs, or beliefs about the factors that may help or hinder carrying out the behavior. Each of these three factors not only influences behavioral intention, they can also influence each other. For instance, your own attitude about something can color your judgment of what others think. The degree of control you believe you have over the behavior can influence your attitude. And so on.
When Ajzen drew the model, it looked like this:
Because psychologists recognize that perception can be biased, he also included a box for "actual behavioral control." What we think may not be accurate, and what is actually true may still influence us, even if we fail to notice the truth. Humans are skilled at self-deception.
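To make the structure concrete, here's a toy numerical sketch of how the three components might feed into behavioral intention. The 1-7 ratings and the weights are entirely invented; in real applications of the theory the weights are estimated from data (for example, with regression) rather than set by hand:

```python
# Toy Theory of Planned Behavior sketch - all ratings and weights are hypothetical.

def behavioral_intention(attitude, subjective_norm, perceived_control,
                         w_attitude=0.5, w_norm=0.3, w_control=0.2):
    """Combine the three components into a single intention score (weighted sum)."""
    return (w_attitude * attitude
            + w_norm * subjective_norm
            + w_control * perceived_control)

# Example: intention to start exercising regularly, rated on 1-7 scales
attitude = 6           # "Exercising regularly would be good for me"
subjective_norm = 4    # "People important to me think I should exercise"
perceived_control = 2  # "I'm not confident I can fit it into my schedule"

score = behavioral_intention(attitude, subjective_norm, perceived_control)
print(f"Intention score: {score:.1f}")
# Low perceived control drags the score down even when the attitude is positive.
```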
One important thing to keep in mind if you're trying to predict behavior from attitudes is that specific attitudes are more predictive than general attitudes. Asking someone their general attitude toward the legal system will be far less predictive of how they vote as a juror than their attitude about a specific case. But even when you measure a specific attitude, you may not get the behavior you expect. For my dissertation research, I studied pretrial publicity - information seen in the media before a case goes to trial - and its influence on verdicts. Pretrial publicity is an interesting area of research, especially because no one has really found a good theory to explain it. That is, we know it biases people, but when we try to apply a theory to it, the study still finds pretrial publicity effects but often fails to confirm the theory.
I decided to apply attitudes to the study - very specific attitudes. That is, I hypothesized that pretrial publicity information is only biasing if a person has a specific attitude about that piece of information as indicative of guilt. So, to put it more simply with one of the pieces of information I used in my study: finding out a person confessed is only biasing if you believe that only guilty people confess. I gave participants news stories with one of four pieces of information: confession, resisting arrest, prior record, or no biasing information (control condition).
Then I told them they would be reading a case and rendering a verdict, but first, I asked them to complete a measure of attitudes. These measures are sometimes used during a process known as voir dire, in which potential jurors are questioned to determine if they should be added to the jury. Embedded in this measure were questions about the specific pieces of information. They read the case, and selected a verdict.
The problem is that, like so many other studies before, I found pretrial publicity effects, but attitudes were often unrelated. Even people who didn't believe confession was indicative of guilt were more likely to select guilty when they heard that information pretrial. I was able to apply some different theories to the results, ones related to thought suppression and psychological reactance, concepts I've blogged about before. But I was quite disappointed that I still couldn't fully explain what I was seeing.
Like I said, attitudes are tricky.
Wednesday, April 20, 2016
Q is for Quick v. Slow Processing
Today is the first cheat day - that is, I kind of cheated in coming up with a q-themed title. Usually, this concept is referred to as fast-slow processing. But close enough, right?
Basically, your thought processes can be divided into two types: fast (quick) and slow. There are a few different theories about these different processes, but they're all categorized as "dual process theories." The two big ones are Chaiken's Heuristic Systematic Model and Petty and Cacioppo's Elaboration Likelihood Model (which is actually a model of persuasion). Additionally, Kahneman calls them "intuition" and "reasoning."
As I've blogged about before, we're cognitive misers - mental energy is a fixed resource and so we save it for the times we really need it. So we tend to go through life on auto-pilot, processing things quickly and with as little effort as needed. In Chaiken's model, this is heuristic processing: we rely on heuristics, quick rules of thumb or mental shortcuts. What feels good or makes us happy? What do we usually do? This is great if you're deciding, for instance, where to go for lunch.
In Petty and Cacioppo's model of persuasion, this is called the peripheral route. We may choose to believe someone because they have a higher degree or are attractive. Persuasion occurring on this route is often temporary. We may be persuaded in the moment, but that attitude change is not likely to "stick." You may have seen a friend persuaded to a new way of thinking, vehemently express that new attitude, and then fall back to their old way of thinking.
The other route is systematic. We think really hard about what we want, employing logic and reason, as well as emotions, to come to a conclusion. In Petty and Cacioppo's model, this is called the central route. We think critically about what the person trying to persuade us is saying and doing, and come to our own conclusions. Persuasion occurring through this route is more long-term. We may have a permanent, or nearly permanent, change in attitude.
Which route or approach we take depends on two things: ability and motivation. We must be able to think critically or systematically about something in order to do that. Therefore, a person with higher intelligence is more likely to engage in central route or systematic processing.
But - and there's always a but - we must also have the motivation to do so. Two people may be of different levels of intelligence, but if the high intelligence person is unmotivated to think critically, s/he won't look much different than the low ability person. Because we're cognitive misers, we tend to function at low motivation for thought.
Now, there are some people who are more motivated to think systematically than others. We call these people "high need for cognition." Those are the people who, for instance, go through all the pros and cons about different lunch options before making a decision. They will still use heuristics or peripheral route processing on occasion, because even though their motivation is high, their ability might be low moment to moment if they've expended a lot of cognitive resources. But because high need for cognition people tend to have higher baseline ability, they have more resources to work with, and it takes them longer to exhaust those resources.
It's important to point out that decisions made using heuristics or peripheral route processing are not necessarily wrong. You may, for instance, choose to believe a person because she has a PhD in a topic, without really processing what she has to say. But another person who thinks critically may also believe the person and be persuaded, because of the strength of the arguments. And there are certainly times when systematic thought is unnecessary.
Tuesday, April 19, 2016
P is for Parasocial Relationships
Human beings are social creatures; we seek out relationships with other people in a variety of capacities and to fulfill a variety of important needs. In fact, we are so hard-wired to build relationships with others that we may even feel connected - in a social sense - to people we have never met, often people we encounter through the media. We call this phenomenon "parasocial relationships."
This behavior begins very early on in life. As children, we learn about social norms and how we should behave by watching others, including through television, movies, and video games. Because children have such active imaginations, and often do not yet know the difference between fantasy and reality, they may come to believe the characters they watch and even interact with are real. As we grow older, we (hopefully) learn that the characters aren't real...
But the feelings and connections continue to influence us and shape how we interact with others. Even as adults, we continue to feel connections to characters and media personalities, even when we recognize that those connections aren't real. You could argue that fandom, having favorite characters, and so on, are all extensions of our tendency to form parasocial relationships.
The concept of parasocial relationships plays an important part in a theory of media known as uses and gratifications (U&G) theory - essentially, people have different motivations for using media, and will select media that fulfills their needs. In this theory, rather than being passive recipients of media information, consumers play an active role in seeking out and integrating media into their lives. Though U&G theory itself is relatively new (it dates to the 1940s), the concept of parasocial relationships has been around much longer, and could encompass feelings of connectivity with story characters, or even gods and spirits.
While we all show this tendency, some people are more likely to form parasocial relationships - or rather, more likely to form strong parasocial relationships - than others. People who do not have many opportunities for regular social interaction, for instance, tend to compensate for this deficit with parasocial relationships. I actually had the opportunity to witness this firsthand several years ago. My mom is visually impaired, and since I was in school, and my dad and brother worked full-time, she spent a lot of time at home with the dog and the TV. I introduced her to my all-time favorite show, Buffy the Vampire Slayer, and got her hooked.
So hooked, in fact, that I noticed she started talking about the characters - especially Willow, her favorite character - as though they were real people.
At first, I was a bit concerned, until I started thinking back to the concept of parasocial relationships. I realized that what she was doing was actually quite normal, maybe even healthy. And though somewhat more intense, her connection to the characters was not altogether different from mine - considering that show can make me laugh or cry, regardless of how many times I've seen a particular episode, I likely also feel some strong social connections to the characters of Buffy.
And though I've focused the post so far on characters, we can also form parasocial relationships with real people, like celebrities. For instance, I know a lot about some of my favorite authors - I've read about them, even met a few of them, and can talk about them almost as though I know them. While at the logical level I know I don't actually KNOW them, it's completely normal to still feel a social connection to them.
Monday, April 18, 2016
O is for Obedience to Authority
The events during the Holocaust left many people wondering why and how so many people went along with the systematic murder of millions of people. This deplorable event in our world history inspired many scholars and researchers to fully understand why so many people participated and, hopefully with that knowledge, prevent it from ever happening again. One of these researchers was Stanley Milgram, who conducted groundbreaking and highly controversial research on obedience to authority. His studies are standard reading in most introductory psychology courses, and his findings are still surprising (I started to type the word shocking, and realized that would be a very bad pun) even today.
Milgram started with a basic study, which he built upon over time to better understand the concepts involved. In the basic study, participants were recruited through newspaper advertisements, to participate in a study on memory. Though they wanted people from all walks of life (everything from white-collar workers and professionals to laborers and construction workers), they initially only recruited men.
Participants arrived at Yale University to meet another participant, who was actually an actor working for the researchers (in research methodology, we refer to a person pretending to be a participant, but actually part of the study team, as a confederate). The participant and confederate drew slips of paper to determine who would be the Teacher and who would be the Student; the drawing was rigged so that the real participant always received the role of Teacher.
The Student was then strapped into a machine that would deliver electric shocks, and the Teacher was taken to another room with a board that allowed him to deliver shocks to the Student. The experimenter was also in the room with the Teacher, giving commands as needed to keep the study going. The Student let both the experimenter and Teacher know he had a heart condition, and the experimenter responded by stating the shocks were painful, but not dangerous. The Teacher received a sample shock of 45 volts.
For the study, the Teacher read the Student a series of word pairs. Then, during the testing stage, the Teacher would read one of the words from the pairs, followed by four words: one was the other part of the pair, and the rest were distractor words. The Student had to select the correct answer by pressing a button from his room. If the Student gave the wrong answer, the Teacher delivered a shock, beginning with the lowest setting for the first wrong answer, and moving up the board for each subsequent wrong answer.
As the Teacher delivered shocks, he would become more and more uncomfortable with the Student's painful responses (which began around 75 volts, turned into demands to stop the study at 150 volts, and became screams of agony around 270 volts). The experimenter would keep the Teacher on task, ordering him to continue with the study. At 330 volts, the Student stopped responding completely, and the experimenter would be there to tell the Teacher that nonresponse should be treated like a wrong answer. The Teacher had no idea whether the Student was simply refusing to respond, or was unconscious - or worse.
As you might have guessed, this was not a study of memory or learning. The Student was a confederate, trained to respond in the same way with each real participant (Teacher). In fact, the Student's painful cries were prerecorded, so they would sound the same each time. What Milgram was actually studying was obedience: would the Teacher continue to shock the Student - who was not only clearly in pain, but had a heart condition - simply because the experimenter told him to?
When Milgram proposed this study, and asked colleagues what they thought would be the outcome, they said that only a sadist would deliver the highest voltage on the board. Since sadism is present in about 1% of the population, they thought only 1% would deliver the maximum 450 volts.
But Milgram knew it would be higher than that. And he was right. In this first study, 65% of participants delivered the maximum voltage. And no one stopped participating as Teacher before reaching 300 volts on the board.
As Milgram continued in this line of research, he looked at the factors that altered this rate of obedience. What would happen, for instance, if the experimenter was moved farther away from the Teacher, or the Student was moved closer? Would we see the same rate of obedience if the Teacher had to physically press the Student's hand onto a plate to deliver shocks? As you might imagine, obedience goes up as the experimenter gets closer to the Teacher, and goes down as the Student gets closer.
Milgram wrote the results of his various studies in a book, Obedience to Authority, which I highly recommend.
This is the research for which Milgram is best known, though he also contributed many other highly influential ideas. I just discovered that Magnolia Pictures made a movie about Milgram, with a great deal of attention paid to his obedience research.
Saturday, April 16, 2016
N is for Negativity Bias
You probably won't be surprised if I tell you that human beings have a tendency to focus on the negative. Though many people try to be positive and grateful, when something bad happens, we tend to fixate on that thing, complain about it, and in many cases, let it ruin our mood/day/week/month/year/etc. This is known as the negativity bias; unpleasant things have a greater impact than pleasant or neutral things.
You can see how this bias might be important for survival. If something can result in a negative outcome (which could range from mild discomfort to injury or death), it's going to get more of our attention and more strongly influence our behavior than something with a positive outcome. However, this bias influences a variety of decisions, including ones that would be better served by more rational consideration of the facts.
During this election year, you've probably heard MANY ads about different candidates, and as with many election cycles, MANY of these ads are actually attacks on other candidates: highlighting negative traits and bad things that candidate has done in his/her past. These ads capitalize on the negativity bias.
Obviously, if you're conscious of this bias, you can try to correct for it. One way is by making an effort to fixate on the positive, through a process called savoring; I've blogged about savoring before, and you can also read more about it here. Or just keep staring at that adorable puppy!
Friday, April 15, 2016
M is for Minimal Group Paradigm
Social psychology is devoted to the study of groups, so of course it makes sense that someone would study how groups are formed. The interesting thing is, it doesn't take much for people to begin assigning people to groups, including themselves. This is called the minimal group paradigm - how much information about others do we need before we start assigning them to our own group or different groups, and to show favoritism to our own group? And the answer is: not much.
This effect was demonstrated by a researcher named Henri Tajfel. His first work on the subject, in 1971, involved two studies - one in which people were assigned to a group (they were told the assignment was based on their performance on a task, though it was actually random) and another in which participants selected which of two paintings they preferred.
In both studies, participants were then grouped with others and could assign cash awards to the other group members. Unsurprisingly, participants gave higher cash awards to members of their own group - people in the same performance group, or who preferred the same painting - than to members of the other group.
Even when people know that group assignment was random, they still favor their own group. In fact, there's a great XKCD cartoon on the subject that I know I've blogged about before.
Thursday, April 14, 2016
L is for Kurt Lewin
Wilhelm Wundt may have been the first to use the term "social psychology" and Floyd Allport may have been the first to identify it as the study of groups, but Kurt Lewin is widely recognized as the father of modern social psychology. His ideas continue to influence the field, and even other fields, and they manage to be both revolutionary and easy-to-understand. In short, Kurt Lewin is awesome.
Born in Prussia and educated in Germany, Lewin came to the United States in 1933 (where he changed the pronunciation of his name from Leh-veen to Lou-win), fleeing Jewish persecution. During his career, he worked at Cornell University, the University of Iowa, MIT, and Duke, and he founded the National Training Laboratories, at Bethel, Maine. Among his contributions are the concepts of "action research" (which he defined as "a comparative research on the conditions and effects of various forms of social action and research leading to social action"), applied research (that is, research that examines real-world issues, rather than theoretical research), "genidentity" (the multiple phases, or identities, of an object across time - a concept used in theories of space-time), and sensitivity training (an intervention to combat religious and racial prejudice).
But perhaps his strongest contribution has to do with what is known as the nature versus nurture debate. From its beginnings, psychology was the study of what makes a person the way they are, that is, understanding their thoughts, feelings, and actions. Some psychologists, such as the psychoanalysts, believed that the person was a product of his/her experiences - that is, to understand the current person, you had to know something about his/her past. Others, such as the behaviorists, believed people were shaped entirely by their experiences - specifically, reinforcements and punishments that shape behavior.
Then Lewin published his formula:

B = f(P, E)

Specifically, behavior is a function of person and environment, or the person in the environment. This simple formula speaks volumes about human behavior. First of all, it slices through the nature-nurture debate by establishing the person-environment interaction, essentially saying "it's a bit of nature and a bit of nurture." But most importantly, it was revolutionary in its assertion that human behavior could be determined entirely by the situation. The situation exerts a powerful influence on us, sometimes even making us behave in ways that are completely contrary to how we usually behave. Taken to its extreme, some psychologists completely deny the existence of personality, instead highlighting that we behave in very different ways depending on the situation and the specific social role we play in that situation.

Lewin's equation also sets the stage for a concept called multifinality - that is, how people in similar situations can arrive at different outcomes. This happens because of the interaction with the person; because of that interaction, no environment is the same for any two people. Reality is determined by the person perceiving it, a concept known as psychological reality.
He extended this work when he developed the force-field model, in which the person is at the center of the life space, with forces (often social) exerting influence on the person as either helping forces or hindering forces.
Lewin's ideas continue to influence the field, and his focus on the importance of applied psychological research definitely shaped my education. I usually tell people my PhD is in social psychology; this isn't a lie, but technically, my degree is in applied social psychology. In addition to learning about theory, methods, and statistics, we took courses in applied topics (such as the law, politics, and so on) and were encouraged to seek out internships and research opportunities that tackled real-world issues. I would like to think the existence of such a program is an extension of Lewin's legacy.
Wednesday, April 13, 2016
K is for Justin Kruger
Justin Kruger is a social psychologist who currently serves as a professor of marketing at New York University's Stern School of Business. His research interests include use of heuristics (which I'll be blogging about in the near future) and egocentrism in perspective taking. But some of the most interesting research he contributed was completed while he was a graduate student at Cornell University, working with David Dunning. This research is on overconfidence in self-assessment, and the finding from the research has become known as the Dunning-Kruger effect.
Specifically, the Dunning-Kruger effect has to do with rating one's own competence. The rating could be done for any skill or ability; in the original article on the topic (which you can read here), they assessed humor, logical reasoning, and English grammar. In addition to taking an objective assessment, participants were also asked to rate their own ability and how well they thought they did on the test.
They found that people at the lowest level of actual ability overestimated their ability, rating themselves as highly competent on the subject being assessed. In addition, people with the highest level of ability tended to underestimate their ability. In fact, when they charted objective ability and self-assessed ability together, it looked something like this:
The red line shows how people actually performed (on the objective test). This shows a clear, linear trend; people with low ability did poorly on the test, and people with high ability did well on the test. (Note: This is to be expected, because the "actual test score" line uses the same data that was used to assign people to ability groups. The actual test line is kind of redundant, but is included here to really drive home what the Dunning-Kruger effect looks like.)
Now look at the blue line, which shows how well people thought they did. The low performers thought they did well - better than people who are slightly below average and slightly above average. The high performers also thought they did well, though not as well as they actually did. The most accurate assessment came from the slightly above average group, but even they slightly underestimated their ability.
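If you want a rough visual, here is a schematic sketch of that chart. The numbers are purely illustrative - chosen only to mirror the shape described above - and are not Kruger and Dunning's actual data:

import matplotlib.pyplot as plt

quartiles = ["Bottom", "2nd", "3rd", "Top"]
actual = [12, 37, 62, 87]       # made-up percentiles standing in for actual test scores
perceived = [62, 58, 60, 72]    # made-up percentiles standing in for self-rated ability

plt.plot(quartiles, actual, color="red", marker="o", label="Actual test score")
plt.plot(quartiles, perceived, color="blue", marker="o", label="Perceived ability")
plt.xlabel("Actual performance quartile")
plt.ylabel("Percentile")
plt.legend()
plt.show()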
Why does this occur? The issue comes down to what we call "metacognition" - essentially thinking about or being aware of how we think. As I blogged about before with social comparison, people are motivated to evaluate themselves but they need something to which they can compare. In the case of self-assessment of one's ability, the comparison is what we think good performance looks like. People at the lowest level of ability lack the metacognitive skills to know what good performance looks like in order to accurately assess themselves. To put it simply: they don't know how much they don't know.
When you first encounter a subject you know nothing about, you have no idea what to expect and probably have no idea how much there is to know. So you underestimate how much you need to learn to become an expert, meaning you overestimate how close you are to expert level. (Do you know how many times people have told me that, because they've taken introductory psychology, they too are an expert in the subject?) As you learn more, you acquire knowledge and skills, but you also get a more accurate picture of how much more there is to know, so your assessment of your abilities goes down. That is, when you have moderate competence in a topic, you know a lot, but you also know how much you don't know.
Once again, this finding has some important real-world applications. The first that springs to mind is in job interviews, where people are constantly asked to assess their own abilities, but are rarely (at least in my field) given any objective test to demonstrate those abilities. This is perhaps one of many reasons why job interviews are generally not valid predictors of actual job performance - but that's a post for another day.
Tuesday, April 12, 2016
J is for Just World Hypothesis
Life isn't fair - we say this to people constantly, especially children, when they complain about something being unfair. And of course, fairness is all about perception. However, a sadly common cognitive bias - perhaps even among people who utter the phrase "life isn't fair" - is the belief that life is fair, or rather, just. Good things happen to good people and bad things happen to bad people. This is the basic premise behind the just world hypothesis, also sometimes called "belief in a just world."
The problem with the just world hypothesis is that the reasoning is circular. Bad things happen to bad people. But that also means that, if something bad happened to you, you must be a bad person. The just world hypothesis often quickly devolves into victim blaming.
If you'd like to see real-world examples of the just world hypothesis, check out the comments section on pretty much any news story. Just don't feed the trolls. Really, don't.
This topic was introduced by Melvin J. Lerner, who has contributed a great deal of research and scholarship into the study of justice. He first made his observations during clinical training; though Lerner's PhD is in social psychology, he completed post-doctoral training in clinical psychology. He noticed that his fellow therapists often blamed mental patients for their own suffering. He conducted additional research into the topic, and in addition to articles, published a book detailing his various studies on the just world hypothesis. Unfortunately that book appears to be out of print, but if you happen to find it at a used book store or library, it's an excellent - though disheartening - read.
Why do people engage in this cognitive fallacy? As I've said before, people try to organize their world to bring order from chaos. We don't like thinking that events are random, and we see ourselves as the star in a story, with themes, plots, and subplots. We strive for consistency - in our own identity as well as in others. And we engage in a variety of cognitive fallacies to protect ourselves emotionally from the horrors of the world. We believe that bad things do happen, but they won't happen to us, often referred to as the delusion of invincibility. And we take this a step further, in order to preserve our delusion of consistency and order in the world, by believing the world is just. In fact, Lerner and his collaborator Miller put it best:

"Individuals have a need to believe that they live in a world where people generally get what they deserve."

This is a comforting and understandable belief. But as with many cognitive tendencies, it becomes twisted and dangerous when we begin applying that belief to rationalize the suffering of others.
It's important to note that not everyone engages in this fallacy. Even in Lerner's studies, there were instances where people offered to help those in need and did not engage in victim blaming. So it is possible to break out of this delusion.
Monday, April 11, 2016
I is for Impression Formation
You've probably heard countless aphorisms on the importance of making a good first impression.
In fact, a Google search of "how to make a good first impression" produced over 36 million results. Considering its importance, you probably won't be too surprised to learn that impression formation has been an important topic in social psychological research for decades.
Some of the first research on the topic was conducted by Solomon Asch. According to his work, when we make an impression of a person, we examine their traits on two basic factors: trait valence (a positive or negative evaluation of that trait) and trait centrality (how important a trait is to the person's identity - in a sense, the amount of weight we place on that trait in generating the impression). This particular form of impression formation is often known as the "cognitive algebra" approach, where the equation might look something like this:
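In rough form (this is my own notation, not necessarily Asch's exact formulation), the idea is a weighted average of trait valences:

Impression = (w1×v1 + w2×v2 + ... + wn×vn) / (w1 + w2 + ... + wn)

where each v is how positive or negative a trait is, and each w is how central that trait is to the impression.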
This is a simplification, because a few other things are important, according to Asch's work, including primacy (first observations carry more weight - so yes, first impressions are important). In fact, when we make observations about others, we do so from the assumption that behaviors reflect stable personality traits, rather than momentary changes due to the situation (or what we call "states") and we like consistency (that is, the various traits have to mesh with each other, and so part of the changing impression of a person may be to make newly discovered traits "work" with previously discovered traits).
Of course, as we learned in the A post about the fundamental attribution error, we have a bias to assume that others' behaviors, especially negative behaviors, reflect traits, while we assume our own negative behaviors are the result of states. This bias certainly comes into play in impression formation, and we have a tendency to focus on the negative behavior of others, even if that behavior has a rational, state-based explanation. Look for a future post on the "negativity bias" for more information!
Saturday, April 9, 2016
H is for A (Brief) History of Social Psychology
So far this month, I've tried to introduce you to some of my favorite/the most influential ideas, theories, and people in social psychology. But one of my favorite things to teach in any course was the history of the course topic. (In fact, History & Systems, which is essentially a history of psychology course, was my second favorite course in college - first was Research Methods.)
That's why today I'm going to offer you a brief history of social psychology. The first use of the term "social psychology" was by Wilhelm Wundt (considered by most to be the father of modern psychology) in 1862.
Unfortunately, his writings on the topic were not translated into English, and so they did not influence the stronghold of American social scientists.
However, in 1895, Norman Triplett of Indiana University did what is credited as the first social psychological experiment on a concept known as social facilitation. Essentially social facilitation occurs when the presence of others improves performance - for instance, when an athlete runs faster during the marathon than in her training leading up to it, or when a basketball player plays better in a game than practice.
Two textbooks were published on social psychology in 1908, one by William McDougall and the other by Edward Ross. But social psychology still lacked a unique identity that differentiated it from other, existing fields, as well as particular methods for studying social psychological topics. That identity came in 1924, when Floyd Allport (brother of Gordon Allport) published his own textbook on the topic. In his book, he labeled social psychology as the psychology of groups, rather than individuals. The reach and topics of social psychology expanded in 1936 to the study of social issues, when a group of social psychologists founded the Society for the Psychological Study of Social Issues, which is still in existence today.
But perhaps one of the most interesting developments in social psychology occurred in 1930s Europe, where a combination of Jewish persecution and a policy in the Soviet Union forbidding the use of psychological tests (which halted a great deal of research) resulted in many influential social scientists immigrating to the United States. Among these was Kurt Lewin, one of the most influential social psychologists ever; in fact, some consider him the father of social psychology. Look for a special post all about him soon!
In the 1950s and 60s, much of the research was inspired by the heinous acts of World War II - Adorno's research on authoritarianism, Asch's research on social influence, Festinger's cognitive dissonance theory, and of course, Milgram's famous study on obedience to authority. During this same period, research by social psychologists showing that segregation had negative impacts on Black children was also used in the famous Brown v. Board of Education of Topeka decision.
In the 1970s through today, the face of social psychology changed once again. With the availability of new methods and devices to study thought and the human brain, many researchers began adding a cognitive component to their social psychological research. Some specify this as a new subfield called cognitive social psychology.
I think the main reason I love learning about history so much is because of a fascination with where we've been and an interest in understanding the trajectory of where we are going. I'll be very interested in seeing where the field of social psychology takes me next!
Friday, April 8, 2016
G is for Genovese
As with yesterday, today's A-Z deals with a person - however, unlike yesterday, this person was not a psychologist, but instead inspired an important area of social psychological research.
On March 13, 1964, Catherine "Kitty" Genovese was returning to her apartment in Queens, when she was brutally attacked and murdered in the apartment courtyard.
The case attracted a great deal of media attention, not just because of the facts of the case (that Kitty was attacked completely unprovoked, by a stranger, who returned after the initial attack to complete the crime), but because the inhabitants of the apartment building heard, and in some cases saw, the attack. Yet the police were not called until Kitty lay dying in the arms of one of her neighbors. News reports claimed as many as 38 people witnessed the event but did not call the police, some saying later that they "did not want to get involved."
This case was in the media again about two weeks ago, when the perpetrator, Winston Moseley, passed away in prison, at the age of 81. The obituary of Moseley in the New York Times pointed out some of the mistakes in the initial coverage of the event (including coverage by NYT). The attack was not fully witnessed by anyone, though people heard/saw bits and pieces, some drawing incorrect conclusions (such as that this was a lover's quarrel).
However, even if the "38 witnesses" portion is incorrect, some witnesses were aware that something far worse than a lover's quarrel was occurring. One neighbor even shouted at Moseley to "Let that girl alone!" at which point Moseley left. However, he returned about 10 minutes later to attack Genovese again. This attack lasted about half an hour. Police were not summoned until a few minutes after the attack was finished. They arrived within minutes, but Genovese died in the ambulance.
This event inspired a great deal of psychological research on why people fail to help others in need. John Darley and Bibb Latané conducted research in 1968, directly inspired by the murder of Kitty Genovese, on the "bystander effect." This research established the seemingly paradoxical phenomenon that you are less likely to be helped by a group of people than by a single person. In their laboratory, they staged an emergency, and measured whether participants alone or in groups (in some cases, groups with confederates) stepped in to help. When the participant was alone and heard the call for help, 70 percent reported the emergency or went to help. In groups, only 40 percent helped in any way.
The reason for this is diffusion of responsibility. When you alone witness someone in need of help, you have 100 percent responsibility; however, if you are in a group of 5 and witness someone in need of help, you only have 20 percent responsibility. Additionally, as I've blogged about recently, we look for cues from others on how we should respond. If other people appear calm and detached, we may think we are misinterpreting the situation (that is, there is no emergency).
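To see that arithmetic spelled out, here is a tiny toy calculation - my own illustration of the naive equal-split idea, not a formula from Darley and Latané:

# Felt responsibility under a naive equal split among bystanders
for n_bystanders in (1, 2, 5, 10):
    share = 100 / n_bystanders
    print(f"{n_bystanders} bystander(s): {share:.0f}% of the responsibility each")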
In a more innocuous situation, you've probably witnessed something similar in classes. After the teacher/professor has covered a topic and asks if there are any questions, most students say nothing, even if they did not understand; they're looking to others to see if they are also confused. This leads to an annoying situation experienced by most teachers/professors: most students missed a concept on a test or quiz, but no student asked clarifying questions.
Through multiple experiments, changing small elements of the situation, Darley and Latané found that bystanders must go through 5 specific processes in order to help:
- Notice something is wrong. Probably the reason most people in Genovese's apartment building did nothing is that, at 3 am, they may not have heard anything, and/or may have grown accustomed to blocking out noises outside their building.
- Interpret that situation as an emergency. Some people who heard the attack on Genovese did not realize it was a brutal physical attack, thinking it was instead lovers or drunks arguing.
- Feel some degree of responsibility to help. This is where diffusion of responsibility comes into play.
- Be able to offer some form of assistance. It's possible that people who overheard the attack knew what it was, but didn't go outside to help for fear of also getting attacked. And in emergency situations, people aren't thinking clearly, and may not consider all potential options (such as calling the police).
- Offer their chosen form of assistance.
Sometimes, knowledge really is power.
On March 13, 1964, Catherine "Kitty" Genovese was returning to her apartment in Queens, when she was brutally attacked and murdered in the apartment courtyard.
By Source, Fair Use of copyrighted material in the context of Murder of Kitty Genovese
The case attracted a great deal of media attention, not just because of the facts of the case (that Kitty was attacked completely unprovoked, by a stranger, who returned after the initial attack to complete the crime), but because the inhabitants of the apartment building heard and in some cases saw, the attack. Yet the police were not called until Kitty laying dying in the arms of one of her neighbors. News reports claimed as many as 38 people witnessed the event but did not call the police, some saying later that they "did not want to get involved."
This case was in the media again about two weeks ago, when the perpetrator, Winston Moseley, passed away in prison, at the age of 81. The obituary of Moseley in the New York Times pointed out some of the mistakes in the initial coverage of the event (including coverage by NYT). The attack was not fully witnessed by anyone, though people heard/saw bits and pieces, some drawing incorrect conclusions (such as that this was a lover's quarrel).
However, even if the "38 witnesses" portion is incorrect, some witnesses were aware that something far worse than a lover's quarrel was occuring. One neighbor even shouted at Moseley to "Let that girl alone!" at which time, Moseley left. However, he returned about 10 minutes later to attack Genovese again. This attack lasted about half an hour. Police were not summoned until a few minutes after the attack was finished. Police arrived within minutes, but Genovese died in the ambulance.
This event inspired a great deal of psychological research on why people fail to help others in need. John Darley and Bibb Latané conducted research in 1968, directly inspired by the murder of Kitty Genovese, on the "bystander effect." This research established the seemingly paradoxical phenomenon that you are less likely to be helped by a group of people than one person alone. In their laboratory, they staged an emergency, and measured whether participants alone or in groups (in some cases, groups with confederates) stepped in to help. When the participant was alone and heard the call for help, 70 percent reported the emergency or went to help. In groups, only 40 percent helped in any way.
The reason for this is diffusion of responsibility. When you alone witness someone in need of help, you have 100 percent responsibility; however, if you are in a group of 5 and witness someone in need of help, you only have 20 percent responsibility. Additionally, as I've blogged about recently, we look for cues from others on how we should respond. If other people appear calm and detached, we may think we are misinterpreting the situation (that is, there is no emergency).
In a more innocuous situation, you've probably witnessed something similar in classes. After the teacher/professor has covered a topic and asks if there are any questions, most students say nothing, even if they did not understand; they're looking to others to see if they are also confused. This leads to an annoying situation experienced by most teachers/professors: most students missed a concept on a test or quiz, but no student asked clarifying questions.
Through multiple experiments, changing small elements of the situation, Darley and Latané found that bystanders must go through 5 specific processes in order to help:
- Notice something is wrong. Probably the reason most people in Genovese's apartment building did nothing is because, at 3 am, they may not have heard anything, and/or may have grown accustomed to blocking out noises outside their building.
- Interpret that situation as an emergency. Some people who heard the attack on Genovese did not realize it was a brutal physical attack, thinking it was instead lovers or drunks arguing.
- Feel some degree of responsibility to help. This is where diffusion of responsibility comes into play.
- Be able to offer some form of assistance. It's possible that people who overheard the attack knew what it was, but didn't go outside to help for fear of also getting attacked. And in emergency situations, people aren't thinking clearly, and may not consider all potential options (such as calling the police).
- Offer their chosen form of assistance.
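Because any one of these steps can stop a would-be helper, the five steps behave like a chain of filters. Here's a minimal sketch of that idea in Python - the step names are paraphrased from the list above, and the pass probabilities are hypothetical placeholders of my own, not Darley and Latané's figures:

```python
import random

# The five steps a bystander must pass through before helping, per Darley and
# Latané's model. The pass probabilities are hypothetical placeholders, not
# estimates from their experiments.
STEPS = [
    ("notice something is wrong", 0.8),
    ("interpret it as an emergency", 0.7),
    ("feel responsible to help", 0.5),
    ("know some way to assist", 0.6),
    ("actually offer that assistance", 0.9),
]

def bystander_helps(verbose: bool = False) -> bool:
    """Simulate one bystander; failing any single step means no help is given."""
    for step, p_pass in STEPS:
        if random.random() > p_pass:
            if verbose:
                print(f"Stopped at step: {step}")
            return False
    return True

random.seed(0)
trials = 10_000
helped = sum(bystander_helps() for _ in range(trials))
print(f"Helped in {helped / trials:.0%} of simulated bystanders")  # roughly 15%
```

Even fairly generous pass rates at every step multiply out to a small fraction of bystanders who actually help, which is one way to think about why groups so often stay passive.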
Sometimes, knowledge really is power.
Thursday, April 7, 2016
F is for Festinger
Leon Festinger was an American social psychologist who, despite completing graduate studies with Kurt Lewin (one of the most important social psychologists, and the topic of a future blog post), hesitated to study social psychological topics because he considered the subfield to be "loose," "vague," and "unappealing." It's perhaps because of his criticism of the field's methods and conclusions that, when he finally did enter the subfield, he contributed ideas and theories that were revolutionary and highly influential.
One of his most important contributions - and in fact, one that is viewed by some as the most important social psychological theory - is the concept of cognitive dissonance. Like many social psychologists, he was responding to the stronghold of behaviorism, which focused on observable behaviors, and the forces (rewards and punishments) shaping behavior. Behaviorism downplayed factors like cognition and emotion, because they could not be easily observed. However, social psychologists recognized that people were not mindless automatons responding to rewards and punishments; they are thinking, feeling individuals, who seek to bring order to their world by understanding the reasons behind their thoughts and actions.
In fact, we are so motivated to have a coherent, consistent sense of self, that we will explain away behaviors that are contrary to our beliefs, and may even change our beliefs so that they are consistent with our behavior.
Cognitive dissonance occurs when we behave in a way that does not align with our beliefs. We respond by changing our behavior, changing our beliefs, or acquiring new information or opinions that allow our current behavior and beliefs to match. Festinger's classic experiment on cognitive dissonance involved having participants complete a boring, repetitive task of turning pegs, and filling and emptying a tray of spools. Participants were then asked, as a favor to the experimenter, to tell the next participant (actually a confederate - that is, someone who works for the experimenter and is only pretending to be a participant) that the study tasks were enjoyable. Half were offered $1, and the other half were offered $20. Later, they were asked to rate how enjoyable they found the tasks. Those who were paid $1 rated the task as significantly more enjoyable than those paid $20. Why?
The study tasks were selected specifically to be boring. In fact, the researchers pilot-tested tasks to find the most boring ones they possibly could. So participants were being asked to claim that something objectively boring was enjoyable. Participants who were paid $20 to lie probably rationalized that they just did it for the money. But the people paid $1 had not received enough money to rationalize their behavior away; they faced some serious cognitive dissonance. So they changed their opinion, deciding the task actually was enjoyable.
This concept definitely has important real-world applications.
In fact, Festinger began developing the concept of cognitive dissonance after observing an apocalyptic cult that believed the world would be destroyed by a flood on December 21, 1954. The leader of the group, Marian Keech (a pseudonym - her real name was Dorothy Martin), claimed this message came from a group of aliens known as "the Guardians." Festinger and two colleagues, Henry Riecken and Stanley Schachter, observed the group from the inside, both before and after the apocalypse was supposed to take place.
When the world wasn't destroyed by a flood, Keech claimed that God had spared it because of the good work of the group's members, and the members became even more devoted to the group's cause and mission. Festinger hypothesized that, because many members had quit their jobs and gotten rid of possessions to devote their time to the group, they were motivated to accept Keech's explanation and to reaffirm their commitment to the group, in order to reduce cognitive dissonance. Festinger, Riecken, and Schachter even wrote a book about these observations, When Prophecy Fails, which is still available in reprint.
The other major theory Festinger contributed is social comparison theory, a topic I've blogged about recently. Of course, Festinger's theory takes social comparison further than I did in this previous post. Not only do we compare ourselves to others to evaluate whether we're on the right track, we 1) tend to group ourselves with others to whom we are similar in skills and abilities, and 2) may change our attitudes or behaviors to make ourselves more similar to others (or make the others appear more similar to us), a concept that sounds suspiciously like cognitive dissonance.
Wednesday, April 6, 2016
E is for Experimenter Expectancy Effects
I've talked so far about how groups influence individuals and their behavior. But psychology has also contributed to our understanding of what controls are necessary when we do research on people - that is, which forces outside of what we are studying we need to hold constant. And this contribution impacts many fields beyond psychology, such as medical research (like drug trials). One important thing to keep in mind is what effect the researcher might have on the participant, and how that researcher might inadvertently influence the participant's behavior. We call these "experimenter expectancy effects."
To take a step back and get (briefly) into the history of psychology: Psychology as a field grew out of two other fields - the physical sciences (like physics, chemistry, and so on) and philosophy. Early psychologists used methods from these respective fields for some of their first studies on psychological topics. But they learned that controls necessary in a physics study differ from those needed when studying people, so they had to develop new methods.
In the early 1900s, a man named Wilhelm von Osten gained media attention for his horse, Clever Hans.
Von Osten claimed that Hans could perform arithmetic, read and understand German, and keep track of time and the calendar. Von Osten would ask Hans a question, and Hans would respond by tapping his hoof. The German board of education sent psychologist Carl Stumpf to investigate the claims, and the investigation revealed that Hans was responding to the questioner's posture and facial expression to know when to stop tapping. (So Hans really was Clever, but at picking up social cues, not at arithmetic.)
Another famous example of experimenter expectancy effects came from research in the 1920s and 1930s at the Hawthorne Works factory in Cicero, Illinois. The researchers wanted to discover the lighting conditions that would maximize productivity in factory workers. They found that, regardless of the lighting conditions used, worker productivity increased when changes were made and decreased when the study ended. The conclusion was that the workers were more productive not because of the conditions, but because they knew they (and, more importantly, their productivity) were being observed. This concept later became known as the Hawthorne effect.
We are social creatures, and we look for cues from others to make sure we're responding the way we're supposed to. This is great if you're in a new situation and want to fit in, but not so great when you're participating in a study. For this reason, new controls had to be added to studies of people to make sure the experimenter isn't unconsciously influencing the participant.
One way is through blinding. Single blind means the participant does not know what is being studied, or at least what is being changed about his/her situation (the independent variable) to affect his/her response (the dependent variable); double blind means the experimenter also doesn't know which condition (level of the independent variable) the participant is in and/or what outcome is expected. Instead, a person who doesn't interact directly with the participants holds that information and uses it when analyzing the data. In drug trials, this means that some people get the real drug and some get a placebo, and neither the participant nor the experimenter knows which the participant received.
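As a rough illustration of the bookkeeping behind a double-blind design - a sketch with made-up participant IDs and labels, not any particular trial's software - the condition key is created and held by someone who never interacts with participants, while the experimenter sees only coded labels:

```python
import random

def assign_double_blind(participant_ids, seed=42):
    """Randomly assign participants to coded conditions 'A' or 'B'.

    Returns:
      blinded - what the experimenter sees: participant -> code only
      key     - held by a third party who never meets participants:
                code -> real condition ('drug' or 'placebo')
    """
    rng = random.Random(seed)
    key = {"A": "drug", "B": "placebo"}  # known only to the third party
    blinded = {pid: rng.choice(["A", "B"]) for pid in participant_ids}
    return blinded, key

# Hypothetical usage: the experimenter runs sessions using only the blinded view.
blinded, key = assign_double_blind(["p01", "p02", "p03", "p04"])
print(blinded)  # e.g. {'p01': 'A', 'p02': 'A', 'p03': 'B', ...}

# Only at analysis time does the third party (or analyst) unblind the data.
unblinded = {pid: key[code] for pid, code in blinded.items()}
```

The key design choice is simply that the mapping from codes to real conditions lives with someone outside the lab room until data collection is finished.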
But wait, there's more! There's another famous study of experimenter expectancy effects that I'll be blogging about later!
Tuesday, April 5, 2016
D is for Discrimination
I've talked a lot on this blog (especially during the last few days) about how people assign themselves to social groups, and the impacts group membership has on their thoughts and behaviors. What I haven't touched on as much is how being a member of a group impacts how you behave toward another group. We know that people tend to assign positive characteristics to in-group members and negative characteristics to out-group members, so it makes sense that they might also behave differently toward people they believe possess these positive or negative characteristics. This differential treatment is called discrimination.
Like so many concepts in psychology, this harmful behavior arises from mundane causes. Human beings are cognitive misers - this means that they save their mental energy for the times when it is really needed, and spend the rest of the time on a sort of auto-pilot. They create categories (schemas) to quickly represent both people and objects. There's no need for you to figure out how a chair works each time you encounter a new one, because you have a cognitive representation of that object that tells you what it is and how it works. In fact, when you learn, you engage in two simple processes: generalizing, which means grouping together things that are similar, and discriminating, which means separating things that are different. Remember Sesame Street?
This simple game - the "One of These Things Is Not Like the Others" song - teaches children how to generalize and discriminate. We also notice when people are different from us, which is a form of discriminating (not discrimination - yet).
Because our brains are built to put people and objects into easy categories, we can easily start applying certain characteristics to entire categories of people. If we're not careful, these become stereotypes - beliefs about the behavior or thoughts of a certain group. Stereotypes can lead to prejudice, which is an inflexible and incorrect (and usually negative) attitude toward a certain group. Finally, prejudice can lead to discrimination, which is differential behavior toward a certain group.
I do want to mention that by explaining the "lineage" of discrimination, I'm in no way excusing it. Instead, I think that by understanding the natural ways these attitudes, beliefs, and behaviors arise, we can take steps to prevent prejudice and discrimination. Obviously it takes work, because it often requires people to think in the opposite way to which they are inclined. For instance, some prejudice interventions ask people to look for commonalities between themselves and the out-group, essentially recategorizing people so that your out-group is now part of your in-group. Increased contact with the out-group can also decrease prejudice. (I'll be blogging later in April about both of these interventions, because they involve some of the most influential research in social psychology - so stay tuned!)