Posts Tagged ‘AI’

AI, Opinions, and Politics: Polarization and Distance as the Result of the Connected Age

June 3, 2017

On June 2, 2017, CNN correspondent Bill Weir was interviewed about the journey he has taken around the US, talking with people for the show “States of Change.” The host, Kate Bolduan, asked him what had surprised him in what he heard from people.

Weir responded, “What I was surprised about is how distant people are in the most-connected age in human history.” This remark struck me for its obvious truth, and because it reminded me of research I had done years earlier. The simulation we created was difficult to explain at the time; I wonder if now it resonates more clearly.

In 2005, I was leading a team working with the Defense Advanced Research Projects Agency (DARPA) on artificial intelligence for creating “social AIs.” As part of this, we became interested in how opinions passed between individuals in a population, and what this meant for the population overall. This topic underlies all sorts of social interaction, including reputation and political beliefs.

To test out some of our ideas, we set up a fairly simple simulation of “agents,” each represented by a little dot that had two opinions about abstract subjects. We represented these with colors – green and red. An agent that liked green and not red would show up as bright green. One that liked red but not green would show up bright red. An agent that liked both would show up yellow (the combination of the two colors), and one that liked neither would show up as black (the absence of both colors).
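In rough Python terms (just a sketch for illustration; the names and the 0-to-1 opinion scale are mine, not the original code), an agent and its display color looked something like this:

# A rough sketch of the agent representation described above; illustrative only.
from dataclasses import dataclass

@dataclass
class Agent:
    red_opinion: float    # 0.0 = dislikes red, 1.0 = likes red
    green_opinion: float  # 0.0 = dislikes green, 1.0 = likes green

    def color(self) -> tuple:
        """Display color: the red and green channels track the two opinions,
        so liking both reads as yellow and liking neither reads as black."""
        return (int(self.red_opinion * 255), int(self.green_opinion * 255), 0)

# An agent that likes both shows up yellow-ish; one that likes neither is black.
print(Agent(red_opinion=0.9, green_opinion=0.8).color())   # roughly (229, 204, 0)
print(Agent(red_opinion=0.0, green_opinion=0.0).color())   # (0, 0, 0)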

start condition 1

Starting condition for 200 agents (and two “kings”) with random placement and opinions.

These little dots start off randomly distributed around a space, with random opinions of both green and red. Each agent moves about mostly at random, except that it prefers to be near other agents with opinions similar to its own: it tends to congregate near those that agree with it and to move away from those that differ.
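A sketch of that movement rule, assuming similarity is just the distance between the two opinion values (the step size, weighting, and names here are placeholders rather than the original tuning):

# Sketch of the movement rule: a small random step, biased toward agents with
# similar opinions and away from those with differing ones.
import random

def opinion_distance(a, b):
    """a and b are (red_opinion, green_opinion) pairs in [0, 1]."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])     # ranges from 0 to 2

def move(pos, opinions, neighbors, step=0.05):
    """pos is (x, y); neighbors is a list of ((x, y), opinions) pairs."""
    dx = random.uniform(-step, step)
    dy = random.uniform(-step, step)
    for (nx, ny), n_opinions in neighbors:
        # Agreement attracts (positive weight), disagreement repels (negative).
        weight = 1.0 - opinion_distance(opinions, n_opinions)
        dx += step * weight * (nx - pos[0])
        dy += step * weight * (ny - pos[1])
    return (pos[0] + dx, pos[1] + dy)

# e.g. an agent drifts toward a like-minded neighbor:
print(move((0.5, 0.5), (1.0, 0.0), [((0.8, 0.5), (0.9, 0.1))]))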

Each agent also had a number of “associations” – essentially, the agents that were its friends and that it listened to – as each one broadcast its opinion (“I like red a lot and green a little!”) at regular intervals. Friend-agents might have different opinions, and on hearing a friend’s opinion, an agent might change its own views. The probability of an agent changing its view depended on its “Breadth,” that is, its openness to new ideas (modeled after the Five Factor Model of personality), and the strength of its association with the other agent. The more broad-minded the agent and the stronger its association with another, the more likely it was to adjust its opinion based on what one of its associates broadcast to it.
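As a rough sketch, that update amounts to treating the probability of a change as Breadth times association strength; the partial shift toward the friend’s view (rather than adopting it outright) is an illustrative choice here, not necessarily the original rule:

# Sketch of the opinion-update rule on hearing a friend's broadcast.
import random

def hear_broadcast(my_opinion, friend_opinion, breadth, association_strength):
    """Return the listener's (possibly updated) opinion; all values in [0, 1]."""
    if random.random() < breadth * association_strength:
        # Shift partway toward the friend's view (blend factor is illustrative).
        return my_opinion + 0.5 * (friend_opinion - my_opinion)
    return my_opinion

# A broad-minded agent with a strong tie is fairly likely to shift its view:
print(hear_broadcast(my_opinion=0.2, friend_opinion=0.9,
                     breadth=0.6, association_strength=0.8))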

We also introduced two “kings,” agents with firm “pro-red” and “pro-green” opinions that exercised an outsized influence on the other agents. Think of these as “thought leaders,” whether in the media, politics, fashion, etc. The dots (up to 200 agents plus the two “kings”) start in random positions with random opinion values.
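In sketch form, the kings are just agents whose opinions never change and who count as a maximally strong association for everyone who hears them (the values and flag name below are placeholders):

# One way to model the two "kings": fixed opinions, outsized influence.
kings = [
    {"red_opinion": 1.0, "green_opinion": 0.0, "fixed": True},   # pro-red king
    {"red_opinion": 0.0, "green_opinion": 1.0, "fixed": True},   # pro-green king
]
KING_ASSOCIATION_STRENGTH = 1.0   # kings count as a maximally strong association
# Agents flagged "fixed" broadcast their opinions but never update their own.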

simulation condition 2

Midway through a simulation: different agents are starting to coalesce in their opinions and locations.

In the simulation shown here, each agent has an association with about 10% of the others, and a 10% Breadth. As the simulation progresses, the two kings repel each other and move as far away from each other as they can. The other agents continue to influence each other, gradually finding their own place and adjusting their opinions based on what those around them say.
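In configuration terms, that run corresponds to roughly these settings (the percentages and agent count come from the description above; the variable names are just for illustration):

# Settings for the run shown here, as a sketch.
params = {
    "n_agents": 200,               # plus the two kings
    "association_fraction": 0.10,  # each agent listens to ~10% of the others
    "breadth": 0.10,               # low openness to new ideas
}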

end condition with bridge

Late in the simulation: Red and Green factions have formed, in this case with a centrist bridge between them.

Eventually, the simulation settles into one of a few common patterns. Here, Red and Green factions are still both strong, but there’s also a “bridge” between them of agents who have more moderate opinions. This is the result we get when each agent only listens to a few others.

end condition factions 1

Two highly separated factions. Each is its own echo chamber, with agents reinforcing each other’s opinions.

On the other hand, if we change the parameters of the simulation, we can get very different results. In particular, if we make the agents far more connected – 100% associations, so everyone hears everyone’s opinions – the late-stage simulation looks markedly different, as you can see here.

In cases like this, two distinct factions have emerged, all-red and all-green, with little or no common bridge between them. This didn’t happen every time, but it was a common outcome once the population was highly connected.
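For comparison with the earlier run, the highly connected regime in the same sketched configuration terms (only the association fraction changes; I’ve kept Breadth at 10%, since connectivity was the parameter we varied):

# The highly connected regime, as a sketch.
params_highly_connected = {
    "n_agents": 200,
    "association_fraction": 1.00,  # everyone hears everyone's opinions
    "breadth": 0.10,
}
# Typical late-stage outcome: two entrenched factions, little or no centrist bridge.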

end condition factions 2

Two distinct factions with a weak bridge between them.

Some variations include a weak bridge between the two main factions, and sometimes a “leaderless” third faction that is strong (or weak) in both colors but not accepted by either main faction.

end condition factions 3

Another variation with a highly connected population: two main factions, plus two “leaderless” factions (red+green, and neither red nor green).

The critical variable here is connectedness – how many other agents each one is connected to and listens to. It seems counter-intuitive, but in the first case above, when agents have only a few others that they listen to – a local “social horizon” – the eventual result is less polarized: there’s more variance of opinion, less echo-chamber effect, and more centrist agents bridging between the different factions.

Conversely, when agents become more connected, they also become more polarized, more like those around them. All or nearly all the agents retreat into an us-versus-them echo chamber where factions become deeply entrenched, self-reinforcing, and cut off from anyone who disagrees even mildly. This reduces communication between factions, with all the attendant problems we see today.

What struck me in listening to Bill Weir on CNN was how pervasive and obvious this situation is to us now, and how unknown it was just twelve years ago. Today we all know about echo chambers and “fake news” and the entirely disparate narratives that different political factions hear. We have an idea that this is weakening our society, but maybe we don’t quite all see that yet. In 2005, these results were seen by people at DARPA and at various AI conferences, but induced more head-scratching than anything else: people didn’t understand the population dynamics at play, and they couldn’t see how a population could become so polarized, especially when it was also so deeply connected.

Today, I think we’re living out what this simulation shows. Maybe it’ll be more understandable now?

Did Maslow get it wrong? (and why this matters for games)

November 23, 2008

You may be familiar with Maslow’s hierarchy of needs (more on this below the cut). Maslow’s theory has heavily influenced the architecture of our AI technology, which is why I’m attuned to discussions of it or instances that support or undercut it. Recently I ran across a theory in education known as “CBUPO,” an ungainly acronym for “Competence, Belonging, Usefulness, Potency, Optimism,” designed by Richard Sagor at Washington State University (an accessible introduction can be found here (pdf)). Sagor’s theory suggests some interesting modifications to Maslow that have consequences for how we understand ourselves — as well as the motivations for gamers and AIs.

(Warning: psychological theory leading to AI and game-relevant thoughts below.)

(more…)

The Future of AI: Social AI

November 14, 2008

I’ve been talking a lot about “social AI” recently as a way to differentiate what we have been developing from “typical” or traditional AI.

The easy way to say this is “we don’t do pathfinding.” That isn’t entirely true (we have a simple but effective pathfinding mechanism), but it shows where our focus is(n’t). Agents need to move around a world, sure; and showing crowds of agents walking purposefully about makes for a great visual demo. But to be interesting — or even more, meaningful — they need to do a lot more than that.

(more…)