Archive for the ‘AI’ category

AI, Opinions, and Politics: Polarization and Distance as the Result of the Connected Age

June 3, 2017

On June 2, 2017, CNN correspondent Bill Weir was interviewed about the journey he has taken around the US, interviewing people for the show “States of Change.” The host, Kate Bolduan, asked him what had surprised him in what he heard from people.

Weir responded, “What I was surprised about is how distant people are in the most-connected age in human history.” This remark struck me for its obvious truth, and because it reminded me of research I had done years earlier. The simulation we created was difficult to explain at the time; I wonder if now it resonates more clearly.

In 2005, I was leading a team working with the Defense Advanced Research Projects Agency (DARPA) on artificial intelligence, creating “social AIs.” As part of this, we became interested in how opinions passed between individuals in a population, and what this meant for the population overall. This topic underlies all sorts of social interaction, including reputation and political beliefs.

To test out some of our ideas, we set up a fairly simple simulation of “agents,” each represented by a little dot that held two opinions on abstract subjects. We represented these with colors – green and red. An agent that liked green and not red would show up bright green. One that liked red but not green would show up bright red. An agent that liked both would show up yellow (the combination of the two colors), and one that liked neither would show up black (the absence of both colors).
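
The original code is long gone, but a minimal Python sketch can convey the representation; the Agent class and its field names here are mine, invented purely for illustration:

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        """One dot in the simulation: a position plus two opinion values in [0, 1]."""
        red: float = field(default_factory=random.random)    # how much it likes "red"
        green: float = field(default_factory=random.random)  # how much it likes "green"
        x: float = field(default_factory=random.random)      # position in the unit square
        y: float = field(default_factory=random.random)

        def color(self):
            """Display color as RGB: bright red, bright green, yellow (likes both),
            or black (likes neither)."""
            return (self.red, self.green, 0.0)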

[Figure: Starting condition for 200 agents (and two “kings”) with random placement and opinions.]

These little dots start off randomly distributed around a space, with random opinions of both green and red. Each agent moves about randomly, except that it prefers the company of agents whose opinions are similar to its own: it congregates near those that agree with it and moves away from those that don’t.
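
Continuing the sketch above, the movement rule might look something like this; the step size and similarity threshold are guesses, not values from the original simulation:

    import random

    def opinion_distance(a, b):
        """Zero when two agents agree exactly; grows as their opinions diverge."""
        return ((a.red - b.red) ** 2 + (a.green - b.green) ** 2) ** 0.5

    def move(agent, population, step=0.01, threshold=0.5):
        """Drift toward like-minded agents and away from dissimilar ones,
        plus a small random wander."""
        dx = dy = 0.0
        for other in population:
            if other is agent:
                continue
            # Attract when opinions are close, repel when they differ.
            sign = 1.0 if opinion_distance(agent, other) < threshold else -1.0
            d = ((other.x - agent.x) ** 2 + (other.y - agent.y) ** 2) ** 0.5 + 1e-6
            dx += sign * (other.x - agent.x) / d
            dy += sign * (other.y - agent.y) / d
        agent.x += step * (dx / len(population) + random.uniform(-1, 1))
        agent.y += step * (dy / len(population) + random.uniform(-1, 1))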

Each agent also had a number of “associations” – essentially, the agents it counted as friends and listened to – as each one broadcast its opinion (“I like red a lot and green a little!”) at regular intervals. Friend-agents might have different opinions, and on hearing a friend’s opinion, an agent might change its own views. The probability of an agent changing its view depended on its “Breadth” – its openness to new ideas, modeled after the Five Factor Model of personality – and the strength of its association with the other agent. The more broad-minded the agent and the stronger the association, the more likely it was to adjust its opinion based on what an associate broadcast to it.
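
In code, the listening rule could be as simple as the following; the size of the opinion shift (rate) is my own assumption:

    import random

    def hear_broadcast(listener, speaker, strength, breadth, rate=0.1):
        """On hearing a friend's broadcast, possibly shift opinions toward the speaker.

        The chance of changing scales with the listener's Breadth (openness to
        new ideas) times the strength of the association, as described above;
        the size of the shift (rate) is an illustrative guess.
        """
        if random.random() < breadth * strength:
            listener.red += rate * (speaker.red - listener.red)
            listener.green += rate * (speaker.green - listener.green)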

We also introduced two “kings,” agents with firm opinions of “pro-red” and “pro-green” that exercised an outsized influence on the other agents. Think of these as “thought leaders,” whether in the media, politics, fashion, etc. The dots (up to 200 agents plus the two “kings”) start in random positions with the agents having random opinion values.

[Figure: Midway through a simulation: different agents are starting to coalesce in their opinions and locations.]

In the simulation shown here, each agent has an association with about 10% of the others, and a 10% Breadth. As the simulation progresses, the two kings repel each other and move as far away from each other as they can. The other agents continue to influence each other, gradually finding their own place and adjusting their opinions based on what those around them say.
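
Wiring the pieces together, a run like the one shown might look like the sketch below. The 10% association and Breadth figures come from the description above; the kings’ extra association strength is my simplification, the kings here are pure broadcasters who never move or update, and their mutual repulsion isn’t modeled:

    import random

    def run(n_agents=200, assoc_fraction=0.10, breadth=0.10, steps=500):
        """One run: random agents plus two fixed-opinion 'kings'."""
        agents = [Agent() for _ in range(n_agents)]
        red_king = Agent(red=1.0, green=0.0)
        green_king = Agent(red=0.0, green=1.0)

        # Each agent listens to a random ~10% of the others, plus both kings.
        k = min(int(assoc_fraction * n_agents), n_agents - 1)
        friends = {id(a): random.sample([b for b in agents if b is not a], k)
                          + [red_king, green_king]
                   for a in agents}

        for _ in range(steps):
            for a in agents:
                move(a, agents)
                for f in friends[id(a)]:
                    # The kings' broadcasts carry extra weight ("thought leaders").
                    strength = 1.0 if f is red_king or f is green_king else 0.5
                    hear_broadcast(a, f, strength, breadth)
        return agents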

[Figure: Late in the simulation: Red and Green factions have formed, in this case with a centrist bridge between them.]

Eventually, the simulation settles into one of a few common patterns. Here, Red and Green factions are still both strong, but there’s also a “bridge” between them of agents who have more moderate opinions. This is the result we get when each agent only listens to a few others.

[Figure: Two highly separated factions. Each is its own echo chamber, with agents reinforcing each other’s opinions.]

On the other hand, if we change the parameters of the simulation, we can get very different results. In particular, by making the agents far more connected – 100% associations, so everyone hears everyone’s opinions – the late-stage simulation looks markedly different as you can see here.

In cases like this, two distinct factions have emerged, all-red and all-green, with little or no common bridge between them. This didn’t happen every time, but it is common once you have a highly connected population.

[Figure: Two distinct factions with a weak bridge between them.]

Some variations include a weak bridge between the two main factions, and sometimes a “leaderless” third faction whose agents like both colors (or neither), but are accepted by neither main faction.

[Figure: Another variation with a highly connected population: two main factions plus two “leaderless” factions (red+green, and neither red nor green).]

The critical variable here is connectedness – how many other agents each agent is connected to and listening to. It seems counter-intuitive, but in the first case above, when agents listen to only a few others – a local “social horizon” – the eventual result is less polarized: there’s more variance of opinion, less echo-chamber effect, and more centrist agents bridging between the different factions.

Conversely, as agents become more connected, they become more polarized – more like those around them. All or nearly all of the agents retreat into us-versus-them echo chambers where factions become deeply entrenched and self-reinforcing, cut off from anyone who disagrees even mildly. This reduces communication between factions, with all the attendant problems we see today.
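
To see the connectedness effect in the sketch above, compare runs at the two connectivity levels with a crude dispersion score (this metric is mine, for illustration only, not one from the original study):

    def polarization(agents):
        """Mean distance of each agent's opinions from the population average:
        low when opinions spread smoothly around a center, high when the
        population has split into far-apart, entrenched factions."""
        mr = sum(a.red for a in agents) / len(agents)
        mg = sum(a.green for a in agents) / len(agents)
        return sum(abs(a.red - mr) + abs(a.green - mg) for a in agents) / len(agents)

    sparse = run(assoc_fraction=0.10)  # local social horizon
    dense = run(assoc_fraction=1.00)   # everyone hears everyone
    print(polarization(sparse), polarization(dense))  # expect the dense run to score higher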

What struck me in listening to Bill Weir on CNN was how pervasive and obvious this situation is to us now, and how unknown it was just twelve years ago. Today we all know about echo chambers and “fake news” and the entirely disparate narratives that different political factions hear. We have an idea that this is weakening our society, though maybe we don’t all quite see that yet. In 2005, these results were shown to people at DARPA and at various AI conferences, but they induced more head-scratching than anything else: people didn’t understand the population dynamics at play, and they couldn’t see how a population could become so polarized, especially one so deeply connected.

Today, I think we’re living out what this simulation shows. Maybe it’ll be more understandable now?

The societal effects of cognitive technologies

September 19, 2016

In Malaysia, Uber is easily available. It’s inexpensive, safe, and a great experience all around. Unfortunately, taxi drivers there don’t take kindly to Uber drivers — a few yelled at one of the cars I was in while visiting last week, and one slammed his fist into the window by my head as we drove past. You might say their rage at this technology-driven change is palpable.
 
Okay, now magnify the situation many times over: what happens, societally, when a significant portion of our existing jobs just evaporate in the space of a few years — enough to take unemployment in the US from 5% to 12% in less than a decade? Keep in mind the unemployment rate peaked at 10% in 2009 after the global financial crisis, and could easily be right back up there in just a few years. According to a recent Forrester Report, this is what we’re facing due to increased automation and “cognitive technologies.”
 
In fact, it’s worse than just going from 5% to 12% unemployment. According to Forrester’s projections, 9% of the jobs existing in 2025 will be new ones enabled by automation, which is great — but 16% of existing jobs will have vanished forever, for a net loss of 7%. It’s not difficult to imagine that this will create a lot of economic and social dislocation along the way. All the displaced taxi drivers, truck drivers, customer service personnel, store clerks, fast food servers, and others will have to do something to keep themselves and their families going, and telling them to go back to their local community college is really not going to cut it. As Andy Stern, former president of the Service Employees International Union, put it, that advice is “probably five to ten years too late.” He goes on to say that as a society “we don’t really have a plan and we don’t appreciate how quickly the future is arriving.”
 
There is a saying often attributed to Winston Churchill that “Americans can be counted on to do the right thing after they have tried everything else.” It seems that right now we’re still madly trying “everything else.” Jobs already lost, or soon to be lost, to automation and globalization are not going to be magically brought back by “building a wall” on our border with Mexico, nor by draconian protectionist measures or any other backward-looking solution. We have to look forward and figure out what a radically different future actually means for us as a society. Until we decide to do so — until we finally decide to knuckle down and do the right thing — it’s going to be a difficult, bumpy time for a whole lot of folks. What’s coming at us now is going to make 2009, and maybe even the 1930s, look easy. The question, as posed by Stern, is “what level of pain do people have to experience and what level of social unrest has to be created before the government acts?”

Onward and Upward, Once Again

June 28, 2013

It may be fitting that it’s been over two years since I’ve posted here. That time was my tenure at social/mobile game developer Kabam. I started there in April of 2011 and ended my time there this week.

In those two-plus years we’ve seen the indie social game market be swallowed by the Big Developers (which is one of the reasons I went to Kabam), seen the apex and initial decline of the Facebook game ecology (arguably after Facebook poisoned the well with a 30% “tax” on sales on their platform), and seen the fast rise of games on mobile phones and tablets.

The span of time when indies were making viable games on phones and tablets was even shorter than it was for web-based social games; successful phone/tablet games are now approaching AAA/console quality, and budgets and schedules are once again skyrocketing, leaving all but the most resourceful developers behind. Free-to-play is no longer an anomaly; there is still a lot to be learned, but companies are reliably making hundreds of millions of dollars in very profitable revenue using this model.

Discoverability is now the big problem for developers: players have to know about your game among the hundreds or thousands coming out every single week, or all your work is for nothing. And this has put Apple and Google in the position of kingmakers more than any publisher or retailer was back in the days of retail-box games.

The big question for many of us is: where does game design fit in this back-to-the-future world of visual polish and revenue-creating pinch-points? I think it’s still an open question. It’s entirely possible to make good games that spread their revenue across a wide range of payment opportunities… but I have yet to see a design (even of my own) where this business model didn’t affect the design and, to some degree, twist it off its natural course.

I don’t know that this is inevitable, or that better designs necessarily need to avoid various forms of “pay to win,” but I think we will have to explore a lot more to figure this out. And meanwhile, the market moves on, rewarding companies with astounding riches if they manage to strike a balance between accessibility, visual fidelity, and some degree of fun.

In the past two years I’ve worked on some terrific projects and gotten to know a lot of great people. I also learned a ton by being on the front lines of social and mobile game design, development, and operations. But, as always, the game market zigs and zags, and companies have to act fast and be nimble just to keep up.

I’ll let Kabam’s strategy speak for itself as it emerges over the coming months. For myself, I’m looking back to my roots as much as possible: real, deep game design and (in some combination) social AI.

I’ve managed to keep up some amount of AI work, even publishing a couple of papers (see the paper “Toward a comprehensive theory of emotions for biological and artificial agents”). I’m now in the process of stripping down and re-architecting the AI “People Engine” itself. I’m going to do my best to chronicle this re-development here, focusing on the more difficult questions I’m facing.

And oh yeah: I am looking for my next opportunity in games. I still believe that games are the vanguard of technology development and adoption. This is the place to be, in one form or another.

Where I’ve Been and Where I’m Going

April 11, 2011

“Some people try to turn back their odometers. Not me. I want people to know why I look this way. I’ve traveled a long way and some of the roads weren’t paved.”  – Will Rogers

A lot has happened since I last posted here.  We had one major project slowly grind to a halt, abandoned by the publisher. Not a fun story, even if we did learn a lot.  And we had another flash briefly, just long enough to prove out the design and technology, if not long enough to make back its production costs.

Social games have continued their astonishing fast-forward pace.  The game industry changes faster than any I know of, and I have never seen things change this fast.  One of my new mottos is

If you don’t have whiplash, you’re not paying attention.

What was a wide open blue-ocean part of the games industry a year ago is quickly consolidating and stratifying into Huge Players, Big Players, and Everyone Else.  There are good games and money to be made at each level, but on different scales and with different difficulties. And game designs or production practices that worked less than a year ago have to be discarded now to stay current with the market.

For myself and my company, Online Alchemy, the latest blows we endured were too much. I’ve rebooted the company before – after a triple-play debacle in 2007 (a DARPA project killed by world events, a development contract pulled at the last moment, and the long-lamented demise of the Firefly MMO at the hands of Fox and Universal) – so I know how to do it. And I have an amazing team of people to work with. But the costs of rebooting again now seemed too high and too risky.

So, time for a pivot: I have joined Kabam as an Executive Producer.  This is a terrific company with a clear focus and top-notch talent all around. I’ve been very impressed with the blend of agility and process I’ve found there. I can’t yet say what I’m working on, but as with everything in this part of the industry, all will be clear soon enough.

Online Alchemy will be sticking around, but will be returning to its focus on “social AI” research and development.  This is definitely an area for research, building on the company’s existing work in artificial emotions, relationships, and reputation, but as yet no real consumer market has appeared for such AI.  I still believe one will, but it may be ten or twenty years before it happens.  I’m content to be patient, and persistent.

So, what’s next?


Virtual Characters and Real Emotions

November 16, 2010

Jesse Schell is one of the most articulate, insightful people developing games and talking about their future. At a recent keynote at a Unity3D conference, he talked about virtual characters as a crucial part of the future of games and other online experiences. As usual he makes a lot of excellent points about virtual characters remembering you and conversing with you, but on one — how we interact emotionally with virtual characters — I have to disagree with him:

“Emotions are easily recognized by humans, but computers must be part of that,” said Schell. “Once we can do that we can sense your emotions,” and developers can create “a game where you actually have to act, or feel emotions. A game where someone tells you their dog just died and if you can’t manage to cry then no, you’re not getting to the next level!” (as covered by Gamasutra)

First, I appreciate Jesse stepping up with concrete predictions and other musings — as he says, this is a great way to predict (and create) the future.  That said, this one is exactly backwards: the emotional connection with virtual characters doesn’t come because we emote effectively, but because the characters themselves have and display emotions that we then relate to.  Their emotions make them more real to us, and allow us to feel something similar. (more…)

This Isn’t the Social AI You’re Looking For

May 26, 2010

I’m a huge proponent of what I call “social AI”; I’ve written and spoken about this before. Social AI is in some ways a subset of “Artificial General Intelligence,” in that it implies AI that acts in socially plausible ways (a phrase we use to avoid problematic terms like “realistic”) without having to include the complete range of human knowledge and nuance.

My vision for social AI is that it enables computer-driven agents (aka NPCs) to interact with each other and with human participants in socially plausible and satisfying ways.  This, I believe, is necessary for the “very long form story” and non-static worlds that I wrote about earlier, among other uses.

But there are also disturbing examples of what social AI isn’t, at least to my way of thinking.  I’m going to look at a few of these, and then come back to talk more about what social AI can do for us in more positive ways. (more…)

GDC Week & Predictions

March 8, 2010

Like a lot of others, I’m heading to GDC today.  I’m mainly going for a couple of summits, some meetings, and to see people who are good friends whom I see once or twice a year.  It’s a bit of an odd sort of relationship, as it feels sometimes like a deadly serious meeting of circus clowns.

Anyway, I’m not going for the talks (and am not giving one this year)… and so I have pretty low expectations of anything significant coming out of them.  But as this is also sort of the beginning of the game-year, I thought I’d take a few minutes for some pre-GDC predictions for the conference and for the rest of the year – social games, MMOs, 3D, AI, the works.

(more…)

Did Maslow get it wrong? (and why this matters for games)

November 23, 2008

You may be familiar with Maslow’s hierarchy of needs (more on this below the cut).  Maslow’s theory has heavily influenced the architecture of our AI technology, which is why I’m attuned to discussions of it and to instances that support or undercut it.  Recently I ran across a theory in education known as “CBUPO,” an ungainly acronym for “Competence, Belonging, Usefulness, Potency, Optimism,” developed by Richard Sagor at Washington State University (an accessible introduction can be found here (pdf)). Sagor’s theory suggests some interesting modifications to Maslow that have consequences for how we understand ourselves — as well as for the motivations of gamers and AIs.

(Warning: psychological theory leading to AI and game-relevant thoughts below.)

(more…)

The Uncanny Valley (yeah you should know this already)

November 15, 2008

From James Portnow’s blog, a terrific Zero-Punctuation-style video on the Uncanny Valley.  You probably know what the Uncanny Valley is, but the video is still worth watching and passing on to others who don’t.  And if you don’t know what it is and how it applies to games and AI, you really should watch it.

(more…)

The Future of AI: Social AI

November 14, 2008

I’ve been talking a lot about “social AI” recently as a way to differentiate what we have been developing from “typical” or traditional AI.

The easy way to say this is “we don’t do pathfinding.”  Which isn’t entirely true (we have a simple but effective pathfinding mechanism), but it shows where our focus is(n’t).  Agents need to move around a world, sure; and showing crowds of agents walking purposefully about makes for a great visual demo.  But to be interesting — or even more, meaningful — they need to do a lot more than that.

(more…)