The Future of AI: Social AI

I’ve been talking a lot about “social AI” recently as a way to differentiate what we have been developing from “typical” or traditional AI.

The easy way to say this is “we don’t do pathfinding.”  Which isn’t entirely true (we have a simple but effective pathfinding mechanism), but it shows where our focus is(n’t).  Agents need to move around a world, sure; and showing crowds of agents walking purposefully about makes for a great visual demo.  But to be interesting — or even more, meaningful — they need to do a lot more than that.

Another way we describe what we do is that we’re doing “heart and head” AI as opposed to “spinal” AI.  I believe that to be meaningful to players, or to human users in general, AIs will have to have robust personalities, bona fide emotions (and the ability to understand your emotions), memories, relationships, the ability to learn from their experience or from what others tell them, and of course the ability to (in some form) talk about what they’re wanting, thinking, and doing.  That’s a long list, but we’ve tackled most of these and are well on our way with the others.

‘Relationships’ is somewhere in the middle of that long list, and is a key aspect that shouldn’t be overlooked.  AIs need to be situated in a “social context” (the next step beyond a “social network”) just as humans are.  This is true in both game and non-game applications.  I get very excited whenever I think of the potential for socially plausible AIs that have relationships with those around them while participating in social networks and contexts.  Whether on Facebook or in an MMO, this adds hugely to the broad appeal, potential utility, and as-yet unknown forms of interaction.  I can’t wait to see where this takes us.

All I know is, the future of AI isn’t better pathfinding or strafing.  The future of AI is social — it’s us.


8 Comments on “The Future of AI: Social AI”

  1. Charles Cameron (hipbone) Says:

    Hi Mike:

    Welcome and congratulations!

    I’ve chosen this post to respond to because it comes closest to a question I believe we’re both very interested in, and which I hope you’ll be addressing and generally tossing around with your colleagues, friends and fellow bloggers over the coming months: the issue of if, when and how *consciousness* is present in developing AI of what you nicely term the “heart and head” variety.

    Soul too?


  2. onlinealchemy Says:

    Right to the heart of the matter, Charles! And probably worth a post of its own.

    Consciousness and soul… do AIs have these? Are they anything more than just complex clockwork? Are we? _Can_ AIs have consciousness? I hope to discuss this along the way.

    One thing I’ll say for now is that we’ll likely never know, just as if you dug through someone’s brain you’d never find the consciousness there either. The closest analogy I can offer is that you can cut down every tree and never find the forest.


  3. Bill Walker Says:

    So, Mike, what do you envision socially aware agents bringing to games and virtual worlds that augments the live human contribution?


  4. Mike Sellers Says:

    Bill, great question!

    Socially aware agents add a whole new dimension to games and virtual worlds. They’re always there (we’re not), and they can build relationships with each other and with human users. They create a living, breathing population that has its own trends and fads; they become valuable sources of information; they present opportunities for influence; and they become the primary sources for achievement-based gameplay (i.e., they lead us out of the grindy swamp of canned quests, since they have their own motivations and goals). They also become the scaffolding for online community, vital parts of each player’s social context, since they’re always around and have different relationships than any given player does.

    MMOGs right now are static, and non-game virtual worlds are sterile, because both lack a resident population. AIs can give us that, but only if they are autonomous within the world, and if they are socially aware and can interact with each other and the human users. With this, the promise of immersive online worlds finally begins to make sense.


  5. Bill Walker Says:

    So do these agents add something different than more humans, playing more of the time, would?

    I guess that agents whose entire identities center around game or world concerns are different than players dropping in and out. I can’t quite picture how that works, though, and I think it’s because I can’t really imagine autonomous agents capable of functioning as real and convincing characters, particularly in the realm of dialogue. Do you feel like you’re approaching that level of success with agents?


  6. Mike Sellers Says:

    AIs in a virtual world can and will do the things that make the world real, but which few humans really want to do — jobs, living, having families, and their own in-world goals. Think of the human players as the tourists, the AIs as the residents of the world.

    Re: dialog… yes, though it’s not a slam dunk. The three most difficult areas we’ve found are humor, deceit, and free-form dialog. We’re working on all three.


  7. Bill Walker Says:

    If you ever need a tester in the realm of free-form dialogue, think of me as sitting near the back of the class wildly waving my hand to volunteer!


  8. Allan Hill Says:

    Hello Mike,

    Thank you for posting. Can you say more about the difference between “Social Context” and “Social Network”?

