Archive for the ‘AI’ category

The societal effects of cognitive technologies

September 19, 2016
In Malaysia, Uber is easily available. It’s inexpensive, safe, and a great experience all around. Unfortunately, taxi drivers there don’t take kindly to Uber drivers — a few yelled at one of the cars I was in while visiting last week, and one slammed his fist into the window by my head as we drove past. You might say their rage at this technology-driven change is palpable.
 
Okay, now magnify the situation many times over: what happens, societally, when a significant portion of our existing jobs just evaporate in the space of a few years — enough to take unemployment in the US from 5% to 12% in less than a decade? Keep in mind the unemployment rate peaked at 10% in 2009 after the global financial crisis, and could easily be right back up there in just a few years. According to a recent Forrester Report, this is what we’re facing due to increased automation and “cognitive technologies.”
 
In fact it’s sort of worse than just going from 5% to 12% unemployment. According to Forrester’s projections, 9% of jobs in 2025 will be new ones enabled by automation, which is great — but 16% of existing jobs will have vanished forever. It’s not difficult to imagine that this might create a lot of economic and social dislocation along the way. All the displaced taxi drivers, truck drivers, customer service personnel, store clerks, fast food servers, and others will have to do something to keep themselves and their families going, and telling them to go back to their local community college is really not going to cut it. As Andy Stern, former president of the Service Employees International Union put it, that advice is “probably five to ten years too late.” He goes on to say that as a society “we don’t really have a plan and we don’t appreciate how quickly the future is arriving.”
 
There is a saying often attributed to Winston Churchill that “Americans can be counted on to do the right thing after they have tried everything else.” It seems that right now we’re still madly trying “everything else.” Jobs already lost or soon to be lost to automation and globalization are not going to be magically brought back by “building a wall” on our border with Mexico, nor by instituting draconian protectionist measures or any other backward-looking solution. We have to look forward to try to figure out what a radically different future actually means for us as a society. Until we decide to do so — until we finally decide to knuckle down and do the right thing — it’s going to be a difficult, bumpy time for a whole lot of folks. What’s coming at us now is going to make 2009, and maybe even the 1930s, look easy. The question, as posed by Stern, is “what level of pain do people have to experience and what level of social unrest has to be created before the government acts?”

Onward and Upward, Once Again

June 28, 2013

It may be fitting that it’s been over two years since I’ve posted here. That time was my tenure at social/mobile game developer Kabam. I started there in April of 2011 and ended my time there this week.

In those two-plus years we’ve seen the indie social game market be swallowed by the Big Developers (which is one of the reasons I went to Kabam), seen the apex and initial decline of the Facebook game ecology (arguably after Facebook poisoned the well with a 30% “tax” on sales on their platform), and seen the fast rise of games on mobile phones and tablets.

The span of time when indies were making viable games on phones and tablets was even shorter than it was for web-based social games; successful phone/tablet games are now approaching AAA/console quality, and budgets and schedules are once again skyrocketing, leaving all but the most resourceful developers behind. Free-to-play is no longer an anomaly; there is still a lot to be learned, but companies are reliably making hundreds of millions of dollars in very profitable revenue using this model.

Discoverability is now the big problem for developers: players have to know about your game among the hundreds or thousands coming out every single week, or all your work is for nothing. And this has put Apple and Google in the position of kingmakers more than any publisher or retailer was back in the days of retail-box games.

The big question for many of us is, where does game design fit in this back-to-the-future world of visual polish and revenue-creating pinch-points? I think it’s still an open question. It’s entirely possible to make good games that spread their ability to bring in revenue over a wide range of payment opportunities… but I have yet to see a design (even of my own) where this business model didn’t affect and to some degree twist the design off of its natural course.

I don’t know that this is inevitable, or that better designs necessarily need to avoid various forms of “pay to win,” but I think we will have to explore a lot more to figure this out. And meanwhile, the market moves on, rewarding companies with astounding riches if they manage to strike a balance between accessibility, visual fidelity, and some degree of fun.

In the past two years I’ve worked on some terrific projects and gotten to know a lot of great people. I also learned a ton by being on the front lines of social and mobile game design, development, and operations. But, as always, the game market zigs and zags, and companies have to act fast and be nimble just to keep up.

I’ll let Kabam’s strategy speak for itself as it emerges over the coming months. For myself, I’m looking back to my roots as much as possible: real, deep game design and (in some combination) social AI.

I’ve managed to keep up some amount of AI work, even publishing a couple of papers (see the paper “Toward a comprehensive theory of emotions for biological and artificial agents”). I’m now in the process of stripping down and re-architecting the AI “People Engine” itself. I’m going to do my best to chronicle this re-development here, focusing on the more difficult questions I’m facing.

And oh yeah: I am looking for my next opportunity in games. I still believe that games are the vanguard of technology development and adoption. This is the place to be, in one form or another.

Where I’ve Been and Where I’m Going

April 11, 2011

“Some people try to turn back their odometers. Not me. I want people to know why I look this way. I’ve traveled a long way and some of the roads weren’t paved.”  – Will Rogers

A lot has happened since I last posted here.  We had one major project slowly grind to a halt, abandoned by the publisher. Not a fun story, even if we did learn a lot.  And we had another flash briefly, just long enough to prove out the design and technology, if not long enough to make back its production costs.

Social games have continued their astonishing fast-forward pace.  The game industry changes faster than any I know of, and I have never seen things change this fast.  One of my new mottos is

If you don’t have whiplash, you’re not paying attention.

What was a wide open blue-ocean part of the games industry a year ago is quickly consolidating and stratifying into Huge Players, Big Players, and Everyone Else.  There are good games and money to be made at each level, but on different scales and with different difficulties. And game designs or production practices that worked less than a year ago have to be discarded now to stay current with the market.

For myself and my company, Online Alchemy, the latest blows we endured were too much.  I’ve rebooted the company before — after a triple-play debacle in 2007 (DARPA project killed by world events, development contract pulled at the last moment, and the long-lamented demise of the Firefly MMO at the hands of Fox and Universal), so I know how to do it.  And I have an amazing team of people to work with.  But the costs of rebooting again now seemed too high and too risky.

So, time for a pivot: I have joined Kabam as an Executive Producer.  This is a terrific company with a clear focus and top-notch talent all around. I’ve been very impressed with the blend of agility and process I’ve found there. I can’t yet say what I’m working on, but as with everything in this part of the industry, all will be clear soon enough.

Online Alchemy will be sticking around, but will be returning to its focus on “social AI” research and development.  This is definitely an area for research, building on the company’s existing work in artificial emotions, relationships, and reputation, but as yet no real consumer market has appeared for such AI.  I still believe one will, but it may be ten or twenty years before it happens.  I’m content to be patient, and persistent.

So, what’s next?

 

Virtual Characters and Real Emotions

November 16, 2010

Jesse Schell is one of the most articulate, insightful people developing games and talking about their future.  At a recent keynote at a Unity3D conference, he talked about virtual characters as a crucial part of the future of games and other online experiences.  As usual he makes a lot of excellent points about virtual characters remembering you and conversing with you, but on one — how we interact emotionally with virtual characters — I have to disagree with him:

“Emotions are easily recognized by humans, but computers must be part of that,” said Schell. “Once we can do that we can sense your emotions,” and developers can create “a game where you actually have to act, or feel emotions. A game where someone tells you their dog just died and if you can’t manage to cry then no, you’re not getting to the next level!” (as covered by Gamasutra)

First, I appreciate Jesse stepping up with concrete predictions and other musings — as he says, this is a great way to predict (and create) the future.  That said, this one is exactly backwards: the emotional connection with virtual characters doesn’t come because we emote effectively, but because the characters themselves have and display emotions that we then relate to.  Their emotions make them more real to us, and allow us to feel something similar. (more…)

This Isn’t the Social AI You’re Looking For

May 26, 2010

I’m a huge proponent of what I call “social AI;” I’ve written and spoken about this before.  Social AI is in some ways a subset of “Artificial General Intelligence” in that it implies AI that acts in socially plausible ways (a phrase we use to avoid problematic terms like “realistic”) without having to include the complete range of human knowledge and nuance.

My vision for social AI is that it enables computer-driven agents (aka NPCs) to interact with each other and with human participants in socially plausible and satisfying ways.  This, I believe, is necessary for the “very long form story” and non-static worlds that I wrote about earlier, among other uses.
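To make the idea of “socially plausible” interaction a bit more concrete, here is a toy sketch of NPCs whose behavior toward each other depends on an evolving relationship score. Everything here — the class, method names, thresholds, and deltas — is my own illustrative assumption, not the actual design of any social AI system discussed above:

```python
class Agent:
    """A minimal NPC holding per-acquaintance affinity scores.

    All names and numbers are hypothetical illustrations of a
    relationship-driven behavior model, not a real engine's API.
    """

    def __init__(self, name):
        self.name = name
        self.affinity = {}  # other agent's name -> score in [-1.0, 1.0]

    def meet(self, other, score=0.0):
        self.affinity[other.name] = score

    def choose_interaction(self, other):
        """Pick a socially plausible action based on how we feel about them."""
        score = self.affinity.get(other.name, 0.0)
        if score > 0.5:
            return "confide"   # close friends share private information
        elif score > 0.0:
            return "greet"     # mild positive: polite small talk
        elif score > -0.5:
            return "ignore"    # neutral-to-mild negative: avoid engagement
        return "snub"          # strong dislike: visible social rejection

    def interact(self, other):
        """Carry out an interaction and nudge the relationship accordingly."""
        action = self.choose_interaction(other)
        delta = {"confide": 0.05, "greet": 0.02,
                 "ignore": -0.01, "snub": -0.05}[action]
        old = self.affinity.get(other.name, 0.0)
        self.affinity[other.name] = max(-1.0, min(1.0, old + delta))
        return action
```

Even a model this crude exhibits the feedback loop that matters for long-form stories: interactions change relationships, which in turn change future interactions, so the social world is never static.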

But there are also disturbing examples of what social AI isn’t, at least to my way of thinking.  I’m going to look at a few of these, and then come back to talk more about what social AI can do for us in more positive ways. (more…)

GDC Week & Predictions

March 8, 2010

Like a lot of others, I’m heading to GDC today.  I’m mainly going for a couple of summits, some meetings, and to see people who are good friends whom I see once or twice a year.  It’s an odd sort of gathering: at times it feels like a deadly serious meeting of circus clowns.

Anyway, I’m not going for the talks (and am not giving one this year)… and so I have pretty low expectations of anything significant coming out of them.  But as this is also sort of the beginning of the game-year, I thought I’d take a few minutes for some pre-GDC predictions for the conference and for the rest of the year – social games, MMOs, 3D, AI, the works.

(more…)

Did Maslow get it wrong? (and why this matters for games)

November 23, 2008

You may be familiar with Maslow’s hierarchy of needs (more on this below the cut).  Maslow’s theory has heavily influenced the architecture of our AI technology, which is why I’m attuned to discussions of it or instances that support or undercut it.  Recently I ran across a theory in education known as “CBUPO,” an ungainly acronym for “Competence, Belonging, Usefulness, Potency, Optimism” designed by Richard Sagor at Washington State University (an accessible introduction can be found here (pdf)). Sagor’s theory suggests some interesting modifications to Maslow that have consequences for how we understand ourselves — as well as the motivations for gamers and AIs.
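The usual reading of Maslow is strict bottom-up prioritization: an agent attends to the lowest insufficiently satisfied need before anything higher. As a rough sketch of how that might drive NPC behavior — the level names, threshold, and function below are my own illustrative assumptions, not Maslow’s numbers or any real engine’s architecture:

```python
# Maslow-style need prioritization: levels are checked bottom-up, and the
# first insufficiently satisfied need becomes the agent's current drive.
MASLOW_LEVELS = ["physiological", "safety", "belonging",
                 "esteem", "self_actualization"]

def current_drive(needs, threshold=0.5):
    """Return the lowest-level need whose satisfaction is below threshold.

    `needs` maps a level name to a satisfaction value in [0.0, 1.0];
    missing levels count as wholly unsatisfied. If every level clears
    the threshold, the agent is free to pursue self-actualization.
    """
    for level in MASLOW_LEVELS:
        if needs.get(level, 0.0) < threshold:
            return level
    return "self_actualization"
```

Sagor-style modifications would amount to swapping in different levels (competence, belonging, usefulness, potency, optimism) or relaxing the strict bottom-up ordering — which is exactly the kind of change that ripples through an agent’s motivational behavior.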

(Warning: psychological theory leading to AI and game-relevant thoughts below.)

(more…)