Joystiq: So OnLive ...
Steve Perlman: Yeah!
First, congrats on the launch ...
Everybody's asking, "What's going on at GDC? What's the thing this year?" You know, it's not LittleBigPlanet this year and it's not Gears of War 2. I think the biggest thing at the show, in terms of buzz, in terms of coverage, has to be OnLive. So, congratulations on that.
But there's a second side to that buzz story. Everyone's talking about it, but they're all saying the same thing: "Yeah, it sounds really good, but ... it also sounds like magic."
There's, I think, a lot of skepticism – some might even say not enough skepticism – based on just the promise, which is really ambitious. What's your first response to that level of skepticism?
They damn well better be skeptical. When I set out to build this thing, we looked at it and said, "Look, in theory it's possible to do, but in practice we just didn't know if it could actually be done." You know, Rearden [Labs] is an incubator. You guys covered Mova, when we did the facial capture stuff, and you know, you saw it come out with The Incredible Hulk and The Curious Case of Benjamin Button. That also was a very ambitious thing; we said, "I don't know if we can really [get] the face with that level of realism, where you can't tell that it's not a real person." Well, jeez, Brad Pitt was nominated for Best Actor. For a person, for a performance, but it was a computer-generated face for most of the movie, right? So we overcame that. It took over five years to get to that technology.
"They damn well better be skeptical."
OnLive took seven. Right? Very similar thing: we started out saying, "If it can work, it will be transformative to the industry, but we don't know whether or not we'll get there." And I'll tell you, for the first several years, we were making incremental progress, but we just didn't know whether we could. And finally we got a thumbnail – just a tiny little postage stamp window we had years ago – that we had Quake running in, called MicroQuake, and you could kinda tell what was going on, sort of, it was hard to aim, but it was absolutely instantaneously responsive. And it was working on a server center remotely. We said, "Alright, it's theoretically possible. Now how do we make that full screen, and of course how do we deal with the fact that every home setup is different – this kind of firewall, this kind of router, or this Wi-Fi adapter, what have you – how can we get a reliable stream that works, so that people can actually use it as a commercial service?"
So, it took years. So there's science behind this thing that had to be developed, a lot of new ways of thinking about video compression. And then there's just practical engineering, just going into lots of different places. I mean, the thing I don't think people quite get, and one of the reasons why it seems like, you know, "this is just too much, too big," is that the earlier stuff that you might've expected to see come out first – you know, running in a small window, running a simpler game than Crysis, remotely, and so on – that kinda stuff we did years ago; we just decided to stay in stealth mode until all the pieces were in place, and then we had all the publishers that had a chance to really run us through the wringer. Then, when we announced, we knew people were going to be skeptical. And they should be, you know? But they would go and say, hmm, we've got nine of the top publishers behind this thing; do we really think these guys are gonna take their top titles, commit them for release in the same retail window as, you know, the titles when they come out for the consoles, and they're gonna let us show games on the floor here, which are actually just being released the day that the OnLive booth opens, you know?
The only reason they're going to do that is, obviously, they've gone through tons and tons of testing as well.
When asked about this thing, Will Wright said something like, "At some point you run up against the actual laws of physics." Actually, it's not about the technology. I'm sure you can stream video, I'm sure you can compress it fast enough; I'm sure you can do 10,000 or 100,000 instances of Crysis all at once ...
... I don't want to know how much money it'd cost you guys for a hundred thousand instances of Crysis all at once, but, you get it, it's fine. But at some point you run up against the actual barriers of physics: the speed of light. What sort of limitations do you have that you literally can't overcome; that are impossible?
"... when you look at the location of the five service centers, and you draw a thousand mile radius around them, you get coverage of the entire US."
We're not gonna be able to get server centers much further away than 1,500 miles, and that's only if you have something like fiber. If you have cable modem or DSL, then the servers need to be within about a thousand miles of your home. And that's because of the speed of light through fiber. Theoretically, we could set up low-orbiting satellites, because the speed of light through the air is faster than through fiber, and then I guess we could go maybe three thousand miles. I don't think anyone's going to erect that; I think we're going to be using fiber that's already in place through the internet. So, in practice, and what we've told people, is that you really need to be within 1,000 miles. So, right now, for beta, we have a west coast service center in Santa Clara and an east coast service center in Virginia. We're literally right now outfitting one in Texas. And then we'll be setting up a couple other ones in the US. And then when you look at the location of the five service centers, and you draw a thousand-mile radius around them, you get coverage of the entire US. I think there's a little corner up in North Dakota that's outside of the 1,000-mile radius somewhere ...
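[A rough sanity check of the distances Perlman cites: the sketch below estimates pure propagation delay through fiber, assuming light in glass travels at roughly two-thirds of its vacuum speed. It ignores routing detours, switching, and encoding time, so real latencies would be higher.]

```python
# Back-of-envelope propagation delay for the 1,000- and 1,500-mile figures.
# Assumption: signals in optical fiber travel at about 2/3 the vacuum speed of light.
C_VACUUM_MPS = 299_792_458        # speed of light in vacuum, meters per second
FIBER_FACTOR = 2 / 3              # typical slowdown from glass's refractive index
MILES_TO_METERS = 1609.344

def round_trip_ms(miles: float) -> float:
    """Round-trip propagation delay over `miles` of fiber, in milliseconds."""
    one_way_seconds = (miles * MILES_TO_METERS) / (C_VACUUM_MPS * FIBER_FACTOR)
    return 2 * one_way_seconds * 1000

print(round(round_trip_ms(1000), 1))   # about 16 ms round trip at 1,000 miles
print(round(round_trip_ms(1500), 1))   # about 24 ms at the 1,500-mile fiber limit
```

Even before compression and rendering overhead, a 1,000-mile path eats roughly 16 ms of a latency budget, which is consistent with the radius Perlman describes for a responsive service.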
North Dakota's a big market ...
Eurogamer had a story that I'm sure you've probably read by now. One of the arguments they presented was about this technology – this ability to encode video this quickly and send it anywhere within a thousand miles of the data center. Why is video gaming the product that you wanted to sell with a technology that's that far ahead of anything else the industry's ever done in terms of video compression?
Well, there's a couple reasons. I mean, to start out, Rearden is a technology and media incubator. So, in terms of what we like to do, one of the things that really motivated us was that we're creating photo-real people, and we're like, "Well, this is really cool for movies, but how are we going to get them into game systems?" We knew we needed more performance than was feasible at home, and so, this is very exciting to do that. But another thing is that it was a real market, ripe for disruption. It's the last digital media format that is primarily distributed through packaged goods. Music is long gone, away from packaged goods. And video's on its way out, away from packaged goods. Well, look at video games. I think it's like 85 percent of the market goes through physical media. So, we're seeing downloads; Steam, for one, is a great service, so there's been a lot of progress moving toward an online distribution model. But a download takes a long time, you've got to have the right drivers, right configuration, and so forth, and you certainly have to have the level of computing power in your local machine to keep up with whatever game you're trying to play. You've got students in dorm rooms, now, that have laptops with either no GPU or very low-performance GPUs. So, we figured, "Hey, gaming is a good place to start."
Then the last reason we chose that as well is that it was the hardest problem. This obviously would be awesome for video conferencing, right? You know that delay you see with your hands online; with this, it's like you move your hand, and it moves instantly across the video conference link. But games are way harder. I mean, in video conference, the camera doesn't move very much. There's tricks you could do to minimize the data rate, and so forth. Video games, they even defy what movies would do. Movies, you typically have regular pans, and a couple zooms, and things like that. Video games, you could have a pan, zoom, twist, explosion going on, right? When video compression algorithms are developed – you know, my team developed QuickTime, and was involved in all those various other standards – what you do is you look at the various cases: okay, here's what we do with a zoom; here's what we do with fine detail; here's what we do where we have a random pattern of grass; or something like that. The kind of cases you have to deal with in video games go far beyond what occurs in cinema, or television programming, or home movies. So we say, "If we can get that solved, then we can solve anything."