As many of you have probably heard, either at the Fanfest last weekend or in the forums, we are working on a system to render and animate full-body characters within EVE. The system will be used to represent your character in-station and in other locations where a capsuleer (pod pilot) would crawl out of his pod, take a quick shower, throw some clothes on and mingle with other capsuleers.
This opens up a huge number of questions and speculation about both technical and game design issues. As the project has just started within the company, many questions cannot be answered at the moment, but I hope to clear things up a bit with this blog.
Technical issues: Animation
When we originally developed EVE, we took one look at our competition and existing space games and concluded that the quality of graphics and the level of detail space games were offering were far too limited, unappealing and in some cases bordered on being an insult to the people who were supposed to play them. We set out to create a shading and rendering engine capable of exceeding whatever was considered “good enough” at the time, and we never looked back. We’re quite proud of what we achieved back then, although as time has passed, technology and art direction in games in general have evolved, so the art we created from 2001 to 2003 isn’t as unique as it was back then. This is being addressed by our massive graphics engine update, already covered on this website, in EON and in countless interviews.
Now, today we’ve set our sights on character rendering and lifelike animation. It’s important to realize that in order to create realistic-looking characters, you have to pay great attention to how they will be animated. We have examples of 3D rendered characters in films and digital media that look amazing when you see a screenshot or a still frame, but once they start moving, they look like zombies or animatronic RealDolls™. There’s actually a term describing how CG or cartoon humanoid characters tend to look creepy and un-lifelike the closer you get to photorealism: a Japanese roboticist named Masahiro Mori gave it the name “uncanny valley”. If we plan to create close-to-photorealistic characters, we must ensure that their behavior matches the quality of the shading and rendering in order to keep them out of this valley of darkness.
So exactly how do we create lifelike animation? Well, for one, we will use state-of-the-art motion capture. There are nuances in the biomechanics of the human body that only the most experienced and skilled animators are able to express, and it takes them days to create what you capture in minutes with mocap. The amount of animation needed for a project like walking in stations prohibits us from hiring an army of the world’s most talented animators and having them animate for years until their fingers bleed.
But motion capture is no magic wand. It’s a tool. You still have to process and work with the data.
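To illustrate what “processing the data” can mean in the simplest case, here is a toy sketch (data and function name invented for this blog, not our actual pipeline) of smoothing jitter out of a captured joint-angle track with a moving-average filter:

```python
# A toy illustration of one mocap post-processing step: smoothing jitter
# out of a captured joint-angle track. Real pipelines use far more
# sophisticated filters; this just shows the shape of the problem.

def smooth_track(samples, window=3):
    """Return a moving-average-smoothed copy of a list of joint angles."""
    half = window // 2
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - half)
        hi = min(len(samples), i + half + 1)
        smoothed.append(sum(samples[lo:hi]) / (hi - lo))
    return smoothed

# A jittery elbow-angle track (degrees), as it might come off the rig:
raw = [30.0, 31.5, 29.0, 32.0, 30.5]
print(smooth_track(raw))
```

Even this trivial filter has to decide what to do at the edges of the clip, which hints at why cleaning up real capture data is a job in itself.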
There are also various issues with motion capture. It’s recorded animation from human motions that happened that day in the studio, perhaps years before you are playing the game, maneuvering around a crowded hall with your corpmates. A friend walks past you, a busty waitress bends down to pick something up off the floor, or a whiff of steam erupts from a pipe close by, engulfing your head for a moment in scorching hot, foul-smelling chemicals. How would you react? Wouldn’t you at least nod to your passing friend, shoot a candid glance at the waitress, or put your hand up against your face as the steam hits it? And what about the corp CEO? Will his corpmates look at him with more respect or admiration than they would a stranger?
And what does looking at someone entail? Do your eyes focus on him? Does your head turn towards him? Or do you twist your entire torso so that you are facing him? The answer differs based on various social and psychological attributes.
This brings us to an area of computer graphics called dynamic avatar human-to-human interaction. It tries to apply knowledge derived from years of research into human body language to the actions of computer-generated avatars, so that their behavior mimics human behavior without the user or NPC controller micro-managing every little twitch of the body or glance of the eyes.
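As a hand-wavy sketch of the idea, the gaze question above could be reduced to a rule that maps social context to how much of the body gets involved. The attribute names and thresholds here are invented for illustration, not how our system actually scores people:

```python
# A minimal sketch of how a dynamic avatar system might pick a gaze
# response from social context, so that glancing at a stranger and
# greeting your CEO read differently on screen. All numbers invented.

def gaze_response(familiarity, importance):
    """Map two social attributes in [0, 1] to a body engagement level."""
    engagement = 0.5 * familiarity + 0.5 * importance
    if engagement > 0.75:
        return "torso"   # twist the whole upper body to face them
    if engagement > 0.4:
        return "head"    # turn the head, eyes follow
    return "eyes"        # a quick glance only

print(gaze_response(0.9, 0.9))  # your corp CEO
print(gaze_response(0.1, 0.1))  # a passing stranger
```

The real research problem is, of course, choosing those attributes and rules so the result reads as natural body language rather than a lookup table.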
This is one of the areas that we intend to research and apply to our animation system. To that end, we’ve teamed up with the Center for Analysis and Design of Intelligent Agents at Reykjavik University (RU). Two brilliant guys, Dr. Kristinn R. Þórisson and Dr. Hannes Högni Vilhjálmsson, both MIT Media Lab PhDs with extensive research experience in these topics, will lead research at RU that will eventually find its way into our game environment.
At the moment, we’re working on a scriptable animation sequencer and a procedural animation solver that will serve as the building blocks for our avatars’ dynamic behavior. The skinny white naked men shown at the Fanfest staring at geometric shapes and each other were primitive examples of the capabilities of this system.
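To give a feel for what an animation sequencer does at its core, here is a deliberately tiny sketch: named clips queued up and played back in order as time advances. The clip names and API are hypothetical, for illustration only:

```python
# A very small sketch of a scriptable animation sequencer: clips are
# queued with durations and consumed as game time ticks forward.

class AnimationSequencer:
    def __init__(self):
        self.queue = []          # list of (clip_name, duration_seconds)
        self.elapsed = 0.0       # time into the clip at the queue front

    def enqueue(self, clip, duration):
        self.queue.append((clip, duration))

    def tick(self, dt):
        """Advance time by dt seconds; drop finished clips off the front."""
        self.elapsed += dt
        while self.queue and self.elapsed >= self.queue[0][1]:
            self.elapsed -= self.queue[0][1]
            self.queue.pop(0)

    def current_clip(self):
        return self.queue[0][0] if self.queue else "idle"

seq = AnimationSequencer()
seq.enqueue("walk", 2.0)
seq.enqueue("nod_greeting", 0.5)
seq.tick(2.2)                    # the 2.0s walk finishes, the nod begins
print(seq.current_clip())        # -> nod_greeting
```

A production version would blend between clips and let the procedural solver override individual bones, but the queue-and-tick skeleton is the same.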
Technical issues: Rendering
With the advent of programmable shaders in modern-day graphics cards, we are able to render truly amazing images in real time. Stuff that once existed solely in the realm of high-budget Hollywood movies can now be rendered by a single square inch of silicon in your laptop. We see truly photorealistic demos from graphics card vendors that make us think that soon CG and reality will be indistinguishable.
That is all very good. But once you take the technology used to render a single character on a high-end workstation with the most beefed-up graphics card out there and try to apply it to fifty characters walking around in a dynamic environment, reality sets in and you realize that there’s still a great amount of optimizing and sacrificing to be done.
Earlier today, someone here at the office mentioned a fleet battle on TQ that had 450 ships engaged at the same location and stressed our servers quite a bit. Now, imagine 450 high-quality photorealistic characters walking around in a detailed environment. Obviously, a lot of intelligence has to be put into LODing, optimization, etc. With the advent of normal mapping and per-pixel shading, we’re able to render humans originally modeled with close to 400,000 polygons using around 8,000 polygons without a noticeable loss of detail. 8,000 polys is still a lot, and we’ll try to get this number down without sacrificing quality. The heads in the character portraits in EVE today range from 4,000 to 6,000 polys, so now we can have an entire body with a head in far more detail.
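The LOD idea above boils down to trading detail for distance. Here is a sketch of distance-based LOD selection; the thresholds and polygon counts are illustrative guesses, not our actual budgets:

```python
# Distance-based level-of-detail (LOD) selection: nearby characters get
# the full mesh, distant ones cheaper versions, and very distant ones
# are culled entirely. All thresholds and counts are illustrative.

LODS = [
    (5.0,   8000),   # within 5 m: full-detail mesh
    (20.0,  3000),   # mid-range: reduced mesh
    (100.0, 800),    # far away: very coarse mesh
]

def select_lod(distance):
    """Return the polygon budget for a character at this distance."""
    for max_dist, polys in LODS:
        if distance <= max_dist:
            return polys
    return 0  # beyond the last threshold: don't render at all

# Total polygon cost of a small crowd at mixed distances:
distances = [2.0, 12.0, 50.0, 300.0]
print(sum(select_lod(d) for d in distances))  # 8000 + 3000 + 800 + 0
```

The payoff is that a crowded hall costs far less than "everyone at full detail", which is what makes hundreds of characters in one room thinkable at all.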
But that’s only the poly count. Obviously, shading has evolved a lot as well. Subsurface scattering and anisotropic lighting have recently become available to us, and we intend to use those technologies, plus a plethora of other techniques, to realistically render the faces of your characters.
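One common cheap approximation of subsurface scattering in skin is “wrap” lighting, which lets light bleed past the terminator instead of cutting off hard at 90 degrees. A sketch of the idea in plain Python (real versions run per-pixel on the GPU, and this is a generic technique, not necessarily the one we’ll ship):

```python
# "Wrap" diffuse lighting: a cheap stand-in for subsurface scattering.
# With wrap=0 this is standard Lambert diffuse; higher wrap values let
# light reach further around the model, softening the skin's falloff.

import math

def diffuse(n_dot_l, wrap=0.0):
    """Diffuse intensity from the dot of normal and light direction."""
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

angle = math.radians(100)        # light just past the terminator
n_dot_l = math.cos(angle)        # negative: plain Lambert goes black
print(diffuse(n_dot_l))          # 0.0 with no wrap
print(diffuse(n_dot_l, wrap=0.5))  # small positive value: soft falloff
```

That soft falloff is a big part of why wrapped or scattered shading reads as skin while hard Lambert shading reads as plastic.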
Here is an example of what we have running in our engine at the moment. This character model is 8,500 polygons and is rigged with a skeleton for limbs, fingers and face.
Game design stuff
So… This is probably the part people are most curious about. What exactly will we be able to do once we’re inside the stations?
First off, I want to stress that we intend to build the world and experience inside the stations incrementally. The first release will have limited functionality and is intended more as a socializing forum than a place for brutally strangling your rival corp members or tossing hand grenades into a crowd of newbies, although both would be rewarding experiences for many players (not the ones being strangled or blown up, though).
There are a lot of services within stations at the moment, offered via a UI that we are used to and that enables us to perform complex actions quickly. There are no plans to disable any of these UI elements or replace them with “real world” experiences, such as having to walk to a repair shop to have your ship fixed or spending hours tracking down agents in dark corridors. We might have you doing similar tasks for added immersion, but we’ll never ruin the existing experience for people who want to be quick about their business in station so that they can hurry out and get back to podding miners.
Socializing, however, is very often done around some kind of action, game or activity that is in itself perhaps dull or monotonous but serves as an excuse for people to gather and chat. Bars, for example, are social places for the most part. They often have pool tables and darts and… beer. All tools to break the ice and play around with casually while you mingle with people. We realize that it’s not enough to just build a 1000-square-meter circle inside a station, put everybody there and wait for the party to start. There will be stuff to do inside the station, some of it practical, some of it mundane, and lots of it will tie into the roleplaying backstory of the game.
It is often said that EVE forces everybody to roleplay: not so much with the words they speak but with the actions they take. Just to make money or get from point A to point B, the game mechanics are based on a solid backstory and setting, so you find yourself performing actions that are in themselves roleplaying yet serve a pragmatic purpose for attaining your goals. This is a design pattern that we will continue to use inside the stations. The last thing we want is for the stations to become a venue for dancing the Macarena or other actions that are totally out of character for the game and the people in it. I think Reynir, our founder and creative director, once said in an interview that there will be no dancing in EVE:
“Macarena-dancing aliens have nothing to do with science fiction in my book. I recommend watching Aliens, Blade Runner and The Empire Strikes Back. This is what true science fiction is about and the reason we made EVE.”
The pod-pilots / capsuleers are the elite of EVE society: the chosen few who decide their own fate, and often that of others, with the buying power of small countries and the military might of nations. As described in our stories, they are the rock stars of the EVE universe. Normal people look upon them with awe, and those in power often regard them with envy, discontent and fear, as the pod-pilot, powerful as he is, answers to no one but himself and his corporation. This is a fact we want to capture and portray realistically within the stations. That’s, for instance, why pod-pilots mingle with each other and perhaps a select few of the NPCs in stations. They stay in VIP lounges and corporate offices, while far below, in the small streets and corridors, you glimpse the thousands of normal people making their way around the station.
Then there is the question of combat. Personally, as I stated at the Fanfest, I have a strong urge to perform violent acts upon other people when I’m with them in an online environment. It may have to do with bad upbringing, incorrect role models or just genes. But many people share this affliction with me. What will we do to meet their needs?
Combat is a Pandora’s box of problems. If we introduce combat, we need some sort of combat system, and knowing EVE players, it’s very likely they won’t settle for a simple one. They want to train skills, combine equipment, use tactics and multi-pronged attack maneuvers. This sort of system is hugely complex. There’s also the issue of security, security status, CONCORD, etc., plus the issue of cloning, skill loss and so on. I think it’s fair to say that there will not be life-threatening combat in the first release of this project. We’ve toyed with the idea of never being able to kill someone, only “wound them seriously”, so that the docbots at the medical bay can patch you back together, your same self, since you didn’t receive any brain damage. Hence no skill loss. But that still requires a system for combat. Do we want to create a new MMORPG inside an existing MMORPG? Well, we’re halfway there just by putting you in stations. But the short answer is: no combat to begin with.
After we launch the first release of walking in stations, if the forums explode with “can I plz kill ppl!?!?!” requests, we’ll consider them seriously and look at how to implement combat in a reasonable way. Until then, it’s petting and hugging only (not really, but you get the idea).
What next?
Well, now we retreat to our cave and continue coding, crafting and designing. You’ll read more about the project in a future issue of EON, and we’ll be sure to keep you posted as the project progresses. There are a lot of engineering and game design issues that need to be resolved. We have our work cut out for us. But hey, this is what we love doing!
Thanks for your interest.
Torfi Frans