|Game levels, environments... how do we go about creating one?|
It's been a very, very long time since I last did game levels and environments. For various reasons I stopped doing environments and moved over to ArchViz. I kind of miss that now.
I will be taking time out to refine my so-called modeling skills.
But I would like to know how we go about it. Is there a good tut or reference site?
For example, a sci-fi lab or a city street with loads of detail. First I would like to go for a high poly, which I am a little comfortable with, then move on to low poly and texturing details.
So basically, how do we go about it? I see so many details in a sci-fi lab: wires, pipes, machines etc. How do we go about adding detail like that? Do we need hundreds of references before going anywhere near modeling? Any good place to start finding those?
I am very inspired by the scenes created by Stefan Morrell and others like him. I have his sketchbook page on the game-artist website bookmarked.
Any and all thoughts and help welcome.
Facebook Photography page
2/18/2011 7:15:32 AM (last edit: 2/18/2011 7:17:11 AM)
Depends on the game engine and system requirements (processing power required to render).
In terms of design, it depends what the game/level objectives are.
Visuals - as usual find reference art/photos & build from there.
Getting your head around normal mapping if you haven't already done so would be a good start.
Remember that you always want the level to run at 60fps on target hardware, so budget your polygon count accordingly, then build efficiently into that space.
Modelling efficiently - using the minimum number of polygons/textures to achieve the look you want is key to creating a high performance gaming environment.
Lighting/shadows - each real-time light & shadow caster eats into your performance budget so use the minimum necessary to create the look you're going for. Some features such as shadows can be turned off on low spec machines that lack ability to render them fast. Consider use of lightmaps instead of real-time lighting where lighting is static.
Efficient design = lower min spec requirement = larger potential audience = $$$.
Unreal Development Kit is the most popular high end gaming tech at the moment, so maybe grab that & experiment.
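The 60 fps point above can be made concrete with some back-of-envelope arithmetic. A rough sketch, with an invented triangle-throughput figure; you would profile the real target hardware rather than use these numbers:

```python
# Back-of-envelope frame budget for "budget your polygon count accordingly".
# TRIS_PER_MS and GEOMETRY_SHARE are made-up illustration values.

TARGET_FPS = 60
FRAME_MS = 1000 / TARGET_FPS   # ~16.7 ms to render *everything* in a frame
TRIS_PER_MS = 100_000          # hypothetical measured geometry throughput
GEOMETRY_SHARE = 0.25          # leave room for lighting, physics, AI, etc.

budget = int(FRAME_MS * TRIS_PER_MS * GEOMETRY_SHARE)
print(f"{FRAME_MS:.1f} ms/frame -> roughly {budget:,} visible tris")
```

The point of working it through: the budget is a consequence of the frame time, so every feature you enable (shadows, extra lights) shrinks the slice left for geometry.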
2/18/2011 8:52:16 AM (last edit: 2/18/2011 9:43:25 AM)
It really does depend a lot on the engine used, from team to team, the end goal, etc., as AS said.
If you are not into programming, there really isn't much you can go out and read up on before you join a game dev team, other than how to build assets as well and as efficiently as you can.
Here is how we work in The Red Solstice; it's probably similar in many parts to most RTS development workflows (at 20x the scale and time spent :P).
My job is mostly environment modeling. Let's say I want to build a barrel. I could start with a high poly to later bake a normal map from, but most of our smaller environment parts don't get a high poly because of time constraints.
1. I build a low poly barrel, with a poly count calculated and set by our programmers.
2. The mesh should be efficient, clean, and welded, and checked for errors; unseen polygons are deleted, etc.
3. I unwrap the barrel and export a TGA texture template from the unwrap edit window, at a size also specified by a programmer. This serves as a guide for the texture artist to paint over later. TGA is used so he can use an alpha channel to mark the boundaries of the texture more easily, if needed.
4. I move the model to the 0,0,0 coordinates while keeping the pivot point at the bottom of the barrel, not in the center, collapse the stack, reset the xform, and export to the OpenCOLLADA format (it's just what we use, not a standard in any way).
5. The model is then converted from COLLADA to our own proprietary model format, and a couple of other files are created, but that is also specific to our engine.
1. The texture template gets painted over, and the standard diffuse texture is made.
2. The normal map is created either by projecting from a high poly mesh (usually refined in ZBrush) or, for our barrel, with a Photoshop plugin that converts textures to normal maps, then edited some more to smooth some edges, etc.
3. I create a VRay light material with a VRay dirt map in it and texture-bake an ambient occlusion map.
4. The glow map is done in Photoshop, usually with a smooth brush.
5. All of this is exported as .png, placed in the respective folders, and applied to the model along with the additional files mentioned earlier.
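The texture-to-normal-map conversion in step 2 (the Photoshop-plugin route) can be sketched in code. This is a minimal pure-Python illustration of the common finite-difference approach, not the actual plugin's algorithm; function and variable names are made up:

```python
import math

def height_to_normals(height, strength=1.0):
    """Convert a 2D height map (list of lists, values 0..1) to per-texel
    tangent-space normals packed as (r, g, b) in 0..255. Borders are
    clamped. A rough stand-in for texture-to-normal-map filters."""
    h, w = len(height), len(height[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # central differences with clamped borders
            left  = height[y][max(x - 1, 0)]
            right = height[y][min(x + 1, w - 1)]
            up    = height[max(y - 1, 0)][x]
            down  = height[min(y + 1, h - 1)][x]
            dx = (left - right) * strength
            dy = (up - down) * strength
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            n = (dx / length, dy / length, 1.0 / length)
            # pack the -1..1 vector into 0..255 RGB
            row.append(tuple(round((c * 0.5 + 0.5) * 255) for c in n))
        out.append(row)
    return out

# A flat height map gives the "neutral" normal-map blue (128, 128, 255)
flat = [[0.5] * 4 for _ in range(4)]
print(height_to_normals(flat)[0][0])
```

Real filters add Sobel kernels, wrapping for tileable textures, and blur passes, but the gradient-to-normal core is the same idea.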
1. There is a folder structure of model categories in our level editor; models can be dragged into the scene from there, then moved, scaled and rotated as needed.
2. The ground is done in the editor itself, effectively by painting the height map with a brush (my simplified explanation), and it is also painted with premade ground textures from within it.
3. Lights are set in the editor, with sliders provided for RGB and attenuation.
That's about it for the models. There are also animations, done in Max and exported via COLLADA, then called from within the source code.
I think this is the longest post I've ever written :P
Feel free to ask if you've got questions
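The pre-export conventions above (poly budget from the programmers, model at the origin, pivot at the bottom) lend themselves to an automated sanity check. A hypothetical sketch, not The Red Solstice's actual tooling; all names and tolerances are illustrative:

```python
# Pre-export checks mirroring the modeling steps above: budget, origin, pivot.

def check_export(vertices, triangles, poly_budget):
    """vertices: list of (x, y, z); triangles: list of index triples.
    Returns a list of human-readable problems (empty = ready to export)."""
    problems = []
    if len(triangles) > poly_budget:
        problems.append(f"over budget: {len(triangles)} > {poly_budget} tris")
    # the model should sit on the world origin in x/y
    cx = sum(v[0] for v in vertices) / len(vertices)
    cy = sum(v[1] for v in vertices) / len(vertices)
    if abs(cx) > 0.001 or abs(cy) > 0.001:
        problems.append("mesh is not centred on 0,0 in x/y")
    # pivot convention: the lowest point of the mesh should be at z = 0
    min_z = min(v[2] for v in vertices)
    if abs(min_z) > 0.001:
        problems.append("base of mesh is not at z = 0 (pivot at bottom)")
    return problems

# a unit cube sitting on the ground plane, centred on the origin
cube = [(x - 0.5, y - 0.5, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
tris = [(0, 1, 2)] * 12  # 12 triangles; indices are placeholders
print(check_export(cube, tris, poly_budget=500))
```

Hooking something like this into the export button catches the "forgot to reset the pivot" class of mistakes before the programmers ever see the file.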
2/18/2011 9:30:31 AM (last edit: 2/18/2011 4:59:51 PM)
Thanks Advance and horizon for the detailed replies.
I guess the steps you have both mentioned are a bit too far along for me, though.
Presently, I do not worry about a lot of polygons. I am okay with a high poly scene at the moment.
For me the important thing is:
I need to create a good environment, a good-looking level. That's the main concern.
- Look into poly modeling in detail and learn, because what I have been doing in ArchViz is basic building modeling, with models mainly picked and placed. So that does call for learning.
- Collect references.
- Sketch / create basic geometry of the layout
- Start adding details to each specific part
- High poly - normal mapping - texturing
I am sure it's easy to write it down like that, but much more difficult in reality.
Any good reference sites for sci-fi components, or ways to get those details, would be a great help.
2/18/2011 9:56:21 AM (last edit: 2/18/2011 9:56:21 AM)
I have a question about this,
I've been asked to texture some spaceship models for the Unity game engine.
I am going to unwrap and produce diffuse and bump maps. Can it also use normal maps? Specular?
I've seen that plugin for PS, and that seems a good way to go for normals.
Inxa - this is pretty good for ref:
and to a lesser extent
" Difficult, yes. Impossible , no..."
2/18/2011 10:44:27 AM (last edit: 2/18/2011 10:44:27 AM)
@inxa - when you said level, I assumed it was for a real-time engine, as did AS, because that's what it's usually called.
What you want, as I understand from your last reply, is a high poly interior scene (doesn't matter if it's an office or a fictional lab).
The workflow you wrote seems good. The only part that stands out is normal mapping in a high poly scene. Since normal mapping is a means of faking certain displacement and bumps in real time, I don't really see why you wouldn't just use that displacement and those bumps in the materials. Maybe I'm missing something.
@Davious - I can't find what maps Unity can use on their site, which is weird. But in the what's-new section there is a fix related to normal maps, so I guess it can use them (it would be quite obsolete if it couldn't).
2/18/2011 10:59:06 AM (last edit: 2/18/2011 10:59:06 AM)
Great info Horizon. I was actually wondering about that as well.
2/18/2011 3:25:15 PM (last edit: 2/18/2011 3:25:15 PM)
I'm new to this game engine stuff and average with my Max knowledge, so please bear with me :)
What speed/performance would I get from using different maps?
If I just use bump, as opposed to normals?
What about displacement? Is this going to slow things down?
I kinda feel normal mapping would be the way to go, so start with a high poly and normal map that.
Do I need a bump map as well?
What's a good workflow here?
Feeling outta my depth!
" Difficult, yes. Impossible , no..."
2/18/2011 3:55:27 PM (last edit: 2/18/2011 3:55:27 PM)
> What about displacement ? is this gona slow things down ?
Yes. Don't use it. I did some R&D on a view-dependent displacement mapping solution a couple of years ago that worked well in real time. The problem comes when you start raytracing: the displacement has to be evenly tessellated so you can see reflected surfaces. I'd like to take a look at this again when I get time.
Normal maps give you better quality results than basic bump maps, so you may as well use them.
Speed/performance of different maps: basically, the more stuff you turn on, the slower it goes.
Most engines enable you to turn features on/off based on end-user settings, so design with all the maps you need and let the engine turn them off when it needs to.
Grab UDK & experiment.
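The "design with everything on, let the engine switch it off" idea can be sketched as a simple preset filter. The preset contents and map names below are made up for illustration; real engines expose this through their own settings systems:

```python
# One material authored with all its maps; quality presets decide
# which maps the renderer actually loads. Preset contents are invented.

PRESETS = {
    "low":    {"diffuse"},
    "medium": {"diffuse", "normal"},
    "high":   {"diffuse", "normal", "specular", "glow"},
}

def maps_for(material_maps, quality):
    """Return only the maps the chosen quality preset allows."""
    allowed = PRESETS[quality]
    return {name: tex for name, tex in material_maps.items() if name in allowed}

barrel = {
    "diffuse":  "barrel_d.png",
    "normal":   "barrel_n.png",
    "specular": "barrel_s.png",
    "glow":     "barrel_g.png",
}
print(sorted(maps_for(barrel, "low")))
print(sorted(maps_for(barrel, "high")))
```

The artist-facing consequence: author the full map set once, and low-spec machines simply never pay for the maps their preset drops.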
2/18/2011 4:29:40 PM (last edit: 2/18/2011 5:15:47 PM)
First you need something to take into whatever you're going to make the details with: just a rough mass with the volumes just about there and a robust, mostly-quad mesh.
Make this high poly and detailed (maybe texture at this stage and bake the diffuse along with the normals).
You'll need a low poly mesh with animatable/deformable geometry around this stage, most likely UV mapped to save some time.
Bake the high to the low in Max, xNormal, ZBrush, mb, etc. (maybe texture at this stage using the extracted maps as a starting point).
Import the low poly into your game engine with textures and maps.
2/18/2011 6:06:13 PM (last edit: 2/18/2011 6:06:13 PM)
I have downloaded the UDK. Will start soon.
Planning to spend at least 3 hours on this every day, starting from modeling high poly, edge loops, edge connects, etc. - whatever I have read and the tuts I have downloaded.
I'll keep you updated as I progress.
Thanks once again.
2/19/2011 5:37:53 AM (last edit: 2/19/2011 5:39:11 AM)
I use an MPQ extractor to view StarCraft 2 models in Max, and I was amazed at how odd the models looked. Every unnecessary and unseen polygon has been completely removed, and the overall geometry is extremely efficient - even more so for the models which are animated (all units). The models only use a normal and a diffuse map, as far as I can tell.
The animations are all controlled through M3 sequences, and I presume these are called through the Actor events which the editor and game use. For example, a marine unit has a variety of fidget animations, and these are all called with an actor message designating how often 'fidget' is played and the chances of each animation getting played.
The lighting, ambient occlusion, shadows, reflections, physics, shadow resolution, water and water reflections, particles, foliage, etc. are all controlled by the end user for variable performance across lots of machines. Most models range between 1,000 and 2,500 triangles.
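Part of the aggressive cleanup described above is mechanical: dropping degenerate and duplicate triangles can be scripted, while culling faces the player can never see needs visibility information a simple script doesn't have. A hypothetical sketch of the mechanical part:

```python
# Illustrative cleanup pass in the spirit of the SC2 meshes described
# above. Real tools go much further (visibility culling, vertex welding).

def clean_triangles(triangles):
    """Drop zero-area (degenerate) triangles and duplicate faces.
    triangles: list of (i, j, k) vertex-index triples."""
    seen = set()
    kept = []
    for tri in triangles:
        a, b, c = tri
        if a == b or b == c or a == c:
            continue  # degenerate: a repeated index means zero area
        key = tuple(sorted(tri))  # same face regardless of winding order
        if key in seen:
            continue  # duplicate face
        seen.add(key)
        kept.append(tri)
    return kept

tris = [(0, 1, 2), (2, 1, 0), (3, 3, 4), (1, 2, 3)]
print(clean_triangles(tris))  # keeps (0, 1, 2) and (1, 2, 3)
```

Note the sorted-key trick treats opposite-winding copies of a face as duplicates; if your mesh legitimately uses double-sided faces, you would key on the raw triple instead.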
@Inxa: good luck with this :) - this [link here] is a good starting point for getting to grips with the UDK. It's a nice tool and easy to learn, and it really becomes impressive when you have a lot of your own content to populate levels with!
2/19/2011 1:12:50 PM (last edit: 2/19/2011 2:18:36 PM)