I want to develop my own pipeline further, and in the process look at studios' and individuals' pipelines, software and workflows to improve my own workflow and my general ability to generate a unique art style. Along the way I will be creating a compendium of visual research, images and photos that I like, to help influence and direct this style. I will later be communicating this to my team and working with team members to integrate it into a game.
I will primarily be looking at indie processes for 3D rather than AAA studios. The difference in approach is that AAA studios use a specialist model: within a large studio, each developer has a very specialised set of skills and performs one role, with multiple people working together on one or more assets. Developers therefore need deep knowledge of a specific thing, sculpting or hard-surface modelling for example, while texturing or render setups may be handled by other team members. Indie developers instead wear multiple hats: one developer has a number of skills and often sees an asset through from start to finish by themselves, usually because the team has fewer members and the projects are generally smaller. This can range from an in-depth knowledge of concept art and composition, through hard-surface modelling, all the way to engine work like graphics programming and visual structuring. The latter is the approach I'm more interested in, not only because it is more useful in the team structure I would like to be in, but because I am able to see assets and visuals through from beginning to end. This can be paired with a 2D artist assisting with the visual style or providing concept art as a secondary source of form.
This is a walkthrough of how I work in 3D. It is both reflective and serves to document how I work, so I can adjust the workflow to make it better and to work well with others.
This is the stage where I begin to craft a model or a scene. It usually starts as a vision, or a need to demonstrate something, and I often begin concepting with certain geometric principles in mind: what do I want my vert count to be? What tools will I use to create this? Will I require additional software? Will this be a soft-edged model or a hard-edged model? I may also require drawings or sketches. If the model is for a personal project I can usually visualise it well enough without drawings, but scenes often require a base understanding of composition, the assets within the scene and the camera positions before beginning. For a group project I can work and visualise from other artists' concept work or drawings; model sheets are only necessary for very complicated geometric compositions like organics, where they speed the process up substantially.
A look into the development of Dead Cells and how they use 3D animations and shaders to speed up their workflow
A look at how Hello Games use voxel design in No Man's Sky to build new 3D environments
A generalised look at pipelines from AAA to indie
A window into designing a 3D level by a professional games artist
There are some interesting ways of speeding up processes here, especially when it comes to large elements of a game: Hello Games use voxelisation to build their environments because they wanted procedural generation, and the Dead Cells developers use 3D animation to solve a 2D pixel art problem. These approaches show the importance of thinking outside the box and handling things differently.
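To make the voxelisation idea concrete, here is a minimal sketch of the core operation: snapping free-form surface points into a fixed-size 3D grid, which is the basis of building editable voxel terrain. The 1.0-unit grid size and the point data are my own illustrative assumptions, not Hello Games' actual implementation.

```python
def voxelise(points, voxel_size=1.0):
    """Map each (x, y, z) point to the integer voxel cell it falls in."""
    cells = set()
    for x, y, z in points:
        cells.add((int(x // voxel_size),
                   int(y // voxel_size),
                   int(z // voxel_size)))
    return cells

# Two nearby points collapse into the same voxel; a distant one does not,
# so dense scanned or generated surfaces become a compact, editable grid.
surface_points = [(0.2, 0.9, 0.1), (0.7, 0.3, 0.8), (5.5, 0.0, 2.1)]
cells = voxelise(surface_points)
```

Once geometry lives in cells like these, adding or removing terrain is just inserting or deleting set entries, which is what makes the approach so fast for procedural environments.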
I usually draw my concepts in Photoshop using a graphics tablet, although this doesn't always help with the creative process as my drawing ability can be lacking. I will usually create black and white sketches, using greys to map out detail. I rarely use colour in concept work as I find it distracting; I prefer instead to draw colour palettes and annotate how I wish to apply them.
Concepting in Photoshop and using different greys to simulate lighting and angles is very helpful for modelling.
For modelling I primarily use 3ds Max. I have previous experience with Cinema 4D and Blender but moved away from these, and I intend to move to Maya at the first opportunity as I believe it would speed up my work.
I tend to gravitate towards three main methods of modelling. The simplest, and the one I use most often, I refer to as box modelling: starting from a box or similar compound shape and manipulating it in Editable Poly mode. The tools within this method include Extrude, Bevel and Inset, and it is simply the manipulation of an already existing model.
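A rough sketch of what a tool like Extrude does under the hood: duplicate a face's vertices and push the copies out along the face normal (side walls would then be stitched between the old and new rings). The vertex data and distance here are illustrative assumptions, not 3ds Max internals.

```python
def extrude_face(verts, normal, distance):
    """Return the new ring of vertices, offset along the face normal."""
    nx, ny, nz = normal
    return [(x + nx * distance, y + ny * distance, z + nz * distance)
            for x, y, z in verts]

# The top face of a unit box, pushed straight up by 0.5 units.
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
top = extrude_face(quad, (0, 0, 1), 0.5)
```

Bevel and Inset are variations on the same idea, moving the new ring along the normal and/or scaling it towards the face centre.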
Extrusion is starting with a plane and extruding it along one axis, then wrapping it around to create a model. This is often used for characters and more complex models; I used it last year to create all the character and organic models, as box modelling organic forms is far more difficult.
Spline modelling is the process of drawing lines and generating geometry along them, similar to how vector lines work in Adobe Illustrator but in 3D. It is a similar process to using the pen tool and can be used to create tricky geometry, rope or organically correct objects.
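The curve behind both the Illustrator pen tool and a 3D spline is typically a cubic Bezier segment. This sketch evaluates one; the control points are illustrative, and a real spline tool would sweep a profile along the sampled path.

```python
def bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Sampling the curve gives the path a rope or pipe would be swept along.
path = [bezier((0, 0), (0, 1), (1, 1), (1, 0), i / 10) for i in range(11)]
```

The curve starts exactly at the first control point and ends at the last, with the middle two points acting as the pen tool's handles.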
I use these methods to achieve most of my desired models. There are other modelling tools, but I tend to gravitate towards these as they are generally more effective and I understand them better.
Texturing is by far my weakest part of the process. Although I understand the UV process and have done it before, I rarely use it, and complex models are beyond my skill set at the moment. I understand the layering of textures and their properties: diffuse is essentially the colour layer; the normal map builds detail into it; specular/roughness controls highlights and/or metallic properties; and height maps use either parallax or tessellation. I am able to create textures within Photoshop, and I have started using Substance software, both Designer and Painter. I use Designer to create procedural textures for ease of use and to understand node-based materials for use in engines; this practice is useful and I intend to use it throughout the year in creating this game. Painter is used for specific models and allows users to paint directly onto UV maps or models using the maps I mentioned above.
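To show how a normal map builds detail from height, here is a sketch of the standard conversion: take the height gradient at each texel and pack the resulting surface normal into RGB. The 3x3 height grid and strength value are illustrative assumptions.

```python
import math

def height_to_normal(h, x, y, strength=1.0):
    """Compute a tangent-space normal from a 2D height grid (list of rows)."""
    dx = (h[y][x + 1] - h[y][x - 1]) * strength   # slope along x
    dy = (h[y + 1][x] - h[y - 1][x]) * strength   # slope along y
    n = (-dx, -dy, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    nx, ny, nz = (c / length for c in n)
    # Pack from [-1, 1] into [0, 255], as a normal map texture stores it.
    return tuple(round((c * 0.5 + 0.5) * 255) for c in (nx, ny, nz))

flat = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
centre = height_to_normal(flat, 1, 1)  # flat surface -> (128, 128, 255)
```

That (128, 128, 255) result is why flat areas of a normal map appear as the familiar pale blue.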
Animation is something I gained experience in last year while working on the Star Beasts game, where I was required to create several 3D animations, both mechanical and organic, mechanical referring to things that are not alive, such as cloth. The animation process is very tedious and I still have a long way to go in the field, although I roughly understand the process.
A certain amount of pre-production goes into my animation, depending on what I want to make. For example, a short animation might be 15 frames; fewer frames are quicker to make, though the animation plays less smoothly. Others can be very long: the orrery for Star Beasts took over 300 frames for a full 360-degree cycle, and fortunately it was a mechanical model and wasn't too hard to make.
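The pre-production arithmetic here is simple but worth writing down: frame count divided by frame rate gives play time. The 30 fps figure is my assumption; the 300-frame orrery cycle is the example above.

```python
def play_time(frames, fps=30):
    """Seconds an animation lasts at a given frames-per-second rate."""
    return frames / fps

orrery = play_time(300)  # the 300-frame orrery loop: 10 seconds at 30 fps
short = play_time(15)    # a 15-frame animation: half a second at 30 fps
```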
Bones are used to attach parts of a model to individual elements, essentially giving the model an anatomy. Sometimes even mechanical models use bones, as this makes complete animations easier: if multiple animations are required from the same model, putting in the bone structure allows for dynamic movement with ease later.
In organic models, the bone structures are usually created for ease but can also be quite accurate. When working on the lion model last year I based the leg bones on real felines, as I was attempting to replicate their movement and so had to understand how they move and how their bone structure works.
Although I understand there are easier ways of doing this, and actual animators have developed pipelines that work well for them, it's still a tedious task. Weighting is essentially assigning every vertex of a model to a bone, often to multiple bones, with a value that dictates how much each bone influences that vertex during motion. This often requires hours of reworking and manipulating to get exactly the right weight.
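In code terms, weighting feeds into what is usually called linear blend skinning: a vertex's deformed position is the weighted sum of where each of its bones would move it. This sketch simplifies bones to pure translations, and all the values are illustrative assumptions.

```python
def skin_vertex(vertex, bone_offsets, weights):
    """Blend a vertex's position by its bone weights (weights sum to 1)."""
    x, y, z = vertex
    out = [x, y, z]
    for (ox, oy, oz), w in zip(bone_offsets, weights):
        out[0] += ox * w
        out[1] += oy * w
        out[2] += oz * w
    return tuple(out)

# A vertex halfway between two bones: bone A moves up 2, bone B stays put.
# With 50/50 weights the vertex rises only 1, giving a smooth bend.
bent = skin_vertex((0, 0, 0), [(0, 2, 0), (0, 0, 0)], [0.5, 0.5])
```

Those hours of weight painting are exactly the process of tuning the `weights` values so joints bend smoothly instead of pinching.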
I use keyframe animation to achieve 3D animation: using a frame count and dictating where vertices sit, coordinate-wise, at a given frame. Similarly to weighting this is a tedious process and often requires reworking and restarting elements to get the flow of the animation right. The two major elements that have to work together cohesively are timing and positioning: sometimes the model poses correctly but the frame timing is wrong and requires reworking, while at other times, when creating a run cycle for example, the timing of the feet is perfect but the position of the bones isn't correct and needs redoing.
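Keyframe animation boiled down: store values at certain frames and let the software interpolate the in-betweens. This linear version is only a sketch (real packages use easing curves), and the foot-height keyframe data is illustrative.

```python
def sample(keyframes, frame):
    """Linearly interpolate a value between the surrounding keyframes.

    keyframes: sorted list of (frame, value) pairs.
    """
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + (v1 - v0) * t
    raise ValueError("frame outside keyframe range")

# A foot that rises to height 1 by frame 10, then plants again at frame 20.
foot_y = [(0, 0.0), (10, 1.0), (20, 0.0)]
mid_lift = sample(foot_y, 5)  # halfway up: 0.5
```

Retiming an animation means moving the frame numbers; fixing a pose means changing the values — the two failure modes described above.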
Above I created a procedural texture using Substance Designer. Procedural textures are massively useful when there aren't many texture artists, as they allow you to connect and configure nodes with a few clicks and generate entirely new textures while only having to make one graph.
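A tiny taste of the procedural idea outside Substance: generate a texture from rules rather than painting it, so changing a parameter yields a whole new variant. This checker generator is a minimal sketch; the tile size and colours are illustrative assumptions.

```python
def checker(width, height, tile=4, a=(255, 255, 255), b=(40, 40, 40)):
    """Return a width x height grid of RGB tuples in a checkerboard pattern."""
    return [[a if ((x // tile) + (y // tile)) % 2 == 0 else b
             for x in range(width)]
            for y in range(height)]

tex = checker(8, 8, tile=2)  # an 8x8 texture with 2-pixel tiles
```

Substance Designer's node graphs are this same principle scaled up: noise, tiling and blend nodes wired together, with every output re-generatable from parameters.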
The engines I use so far are Unity and Unreal. I have used Unity more and know it better, and I have become accustomed to Unity's methods of handling light, baking, occlusion, models and general setup.
There are several different kinds of lights: directional lights, spot lights and area lights.
A directional light is essentially the sun, treated as a global light source that casts light from one direction across the entire scene. This is useful for emulating daylight; it has several customisable properties such as intensity and colour, and is affected by post-processing values such as bloom and colour grading.
A spot light does what it says on the tin: it casts light onto an outlined spot from a single coordinate. It has similar properties to the directional light but with more options, such as range. Area lights emit light evenly from a surface into an area; they can be given a physical form but act like an IES light.
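The "outlined spot" of a spot light is just a cone test: a point is lit if the angle between the light's direction and the vector to the point is within the cone angle. This pure-Python sketch is my own illustration, not Unity's implementation, and the positions and 30-degree cone are assumptions.

```python
import math

def in_spotlight(light_pos, light_dir, cone_deg, point):
    """True if point falls inside the spot light's cone (light_dir is unit)."""
    to_point = [p - l for p, l in zip(point, light_pos)]
    length = math.sqrt(sum(c * c for c in to_point))
    if length == 0:
        return True  # the point sits on the light itself
    cos_angle = sum(d * c / length for d, c in zip(light_dir, to_point))
    return cos_angle >= math.cos(math.radians(cone_deg))

# A light at the origin pointing along +x with a 30-degree cone.
lit = in_spotlight((0, 0, 0), (1, 0, 0), 30, (5, 1, 0))    # inside the cone
unlit = in_spotlight((0, 0, 0), (1, 0, 0), 30, (1, 5, 0))  # outside the cone
```

A real engine adds range falloff and a soft edge on top of this same cone check.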
Although technically a subcategory of shaders, post-processing effects are visual settings that affect the rendered outcome of the scene. They are applied after the render and augment it towards the desired form: while renderer shaders directly affect the rendering of the graphics, post-processing is applied afterwards, controlling settings such as focus, colour grading, light bloom, gaussian blur and motion blur.
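Effects like gaussian blur are filters run over the already-rendered frame. This 1D sketch blurs a row of greyscale pixel values with a small gaussian kernel; the kernel weights and pixel row are illustrative assumptions, and a real pass would run in 2D on the GPU.

```python
def blur_row(pixels, kernel=(0.25, 0.5, 0.25)):
    """Convolve a row of greyscale values with a 3-tap gaussian kernel."""
    out = []
    for i in range(len(pixels)):
        left = pixels[max(i - 1, 0)]                 # clamp at the edges
        right = pixels[min(i + 1, len(pixels) - 1)]
        out.append(kernel[0] * left + kernel[1] * pixels[i] + kernel[2] * right)
    return out

# A single bright pixel softens into its neighbours after one pass --
# the same mechanism that makes bloom glow around bright areas.
softened = blur_row([0, 0, 100, 0, 0])
```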
Shaders are rendering and visual scripts used to control how the visual elements of the game are drawn. This can be anything from defining a cel-shaded light render to vertex tessellation that creates wavy grass; shaders define how things render. Unreal and Unity both have default shaders that render objects within the scene, and most of the time this is enough to create a game, but more complex art styles require custom shaders to achieve the desired look.
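The cel-shaded look mentioned above comes from quantising the diffuse lighting term into flat bands instead of a smooth gradient. This is only a sketch of the core idea; a real shader would run per-pixel on the GPU in HLSL/ShaderLab, and the band count and lighting values are illustrative assumptions.

```python
def toon_shade(n_dot_l, bands=3):
    """Snap a 0-1 diffuse lighting value to one of a few discrete bands."""
    clamped = max(0.0, min(1.0, n_dot_l))
    band = min(int(clamped * bands), bands - 1)  # which band we fall in
    return band / (bands - 1)                    # brightness of that band

# Smoothly varying lighting collapses to flat steps of 0.0, 0.5 or 1.0.
stepped = [toon_shade(v) for v in (0.1, 0.4, 0.9)]
```

The wavy-grass example works on the other end of the pipeline, displacing vertices before rasterisation, but both are the same idea: a small program deciding how geometry and light become pixels.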
Lights can be real-time or baked. Baked lights are less costly on the hardware, but unlike real-time lights they cannot move dynamically.
A lot of companies build their own middleware specifically for their needs, which aids the prototyping phase of 3D conceptualisation. For example, Giant Squid Studios use middleware for developing quick environment layouts for their underwater game Abzu. This video looks at that middleware at 14 minutes in:
Here I took a look at my own pipeline and what some others are doing. Although I didn't find many specialised new techniques I could apply, save for some I saw at the VA for whiteboxing and prototyping, I learnt how to think a little differently about solving a problem. It made me think about how I like to apply object-oriented design to modelling, reusing and repurposing assets at a higher level.