Things to pay attention to when doing realistic rendering in Unity


2020 has been a really bumpy year. Individuals and countries alike have been through hard times, and it has made the ordinary feel precious.

Sadness aside, I have learned a lot this year, both knowledge and skills; learning is not only about enriching myself, it also helps me see and understand more of this world.

Oh right, emmm, my first cross-border collaboration. It wasn’t far away, it was in Japan, and I actually got to work with the legendary Keijiro. I’m so lucky (ㄒoㄒ)

Haha~ enough showing off, moving on!

Today, let me talk about some things to watch out for when implementing realistic rendering in Unity — problems I actually ran into myself~

Before we talk about building the image itself, let me briefly introduce how rendering works. Current real-time engines still rely on rasterization. A friend asked what rasterization rendering is. Broadly, there are two kinds of rendering you hear about most: ray tracing and rasterization.

  • Ray tracing: light source –> object surface –> reflection and refraction … ×N bounces –> camera
  • Rasterization: light source –> object surface –> camera

Ray tracing essentially “lets the light decide everything”: it is not an approximation but a faithful simulation of how light bounces. It is mostly used in film compositing, film and TV animation, interior design, and similar fields, because the computation is so heavy that a full ray tracing pipeline can only be rendered offline. With today’s technology it is still hard to achieve every feature of ray tracing in real time~~ But I believe this barrier will be fully broken through before long.

Okay, back to rasterization. What is “rasterization”? For performance, light is by default only reflected once, so the whole lighting calculation can be done in two-dimensional image space. That two-dimensional grid of pixels is the so-called “raster”.

The example below illustrates rasterization best: when a spotlight shines down, there is no bounced light on the character.

In short, rasterized lighting is not physically realistic. But being unrealistic is not a problem; on the contrary, it leaves plenty of room for artistic creation and for developing your own stylized rendering.

The beauty of an image is more about capturing atmosphere and creating emotional resonance.

Of course, you can also rely on your own sensitivity to lighting to approximate realism by hand.

To sum up: compared with ray tracing, rasterized rendering needs a lot of experimentation and faking to get close to reality.

The fastest way to approach a ray-traced look is baking. For small rooms or other small scenes, baking can easily achieve a ray-tracing-like result.
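If you like scripting this step, here is a rough editor-only sketch (my own illustration, not from the project): it flags the selected objects as contributing to baked GI and kicks off an async bake. The menu path is made up, and the script needs to live in an Editor folder.

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch (place in an "Editor" folder).
// Marks the selected objects as Contribute GI and starts an asynchronous lightmap bake.
// Baked objects must be static for GI, which is also why they can't move afterwards.
public static class QuickBake
{
    [MenuItem("Tools/Quick Bake Selected")]   // hypothetical menu path
    static void BakeSelected()
    {
        foreach (var go in Selection.gameObjects)
        {
            GameObjectUtility.SetStaticEditorFlags(go, StaticEditorFlags.ContributeGI);
        }
        Lightmapping.BakeAsync();
    }
}
```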

This method has drawbacks, though: it is too static. Because lightmaps are baked offline, none of the baked objects can move afterwards. If the scene is large, baking becomes impractical, and too many or too large lightmaps will hurt real-time performance. And as the number of objects and the number of bounces grow, baking time multiplies.

Anyway, I… never found a good way to bake a really large scene /(ㄒoㄒ)/~~

Thinking about it for a moment, these are the games I’ve played whose simulated lighting left the deepest impression on me:

  • Assassin’s Creed: Unity
  • Red Dead Redemption 2
  • The Order: 1886

Of course there are many more; these are just the examples that impressed me most. These games presumably are not relying purely on baked lightmaps.

Yes, look at those graphics wizards~ Even though rasterization is not physically accurate, their powerful graphics techniques still deliver real-time images that rival ray-traced rendering.

Of course, Unity HDRP also has plenty of tricks to approximate the ray-traced look as closely as possible; an earlier article of mine introduces them~

Emmmm, it seems I’ve digressed.

Back to the topic. Here is my summary — purely personal experience — of things to pay attention to when implementing the visuals of a rasterized real-time rendering project in the Unity engine~

First of all, almost every link in the chain that composes the image is inseparable from the others. To simplify, I’ll list the main ones; of course there are more than this, but these are the practical ones. The perceptual aspects behind the pixels I’ll leave aside for now:

Characters, environment, animation, special effects, shaders, post-processing

Strictly speaking, the four that actually produce pixels on screen are the characters, environment, animation, and special effects; shaders and post-processing then make those pixels look cooler.

Okay, here is the point. After thinking it over, the key is a single word: “whole”. The image is a whole, formed by the combination of every link — the character’s performance, the construction of the scene, the support of animation, the coordination of special effects, and so on. None of them can be separated from that word.

How do you define “whole”?

It’s like a barrel of water. Each of the parts above is a plank, and together they form the barrel. The water in the barrel is the visual pleasure. If even one plank is broken, it limits how much water the barrel can hold — a fatal blow, no matter how tall the other planks are.

Hmm, that’s roughly the idea. Okay, I’ll use a video I made earlier as an example and talk through some of the problems I ran into~

Video on Bilibili:

https://www.bilibili.com/video/BV1C54y1r71X

Now I’ll go through the problems I encountered one by one.

Characters, environment, animation, special effects, shaders, post-processing

Character

For a character to win players’ love and recognition, it either has to be beautiful or has to have a strong personality. That gets into character concept design, which is a big topic, so I won’t go into it here.

Let me talk about the problems I encountered:

This one should count as a technical problem of rasterized rendering: I wanted my character to have an Asian face.

However, Asian faces are actually very hard to present well in the engine, because the facial features are less pronounced and read as flat from both the front and the side.

Especially at 5–10 meters from the camera, once mipmapping kicks in, the texture detail of the facial features is almost wiped out. At the same time, a face with less pronounced features easily shows light leaking or strange shadow patches once lit, and these problems make the character look very ugly.

So the question is: how do you solve it?

Unlike a ray-traced still frame, real-time rendering has to hold up under all kinds of atmospheres and angles. You have to look at the character as a whole: it may appear indoors or outdoors, in daytime or at night, and the camera may be very close or very far away. So don’t stare at just one angle while working; look at the problem from every angle, and set yourself some constraints during production.

  • The character’s textures must follow physically based materials and look natural under different lighting in different scenes
  • The character’s model must read clearly and look good at different distances

Well, with these constraints, let’s talk about the solution.

The solution I came up with is to stylize the character a little.

01 Makeup

Since we’re using physical materials, use a physical solution: nose shadow, eye shadow, eyeliner, lip gloss, glitter, blush, eyebrow lines, and so on, to strengthen the three-dimensionality of the face, reshape and recolor the features, and hide flaws. How do you paint it? However you would paint real makeup; if you can’t, makeup tutorial videos are a good reference, haha. Of course the makeup itself also needs proper material response — lipstick, glossy lip color, matte lip color. The main problems makeup solves are the loss of detail in key areas after mipmapping at mid-to-long range, and the lack of three-dimensionality in Asian facial features.

02 Plastic Surgery

Exaggerate the proportions of the facial features a little. You can also add double eyelids and aegyo sal (the little ridge under the eye) — features with clear structure in the model — while slightly enlarging the eyes. The eyes are the windows of the soul; most people look at the eyes first, so they matter most. And while you’re at it, add a nice pair of cosmetic contact lenses. Perfect!!

03 Fill light

Set up a separate light layer for the character and give them a dedicated, weak fill light. This addresses scenes where the lighting angle is unflattering and makes the character look ugly; a fill light softens the problem.
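A rough sketch of the light-layer side of this from code (my own illustration, assuming Light Layers are enabled in the HDRP asset and that the fill light’s Light Layer is set to the same layer in its Inspector; the layer index is just an example):

```csharp
using UnityEngine;

// Minimal sketch of "a separate light layer for the character".
// Assumes HDRP Light Layers are enabled, and the dedicated fill light's
// Light Layer is set to the same layer in the Inspector.
public class CharacterLightLayer : MonoBehaviour
{
    // Hypothetical convention: rendering layer bit 1 is reserved for characters.
    [SerializeField] private int characterLayerIndex = 1;

    void Awake()
    {
        uint mask = 1u << characterLayerIndex;
        foreach (var renderer in GetComponentsInChildren<Renderer>())
        {
            // Keep the default layer so normal scene lights still hit the character,
            // and add the character layer so the dedicated fill light affects it too.
            renderer.renderingLayerMask |= mask;
        }
    }
}
```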

Haha, these are the tricks I’ve collected so far to make up for how Asian faces render.

Next comes another problem: hair. Many of you have probably run into the hair sorting problem. I sort it manually, because HDRP does not support multi-pass shaders, so I split the hair into a two-layer hair model to handle the rendering order.
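Just to illustrate the ordering idea, here is a generic sketch (not the exact setup from my project): it assumes the inner, dense layer renders alpha-tested and the outer, wispy layer renders transparent, drawn afterwards.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch of two-layer hair ordering: inner layer in the alpha-test queue,
// outer layer in the transparent queue so it blends over the inner one.
public class HairLayerOrdering : MonoBehaviour
{
    [SerializeField] private Renderer innerHair;  // hypothetical reference to the inner hair mesh
    [SerializeField] private Renderer outerHair;  // hypothetical reference to the outer hair mesh

    void Start()
    {
        innerHair.material.renderQueue = (int)RenderQueue.AlphaTest;
        outerHair.material.renderQueue = (int)RenderQueue.Transparent;
    }
}
```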

Tip: when turning a character into a prefab, keep the model as a child object of the prefab root. Do not save the skinned-mesh model itself directly as the prefab. The benefit is that if the model is adjusted or replaced, or a script on it is removed, the prefab will still update correctly.

Environment

The environment really breaks down into scene assets and lighting; I’ll refer to them collectively as the environment. I think the environment is extremely important for the image. Freeze any frame and the environment occupies at least 60% of it. If anything in the environment feels off, immersion breaks immediately, which is fatal.

The environment is the character’s best frame of reference and what lets you sink into the role. If the assets fit the worldview, then together with text or animation they pull you right into that world.

In level design, the environment is also a good guide: lighting contrast, for example, can steer the player’s eye and keep them from getting lost.

The environment also reflects mood and pacing — danger, relaxation, joy, and so on; combined with music and effects, it drops you into the situation instantly.

Of course, these few points alone aren’t enough to build an environment people will never forget. I’ve picked up some experience on that recently and will write about it another time, not here~

That was a brief introduction to what the environment does. Now the problems I ran into while building it:

01 Scale

When building the environment, keep a character in the scene as a reference so the proportions stay correct and you don’t end up with a coke bottle bigger than a person, haha~ I like to create a 1.6-meter cube as the reference object (a quick sketch of that is below). Proportion matters a lot, because people are very sensitive to familiar things: grass, bottles, bricks, tables, and so on. If those objects are in the scene with the wrong scale, viewers sense it in an instant.
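A throwaway sketch for spawning that human-height reference (assuming 1 Unity unit = 1 meter; the exact proportions are arbitrary):

```csharp
using UnityEngine;

// Quick human-height scale reference, assuming 1 Unity unit = 1 meter.
public static class ScaleReference
{
    public static GameObject CreateReferenceCube()
    {
        var cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.name = "ScaleReference_1m60";
        // A default cube is 1 m on each side; stretch it to roughly human proportions.
        cube.transform.localScale = new Vector3(0.45f, 1.6f, 0.25f);
        // Lift it so the "feet" sit on y = 0.
        cube.transform.position = new Vector3(0f, 0.8f, 0f);
        return cube;
    }
}
```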

02 Lighting

Nobody goes to photograph Mount Lu at night: it’s dark, and no matter how good the scenery is, if the light doesn’t work it won’t look good. So how do you build good-looking lighting? First, objects with emissive textures need matching light sources: anything in the scene that clearly reads as a light source — a billboard duck, a searchlight, and so on — should actually cast light. But, and this is what I really want to say, you should work backwards from the whole. Look at the overall frame first and decide what warm–cold contrast you want to create and where light should build a sense of depth. Once that is clear, place the lights first, and then place the objects that would plausibly emit that light around them. That way it looks good and still feels believable~~

03 Layering

A good-looking image needs layering. Light and dark, cold and warm lighting can sketch out the depth of the frame. Unity HDRP also ships ready-made atmosphere components that do this well, such as Fog and Density Volume; they are great tools for layering and for pinning the visual focus exactly where I want it.

04 Material

For realistic rendering, materials have to be right: metal should read as metal, a concrete floor as a concrete floor. In other words, the metallic and roughness maps of the model need correct values. Thinking about it, this is probably what people mean by “image quality”. Most people have only a vague idea of beauty, but everyone has firm expectations about materials and judges them by common sense, so material correctness matters a lot~

05 Overall

Always consider the whole. Don’t end up with an environment that looks fine on its own but, once the character is dropped in, is overexposed or too dark, or the proportions are off. While building, keep checking whether the character sits naturally in the environment, whether the light intensity flatters it, whether the scale is consistent, and so on. In short, start from the whole, adjust globally, and fine-tune the balance at the end. That’s it.

If this were a game environment, with different genres and level design in the mix, the considerations would be more complicated, so I won’t go into that here.

Tip: place objects in the scene as prefabs, because there will definitely be adjustments later — LOD distances, collision layers, and so on — and without prefabs, batch modification becomes a real pain.

Animation

Animation is the best vehicle for narrative, and also the best tool for bringing the image to life!

It’s not just character animation: dynamic objects in the scene add to the atmosphere too. Passing vehicles, pedestrians, swaying power lines and the like all increase the believability of the frame.

Of course, animation also has rhythm and hierarchy. Within a single frame you still need to judge the effect as a whole: if everything moves, there is no visual focus. A mix of motion and stillness gives the best result~~

For example, a walk animation combined with a camera animation can form a shot of someone moving against the flow.

Okay, let’s talk about the character animation.

Character animation is hard to produce convincingly, especially animation of human behavior.

Because people are so familiar with people: the slightest timing problem or physically implausible motion in a behavior animation is felt immediately.

Of course, there are technical ways to tackle this — motion capture, for example — which help enormously with this kind of animation and save a lot of time and cost.

The catch is that motion capture has its own limitations; after all, the capture space and the virtual space are two different worlds.

First, the capture stage is completely flat ground, while the virtual environment may have steps, slopes and so on, so the captured animation will inevitably clip through the geometry.

Second, once it’s in the scene, the character’s head direction or eye line may be off, and those, along with the pose, need adjusting to suit the shot.

Also, the mocap actor’s body never matches the virtual character exactly: the character may be taller, with longer limbs and different shoulder and hip widths.

How do you solve this?

Aha, Unity has plenty of ready-made tools for this.

For example, the Muscles settings on the humanoid Avatar can adjust how far the bones stretch, and inverse kinematics tools such as Animation Rigging and Final IK handle the rest with ease — see the sketch below.
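As one illustration, fixing the eye line toward a shot-specific target can be done even with Unity’s built-in humanoid IK. This is only a minimal sketch (it assumes a Humanoid rig with IK Pass enabled on the Animator layer; it is not Animation Rigging or Final IK themselves, just the simplest built-in equivalent):

```csharp
using UnityEngine;

// Sketch: layer a gaze correction on top of captured motion with built-in humanoid IK.
// Assumes a Humanoid rig and "IK Pass" enabled on the Animator layer.
[RequireComponent(typeof(Animator))]
public class GazeCorrection : MonoBehaviour
{
    [SerializeField] private Transform lookTarget;   // hypothetical target, e.g. the camera or another character
    [Range(0f, 1f)] [SerializeField] private float weight = 0.6f;

    private Animator animator;

    void Awake() => animator = GetComponent<Animator>();

    void OnAnimatorIK(int layerIndex)
    {
        if (lookTarget == null) return;
        // weight, bodyWeight, headWeight, eyesWeight: mostly turn the head and eyes,
        // leave the captured body animation almost untouched.
        animator.SetLookAtWeight(weight, 0.1f, 0.8f, 1f);
        animator.SetLookAtPosition(lookTarget.position);
    }
}
```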

I won’t go into the details of these tools here; there are plenty of ready-made tutorials.

For physics such as skirts and hair, hand-keyframing is simply not practical.

So what do we do? Again, Unity has ready-made tools: components such as Magica Cloth and Dynamic Bone handle these physics simulation problems very conveniently.

Of course, some problems would simply take too long to solve properly, such as a get-out-of-the-car animation: making the character match the car exactly would take ages, and I don’t think it’s worth it, so I just cut around it with the camera.

Tip: remember to enable Update When Offscreen on the SkinnedMeshRenderer, so that when the bones move far from the original bounds the mesh does not get culled by the camera.
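If you prefer to set that from code for every character, a tiny sketch:

```csharp
using UnityEngine;

// Sketch: enable Update When Offscreen on every skinned mesh under a character,
// so bone-driven movement outside the baked bounds doesn't get frustum-culled.
public class KeepSkinnedMeshesUpdated : MonoBehaviour
{
    void Awake()
    {
        foreach (var smr in GetComponentsInChildren<SkinnedMeshRenderer>())
            smr.updateWhenOffscreen = true;
    }
}
```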

Special effects

Special effects bring visual freshness; their biggest roles are probably supporting the visuals and building atmosphere, right?

Effects that support the visuals.

These effects usually appear alongside animated models, with strong rhythm and strong visual impact; pair them with some post effects and they pull the eye very easily.

In production — how to put it — you really need to think about the overall rhythm and the brightness. The brightness of an effect in particular has to be held very carefully: too dark and it doesn’t read; too bright and it hurts the eyes and wrecks the overall frame.

Effects that build atmosphere.

These are generally environmental effects such as rain and falling leaves. Their main job is to enrich the visuals and further strengthen the immersion and density of the frame.

Weather is also impossible without effects; together with matching shader work, rain or snow is easy to create.

From making effects, here are some of the problems I ran into:

01 Balance

To keep effect colors from blowing out or going muddy, it’s best to set a range for the HDR intensity of effect colors during production. At the same time, each environment’s Post Processing — Bloom, Color Adjustments, Exposure and so on — affects how those colors end up looking, so the values between scenes shouldn’t differ too much. Otherwise you get this: the effect looks great in one scene, but dropped into another it’s barely visible, or it flares so hard you can’t open your eyes. A sketch of clamping the HDR intensity follows below.
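A small helper along those lines (the 1–8 intensity range is just a hypothetical project convention, not a Unity constant):

```csharp
using UnityEngine;

// Sketch: keep an HDR effect color's intensity inside an agreed range so the same
// effect neither disappears in a dark scene nor blows out under heavy Bloom.
public static class VfxColorUtility
{
    public static Color ClampHdrIntensity(Color hdrColor, float minIntensity = 1f, float maxIntensity = 8f)
    {
        // Treat the largest channel as the current "intensity" of the HDR color.
        float intensity = Mathf.Max(hdrColor.r, hdrColor.g, hdrColor.b);
        if (intensity <= 0f) return hdrColor;

        float clamped = Mathf.Clamp(intensity, minIntensity, maxIntensity);
        float scale = clamped / intensity;
        return new Color(hdrColor.r * scale, hdrColor.g * scale, hdrColor.b * scale, hdrColor.a);
    }
}

// Example use on a Particle System's start color:
//   var main = explosionParticles.main;
//   main.startColor = VfxColorUtility.ClampHdrIntensity(main.startColor.color);
```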

02 Intersection

Right now, fluid-like effects are basically all faked with flat quads and textures; it saves performance while still selling the feel of a fluid. That means the effect shader has to handle sorting and the quads intersecting scene geometry. Turning on Soft Particles solves most of it; in a custom shader, just use the scene depth to fade out the texture’s alpha near the intersection.

Tip: when building effects with the Particle System, try the Sub Emitters, Trails, and Custom Data modules; they can produce surprisingly good results.

Shader

Shaders are still very important; their biggest role is to support the image. Going back to the earlier word “image quality”: most people have no clear concept of beauty, but they are very familiar with how materials should behave, so make sure materials are physically correct.

The shader’s job is to realize those materials: to simulate what the object should be made of and how that material behaves in the current atmosphere. That does a great deal for perceived image quality and for immersion in the scene.

Emmm, I didn’t really hit problems here; most standard shader effects have ready-made examples.

How to put it: for standard material work, there is a fairly clear ceiling.

If you want to push quality higher, it mostly comes down to models and textures — keep raising their fidelity.

But if you want something more inventive, experiment: combine different attributes into cool effects, haha, let your imagination run. Of course, these showy shaders still have to respect the whole; splashed everywhere, they wreck the overall visual hierarchy.

Tip: if you build shaders with Shader Graph, make some general-purpose Sub Graphs to save time.

Post-processing

Post-processing is very important for polishing the image, especially for covering up some of rasterization’s shortcomings.

For example, Bloom, AO, Screen Space Reflection — adding these effects brings you closer to a ray-traced look~

You can also add your own Custom Post Process effects. In my scene I added screen-space rain and a glitch effect: since it’s raining, there should be water drops on the lens, and glitch fits the punk theme better, which strengthens the atmosphere I’m after.

But hey, there are still things to watch out for.

Post-effect parameters must not be pushed too hard, because post is a screen-space effect rendered on top of everything: every pixel of the characters, scenes, and effects you made earlier gets run through these stacked post effects again.

So when you tune the parameters, think of the frame as a whole.

Especially with multiple scenes, each with its own lighting and its own Post Process settings, certain parameters need agreed ranges. Otherwise you will definitely hit this: why do this character and this effect look great in one environment and so ugly in another? A sketch of enforcing such a range is below.
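As an illustration, here is a hedged sketch that clamps Bloom into an agreed range across several scenes’ Volume profiles (assuming HDRP’s Bloom volume component; the 0.1–0.4 range and the field names are just examples):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.HighDefinition;

// Sketch: keep Bloom within an agreed range across every scene's Volume profile,
// so characters and effects read consistently from one environment to the next.
public class PostProcessRangeGuard : MonoBehaviour
{
    [SerializeField] private VolumeProfile[] sceneProfiles;  // hypothetical list of per-scene profiles
    [SerializeField] private float minBloom = 0.1f;
    [SerializeField] private float maxBloom = 0.4f;

    void OnValidate()
    {
        if (sceneProfiles == null) return;
        foreach (var profile in sceneProfiles)
        {
            if (profile != null && profile.TryGet<Bloom>(out var bloom))
            {
                bloom.intensity.overrideState = true;
                bloom.intensity.value = Mathf.Clamp(bloom.intensity.value, minBloom, maxBloom);
            }
        }
    }
}
```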

Tip: try Local mode for post effects (a local Volume); it can create a very different feel.

Alright, I’m almost done writing. I’m touched that you actually read this far (.^▽^)

Let me summarize the key points.

Holistic thinking applies to image-making too. The image breaks down into several links in a chain, each affecting the next, all interlocking; you need to think about every link from the perspective of the whole.

To reuse the earlier example: the image is the big whole, the big barrel, and each link is a plank — a small whole of its own. Start from each small whole, and how tall the barrel ends up, and how much water it can hold, is up to you.


Source: Unity official platform
Original: https://mp.weixin.qq.com/s/Z7pbIsc4T09WkaOILvFuJw
