

Published 2022-08-25 16:24


Hi! I am Pontus Ryman. I've been working as an Environment Artist in the AAA game development industry for close to 12 years now.

I have had an interest in video games and 3D graphics since an early age, and since getting hooked, it was always a goal for me to work with 3D art. I started early, making Half-Life 1 maps in the old Hammer editor, moving on to making textures and mods for Battlefield 1942, and eventually working on Black Mesa when it was still a mod in the Source engine.

Eventually, I studied Digital Graphics, and soon after graduating, I managed to land a position as a 3D Artist at DICE.


I spent my first 8 years at DICE, working on games such as Battlefield 3, 4, and 5 and Star Wars Battlefront 1 & 2. I've been fortunate enough to have gathered a lot of experience throughout the years, and with the games relying heavily on photogrammetry, I have had the chance to be a part of developing cutting-edge tech and workflows in the field.

Since the establishment of fully photogrammetry-based environments on Battlefront 1, the core workflows and principles have not really changed; only the tools and specific gear have changed to improve individual steps. Photogrammetry environments in games still benefit from the rule of curating a small but well-built set of photogrammetry content that can cover an entire biome's core visual look across multiple maps. Mix in a few map-specific elements and you can spice up the same content to create sub-variants of the same environment.

The Forests of Valencia Project
The Forests of Valencia project came about as a follow-up to my last project, Summer Archipelago. At the time, I just needed something to do while I was on parental leave. Spending time with our baby has been incredible, but I had to have something productive to do in the evenings when my daughter was asleep. My father had just bought a house in Spain, and we were going there for vacation. I thought, why not combine a vacation trip with a photogrammetry project with a theme that differs from the last project but follows the same framework?

In this project, I wanted to recreate the natural forests in the Valencia region where I captured the content. While I was out and about doing the actual scanning, I also gathered a lot of reference, not just of vegetation but also of lighting, composition, asset placement, and even sound.

I would later have a lot of use for these references to accurately assemble the content into a believable world. Some of my shots are even close to 1:1 of the reference images, where the images can almost act as concept art if the composition and lighting are striking enough.

Compared to the last project which was rendered in Unreal Engine 4, I wanted to switch over to 5 to test the new features, especially Lumen which has produced some amazing results.



The equipment I used was quite a basic set of cameras and supporting gear, based on the kit we used when going on photogrammetry trips for the AAA games I have worked on.

The main kit includes:

Canon 6D MKII;
X-rite Color Checker Passport;
24mm lens with image stabilizer for the majority of the scanning;
70-200mm lens with image stabilizer for references and scanning at a distance;


I also had a large white cloth and a small blue cloth, both of non-reflective fabric, sturdy and easy to clean.

This time, I also took some extra gear which I do not always bring but can be handy at times:

Canon 750D;
18-55 mm lens with image stabilizer;
Polarizing Lens cap;
Gorilla tripod.



In-engine collection of some of the gear:

This is the kit I use but photogrammetry these days can be done with any type of camera and the cost of getting into using the technology is significantly lower than a few years ago.

If there is any one piece of gear I would recommend getting, it is the color chart: having a correct color reference for your scanned assets is incredibly valuable once you start to build up a library of similar assets.


A simple but effective kit:

Lighting Conditions
When scanning, the rule of thumb is to always avoid sunlight and rain/wet weather. While scanning in sun is possible, it introduces a lot of cleanup work in the content creation phase that sometimes simply cannot be fixed in a good way and will leave artifacts. So avoid the sun altogether if possible and stick to the shadows!

Wet weather can cause reflections from different angles and should be avoided since it can throw off the photogrammetry software alignment. A wet surface is also darker than a dry surface in the Albedo since the surface has soaked up the water, which does not accurately represent the right values you want for a balanced scene.

The best possible scenario is an overcast sky; this gives an even lighting condition without color bounce from nearby surfaces. While you can scan in shade or under temporary cloud cover during a sunny day, there can be shifts in the color cast on objects from indirect lighting. Generally, this is not a big issue, and a color chart image can help calibrate it correctly, but it's good to keep in mind.


Capturing Content
Capturing photogrammetry content is a fairly straightforward process and the basics are easy to understand: take a lot of images from every possible angle of a real-life object so that the photogrammetry software can match up the images to create a 3D point cloud and generate mesh and textures from it.

In my process, I used the equipment I talked about earlier, and RealityCapture is always my software of choice when it comes to running the images through for a 3D model.

When capturing content there are a few key things to be mindful of, and while they are easy in concept, a mix of circumstances can make it challenging to uphold some of them.



These key things are:

Keep your images as sharp as possible, with high shutter speed and high f-stop values, preferably low ISO;
Cover the asset with as many images and angles as possible;
Make sure the asset captured is not moving in any way between images;
Check the histogram so you do not hit absolute black or absolute white; if you do, there will be no information to gather in those parts of the images, and you cannot compensate when calibrating the images before running them;
Always take an image of a color chart with your asset.
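The histogram check in the list above can be sketched as a tiny script. This is a hypothetical helper of my own, not part of the author's pipeline, assuming an image is already loaded as a flat list of 8-bit values:

```python
def clipping_report(pixels, threshold=0.01):
    """Return (black_fraction, white_fraction, ok) for an 8-bit image.

    `pixels` is any flat iterable of 0-255 values. An image is treated as
    safe to calibrate if fewer than `threshold` of its pixels sit at the
    absolute ends of the histogram, where no detail can be recovered.
    """
    pixels = list(pixels)
    n = len(pixels)
    black = sum(1 for p in pixels if p == 0) / n
    white = sum(1 for p in pixels if p == 255) / n
    return black, white, black < threshold and white < threshold
```

In practice, the camera's live histogram and highlight-clipping warning serve the same purpose in the field; the point is simply that clipped regions carry no recoverable information.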


Sharpness is incredibly important for the alignment of the images and will also affect the detail quality of the high poly mesh that is generated. Generally, this means having the correct focus point when scanning and also keeping your f-stop away from lower numbers. I usually try to stick between f/9 and f/11 on my Canon 6D. It can be hard to keep the f-stop high, however, as lower numbers give you a brighter image in low-light situations, and balancing it with a higher ISO can introduce noise, which is not preferable for your image.

A lower shutter speed can give you a brighter image as well, since it lets more light into the lens, but that risks sharpness loss, since you are likely to have micro-movements of your hand while taking images. In very low-light situations, it is recommended to use a tripod to stabilize a low shutter speed. Scanning an object can take longer, but it's important to stick to the sharpness rule as much as possible.


With a lower shutter speed, there is a risk of movement blurring

For hand-held scanning, I avoid going below 1/200 in shutter speed, and if I need to move the camera away from a position where I can look through the viewfinder (when I need to photograph hard-to-reach places high above my head or low to the ground), I try to keep it even higher, as keeping the camera steady becomes even harder.
It is possible to make up for lower sharpness images with a lot of image coverage/overlap, at least when it comes to alignment, but the smaller details will still not come out right if many of those images are still not sharp.
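The tradeoff between f-stop, shutter speed, and ISO described above follows the standard exposure value relation. A small illustration of my own (not from the author): two settings with the same EV give the same brightness, so raising the f-stop for sharpness must be paid for with a slower shutter or a higher, noisier ISO.

```python
import math

def exposure_value(f_stop, shutter_s, iso=100):
    """ISO-adjusted exposure value: EV = log2(N^2 / t) - log2(ISO / 100).

    Higher EV means less light reaches the sensor for the chosen ISO.
    Settings with equal EV produce equally bright images.
    """
    return math.log2(f_stop ** 2 / shutter_s) - math.log2(iso / 100)
```

For example, f/11 at 1/200 s ISO 100 matches f/11 at 1/100 s ISO 50: the slower shutter gains one stop of light, and halving the ISO gives it back.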


It's good to keep in mind that if a feature or shape of a scanned asset does not appear in any photo, it will never exist on the generated 3D mesh; each feature needs at least a few images to be reconstructed. Nothing unique will be "generated" through extrapolation or procedural solutions in RealityCapture or other photogrammetry software; at most, stretched textures and flat geo will bridge empty pockets of missing information.

Scanning vegetation is its own beast and requires a different approach. I lay out vegetation on a large white cloth, arranged as it will be intended in the engine as alpha cards, in order to either scan it directly or use it as a reference for creating the alpha cards.


Sometimes using a blue cloth sheet can be efficient as it's easier to mask out in Photoshop, but the blue color can also cause problematic blue light bouncing up on the vegetation that is placed on the board. I have gone through trial and error and opted for white over blue or black sheets but it's really up to taste here. A black sheet is easier to mask and does not bounce blue onto the subject but it can become very dark instead making capturing harder.

Larger leaves and thicker sticks are easier to scan directly, as the photogrammetry software has clearer details to align to, so I try to get a 3D mesh scan out of those directly; it saves a lot of time and the quality is always better. But for vegetation with thinner details, such as pine-like trees, bushes, or grass, the risk of movement in the wind while scanning, or simply the thinness itself, can create a lot of issues in the final 3D model. In these cases, I use just a top-down image as a reference and then model the branch high poly in Blender. Sometimes the branch can be scanned but the thinner ends do not turn out well; in those cases, a manually made high poly model can be created, baked down, and combined with the scanned part of the branch.

Running Your Assets in RealityCapture and Cleanup
When creating the content from a photogrammetry scan there are a few steps to clean the asset and prepare it for in-engine use.

The amount of cleaning and prepping depends on how good the scan is and the conditions the scan was taken in. In the best-case scenarios, there is barely any cleanup to do at all.

The first step is to calibrate the images based on your color chart. The most straightforward way to do this is to simply white balance towards the white points on your color charts, this is a quick and easy way to put your asset in a good ballpark. However, if you want to get very exact calibration, calibrating towards the entire color chart color swatch is preferred.
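The quick white-point calibration described above can be sketched as a per-channel gain. This is a simplified illustration of my own (real calibration tools fit all the chart swatches), assuming pixels as (r, g, b) tuples in 0-255 and `chart_white` as the average color sampled from the chart's white patch:

```python
def white_balance(pixels, chart_white):
    """Scale each channel so the sampled chart-white patch becomes neutral.

    Gains are chosen so all three channels of the white patch land on its
    green value (green is typically the best-exposed channel).
    """
    target = chart_white[1]  # anchor the gains on the green channel
    gains = [target / c for c in chart_white]
    return [tuple(min(255, round(v * g)) for v, g in zip(px, gains))
            for px in pixels]
```

Applying this to every image of an asset puts the whole set in the same neutral ballpark before alignment.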


Exact calibration can be valuable if you have complex colors in your scans that cross multiple assets, such as rocks with gradients of strong color and if they are captured in different lighting conditions (even in shadow, the surrounding lighting condition can cast colored bounce light).

Calibrating and aligning colors is also a good opportunity to remove some of the brightest and darkest parts of an image; this helps even out the albedo toward a more mid-gray value.

When the images have been calibrated, it's time to run the asset in your photogrammetry tool, in this case, RealityCapture.


For the most part, I just run on medium image overlap in the align settings. If the asset for some reason does not align all the images, testing the different align options can be useful, as can trying the down-res option for the images – it may help the alignment find other images and can sometimes resolve issues and connect detached components.

If this still does not work, and by some lucky shot you have extra images of the same asset in the background from scanning another asset nearby, try adding those extra images into the mix – it might be just what RealityCapture needs to find and align the components.


I run my models on High Detail from the point cloud to 3D mesh to get the most out of the asset. The quality difference between Normal and High varies depending on your scan, so I usually run at High just to be on the safe side.

When the High model is done, it's time to trim and filter out excess geo that will not be needed. There are two reasons to do this:


You don't need that geometry cost when the mesh is exported; this can shave off millions of triangles and will also make the mesh easier to work with outside of RealityCapture;

If the model will be UV-mapped and textured, the more UV space you give the area you will bake from, the better. This point does not matter, however, if you bake from Vertex Colors.

Once filtering is done, I bake the texture to the high poly, either as a UVed high poly to an actual image texture (at the highest resolution possible) or to Vertex Colors; it's case by case, but more often than not I use Vertex Colors. Sometimes baking before the filtering can be useful if vertex color is the preferred method and I am unsure where and what to filter out; this gives some backstepping flexibility if needed.

After that, I export it to a .PLY and call it _HIGH, and then decimate the model down to 200-500k tris (higher numbers for assets with more complex geo shapes that need representation) and export that as a _MID.ply. Further filtering can be done here if I know exactly what my low poly will look like and skirt extensions will be added later.


This _MID model is used as a reference for the low poly creation as the high poly mesh can be too big for other software to handle.

At this stage, it's time to make a low poly to bake to. It's up to the user how to make the low poly over the MID as a reference – you can opt for a procedural solution in Houdini, use a decimated mesh from RealityCapture that is then cleaned up and UV-mapped, do your low poly in your preferred 3D app, or use the more manual but more accurate approach in TopoGun, for example. There is no right or wrong in terms of approach; a regular low poly that represents the shape we will bake to is what we are after. I tend to use a mix of RealityCapture decimation that I clean up in Blender, depending on the complexity of the mesh.

In terms of triangle count relative to object size, I usually stick to "game-friendly" budgets, not movie/VFX budgets. While the content in this project is not meant for any one particular game or game type, I still kept within budgets learned from AAA experience, looking at the shape of each asset and giving more triangles where the geometry is more complex.


Once the low poly is created, it's worth taking a look at the asset to see if it needs a cap (filling in the empty spot underneath if close to a solid shape) or if it needs a skirt extension in the case of a ledge or rock wall.

These skirt extensions are meant to help merge the asset into the terrain and into other assets. The extended skirt area is later textured artificially by masking in a tiling surface texture; if that approach is used, the skirt can also work as a bridging area for virtual texture blending. I usually just extrude the skirt straight backward or try to follow the angle of how the ground extended on the real-world asset. This area does not need to have perfect geometry, as it is usually covered by geometry from other assets, but it should not have any obvious mesh issues either.
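The straight-back extrusion described here is simple enough to sketch. A hypothetical illustration of my own (real work would happen in a DCC tool), assuming the mesh's open border edge as a list of (x, y, z) points:

```python
def extrude_skirt(border_vertices, direction=(0.0, -1.0, 0.0), length=0.5):
    """Extrude an asset's open border edge into a simple skirt ring.

    Each border vertex is duplicated and pushed along `direction` by
    `length`, producing the extra band of geometry that is later masked
    with a tiling terrain material. The new ring would be bridged to the
    original border with quads in the modeling tool.
    """
    dx, dy, dz = (c * length for c in direction)
    return [(x + dx, y + dy, z + dz) for x, y, z in border_vertices]
```

Angling `direction` to follow the real-world ground plane, as the text suggests, just means passing a different normalized vector.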


With a UV-mapped low poly, it's over to baking the maps.

As mentioned before, I often use .ply for the models because I will be baking Vertex Color to the low poly's diffuse, even though the highest texture quality comes from baking a texture to a UV on the high poly model. I still go for a .ply with Vertex Color because I know that down the line I will manipulate the texture and size it down to a game-friendly standard with detail textures layered on; because of this, the highest possible resolution from a high poly bake would no longer be very noticeable.


If you are after the highest possible resolution for a single asset, however, baking from a UVed high poly is the way to go.

I bake the Color, AO, Height, Cavity, and Normal (Tangent-Space, and sometimes an Object-Space Normal if needed for cleanup or specific in-engine cases). With a baked low poly, the asset can either be ready for importing into the engine or require some texture cleanup.


There are two main things to look for when cleaning up.

The first is broken/blurry spots on the mesh where there was missing info. These areas are often easy to clean by setting up a clone stamping layer in Substance 3D Painter, sourcing all the channels (Albedo, Height, AO, Normal, etc.) from another spot on the mesh, and clone stamping it onto the broken and missing areas.


The second is lighting information removal. If there is heavy AO darkness in cavities, it can be countered by using a baked AO as a mask to brighten the darkest spots. If the asset has an even color, you can also invert and grayscale the color itself and use that as a brightening mask; it's generally not the best approach, but it can help if the AO does not match where the dark areas on the Albedo are.
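The AO-as-brightening-mask idea can be sketched in a few lines. A rough illustration of my own with made-up thresholds, assuming parallel lists of 0-255 grayscale values for the albedo and the baked AO:

```python
def brighten_cavities(albedo, ao, strength=0.5):
    """Lift baked-in cavity darkness using the AO map as a mask.

    Where AO is dark (occluded), the albedo is pushed toward mid-gray
    (128) by up to `strength`. A crude counter to baked lighting, not a
    substitute for proper de-lighting.
    """
    out = []
    for a, o in zip(albedo, ao):
        occlusion = 1.0 - o / 255.0   # 1 in deep cavities, 0 in the open
        lifted = a + (128 - a) * occlusion * strength
        out.append(round(lifted))
    return out
```

In Substance 3D Painter, the equivalent is a levels/fill layer masked by the inverted baked AO; the sketch just makes the math explicit.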

For strong lighting in your textures (if the asset was scanned in strong sunlight) – the hardest of the lighting artifacts to get rid of, which can require quite a bit of work – I recommend Agisoft De-Lighter. It does a good job of finding and removing overly strong sunlight. The best way to counter this issue, however, is to not scan in strong sunlight at all.


Using the Content In-Game
After the cleanup is done, it's just a matter of exporting to Unreal Engine. At this stage in my project, I had master shaders and an import pipeline established from my last project, with some minor updates and optimizations added.

Overall, the important part here is to get to the point where a master shader covers your needs for the assets in a set. In a natural setting, for example, all rocks will use the same set of detail textures and, if present, skirt extension materials. My nature shader also covered the unpacking of the textures and contact shadow adjustment options.


With a master shader in place, it becomes very efficient to produce content through the entire "Scan – RealityCapture – Cleanup – Import" pipeline and you can quickly produce a lot of content.

The Normal map and diffuse texture usually get a slight detail-blurring pass where I smooth out the smallest details, because the detail textures will replace them; leaving them in can cause quality conflicts, and blurring the texture actually makes it look better when combined with detail textures. The added win is that the texture can then be sized down to save texture space.

I do texture packing and detail texture "slice" masks at this stage, where I pack the Color and masks into an RGB+A setup, and the Normal, Roughness, and Height into an RG+B+A setup.
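The packing layout described here can be shown as a per-pixel sketch. A hypothetical illustration of my own (a real pipeline would operate on image arrays), assuming lists of per-pixel channel values:

```python
def pack_color_mask(albedo, mask):
    """Pack per-pixel albedo (r, g, b) with a grayscale mask into RGBA."""
    return [(r, g, b, m) for (r, g, b), m in zip(albedo, mask)]

def pack_nrh(normal, roughness, height):
    """Pack normal X/Y into RG, roughness into B, and height into A.

    The normal's Z component is dropped; the shader can reconstruct it
    on unpack as z = sqrt(1 - x*x - y*y) for a unit-length normal.
    """
    return [(nx, ny, r, h)
            for (nx, ny, _nz), r, h in zip(normal, roughness, height)]
```

This kind of packing halves the texture sampler count per asset, which is why the master shader mentioned earlier handles the unpacking.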



The "slice" mask is basically just a black/white image where a grayscale value maps a detail texture from a texture array onto a specific surface. In my case, for a ledge asset, white masks in a top-down surface layer of pine needles; this is driven in the shader, which clamps the highest white and treats anything below it as black before masking in the pine surface. Bright white (not full white, but close to it), gray, and black mask in smooth, rough, and very coarse rocky detail textures on an asset.
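The grayscale-to-layer mapping just described can be sketched as a threshold ladder. The threshold values below are illustrative guesses of mine, not the author's exact shader constants:

```python
def detail_layer(gray):
    """Map a slice-mask grayscale value (0-255) to a detail texture layer.

    Full white (clamped band) masks in the top-down pine layer; bright,
    mid, and dark values select the smooth, rough, and coarse rock
    detail textures respectively.
    """
    if gray >= 250:        # clamped "full white" band
        return "pine_topdown"
    if gray >= 170:        # bright but not full white
        return "rock_smooth"
    if gray >= 85:         # mid-gray
        return "rock_rough"
    return "rock_coarse"   # dark/black
```

In the actual material, the same branching is done per pixel with step/clamp nodes and a texture array lookup rather than string labels.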

Once the asset is imported, it's just a matter of using it! When the asset is in, I usually test the shape a bit and see if I need to bend it slightly to make it more useful, or adjust any textures to align better with the other assets. Calibration at the start usually gets the textures so close to the correct values that this is not always needed, but a real-life rock might still look a bit off from the majority of the other rocks scanned for the set, so bringing them closer together is worth doing even if it strays slightly from the "actual" colors.

When working with photogrammetry and sets of natural biomes in RealityCapture and Unreal, the most important aspect overall is to make sure that everything fits together.

This means scanning everything from the same location, making sure the assets are scanned in similar lighting conditions (overcast preferred), always scanning each asset with a color chart for reference, and capturing different sizes of assets in a set.


When a good set of assets is captured in the field, it's quite straightforward to establish a production line of sorts that moves assets through calibration, alignment, 3D meshing, and texturing in RealityCapture, and then low poly baking to the in-engine result. Once the base setup and process for every step has been established for one asset, you can efficiently produce quite a large amount of content for a nature biome, making a whole environment efficient to build.

Big thanks to 80 Level for letting me do this breakdown, I had a lot of fun making this project, and it's been great to be able to share a part of that process.


