Mesh Nodes
Mesh nodes are more computationally intensive and therefore slower to generate, but they support proper game design considerations, including non-visual factors such as optimization, LODs, and pivots.
Mesh Nodes operate on 3D geometry created within Atlas workflows.
They handle mesh generation, transformation, cleanup, optimization, and scene assembly, ensuring that assets become production-ready and correctly scaled.
Image → 3D
The Image → 3D node converts a single reference image into a 3D mesh.
Takes an image input and reconstructs a 3D model.
Multiple backends are available (low-poly, high-detail, quad-based, triangle-based, stylized, realistic).
Output quality varies based on backend.
Use this node whenever you want to turn concept art, a render, or an isolated product image into a mesh.
More details are provided in the Image → 3D sub-section.
Fast Image → 3D
Rapidly generates a 3D reconstruction from a single 2D image in approximately 30 seconds.
Prioritizes speed and rough volume, making it perfect for quick concept prototyping.
Features options for PBR (Physically Based Rendering) support to include lighting information in the output.
Backends: SAM3D or Hunyuan3D Rapid.
Ideal for: Creating background assets or quickly visualizing 2D concepts in 3D space.
Image → 3D (With Fallback)
A production-oriented 3D generation node that attempts reconstruction using multiple AI backends sequentially.
If the primary backend fails, the node automatically retries with secondary "fallback" models to ensure a result is always produced.
Offers deep customization for quality, face limits, and PBR textures across up to 4 different backend attempts.
Inputs: Input Image, Multiple Backend choices, Retry Count.
Ideal for: Production pipelines and UGC use cases where reliability and successful generation are critical.
Multi-View → 3D
Reconstructs a 3D mesh from a set of consistent images showing an object from different standardized angles.
Provides significantly higher geometric accuracy and consistency than single-image generation.
Accepts specific views (Front, Right, Left, Back, Top, Bottom) to build a comprehensive 3D volume.
Backends: Includes industry leaders like Tripo v3.0 and Meshy v6.
Ideal for: Creating high-fidelity 3D models from character sheets or multi-angle photographs.
Re-texture Mesh
The Re-texture Mesh node regenerates or restyles the texture of an existing mesh.
Input:
A reference style image
An existing mesh
The resulting mesh receives a texture that matches the reference image.
Ideal for creating style variations or re-texturing an asset.
Example: Change the style of the asset by using a new 2D concept as the texture reference.
Extract Texture Maps
Separates the visual components of a GLB mesh into individual PBR (Physically Based Rendering) texture files.
Extracts the Base Color, Roughness, Metallic, and Normal maps as individual 2D images.
Allows you to isolate and edit specific material properties (like making a surface shinier or adding bump detail) in 2D.
Outputs: Base Color Image, Roughness Image, Metallic Image, Normal Map Image.
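The separate Roughness and Metallic outputs reflect how glTF (the format behind GLB) packs roughness into the green channel and metallic into the blue channel of a single metallicRoughness texture. A minimal NumPy sketch of that channel split, using a hypothetical 4×4 texture array rather than a real GLB:

```python
import numpy as np

# glTF 2.0 stores roughness in the green channel and metallic in the blue
# channel of one packed texture. Hypothetical 4x4 RGB texture for illustration:
packed = np.zeros((4, 4, 3), dtype=np.uint8)
packed[..., 1] = 200   # green channel = roughness
packed[..., 2] = 50    # blue channel = metallic

roughness = packed[..., 1]   # grayscale roughness map
metallic = packed[..., 2]    # grayscale metallic map
```

In practice the node handles reading the GLB and writing each map out as its own image; the sketch only shows the channel convention.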
Apply Textures to Mesh
Replaces or assigns 2D texture images to the specific material slots of a 3D model.
Takes a geometry-only or existing mesh and applies provided images to its UV-mapped surfaces.
Allows for selective replacement; only the maps you provide (e.g., just the Base Color or just the Normal Map) will be updated.
Inputs: Input Mesh, Base Color, Roughness, Metallic, Normal Map.
Ideal for: Re-assembling a model after editing its texture maps in 2D.
Auto Transform Mesh
The Auto Transform Mesh node is one of the most essential post-processing tools.
It automatically:
Applies semantic world-scale correction
Adjusts dimensions based on object type
Repositions the origin
Prepares the asset for proper placement in a game engine, CAD tool, or Atlas pipeline
Use this node immediately after generating a model.
Mesh Multi-View Render
Captures standardized 2D snapshots of a 3D model from multiple specified camera angles.
Uses yaw (horizontal) and pitch (vertical) coordinates to define exact rendering perspectives.
Essential for creating character sheets or preparing views for texture re-projection workflows.
Outputs: An array of images and the corresponding camera matrices used for the render.
Note: This is a specialized tool typically used in advanced multi-view pipelines.
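The yaw/pitch convention can be sketched as a unit view direction. The axis ordering below (Y up, +Z forward at zero yaw and pitch) is an assumption for illustration, not the node's documented convention:

```python
import numpy as np

def view_direction(yaw_deg, pitch_deg):
    """Unit view vector from yaw (rotation about the up axis) and
    pitch (elevation). Assumes Y up, +Z forward at (0, 0)."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([
        np.cos(pitch) * np.sin(yaw),   # X: swings left/right with yaw
        np.sin(pitch),                 # Y: rises with pitch
        np.cos(pitch) * np.cos(yaw),   # Z: forward at zero yaw/pitch
    ])

front = view_direction(0.0, 0.0)    # looking along +Z
side = view_direction(90.0, 0.0)    # looking along +X
```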
Create Occlusions Mask
Generates UV-baked visibility maps that identify which surface areas are visible from specific camera views.
Detects areas of the mesh that are hidden or shadowed, preventing textures from "smearing" onto unseen geometry.
Crucial for high-quality texture projection where multiple 2D views must be blended seamlessly.
Inputs: Input Mesh, Camera Matrices (from the Multi-View Render node).
Outputs: An array of grayscale occlusion masks.
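The core visibility test can be illustrated with a normal-facing check; a production occlusion mask would also depth-test each point against the rest of the mesh, which this sketch omits:

```python
import numpy as np

def facing_mask(normals, view_dir):
    """True where a surface normal faces the camera.
    view_dir points from the surface toward the camera; a positive
    dot product means the surface is oriented toward the view."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    return normals @ view_dir > 0.0

# Two surface points: one facing the camera, one facing away.
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
mask = facing_mask(normals, np.array([0.0, 0.0, 1.0]))
```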
Project Multi-View Images to Mesh
Transfers color data from a set of 2D images back onto the 3D surface of a mesh using UV coordinates.
Blends multiple perspectives together while using occlusion masks to ensure textures are only applied to visible surfaces.
Includes "Softmax Sharpening" to control the clarity and transition between different projected views.
Inputs: Mesh, Image Array, Camera Matrices, Occlusion Masks.
Ideal for: Texturing a 3D model using AI-generated or edited 2D reference images.
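Softmax sharpening can be illustrated on per-view blend scores; the `sharpness` parameter here is a hypothetical stand-in for the node's setting, not its real name:

```python
import numpy as np

def blend_weights(scores, sharpness=8.0):
    """Softmax over per-view quality scores. Higher sharpness makes the
    blend favor the single best view; lower values blend more softly."""
    z = sharpness * (scores - scores.max())  # shift for numerical stability
    w = np.exp(z)
    return w / w.sum()

# Three candidate views scoring a texel; the first view sees it best.
w = blend_weights(np.array([0.9, 0.5, 0.1]))
```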
Reduce Polycount
Reduces the polygon count of a mesh.
Input: desired polycount value
Re-bakes textures automatically
Useful for optimizing AI-generated high-density meshes
This is essential for performance-friendly assets.
Optimize Mesh
Reconstructs the mesh topology and produces a cleaner, more manageable geometry structure.
Reduces or increases polygon count while maintaining the model’s overall shape.
Topology options:
Triangle for most general uses.
Quad for modeling workflows.
Input: target polygon count
This node is ideal for preparing AI-generated meshes that come in overly dense or irregular form.
Mesh BBox Fit
Scales a mesh non-uniformly so that its world-space bounding box exactly matches specified X, Y, Z dimensions.
Input: target width, height, depth
Output: scaled mesh with exact real-world bounding box
Useful for making an asset match required dimensions precisely.
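The non-uniform fit reduces to per-axis scale factors (target dimension divided by current bounding-box extent); a minimal sketch:

```python
import numpy as np

def bbox_fit_scale(vertices, target_dims):
    """Per-axis scale factors so the axis-aligned bounding box of the
    vertices matches target (width, height, depth)."""
    extents = vertices.max(axis=0) - vertices.min(axis=0)
    return np.asarray(target_dims) / extents

# Two corner vertices give bbox extents of (2, 4, 1).
verts = np.array([[0.0, 0.0, 0.0], [2.0, 4.0, 1.0]])
scaled = verts * bbox_fit_scale(verts, (1.0, 1.0, 1.0))
```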
Set Mesh Origin
Sets the mesh origin based on bounding-box parameters.
Input: desired origin position (e.g., upper bound, bbox center)
Output: repositioned mesh origin used for clean pivoting and placement
This is important for alignment, snapping, and scene assembly.
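A bottom-center origin, one common bounding-box preset, can be sketched as a translation of the vertices; treating Y as the up axis is an assumption here:

```python
import numpy as np

def set_origin_bottom_center(vertices):
    """Translate the mesh so its origin sits at the bottom center of its
    bounding box (Y treated as up, an assumption for this sketch)."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    pivot = np.array([(lo[0] + hi[0]) / 2, lo[1], (lo[2] + hi[2]) / 2])
    return vertices - pivot

# A unit-ish box from (0,0,0) to (2,2,2); pivot becomes (1, 0, 1).
v = set_origin_bottom_center(np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]]))
```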
Text to Origin
Uses a text instruction to modify a mesh's origin.
Examples:
“Set origin to bottom center”
“Move pivot to the front face”
“Place origin at the geometric center”
Helpful when precise manual origin adjustments are needed.
Rotate Mesh Towards Axis
Rotates the mesh around (0, 0, 0) so that it faces a selected axis.
Supports presets such as:
Face −Y
Face +Y
Face −X / +X
Face −Z / +Z
Assumes the input mesh initially faces −Y.
Ensures consistent orientation across generated assets.
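Rotating a −Y-facing mesh toward another horizontal axis is a single rotation about the up axis; a Z-up convention is assumed in this sketch and may differ from the node's actual convention:

```python
import numpy as np

def rot_z(deg):
    """Rotation matrix about the Z (up) axis, Z-up convention assumed."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

forward = np.array([0.0, -1.0, 0.0])   # the node assumes meshes face -Y
faces_plus_x = rot_z(90.0) @ forward   # 90-degree yaw points it along +X
faces_plus_y = rot_z(180.0) @ forward  # 180 degrees flips it to +Y
```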
Compose 3D Scene
Takes your generated meshes and arranges them into a simple 3D scene based on a text prompt.
Input meshes + prompt
Output: arranged layout with placement, rotation, spacing
Useful for quick scene design or previews
Mask to Spline
Converts a binary mask into a spline.
Requires input binary mask where your desired object is white.
Outputs a spline based on the largest connected white region.
Mask can be prepared via Text+Image -> Image nodes beforehand.
Used for shape extraction, outline-based modeling, or path generation.
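Selecting the largest connected white region, the step before the spline is fitted, can be sketched with a simple 4-connected flood fill over a binary mask:

```python
import numpy as np
from collections import deque

def largest_white_region(mask):
    """Return a boolean mask keeping only the largest 4-connected
    white (True) region of a binary mask."""
    seen = np.zeros_like(mask, dtype=bool)
    best = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        comp, queue = [], deque([(sy, sx)])
        seen[sy, sx] = True
        while queue:                      # breadth-first flood fill
            y, x = queue.popleft()
            comp.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        if len(comp) > best.sum():        # keep only the biggest component
            best = np.zeros_like(mask, dtype=bool)
            for y, x in comp:
                best[y, x] = True
    return best

# Two white regions: a 4-pixel block and a 2-pixel strip.
m = np.array([[1, 1, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 0, 1]], dtype=bool)
kept = largest_white_region(m)
```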
Separate Object Parts
Automatically segments a 3D mesh into individual parts based on geometric and semantic analysis.
Uses AI to identify and detach discrete components (e.g., separating a character's clothing or a machine's parts) into a segmented GLB.
The primary backend supports up to 30k faces, while a fallback handles denser meshes up to 1.5M faces.
Inputs: Input Mesh, Backend Selection, Seed.
Ideal for: Modifying or isolating specific pieces of a combined AI-generated model.
Rig Humanoid Mesh
Automatically generates a skeletal structure and skin weights for humanoid character models.
Creates an armature that allows the character to be posed and animated.
Requires the model to be in a standard pose (T-pose or A-pose) and calibrated for height in meters for accurate skeletal placement.
Outputs: Rigged Mesh, plus preset "Walking" and "Running" previews for instant testing.
Animate Rigged Model
Applies motion data and specific animations to a previously rigged character mesh.
Utilizes a massive library of preset actions ranging from daily movements to combat and dancing.
Requires metadata from a compatible rigging node to correctly map animations to the skeleton.
Inputs: Rigged Model Metadata, Animation Selection (e.g., Fighting, Dancing, WalkAndRun).
Ideal for: Quickly bringing static characters to life for games or cinematics.
Omnipart
Segments an image and generates separate mesh parts.
Outputs:
Segmentation map
Individual GLBs for each part
Optional merged mesh
Ideal for breaking an asset into modules or components.
Perfect for kitbashing, modular asset workflows, or creating parametric components from a single image.