# 3D Generation Best Practices

## Image → 3D Node

The **Image to 3D** node generates a complete **3D mesh** from a single image.\
This node is most effective when the input image is clean, isolated, and visually informative.

### Preparing the Input Image <a href="#id-1-preparing-the-input-image" id="id-1-preparing-the-input-image"></a>

For the best 3D results, the source image should:

* show **only the desired object**
* have a **neutral background** (white, gray, or solid color)
* include **depth cues** such as shadows
* preferably use a **perspective camera angle**
* avoid clutter, overlapping objects, or busy patterns

To achieve this, you can first process the original scene using:

* **Multimodal Node** → isolate the object on a neutral background
* **Precise Text Edit Node** → cleanly remove all other elements

<figure><img src="https://3654894688-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FR7boiMixMhR4q36Ns33Y%2Fuploads%2FXFjkqLy7YacY2Hxzb2EV%2Fimage.png?alt=media&#x26;token=dc875f8f-8ed0-4b8a-aecb-3e2f62b03ea1" alt="" width="563"><figcaption></figcaption></figure>

These steps help the 3D model generator correctly infer:

* thickness
* volume
* proportions
* silhouette
* material hints

### Backends & Output Variation <a href="#id-2-backends--output-variation" id="id-2-backends--output-variation"></a>

The Image to 3D node includes **multiple backend models**, each producing different types of meshes:

#### Backend Differences <a href="#backend-differences" id="backend-differences"></a>

* **High-detail models** → strong surface accuracy
* **Low-detail models** → simplified forms for fast iteration
* **Low-poly models** → game-ready stylized assets
* **Quad-based meshes** → ideal for sculpting or retopology
* **Triangle-based meshes** → standard for real-time engines
* **PBR-enabled meshes** → include color + material maps
* **Mesh-only output** → no textures, only geometry

<figure><img src="https://3654894688-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FR7boiMixMhR4q36Ns33Y%2Fuploads%2FFcD4L5UqAJKeI0l0K348%2Fimage.png?alt=media&#x26;token=1301a613-7a36-4c5f-aa17-7861616a2921" alt="" width="563"><figcaption></figcaption></figure>

<figure><img src="https://3654894688-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FR7boiMixMhR4q36Ns33Y%2Fuploads%2F17Mgsrc2UaMINFhxnatY%2Fimage.png?alt=media&#x26;token=fd1c717c-b4cf-4c93-8a1c-8fa0f93be085" alt="" width="563"><figcaption></figcaption></figure>

You can toggle:

* **Disable PBR**
* **Disable Texture** → geometry-only output

This helps adapt the output to different pipelines (Blender, Unreal, Unity, CAD, etc.).
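As a rough illustration of how these two toggles map onto different pipelines, the sketch below encodes a few example presets as plain data. The preset names and flag keys are hypothetical — in the node itself these are simply UI checkboxes:

```python
# Hypothetical export presets for the two toggles described above.
# Preset names and flag keys are illustrative, not part of the node's API.
PRESETS = {
    "blender_lookdev": {"disable_pbr": False, "disable_texture": False},  # full PBR
    "unreal_blockout": {"disable_pbr": True,  "disable_texture": False},  # color only
    "cad_reference":   {"disable_pbr": True,  "disable_texture": True},   # geometry only
}

def flags_for(pipeline: str) -> dict:
    """Look up the toggle settings for a named target pipeline."""
    return PRESETS[pipeline]
```

Keeping the choice in a small table like this makes it easy to document, per pipeline, why each toggle is on or off.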

<figure><img src="https://3654894688-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FR7boiMixMhR4q36Ns33Y%2Fuploads%2FtBIIc5LfCKkmRdu5CHS2%2Fimage.png?alt=media&#x26;token=12bcce01-4cdf-4863-8030-4657dd34fc82" alt="" width="563"><figcaption></figcaption></figure>

<figure><img src="https://3654894688-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FR7boiMixMhR4q36Ns33Y%2Fuploads%2FcJW1C8MLwIAR5gxqb7VR%2Fimage.png?alt=media&#x26;token=41f6d7aa-ac8a-4065-9eb0-819a7094d224" alt="" width="563"><figcaption></figcaption></figure>

<figure><img src="https://3654894688-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FR7boiMixMhR4q36Ns33Y%2Fuploads%2FnEpkTQEzXhSLls0S8VRh%2Fimage.png?alt=media&#x26;token=c791ca1f-3ae4-4f04-9138-844f8ad49072" alt="" width="563"><figcaption></figcaption></figure>

### Generating the Model <a href="#id-3-generating-the-model" id="id-3-generating-the-model"></a>

Once the image and settings are ready:

1. Connect your isolated image to the **Image Input** of the Image to 3D node
2. Select a backend model
3. Click **Run**

The node returns:

* a **3D preview**
* a **wireframe preview**
* a downloadable **.glb** model

You can download the model directly or refine it with the next stage of mesh nodes.
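The downloaded file is a standard binary glTF (**.glb**). As a quick sanity check outside the tool, its 12-byte header can be read with Python's standard library — this reflects the public glTF 2.0 binary spec, not anything specific to this node:

```python
import struct

GLB_MAGIC = 0x46546C67  # ASCII "glTF", little-endian, per the glTF 2.0 spec

def read_glb_header(data: bytes) -> dict:
    """Parse the 12-byte GLB header: magic, container version, total length."""
    magic, version, length = struct.unpack_from("<III", data, 0)
    if magic != GLB_MAGIC:
        raise ValueError("not a GLB file")
    return {"version": version, "length": length}
```

A valid download should report version 2 and a length matching the file size on disk.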

### After Generation: Next Steps <a href="#id-4-after-generation-next-steps" id="id-4-after-generation-next-steps"></a>

After generating the mesh, you can optionally pass it through:

#### **Auto Mesh Transform** <a href="#auto-mesh-transform" id="auto-mesh-transform"></a>

* fix origin point
* normalize scale
* reorient axes
* center the model
* prepare it for procedural assembly
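Two of the steps above — centering and scale normalization — can be sketched on a raw vertex list in pure Python. This is a minimal illustration of the idea, not the node's implementation:

```python
def center_and_normalize(vertices):
    """Translate the centroid to the origin, then scale the mesh so its
    largest axis-aligned extent is 1.0 (a common normalization convention)."""
    n = len(vertices)
    # Centroid of all vertices.
    cx = sum(v[0] for v in vertices) / n
    cy = sum(v[1] for v in vertices) / n
    cz = sum(v[2] for v in vertices) / n
    centered = [(x - cx, y - cy, z - cz) for x, y, z in vertices]
    # Largest bounding-box dimension across x, y, z.
    extent = max(
        max(c[i] for c in centered) - min(c[i] for c in centered)
        for i in range(3)
    )
    scale = 1.0 / extent if extent else 1.0
    return [(x * scale, y * scale, z * scale) for x, y, z in centered]
```

After this step the mesh sits at the origin with a predictable size, which is what makes downstream procedural assembly reliable.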

#### **Texture Mesh** <a href="#texture-mesh" id="texture-mesh"></a>

* add or enhance textures
* refine materials

#### **Blender / Unreal / Unity via API Nodes** <a href="#blender--unreal--unity" id="blender--unreal--unity"></a>

* import for scene assembly
* integrate into procedural systems
* convert to kits or asset libraries

<figure><img src="https://3654894688-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FR7boiMixMhR4q36Ns33Y%2Fuploads%2Fxyr32veoYjR91Mxvbb7Q%2Foutput3D-867de6801ba8b998d0888dff0772b272.jpg?alt=media&#x26;token=51ec83a2-d4e2-4a73-a0ba-f78f1d93d149" alt="" width="563"><figcaption></figcaption></figure>
