🌟 3D Representation

One of the most fundamental questions in computer graphics is how a 3D object should be represented inside a computer. Since digital systems cannot store continuous geometry directly, we rely on mathematical and discrete representations to approximate shape.

Each representation reflects a different trade-off between geometric fidelity, memory efficiency, and computational complexity. These choices strongly affect rendering quality, simulation stability, and learning performance.

These notes summarize the most common 3D representations used in graphics and learning, together with their intuition and limitations.

🔹 Point-Based Representation

In point-based representations, a 3D object is described as a collection of discrete samples in space, typically given by coordinates (x, y, z). This form naturally arises from real-world sensing, such as LiDAR scans or depth cameras.

The object's shape is not explicitly defined by surfaces but is instead inferred from the density and distribution of points. As a result, geometric operations such as normal estimation, neighbourhood queries, or surface reconstruction require additional processing.

  • Minimal structure: positions (and sometimes color or normals)
  • Easy to acquire from sensors
  • No explicit surface or topology

🔹 Mesh Representation

A mesh represents geometry through vertices connected by edges and faces, most commonly triangles. Unlike point clouds, meshes explicitly encode surface connectivity, making them well-suited for rendering and shading.

Because of their explicit structure and efficiency, meshes remain the dominant representation in real-time graphics pipelines.

  • Explicit surface geometry
  • Highly efficient for rendering
  • Standard representation in graphics APIs

🔹 Volumetric Representation

Volumetric representations discretize 3D space into a regular grid of voxels, where each voxel stores information such as occupancy or density. This approach treats space itself as the primary object of representation.

While volumetric methods are memory intensive, they offer advantages for physical simulation, collision detection, and modeling internal structure.

  • Captures both surface and interior
  • Simple spatial reasoning
  • High memory cost at high resolution

🔹 Implicit Representation

Implicit representations define geometry using a scalar function f(x, y, z), where the surface corresponds to the zero-level set f(x, y, z) = 0.

This formulation enables continuous and resolution-independent surfaces. Recent advances in neural implicit models have made this representation central to modern 3D learning and reconstruction.

  • Continuous and smooth geometry
  • Compact representation for complex shapes
  • Requires surface extraction (e.g., marching cubes) or ray marching to render

✨ Summary

Each 3D representation embodies a different view of geometry: sampling, surfaces, space, or functions. No single representation dominates all tasks.

In modern graphics and physical AI systems, representations are often combined — for example, learning implicit geometry, extracting meshes for rendering, and reasoning with volumes for physics.