Web-Based Visualization of Gravestone Scans Using The ATON-Framework

Beginner's Practical of Botond Jakab

Under the Supervision of Dr. Susanne Krömker



About

The purpose of this practical was to create a new website, which allows for interactive visualization of high-resolution 3D scans of historic gravestones from the Jewish Cemetery in Worms. The focus was on implementing an interactive browser-based viewer using the ATON Web3D framework in a static client-side deployment.

The resulting website can be found here.

Background and Motivation

The scans were obtained as part of a project of the Visualization and Numerical Geometry Group of the Interdisciplinary Center for Scientific Computing (IWR) back in 2010/2011. The main objective of this project was to document and analyze the weathered gravestones of the Jewish Cemetery "Heiliger Sand" in Worms. The aim was to digitally preserve the inscriptions that have become difficult to read due to centuries' worth of erosion.

Data Acquisition

The technique applied was structured light scanning, an optical scanning technique that captures the three-dimensional shape of an object by projecting a known light pattern onto its surface. As it hits a non-planar surface, the pattern distorts according to the object's contours. This distortion, alongside color information, is captured by two cameras from a fixed baseline angle. Specialized software then analyzes the captured data to calculate 3D coordinates, generating a dense point cloud. Structured light scanning remains a standard technology for high-precision, non-contact 3D measurements.
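The underlying geometry is classical stereo triangulation. As a heavily simplified sketch (a rectified camera pair with made-up focal length and baseline, not the scanner's actual calibration), the depth of a matched pattern point follows directly from its disparity between the two images:

```python
# Simplified stereo triangulation for a rectified camera pair.
# The focal length f (in pixels) and baseline b (in meters) are made-up
# example values; a real scanner uses fully calibrated camera matrices.

def triangulate_depth(x_left, x_right, f=1200.0, b=0.25):
    """Depth z = f * b / disparity for a point matched in both images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must lie in front of the cameras")
    return f * b / disparity

# A projected pattern feature matched at pixel column 640 (left camera)
# and 340 (right camera): 1200 * 0.25 / 300 = 1.0 m
z = triangulate_depth(640, 340)
```

In practice the specialized software solves this correspondence problem densely over the whole projected pattern, which is what yields the dense point cloud mentioned above.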

Previous Visualization

The previous visualization helps motivate this project, as it did not use the actual meshes. Instead, it consisted of a stack of 361 images per stone. This approach not only requires a large volume of data, in most cases larger than the corresponding meshes, but also offers very limited interactivity. The GIF on the right illustrates this limitation, showing a basic animation with varying light conditions.

Structured Light Scanning
Scanning of a gravestone using structured light

Previous Visualization
The previous animation using a picture stack

Data Processing

Schematic Illustration of MSII Filtering

A first step in the processing of the data was the enhancement of surface details using Multi-Scale Integral Invariants filtering (MSII) with the software GigaMesh. This method generates an additional visualization layer that highlights subtle geometric features, making weathered inscriptions more legible by enhancing surface variations. The resulting data provided an alternative material representation ideal for detailed inspection.

The illustration on the right outlines this process. GigaMesh applies a filtering algorithm to each vertex of the mesh, where multiple concentric spheres of varying radii are centered at that point. By calculating the volume of the intersection of these spheres with the 3D model, the software generates a feature vector for every vertex. These vectors effectively describe the "shape" of the surface at different scales, allowing for the visualization of extremely faint features.
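A heavily simplified, self-contained sketch of the idea (not GigaMesh's actual algorithm) approximates such a multi-scale descriptor by the fraction of sampled surface points falling inside spheres of increasing radius. A point in a flat interior region and a point on a corner then produce clearly different vectors:

```python
import math

def feature_vector(p, points, radii=(0.5, 1.0, 2.0)):
    """For each radius, the fraction of sample points inside the sphere
    centered at p -- a crude stand-in for the volume integral invariant."""
    n = len(points)
    return [sum(1 for q in points if math.dist(p, q) <= r) / n
            for r in radii]

# Toy "surface": a flat 7x7 grid of points in the z = 0 plane.
grid = [(float(x), float(y), 0.0) for x in range(-3, 4) for y in range(-3, 4)]

flat_fv = feature_vector((0.0, 0.0, 0.0), grid)   # interior point
edge_fv = feature_vector((3.0, 3.0, 0.0), grid)   # corner sees fewer neighbours
```

At the larger radii the corner point captures systematically less of the surface than the interior point, which is exactly the kind of scale-dependent difference the real MSII feature vectors encode.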

Due to the high resolution of the original 3D scans, several meshes contained a very large number of polygons, which would have significantly increased loading times in a web-based environment. Therefore, mesh size reduction was performed using MeshLab, simplifying the geometry while preserving relevant surface details.

Finally, the processed meshes were converted into the glTF format using Blender. This step ensured compatibility with the web-based visualization framework and enabled efficient rendering within the browser.

Processing Pipeline

A total of 32 stones were scanned, resulting in 32 meshes containing the original vertex colors. In addition, 32 meshes containing the computed MSII function values were generated, bringing the total to 64 meshes that required conversion. To streamline this process, I automated the workflow using a series of Blender Python scripts. The processing pipeline consisted of four main scripts, which are illustrated in the following flowchart. The rectangles represent the individual processing steps, while the hexagons show the corresponding output data.



Data Batch-Processing Pipeline

While Blender offers support for PLY files, the original format of the meshes, extra steps had to be added to the first script, such as recentering and scaling down the meshes. The following snippet shows how this was achieved.


    import os
    import bpy

    # Excerpt from the first batch script: `file`, SCALE_THRESHOLD,
    # SCALE_FACTOR, VERTEX_COLOR_NAME, OUTPUT_FOLDER and the helper
    # recenter_object() are defined earlier in the script.
    obj = bpy.context.selected_objects[0]

    # Recenter: origin to geometry, then move to the world center
    recenter_object(obj)

    # Scale down if oversized
    if max(obj.dimensions) > SCALE_THRESHOLD:
        obj.scale = (SCALE_FACTOR, SCALE_FACTOR, SCALE_FACTOR)
        # Apply the scale so that dimensions and transforms are correct
        bpy.ops.object.transform_apply(scale=True)
        print(f"Scaled down {file}")

    # Create matte vertex-color material
    mat = bpy.data.materials.new(name=f"{file}_Mat")
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links
    bsdf = nodes.get("Principled BSDF")

    attr = nodes.new("ShaderNodeAttribute")
    attr.attribute_name = VERTEX_COLOR_NAME
    links.new(attr.outputs["Color"], bsdf.inputs["Base Color"])

    # Set to matte finish (input names as in Blender 4.x,
    # where "Clearcoat" is called "Coat Weight")
    bsdf.inputs["Specular IOR Level"].default_value = 0.0
    bsdf.inputs["Roughness"].default_value = 1.0
    bsdf.inputs["Coat Weight"].default_value = 0.0

    # Apply the material
    obj.data.materials.clear()
    obj.data.materials.append(mat)

    # Save the processed object as a new .blend file
    export_name = os.path.splitext(file)[0]
    export_path = os.path.join(OUTPUT_FOLDER, f"{export_name}_processed.blend")
    bpy.ops.wm.save_as_mainfile(filepath=export_path)

Automating the workflow ensured consistent results, particularly when datasets had to be reprocessed. This was especially beneficial when the final two scans were retrieved at a later stage, as they had been stored in a separate directory.


About The Framework

ATON-Logo

ATON is an open-source Web3D/WebXR framework developed by the CNR ISPC (Institute of Heritage Science of the National Research Council of Italy). It is designed for the interactive visualization of complex 3D datasets directly within a web browser and is particularly oriented toward cultural heritage and research applications. By enabling browser-based access without specialized software, ATON supports accessible and sustainable digital heritage presentation.

Technically, ATON follows a web-based client–server architecture. The frontend is built on Three.js, which provides real-time 3D rendering via WebGL. In a full-stack configuration, backend services can be implemented using Node.js to manage data, scenes, and additional functionality. However, ATON also allows a purely static deployment in which the application runs entirely client-side within the browser.

In this project, the static version of ATON was used, since the host server of the final page cannot run Node.js applications. The focus was also on interactive visualization rather than server-side data management or collaboration features, so a client-side deployment was sufficient. This approach simplifies hosting, reduces infrastructure requirements, and still enables efficient rendering of optimized glTF assets for high-resolution 3D gravestone scans. The main drawback is that data cannot be served dynamically to the client, but has to be loaded all at once before any interaction can take place; the resulting additional waiting time is negligible.

The Graphics Library Transmission Format

For web-based visualization, the processed models were exported to the glTF (Graphics Library Transmission Format) standard. glTF is designed specifically for the efficient transmission and loading of 3D scenes and models in real-time applications. It has become a widely adopted format for WebGL-based environments due to its performance-oriented structure.

The format separates scene description and geometry data. A JSON-based file stores the scene structure, including materials, textures, transformations, and animations, while a compact binary file contains the vertex data such as positions, normals, and texture coordinates. This separation enables efficient parsing and reduces overall file size compared to traditional text-based formats.
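This separation can be made concrete with a hand-written toy asset (not an output of the project's pipeline): the JSON part describes one mesh and points into a binary buffer holding three vertex positions.

```python
import base64
import json
import struct

# Binary part: three vertex positions (x, y, z) as little-endian float32.
positions = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
blob = b"".join(struct.pack("<3f", *p) for p in positions)

# JSON part: the scene description referencing the buffer. Here the buffer
# is embedded as a data URI; separate .gltf/.bin pairs or the single-file
# GLB container follow the same layout.
gltf = {
    "asset": {"version": "2.0"},
    "buffers": [{
        "byteLength": len(blob),
        "uri": "data:application/octet-stream;base64,"
               + base64.b64encode(blob).decode("ascii"),
    }],
    "bufferViews": [{"buffer": 0, "byteOffset": 0, "byteLength": len(blob)}],
    "accessors": [{"bufferView": 0, "componentType": 5126,  # FLOAT
                   "count": 3, "type": "VEC3"}],
    "meshes": [{"primitives": [{"attributes": {"POSITION": 0}}]}],
}
doc = json.dumps(gltf)
```

Because the vertex data stays in a compact binary blob, the browser can hand it to the GPU almost directly instead of parsing millions of ASCII coordinates, which is where the loading-time advantage over text-based formats like PLY comes from.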

To further optimize loading performance, Draco compression was applied to the mesh geometry. Draco is a geometry compression algorithm that significantly reduces the size of 3D mesh data, typically by approximately 50% to 90%, depending on mesh complexity. This reduction is particularly important for high-resolution 3D scans, as it improves loading times and ensures smoother interaction within the browser-based viewer.
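Draco compression can be switched on directly in Blender's glTF exporter. The call below is a sketch of how such an export can be scripted rather than the exact call used in the pipeline; the file path and compression level are illustrative, and the snippet only runs inside Blender's bundled Python interpreter, where bpy is available.

```python
import bpy

# Export the current scene to a single-file, Draco-compressed GLB.
# Filepath and compression level are example values.
bpy.ops.export_scene.gltf(
    filepath="stone01.glb",
    export_format="GLB",
    export_draco_mesh_compression_enable=True,
    export_draco_mesh_compression_level=6,
)
```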

The following graphic illustrates the differences in total file size between the image stacks used in the previous visualization, the PLY files, and the compressed glTF files. Comparing the PLY and compressed glTF formats, an average size reduction by a factor of approximately 8.47 was achieved.

File Size Comparison in MB

Implementation

The implementation built on the existing ATON framework and adapted it to the specific requirements of gravestone visualization. While ATON already offers a wide range of built-in features, its complexity and default configuration can make it difficult for inexperienced users to work with. Instead of developing a new viewer from scratch, this project concentrated on modifying the existing framework and extending selected features to improve usability, interaction, and surface inspection.


Camera and Measurement Tool

To improve navigation stability, the camera controls were adjusted by limiting the azimuth angle. This prevents excessive rotations and makes it easier to inspect the models from meaningful viewpoints. The existing measurement tool was also modified to make it clearer and easier to use. A separate button was added to activate and deactivate the tool. Additionally, upon deactivation, both completed and incomplete measurements are cleared. These changes help reduce visual clutter and make the measurement process more straightforward.



Texture Switch

A central goal of the implementation was to improve interactivity and allow for closer inspection of the surface details. For this reason, a texture-switching option was added so users can toggle between different surface representations. The MSII texture makes small-scale geometric details more visible, while the grayscale version emphasizes shading and self-shadowing effects. Offering these visualization modes makes it easier to adjust the rendering and helps reveal subtle surface features more clearly.

Texture Switching

Light Control

In addition to texture switching, dynamic lighting controls were integrated to enhance engraving visualization. The direction of the main light source can be adjusted interactively via mouse input, allowing users to modify illumination angles in real time. Since low-relief inscriptions are highly sensitive to lighting conditions, this functionality significantly improves the visibility of engraved details, especially when applied to the grayscale version. Furthermore, a light intensity control was added to allow fine adjustment of the illumination strength. This makes it possible to optimize contrast and improve the perception of surface details under different viewing conditions. The following code snippet shows how the dynamic light control was achieved.

Light Control


    document.addEventListener('mousemove', (event) => {
        if (!movingLight) return;

        const mouse = new THREE.Vector2();
        const direction = new THREE.Vector3();
        const camera = ATON.Nav._camera;

        // Normalize mouse to range [-1, 1]
        mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
        mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;

        // Project mouse to 3D direction from camera
        direction.set(mouse.x, mouse.y, 0.5); // z = 0.5 to go into the screen
        direction.unproject(camera);
        direction.sub(camera.position).normalize();

        // Set the main light direction
        ATON.setMainLightDirection(direction);
    });
            

To reduce the learning barrier associated with the framework, an integrated help panel was added directly within the application. This HTML-based popup explains the available controls and navigation options inside the viewer itself. By providing guidance within the interface, users can familiarize themselves with the system without relying on external documentation.

Overall, the implementation demonstrates how a complex and feature-rich Web3D framework can be adapted to a specific cultural heritage use case. The focus was not on altering the core architecture of ATON, but on refining interaction mechanisms, improving usability, and supporting visual inspection of the gravestone scans.

The Presentation