
3D Technologies for Manufacturing and Maintenance


DADTMA

FTL has worked with nearly every type of image acquisition platform, including UAVs, UUVs, and USVs. Emerging “digital twin” approaches for maintaining electronic records of asset maintenance require the efficient combination of many types of images and data. To help the Navy electronically manage asset maintenance, FTL developed “DADTMA”, or Distributed Acquisition Digital Twin Maintenance Architecture, a software tool for acquiring and managing maintenance data in the field. FTL’s DADTMA quickly and inexpensively enables fleetwide trend monitoring, prediction, and informed planning. DADTMA acts as middleware between existing maintenance and data reporting software, automates maintenance procedures with serial-connected inspection tools and intelligent software tools, and stores the design and maintenance history of military and commercial assets in the cloud in a hierarchical, relational database architecture. The result is a vast, secure, searchable asset digital twin consisting of disparate datatypes spanning asset and fleet lifetimes, accessible to any authorized personnel for planning, data collection, and analysis activities. This new technology is asset, datatype, and procedure agnostic, aiding maintenance and assessment of any asset type from military fighter jets to offshore wind turbines, and fitting seamlessly into current maintenance procedures. DADTMA is currently being piloted at the Navy’s Fleet Readiness Center Southeast in Jacksonville, Florida.
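A hierarchical, relational store for disparate maintenance data can be sketched as follows. This is a minimal illustration using Python’s built-in SQLite, not DADTMA’s actual schema: the table names, columns, and sample records are all invented for the example. The key idea is a self-referencing asset hierarchy (fleet → asset → component) joined to maintenance records that point at cloud-stored payloads of any datatype.

```python
import sqlite3

# Illustrative schema only: a self-referencing asset hierarchy plus
# datatype-agnostic maintenance records pointing at cloud blob storage.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE asset (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES asset(id),  -- hierarchy: fleet -> asset -> component
    name TEXT NOT NULL
);
CREATE TABLE maintenance_record (
    id INTEGER PRIMARY KEY,
    asset_id INTEGER NOT NULL REFERENCES asset(id),
    performed_at TEXT NOT NULL,
    datatype TEXT NOT NULL,      -- e.g. 'photo', 'lidar', 'inspection-report'
    payload_uri TEXT NOT NULL    -- pointer to the stored data blob
);
""")
conn.execute("INSERT INTO asset VALUES (1, NULL, 'Fleet A')")
conn.execute("INSERT INTO asset VALUES (2, 1, 'Aircraft 42')")
conn.execute("INSERT INTO maintenance_record VALUES "
             "(1, 2, '2024-01-15', 'photo', 's3://bucket/img1')")

# Walk the hierarchy with a recursive query to gather every record under Fleet A.
rows = conn.execute("""
WITH RECURSIVE subtree(id) AS (
    SELECT id FROM asset WHERE name = 'Fleet A'
    UNION ALL
    SELECT a.id FROM asset a JOIN subtree s ON a.parent_id = s.id
)
SELECT m.datatype, m.payload_uri FROM maintenance_record m
WHERE m.asset_id IN (SELECT id FROM subtree)
""").fetchall()
print(rows)  # [('photo', 's3://bucket/img1')]
```

The recursive query is what makes fleetwide trend monitoring cheap: one query gathers every record beneath any node of the hierarchy.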

SDARIT: Structural Damage And Repair Inferencing Tool

FTL Labs’ SDARIT (Structural Damage and Repair Inferencing Tool) is a software application that ingests 3D point clouds, such as those gathered via LIDAR, and automatically detects damage and generates a repair part inventory for facilities including Navy pier assets. SDARIT will assist FTL’s Navy customer with pier damage assessment, especially at remote sites where no prior design information exists. This is especially valuable where sending a Subject Matter Expert (SME) to assess pier damage onsite is difficult, dangerous, and costly due to the wide range of pier locations and their possible states of disrepair. With SDARIT, deployment of a LIDAR device by one or two technicians is all that is required to capture the data necessary for a detailed report on the state of repair of any pier in any location. The software is designed to work with any point cloud data; neither LIDAR specifically nor pier structures are strictly required.

The SDARIT program’s primary objectives are to develop technology for the automatic detection of architectural elements for inventory purposes, together with damage detection and estimation based on the ingestion of 3D field survey data such as LIDAR point clouds and 360-degree videos. SDARIT utilizes both custom-trained neural networks and algorithmic solutions to meet these objectives, and the automatic pier element inventory and damage detection results are to be used by the Navy for estimation of required repair kits. The software is designed to detect and quantify certain types of damage such as scouring, breakages, spalling, section loss, and battle damage without requiring user input. The damage is visualized through a “heatmap” display and intuitive 3D fly-through of the data. In addition, 360-degree video captured along with the LIDAR data, if available, is aligned and viewed in the same interface, giving users access to both damage-highlighted 3D point cloud data and photographic verification anywhere on the structure. This data is saved and is rapidly browsable by remote subject matter experts from the safety of their offices and desktop computers.
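The heatmap idea above can be caricatured with a tiny deviation-from-reference example. This is a hedged sketch, not SDARIT’s actual neural/algorithmic pipeline: the points, the flat-deck reference assumption, and the damage threshold are all invented for illustration. Each point receives a “heat” value (deviation from the nominal surface) and a damage flag.

```python
from statistics import median

# Toy stand-in for damage detection on a point cloud: flag samples whose
# deviation from a nominal reference surface exceeds a threshold.
# Points and threshold are illustrative, not real survey data.
def damage_heatmap(points, threshold=0.05):
    """points: list of (x, y, z). Returns (x, y, deviation, damaged?) per point."""
    z_ref = median(p[2] for p in points)  # nominal deck elevation (assumed flat)
    out = []
    for x, y, z in points:
        deviation = abs(z - z_ref)
        out.append((x, y, deviation, deviation > threshold))  # heat value + flag
    return out

deck = [(0, 0, 1.00), (1, 0, 1.01), (2, 0, 0.80),   # 0.80: a spalled region
        (0, 1, 0.99), (1, 1, 1.00), (2, 1, 1.00)]
result = damage_heatmap(deck)
damaged = [(x, y) for x, y, _, flag in result if flag]
print(damaged)  # [(2, 0)]
```

A real system would fit the reference surface rather than assume it, and would classify damage types (scour, spall, section loss) rather than merely flag deviation; the per-point heat values here are what a heatmap display would color.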

SHR3DR: Sub-seafloor High Resolution 3D Reconstruction

FTL Labs’ SHR3DR (Sub-Seafloor High Resolution 3D Reconstruction) is a hardware and software solution for the automatic detection of problems such as voids forming in the subterranean environments beneath Navy waterfront facilities. It is a tool intended to be used by NAVFAC and commercial site maintenance crews in the assessment of these areas, which are hidden and typically require expensive, destructive, and time-consuming dredging and digging to fully inspect. With SHR3DR, technician-level personnel can quickly and easily collect data for an entire wharf in a wide range of waterfront environments. The SHR3DR software acquires and analyzes seismic data using advanced acoustic signal processing algorithms and a custom-trained neural network, and outputs void locations and problem areas to the user.

FTL developed and demonstrated a proof of concept combining the hardware technologies, automation process, and software pipeline. The SHR3DR system includes a custom survey rig with both a portable seismic impulse generator and a hydrophone reel that can be extended to the desired water depth. The entire apparatus can be deployed on an ATV or truck for delivery and ease of movement down the length of a wharf or pier. At regular intervals along the surface, the hydrophone array is lowered into the water, the impulse generator excites acoustic waves, and the signals are sensed and saved. The full signal dataset for a wharf is then imported into the SHR3DR software, which processes the data to reduce noise and highlight important features. The neural network then detects possible voids and calculates information such as pile depths. This data can be registered with a 3D LIDAR surface scan and then presented to the user in an intuitive interface.
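The denoise-then-interpret step can be sketched with a toy trace. This is an illustration, not SHR3DR’s actual processing: the moving-average filter, the 1500 m/s velocity, and the synthetic reflection spike are all assumptions standing in for the specialized acoustic algorithms and trained neural network.

```python
# Toy seismic pipeline: smooth a noisy trace, pick the strongest arrival,
# and convert two-way travel time to depth. All values are illustrative.
def moving_average(signal, width=3):
    half = width // 2
    return [sum(signal[max(0, i - half):i + half + 1]) /
            len(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def arrival_depth(trace, dt, velocity):
    """Pick the peak after denoising; depth = v * t / 2 (two-way travel)."""
    smooth = moving_average(trace)
    i_peak = max(range(len(smooth)), key=lambda i: abs(smooth[i]))
    return velocity * (i_peak * dt) / 2.0

# Synthetic trace: small alternating noise with a reflection around sample 40.
trace = [0.01 * ((-1) ** i) for i in range(100)]
trace[39], trace[40], trace[41] = 0.5, 1.0, 0.5
depth = arrival_depth(trace, dt=0.001, velocity=1500.0)
print(depth)  # ~30.0 m: 1500 m/s * 0.040 s / 2
```

The averaging window happens to cancel the alternating noise here; real surveys would use proper band-pass filtering and stacking across the hydrophone array before any interpretation.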

The SHR3DR Phase I program’s primary objective has been to develop a proof-of-concept technology for the automatic detection of voids and calculation of pile depths in the subterranean areas below wharfs and piers. SHR3DR utilizes active seismic sensing with specialized algorithmic acoustic signal processing and a custom-trained neural network to meet these objectives, and the resulting survey tool is intended to be used by the Navy for determining whether facility repairs or maintenance are necessary.

The Navy has awarded FTL a Phase II contract to continue R&D on the SHR3DR program, during which the acoustic signal generation and sensing hardware will be built and tested, and a full software application will be developed.

MetaWrap

MetaWrap is a general-purpose software tool that uses CAD-compatible solid models as inputs and outputs, and that can wrap any planar periodic structure over any surface while preserving the electromagnetic performance of the unit cells. This technology is critical for next-generation airborne platforms that make extensive use of planar periodic lattice structures to manipulate electromagnetic interactions with the vehicles’ surfaces.

Using naïve methods, wrapping inevitably alters a metamaterial lattice’s electromagnetic properties in an undesirable way. In MetaWrap, this effect is minimized or reversed with specialized algorithms that vary the lattice geometry in response to its new conformation. Furthermore, MetaWrap ensures that the final lattice is interoperable with key manufacturing and simulation software. MetaWrap provides a modern user interface for wrapping any planar periodic structure over any desired surface, and will enable large-scale parallelization for computationally intensive jobs as the technology matures.
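The core geometric issue can be illustrated with the simplest possible target surface. The sketch below maps a row of unit-cell centers onto a cylinder so that arc length, and hence cell spacing, is preserved; the function name and the cylinder target are assumptions for illustration, whereas MetaWrap handles arbitrary surfaces and adjusts the cell geometry itself, not just cell placement.

```python
import math

# Arc-length-preserving placement of periodic unit cells on a cylinder.
# A naive "paste flat onto curve" mapping would distort the cell spacing;
# preserving arc length (s = r * theta) keeps neighbors ~pitch apart.
def wrap_on_cylinder(n_cells, pitch, radius):
    points = []
    for i in range(n_cells):
        s = i * pitch                 # planar arc-length coordinate
        theta = s / radius            # angle that preserves arc length
        points.append((radius * math.cos(theta), radius * math.sin(theta)))
    return points

pts = wrap_on_cylinder(n_cells=8, pitch=0.01, radius=0.1)
# Chord between consecutive wrapped cells stays very close to the pitch:
d = math.dist(pts[0], pts[1])
print(round(d, 5))  # 0.01
```

For doubly curved surfaces no mapping preserves all spacings exactly (Gauss’s Theorema Egregium), which is why distortion must be minimized algorithmically and compensated in the unit-cell geometry rather than eliminated outright.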

FTL’s MetaWrap system applies custom wrapping of array textures (a) based on wrapping algorithms that minimize spatial distortion (b) and enables real-time estimation of electromagnetic properties (c). The resulting spatially modified arrays can be cut and flattened for fabrication (d) using algorithms that automatically minimize distortion. An example is a volleyball (e) with a standard design (left) and a distortion-minimized design (right).

AM4Sight: Additive Manufacturing, Model-based, Multi-resolution, Machine Learning defect risk visualization tool

While AM systems, especially metal AM, bring revolutionary capabilities and have the potential to reduce supply chain issues and enable new designs through unique layer-by-layer fabrication, AM technologies currently suffer from defects within finished components. Defects such as porosity, inclusions, large-scale voids, and chemical inconsistencies can inhibit the functional performance of a part and reduce confidence in designing parts for AM.

While nondestructive evaluation (NDE) methods such as X-ray CT exist to identify defects, they are costly and time-consuming. FTL’s previous work, the Volumetric AM Metadata Engine (VAME), is Air Force-funded analysis software that provides a framework for AM knowledge capture adaptable to different metallic AM processes and design pipelines. Building on that code base, the AM4Sight (AM4 refers to AM-targeted Model-based, Multi-resolution, Machine Learning) tool adds novel 3D build-time data aggregation, Machine Learning (ML) defect detection, and probabilistic defect risk mapping to guide the CT operator and test designer, improving the efficiency, cost-effectiveness, and success of AM NDE/I.

AM4Sight uses FTL’s voxel visualization engine to identify the probability of a defect at every volume sample of the resulting AM part, as well as the severity of the defect in terms of associated failure modes of the part while in service. This provides “foresight” of defect type and location to the NDE/I technician to guide decisions on resolution, integration time, and test setup. This is a significant improvement over current commercial efforts to quantify the effects of defects on additively manufactured components, which focus on expensive destructive testing to qualify a printed component.
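A per-voxel risk map of this kind can be sketched very simply. Everything below is illustrative, not AM4Sight’s actual model: the risk = probability × severity formula, the threshold, and the sample voxel values are invented to show the shape of the data, where defect probability comes from ML detection and severity encodes in-service failure consequences at that location.

```python
# Hedged sketch of a voxel defect-risk map: each voxel index carries a defect
# probability and a severity weight; voxels whose combined risk exceeds a
# threshold are flagged to guide CT resolution and test setup.
def high_risk_voxels(prob, severity, threshold=0.1):
    """prob, severity: dicts mapping (i, j, k) voxel index -> value in [0, 1]."""
    flagged = []
    for v, p in prob.items():
        risk = p * severity.get(v, 1.0)   # unknown severity: assume worst case
        if risk > threshold:
            flagged.append((v, round(risk, 3)))
    return sorted(flagged, key=lambda t: -t[1])  # highest risk first

prob = {(0, 0, 0): 0.02, (5, 3, 1): 0.30, (5, 3, 2): 0.15}
severity = {(5, 3, 1): 0.9, (5, 3, 2): 0.5}   # e.g. near a fatigue-critical fillet
print(high_risk_voxels(prob, severity))  # [((5, 3, 1), 0.27)]
```

The ranked output is what lets a CT operator spend scan time where it matters instead of imaging the whole part at maximum resolution.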

FRAMER: Fast Reconstruction of Architectural Models from Existing Resources

FTL’s “FRAMER” (Fast Reconstruction of Architectural Models from Existing Resources) system has been developed for the Chemical and Biological Defense (CBD) program of the DoD. It combines specially trained neural networks with advanced 3D processing algorithms and intelligent data exports to enable high quality building model generation from both blueprints and photographs, and parameter exports for accurate transport and dispersion simulation.

FRAMER leverages FTL’s ongoing research utilizing neural networks for image and 3D data processing, object detection, and high-quality 3D model generation for rapid development. The software system provides high-fidelity 3D building models based on the data used as input, along with accurate exports to AR and VR applications and to transport and dispersion (T&D) software such as NIST’s CONTAM.

Blueprints constitute a unique problem for neural networks due to their widely varying range of quality, frequent lack of relevant information, and ambiguous distinctions between room types. FTL overcomes these challenges with a unique synthetic data training step that leverages cutting-edge research with a large dataset of accurate and annotated building models proven to increase neural network accuracy for building part and object detection. This data, which will be extended to include accurate blueprint output, enables new and existing neural networks to be trained easily and repeatedly, increasing the robustness of detection for typical objects such as walls, doors, and windows. The result of this NN-processed blueprint is a data structure containing all the building’s relevant features, from which a 3D building model can be procedurally generated.
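The final step above, procedural generation from a detected-feature data structure, can be sketched as follows. The field names, the fixed wall height, and the quad-per-wall output are assumptions for illustration; FRAMER’s real data structure captures far richer building features (doors, windows, room types).

```python
# Toy procedural generation: extrude NN-detected 2D wall segments into 3D
# wall quads. Field names and the default wall height are illustrative.
WALL_HEIGHT = 2.7  # meters, an assumed default

def extrude_walls(walls):
    """walls: list of dicts {'start': (x, y), 'end': (x, y)} from detection.
    Returns one 3D quad (4 vertices, counterclockwise) per wall segment."""
    quads = []
    for w in walls:
        (x0, y0), (x1, y1) = w["start"], w["end"]
        quads.append([(x0, y0, 0.0), (x1, y1, 0.0),
                      (x1, y1, WALL_HEIGHT), (x0, y0, WALL_HEIGHT)])
    return quads

detected = [{"start": (0, 0), "end": (4, 0)},    # as produced by the NN stage
            {"start": (4, 0), "end": (4, 3)}]
mesh = extrude_walls(detected)
print(len(mesh), mesh[0][2])  # 2 (4, 0, 2.7)
```

Because the intermediate representation is data rather than geometry, the same structure can feed both the 3D model generator and parameter exports to simulation tools.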

FRAMER’s generated 3D models can optionally be segmented and include objects labeled using state-of-the-art neural network research, bringing a richer experience to the existing virtual and augmented reality applications in use at CBD. These developments also include an additional neural network that automatically augments a 3D indoor scene with new objects and furnishings that match their surroundings.

This exciting research will enable FRAMER to provide true-to-life indoor building areas even when photos or scans of those rooms do not exist. Through FTL’s collaboration with the developers of CONTAM at NIST, FRAMER’s exported building parameters will support high quality physically accurate transport and dispersion modeling. The exported building data will be usable directly in CONTAM through the creation of building templates and automatic editing of project files. Additionally, FTL will leverage its extensive experience with the development of AR and VR applications for FRAMER’s high fidelity 3D building model exports.


FODHAT

FTL has current and active machine vision programs for the Navy, Air Force, and Northrop Grumman, most notably FTL’s FODHAT (FOD Or Defect Hazard Analysis Twin) machine vision and inspection system. FODHAT uses neural network-enhanced 2D and 3D imaging for autonomous inspection guidance during manufacturing. FTL has experience designing high-fidelity vision capture and sensor systems including video, still, time-of-flight, and LIDAR. For FODHAT, FTL has combined high-resolution video capture with artificial intelligence algorithms to enable adaptive target identification that detects millimeter-sized discrepancies in fasteners, brackets, and connectors. This system has the potential to save aircraft manufacturers millions in manual inspection costs, a time-consuming process highly susceptible to human error.


VAME

FTL works with the Center for eDesign to develop tools for next-generation manufacturing systems. VAME (Volumetric Additive Manufacturing Metadata Engine) is a software tool that brings metallic Additive Manufacturing (AM) capabilities and defect detection into the digital design process. It uses a voxel representation on top of conventional CAD part models to store design and process information that affects the viability of AM parts. VAME provides actionable alerts on AM design issues and pinpoints exact voxels where an issue has arisen, improving dependability and certification of AM parts.

Using VAME, 3D printer data can be recorded during the fabrication of an AM part and stored geometrically by voxel to provide a pedigree that details the fabrication process. The software tool is mature: real-time data collection and voxelization of 3 million data points during an AM test fabrication has been demonstrated, capturing laser power, melt pool diameter, melt pool temperature, surface finish, build head dynamics, and other parameters. Using VAME, a part data package containing 1 TB of data can be easily browsed, correlated, and evaluated.
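The voxelization step described above can be sketched in a few lines. This is a minimal illustration, not VAME’s implementation: the voxel size, the choice of laser power as the tracked value, and the running-mean aggregation are all assumptions showing how streamed sensor samples become a geometric pedigree.

```python
from collections import defaultdict

# Minimal build-time voxelization sketch: bin position-tagged sensor samples
# into a voxel grid, keeping the mean process value per voxel.
def voxelize(samples, voxel_size=0.001):
    """samples: iterable of (x, y, z, value). Returns {voxel index: mean value}."""
    sums = defaultdict(lambda: [0.0, 0])
    for x, y, z, value in samples:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        sums[key][0] += value
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

samples = [(0.0001, 0.0002, 0.0, 195.0),   # (x, y, z in m, laser power in W)
           (0.0004, 0.0003, 0.0, 205.0),
           (0.0015, 0.0002, 0.0, 180.0)]
grid = voxelize(samples)
print(grid)  # {(0, 0, 0): 200.0, (1, 0, 0): 180.0}
```

Indexing by voxel rather than by timestamp is what lets a terabyte-scale data package be browsed geometrically: every recorded parameter is addressable at the location in the part where it was measured.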


For more information on how we can find the solution for you, get in touch today.