3D Mesh Generation is an online 3D model generation tool from Anything World. It uses artificial intelligence to let users quickly generate 3D models from simple text descriptions or uploaded images. The technology matters because it greatly simplifies the 3D model creation process, allowing users without professional 3D modeling skills to easily create high-quality 3D content. Anything World is committed to providing innovative 3D content creation solutions through its platform, and 3D Mesh Generation is an important part of its product line. Specific pricing plans are visible after registration.
MakerLab is an online platform offering a variety of 3D model design tools, including a vase generator, a sign customizer, and more, so users can quickly and easily create personalized 3D models. The platform lets users build works from templates and also provides a creative testing ground where they can try cutting-edge features such as the AI scanner. MakerLab is operated by Bambu Lab and aims to give users a space to freely create and share ideas. The platform currently offers both free and paid services, and users can choose whichever suits their needs.
3DTopia-XL is a high-quality 3D asset generation technology built on the Diffusion Transformer (DiT) and a novel 3D representation called PrimX. PrimX encodes a 3D shape's geometry, texture, and materials into a compact N x D tensor: each token is a volumetric primitive anchored on the shape surface, whose voxelized payload encodes signed distance field (SDF), RGB, and material values. The model generates 3D PBR assets from text or image input in only 5 seconds and fits directly into graphics pipelines.
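To make that tensor layout concrete, here is a minimal NumPy sketch of how such an N x D tensor could be laid out and unpacked; the payload resolution, channel split, and all names are illustrative assumptions, not 3DTopia-XL's actual code.

```python
import numpy as np

N = 2048          # number of volumetric primitives anchored on the surface
RES = 4           # voxel resolution of each primitive's payload (assumed)
# Per-voxel channels: 1 SDF + 3 RGB + 2 material (e.g. metallic, roughness).
CHANNELS = 1 + 3 + 2
D = 3 + 1 + RES**3 * CHANNELS   # position (3) + scale (1) + payload

primx = np.zeros((N, D), dtype=np.float32)

def unpack(token: np.ndarray):
    """Split one primitive token into position, scale, and voxel payload."""
    position = token[:3]                      # anchor on the shape surface
    scale = token[3]                          # primitive extent
    payload = token[4:].reshape(RES, RES, RES, CHANNELS)
    sdf, rgb, material = payload[..., :1], payload[..., 1:4], payload[..., 4:]
    return position, scale, sdf, rgb, material

position, scale, sdf, rgb, material = unpack(primx[0])
print(position.shape, sdf.shape, rgb.shape, material.shape)
```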
Phidias is an innovative generative model that uses diffusion for reference-augmented 3D generation. It generates high-quality 3D assets from images, text, or 3D conditions in seconds. Three key components significantly improve generation quality, generalization, and controllability: a Meta-ControlNet that dynamically adjusts conditioning strength, dynamic reference routing, and self-reference augmentation. Phidias provides a unified framework for 3D generation conditioned on text, images, and 3D inputs, with a wide range of applications.
MeshAnything is a model that uses autoregressive transformers for artist-grade mesh generation: it converts any 3D representation of an asset into artist-created meshes (AMs) that can be applied seamlessly across the 3D industry. It generates meshes with far fewer faces, significantly improving storage, rendering, and simulation efficiency while achieving accuracy comparable to previous methods.
MaPa is an innovative approach to generating materials for 3D meshes from textual descriptions. It represents appearance with segmented procedural material graphs, enabling high-quality rendering and significant editing flexibility. By leveraging pre-trained 2D diffusion models, MaPa bridges the gap between text descriptions and material graphs without requiring large amounts of paired data. The method decomposes the shape into multiple segments and uses a segment-controlled diffusion model to synthesize 2D images aligned with the mesh parts; it then initializes the parameters of the material graphs and fine-tunes them through a differentiable rendering module to produce materials that conform to the text description. Extensive experiments show that MaPa outperforms existing techniques in fidelity, resolution, and editability.
Interactive3D is an advanced 3D generative model that provides users with precise control through interactive design. The model adopts a two-stage cascade structure, utilizing different 3D representation methods, allowing the user to modify and guide at any intermediate step of the generation process. Its importance lies in enabling users to have fine control over the 3D model generation process, thereby creating high-quality 3D models that meet specific needs.
GRM is a large-scale reconstruction model that recovers 3D assets from sparse-view images in about 0.1 seconds and supports generation in 8 seconds. It is a feed-forward Transformer-based model that efficiently fuses multi-view information to convert input pixels into pixel-aligned Gaussians, which are unprojected into a dense collection of 3D Gaussians representing the scene. Together, the Transformer architecture and the 3D Gaussian representation unlock a scalable and efficient reconstruction framework. Extensive experiments demonstrate the method's superiority over alternatives in both reconstruction quality and efficiency. GRM also shows potential in generative tasks such as text-to-3D and image-to-3D when combined with existing multi-view diffusion models.
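For intuition, here is a hedged NumPy sketch of the pixel-aligned Gaussian idea: one predicted depth per pixel is back-projected through a pinhole camera to place a 3D Gaussian center per pixel. All shapes, attribute choices, and names are assumptions for illustration, not GRM's implementation.

```python
import numpy as np

H, W = 64, 64
fx = fy = 60.0
cx, cy = W / 2, H / 2

depth = np.ones((H, W))                    # per-pixel depth from the network
opacity = np.full((H, W, 1), 0.5)          # per-pixel Gaussian attributes
color = np.random.rand(H, W, 3)
scale = np.full((H, W, 3), 0.01)

u, v = np.meshgrid(np.arange(W), np.arange(H))
# Pinhole back-projection: pixel (u, v) with depth z -> camera-space point.
x = (u - cx) / fx * depth
y = (v - cy) / fy * depth
centers = np.stack([x, y, depth], axis=-1)          # (H, W, 3)

# Flatten to a dense collection of H*W Gaussians representing the scene;
# multi-view predictions would simply be concatenated here.
gaussians = np.concatenate(
    [centers, scale, color, opacity], axis=-1
).reshape(-1, 10)
print(gaussians.shape)   # (4096, 10): center(3) + scale(3) + rgb(3) + alpha(1)
```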
3D AI Studio is an online tool based on artificial intelligence technology that can easily generate customized 3D models. Suitable for designers, developers and creative people, providing high-quality digital assets. Users can quickly create 3D models through the AI generator and export them in FBX, GLB or USDZ format. 3D AI Studio features high performance, user-friendly interface, and automatic generation of real textures, which can significantly shorten modeling time and reduce costs.
CSM 3D Viewer is an online 3D model viewer that allows users to view and interact with 3D models on the web. It supports a variety of 3D file formats and provides basic operations such as rotation and scaling, as well as more advanced viewing functions. CSM 3D Viewer is suitable for designers, engineers and 3D enthusiasts, helping them display and share 3D works more intuitively.
ComfyUI-3D-Pack is a powerful collection of 3D processing nodes that gives ComfyUI the ability to handle 3D inputs (meshes, UV textures, etc.) using cutting-edge algorithms such as 3D Gaussian splatting and neural radiance fields (NeRF) with differentiable rendering. It can reconstruct a 3D Gaussian model from a single-view image and convert it into a triangular mesh, and it also supports multi-view images as input, allowing texture maps rendered from multiple views to be mapped onto a given 3D mesh. The plug-in package is still under development and has not yet been officially released to the ComfyUI plug-in library, but it already supports large multi-view Gaussian models, Triplane Gaussian Transformers, 3D Gaussian sampling, depth-based mesh triangulation, 3D file loading and saving, and an interactive 3D visualization interface. It aims to be a powerful tool for handling 3D content in ComfyUI.
BlockFusion is a diffusion-based model that generates 3D scenes and seamlessly extends them with new blocks. It is trained on a dataset of 3D blocks randomly cropped from complete 3D scene meshes. Through per-block fitting, all training blocks are converted into hybrid neural fields: a tri-plane holding geometric features, followed by a multilayer perceptron (MLP) that decodes signed distance values. A variational autoencoder compresses the tri-planes into a latent tri-plane space, on which the denoising diffusion process is performed. Applying diffusion to these latent representations enables high-quality and diverse 3D scene generation. To expand a scene during generation, one simply appends empty blocks overlapping the current scene and extrapolates the existing latent tri-planes into the new blocks. Extrapolation is done by conditioning the generation process on feature samples from the overlapping tri-planes during the denoising iterations. Latent tri-plane extrapolation produces semantically and geometrically meaningful transitions that blend harmoniously with the existing scene. A 2D layout conditioning mechanism controls the placement and arrangement of scene elements. Experiments show that BlockFusion can generate diverse, geometrically consistent, high-quality large-scale 3D scenes, both indoor and outdoor.
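As a rough illustration of the tri-plane-plus-MLP neural field this entry describes, the sketch below samples three axis-aligned feature planes at a query point and decodes a signed distance with a tiny MLP. The resolution, feature width, and toy decoder are assumptions; the real model uses bilinear sampling and learned weights.

```python
import numpy as np

R, C = 32, 8                               # plane resolution, feature channels
planes = {a: np.random.randn(R, R, C) * 0.01 for a in ("xy", "xz", "yz")}
W1, b1 = np.random.randn(C, 16) * 0.1, np.zeros(16)
W2, b2 = np.random.randn(16, 1) * 0.1, np.zeros(1)

def sample(plane, u, v):
    """Nearest-neighbor plane lookup (bilinear in the real model)."""
    i = int(np.clip((u * 0.5 + 0.5) * (R - 1), 0, R - 1))
    j = int(np.clip((v * 0.5 + 0.5) * (R - 1), 0, R - 1))
    return plane[i, j]

def sdf(p):
    """Decode the signed distance at point p in [-1, 1]^3."""
    x, y, z = p
    feat = (sample(planes["xy"], x, y)
            + sample(planes["xz"], x, z)
            + sample(planes["yz"], y, z))
    h = np.maximum(feat @ W1 + b1, 0.0)    # ReLU MLP decoder
    return (h @ W2 + b2)[0]

print(sdf(np.array([0.1, -0.2, 0.3])))
```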
TIP-Editor is an accurate 3D editor that accepts both text and image prompts, allowing users to precisely control the appearance and position of the edited region through the prompts and a 3D bounding box. It employs a stepwise 2D personalization strategy to better learn representations of the existing scene and the reference image, enabling precise appearance control through local editing. TIP-Editor uses explicit and flexible 3D Gaussian splatting as the 3D representation, enabling local edits while keeping the background intact. Extensive experiments show that TIP-Editor edits accurately within the specified bounding box according to the text and image prompts, and that its editing quality and alignment with the prompts are qualitatively and quantitatively better than the baselines.
3DTopia is a two-stage text-to-3D generative model. The first stage uses a diffusion model to quickly generate candidates. The second stage optimizes the assets selected in the first stage. This model enables high-quality text-to-3D generation in under 5 minutes.
Make-A-Shape is a new 3D generative model designed to train efficiently on large-scale data, leveraging 10 million publicly available shapes. It introduces a wavelet-tree representation that compactly encodes a shape via a subband coefficient filtering scheme, then arranges the representation on a low-resolution grid with a subband coefficient packing scheme so that it can be generated by a diffusion model. A subband adaptive training strategy lets the model effectively learn to generate both coarse and fine wavelet coefficients. The framework further extends to additional input conditions, generating shapes from modalities such as single/multi-view images, point clouds, and low-resolution voxels. Extensive experiments demonstrate applications including unconditional generation, shape completion, and conditional generation. The method not only surpasses the state of the art in quality but also generates shapes in seconds, typically just 2 seconds in most conditions.
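The wavelet-tree idea can be approximated with off-the-shelf wavelet transforms. The sketch below decomposes a toy voxelized SDF with PyWavelets, keeps only large-magnitude detail coefficients as a stand-in for the paper's subband coefficient filtering, and reconstructs; the threshold, grid size, and wavelet choice are assumptions, and the paper's packing scheme is more involved.

```python
import numpy as np
import pywt

n = 64
ax = np.linspace(-1, 1, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5    # signed distance to a sphere

coeffs = pywt.wavedecn(sdf, wavelet="haar", level=2)
coarse = coeffs[0]                          # low-resolution approximation
print("coarse grid:", coarse.shape)         # (16, 16, 16)

kept, total = 0, 0
for level in coeffs[1:]:                    # dicts of detail subbands
    for name, band in level.items():
        total += band.size
        mask = np.abs(band) > 1e-2          # subband coefficient filtering
        kept += int(mask.sum())
        band[~mask] = 0.0                   # drop small coefficients in place
print(f"kept {kept}/{total} detail coefficients")

recon = pywt.waverecn(coeffs, wavelet="haar")
print("reconstruction error:", float(np.abs(recon - sdf).max()))
```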
AnimatableDreamer is a framework for generating and reconstructing animatable non-rigid 3D models from monocular videos. It can generate non-rigid objects of different categories while following the object motions extracted from the video. Its key technique, canonical score distillation, simplifies the generation dimension from 4D to 3D by denoising different frames of the video while performing the distillation in a single canonical space, ensuring temporally consistent generation and morphological plausibility across poses. With the help of differentiable deformation, AnimatableDreamer lifts the 3D generator to 4D, offering a new perspective on generating and reconstructing non-rigid 3D models. Moreover, combined with the inductive knowledge of the consistency diffusion model, canonical score distillation can regularize the reconstruction from novel views, closing the loop and enhancing the generation process. Extensive experiments show the method can generate highly flexible, text-guided 3D models from monocular videos while achieving better reconstruction than typical non-rigid reconstruction methods.
HexaGen3D is an innovative method for generating high-quality 3D assets from text prompts. It leverages a large pre-trained 2D diffusion model by fine-tuning a pre-trained text-to-image model to jointly predict 6 orthographic projections and the corresponding latent tri-plane, then decodes these latents to produce a textured mesh. HexaGen3D requires no per-sample optimization and can infer high-quality, diverse objects from text prompts in 7 seconds, offering a better quality-to-latency trade-off than existing methods. It also generalizes well to new objects or compositions.
InseRF is a method for inserting new objects into NeRF-reconstructed 3D scenes via text prompts and 2D bounding boxes. Given a user-supplied text description and a 2D bounding box in a reference viewpoint, it generates a new 3D object and inserts it into the scene. The method enables controllable, 3D-consistent object insertion without requiring explicit 3D information as input. Experiments across multiple 3D scenes demonstrate its effectiveness relative to existing methods.
This product is a 3D GAN technique that learns neural volume rendering to resolve fine-grained 3D geometry with unprecedented detail. It uses a learned sampler to accelerate 3D GAN training with fewer depth samples, rendering every pixel of the full-resolution image at both training and inference time. It learns high-quality surface geometry and synthesizes strictly view-consistent images with high-resolution 3D geometry. The method demonstrates state-of-the-art 3D geometry quality on FFHQ and AFHQ, setting a new standard for unsupervised learning of 3D shapes in 3D GANs.
Tinkercad is a free, easy-to-use 3D design, electronic circuit and coding application. It provides project-based learning to help students build confidence in STEM in the classroom. Tinkercad is suitable for the learning and practice of three-dimensional design, circuits and coding. It can be used to make product models, printable parts, electronic circuits and coding programs.
SceneWiz3D is a novel method for synthesizing high-fidelity 3D scenes from text. It uses a hybrid 3D representation: explicit for objects and implicit for the scene. Users can generate objects through conventional text-to-3D methods or supply their own. To configure the scene layout and place objects automatically, it applies particle swarm optimization during the optimization process. Furthermore, in the text-to-scene setting it is difficult to obtain multi-view supervision for certain parts of the scene (e.g., corners, occlusions), which leads to inferior geometry; to alleviate this lack of supervision, an RGBD panorama diffusion model is introduced as an additional prior, achieving high-quality geometry. Extensive evaluation shows the method achieves higher quality than previous approaches, generating detailed and view-consistent 3D scenes.
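To show what particle-swarm layout optimization might look like, here is a toy sketch in which each particle encodes 2D positions for a handful of objects and the score penalizes overlapping footprints. The objective and every constant are illustrative assumptions, not SceneWiz3D's actual criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
n_objects, radius = 4, 0.6
n_particles, dim = 32, n_objects * 2

def overlap_cost(layout):
    """Sum of pairwise interpenetration between circular footprints."""
    pos = layout.reshape(n_objects, 2)
    cost = 0.0
    for i in range(n_objects):
        for j in range(i + 1, n_objects):
            d = np.linalg.norm(pos[i] - pos[j])
            cost += max(0.0, 2 * radius - d)
    return cost

x = rng.uniform(-2, 2, (n_particles, dim))      # particle positions
v = np.zeros_like(x)                            # particle velocities
pbest = x.copy()
pbest_f = np.array([overlap_cost(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    # Standard PSO update: inertia + pull toward personal and global bests.
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([overlap_cost(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("final overlap cost:", overlap_cost(gbest))
```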
A curated collection of 3D Gaussian splatting resources, covering the ecosystem and tools, research papers, Unity Gaussian splatting projects, and more. The technique is widely applied in 3D editing, real-time point-cloud relighting, inverse rendering, data compression, anti-aliasing, and other areas, making the collection a valuable reference for anyone interested in 3D Gaussian splatting.
Spline AI is a tool for quickly generating 3D objects, animations, and textures with AI. With simple prompts, designers can turn ideas into reality faster. Features include generating 3D objects and scenes, editing objects, applying materials, adding lighting, and generating seamless textures from text prompts. The product is suitable for designers, artists, and creative teams.
MeshGPT creates triangle meshes by autoregressively sampling from a transformer trained to produce tokens from a learned geometric vocabulary; the sampled tokens are then decoded into the faces of a triangle mesh. The method generates clean, coherent, and compact meshes with sharp edges and high fidelity. MeshGPT performs significantly better than existing mesh generation methods in shape coverage, with FID scores improving by 30 points across various categories.
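A minimal sketch of that decode step, simplified to one token per face: sampled tokens index a codebook, and a toy linear decoder maps each entry to a triangle's nine coordinates. The uniform sampler stands in for the transformer, and the vocabulary size and decoder are assumptions; the real model maps sequences of tokens to faces.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 512
codebook = rng.normal(size=(VOCAB, 9))     # learned embeddings (stand-in)
W = rng.normal(size=(9, 9)) * 0.1          # toy linear "decoder" to coords

def sample_next_token(prefix):
    """Stand-in for the transformer's next-token distribution."""
    return int(rng.integers(VOCAB))

tokens = []
for _ in range(6):                          # sample 6 tokens -> 6 faces
    tokens.append(sample_next_token(tokens))

faces = np.stack([codebook[t] @ W for t in tokens]).reshape(-1, 3, 3)
print(faces.shape)                          # (6, 3, 3): faces x verts x xyz
```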
3D Paintbrush is a technique for automatically texturing local semantic regions on a mesh from a text description. The method operates directly on the mesh, producing texture maps that integrate seamlessly into standard graphics pipelines, and it simultaneously generates a localization map of the specified edit region together with a matching texture map. It uses multiple stages of a cascaded diffusion model to supervise the local edits, enhancing the detail and resolution of the textured area. This technique, called Cascaded Score Distillation (CSD), distills scores at multiple resolutions in a cascaded manner, giving control over both the granularity and the global understanding of the supervision. The effectiveness of 3D Paintbrush is demonstrated by locally texturing a variety of shapes within different semantic regions.
Genie is a research preview of Luma's 3D generative foundation model. It can generate a wide variety of three-dimensional models for design, creation, entertainment, and other fields, with features including shape generation, texture painting, and animation creation. It can be applied in game development, virtual reality, movie special effects, and more. Pricing and positioning will be determined before the official launch.
JianE.com is a comprehensive tool specially created for architectural designers. It provides a variety of design resources such as 3D models, SU models, textures, construction drawings, etc., and supports panoramic, cloud rendering, AI color plan and other functions, aiming to improve design efficiency and quality. JianE.com also provides design cases, video courses, certificate examinations and other services with flexible pricing suitable for individual and corporate users.
SketchUp is a powerful 3D modeling software that can be used in architecture, interior design, landscape design, mechanical design and other fields. It provides easy-to-use tools that enable users to quickly create beautiful 3D models. SketchUp also has a rich extension library, allowing users to customize the 3D workspace according to their needs. SketchUp supports exporting files in multiple formats, including 3D printing files, CAD files, image files, etc. SketchUp's pricing is flexible, and users can choose to subscribe or purchase a perpetual license.
Farm3D is software that generates controllable 3D models from a single image. It learns a monocular reconstruction network using training data produced by the image generator Stable Diffusion. The network can recover a detailed 3D model from a single input image, including shape, appearance, viewpoint, and lighting direction. Farm3D suits designers, artists, and model makers who need to generate high-quality 3D models quickly.
Masterpiece Studio is the first complete VR 3D creative suite for independent creators, using generative artificial intelligence technology to easily generate, edit and publish 3D works. It provides a range of powerful tools that allow users to quickly create amazing virtual reality experiences. Masterpiece Studio has an intuitive and easy-to-use interface that supports various 3D creation needs, including modeling, texturing, animation, etc. Users can directly perform interactive editing through VR devices, and can also export their works to various platforms for display and publication. Masterpiece Studio also provides a rich material library and templates to help users get started quickly and realize unlimited creative possibilities.
GET3D is a generative model learned from images that directly produces 3D models with complex topology, rich geometric details, and high-fidelity textures. It is trained from a collection of 2D images by combining differentiable surface modeling, differentiable rendering, and 2D generative adversarial networks. GET3D can generate high-quality textured 3D models covering categories such as cars, chairs, animals, motorcycles, and people, a significant improvement over previous methods.
Spline is a free 3D design tool that runs in the browser with real-time collaboration, for creating interactive experiences for the web. It offers simple 3D modeling, animation, texturing, and related functions, and suits both teamwork and individual creation. Visit the official website for pricing and more information.