3D

From Things and Stuff Wiki


General

  • https://en.wikipedia.org/wiki/3D_computer_graphics - graphics that use a three-dimensional representation of geometric data (often Cartesian) that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real-time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and 3D may use 2D rendering techniques.



  • https://en.wikipedia.org/wiki/3D_modeling - the process of developing a mathematical representation of any surface of an object (either inanimate or living) in three dimensions via specialized software. The product is called a 3D model. Someone who works with 3D models may be referred to as a 3D artist. It can be displayed as a two-dimensional image through a process called 3D rendering or used in a computer simulation of physical phenomena. The model can also be physically created using 3D printing devices.


  • https://en.wikipedia.org/wiki/3D_rendering - the 3D computer graphics process of automatically converting 3D wire frame models into 2D images on a computer. 3D renders may include photorealistic effects or non-photorealistic rendering.

Rendering software


POV-Ray

  • POV-Ray - The Persistence of Vision Raytracer, a high-quality, Free Software tool for creating stunning three-dimensional graphics. The source code is available for those wanting to do their own ports.


Ptex

  • Ptex - a texture mapping system developed by Walt Disney Animation Studios for production-quality rendering.


Aqsis

  • Aqsis - a cross-platform photorealistic 3D rendering solution, adhering to the RenderMan interface standard defined by Pixar Animation Studios. The Aqsis project itself consists of a number of components, each useful in their own right, but contributing to the larger aims of the project as a whole. At this time, there are two such components: the aqsis rendering tools and the RIBMosaic exporter for Blender. Each component has its own area detailed below.

Kerkythea

  • Kerkythea - freeware rendering software that can produce high-quality renders without spending a cent on software licensing. Kerkythea uses physically accurate materials and lights, aiming for the best-quality rendering in the most efficient timeframe. It aims to simplify the task of quality rendering by providing the tools needed to automate scene setup, such as staging using the GL real-time viewer, a material editor, general/render settings and editors, under a common interface.

Sunflow

  • Sunflow - an open source rendering system for photo-realistic image synthesis. It is written in Java and built around a flexible ray tracing core and an extensible object-oriented design.

Grasshopper

  • Grasshopper - graphical algorithm editor tightly integrated with Rhino’s 3-D modeling tools. Unlike RhinoScript, Grasshopper requires no knowledge of programming or scripting, but still allows designers to build form generators from the simple to the awe-inspiring. 


Crystal Space 3D

  • Crystal Space 3D - a mature, full-featured Software Development Kit (SDK) providing real-time 3D graphics for applications such as games and virtual reality. It is free (LGPL) and cross-platform (Windows, GNU/Linux, Mac OS X).

Irrlicht

  • Irrlicht Engine - an open source, high-performance realtime 3D engine written in C++. It is completely cross-platform, using D3D, OpenGL and its own software renderers, and has all of the state-of-the-art features found in commercial 3D engines. It has a large, active community, and there are many projects in development that use the engine. Enhancements for Irrlicht can be found all over the web, such as alternative terrain renderers, portal renderers, exporters, world layers, tutorials, editors, and language bindings for Java, Perl, Ruby, BASIC, Python, Lua and more. And best of all: it's completely free.


OGRE

  • OGRE - (Object-Oriented Graphics Rendering Engine) is a scene-oriented, flexible 3D engine written in C++ designed to make it easier and more intuitive for developers to produce applications utilising hardware-accelerated 3D graphics. The class library abstracts all the details of using the underlying system libraries like Direct3D and OpenGL and provides an interface based on world objects and other intuitive classes.


Mitsuba

  • Mitsuba - research-oriented rendering system in the style of PBRT, from which it derives much inspiration. It is written in portable C++, implements unbiased as well as biased techniques, and contains heavy optimizations targeted towards current CPU architectures. Mitsuba is extremely modular: it consists of a small set of core libraries and over 100 different plugins that implement functionality ranging from materials and light sources to complete rendering algorithms.
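
  A minimal sketch of driving that plugin architecture from Python, assuming the Mitsuba 3 bindings (the blurb above may describe an earlier release); the scene is assembled from named plugins, here via the bundled Cornell box helper:

    import mitsuba as mi

    mi.set_variant("scalar_rgb")              # choose a CPU rendering variant
    scene = mi.load_dict(mi.cornell_box())    # scene built from plugin dictionaries
    image = mi.render(scene, spp=16)          # path trace with 16 samples per pixel
    mi.util.write_bitmap("cornell.png", image)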

Ephtracy

  • Ephtracy - free lightweight 8-bit voxel art editor and interactive path tracing renderer, Windows/Mac [2]

VoxelShop

  • VoxelShop - extremely intuitive and powerful software for OSX, Windows and Linux for modifying and creating voxel objects. It was designed from the ground up in close collaboration with artists.

SpriteStack

  • SpriteStack - a 3D pixelart editor based on sprite stacking technique [3]

Random dot stereogram raycaster

  • https://github.com/ammonb/stereogram-raycaster - a real-time 3D engine (ray caster) that renders to single-image random-dot stereogram (the images made popular in the Magic Eye books). I wrote this because I was curious if the brain would be able to follow a stereogram in motion. Click on the screen and press 3 after the program starts to render in stereogram.
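
  The underlying trick is simple: every pixel copies a pixel one pattern width to its left, shifted horizontally by an amount proportional to depth, so the eyes fuse the repeats at different apparent distances. A rough Python/NumPy sketch of that idea (not the repository's code, which does this per frame on a raycast depth buffer):

    import numpy as np

    def random_dot_stereogram(depth, pattern_width=80, max_shift=24):
        """depth: 2D array in [0, 1], with 1.0 the nearest. Returns a 0/1 dot image."""
        h, w = depth.shape
        img = (np.random.rand(h, w) > 0.5).astype(np.uint8)    # seed with random dots
        for y in range(h):
            for x in range(pattern_width, w):
                shift = int(depth[y, x] * max_shift)             # nearer = larger shift
                img[y, x] = img[y, x - pattern_width + shift]    # copy from the repeat
        return img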

to sort



  • https://github.com/kosua20/Rendu - a rendering engine designed for experimentation. The computer graphics academic and industrial literature is full of interesting techniques and approaches that can be cumbersome to implement without some basic building blocks. This project aims to provide those building blocks, along with examples of interesting methods or papers. It also contains more general demo applications, such as a small snake game or a gamepad configurator.




Modelling and rendering

  • Open Game Engine Exchange - Wikipedia - a text-based file format designed to facilitate the transfer of complex 3D scene data between applications such as modeling tools and game engines. The OpenGEX format is built upon the data structure concepts defined by the Open Data Description Language (OpenDDL), a generic language for the storage of arbitrary data in human-readable format. The OpenGEX file format is registered with the Internet Assigned Numbers Authority (IANA) as the model/vnd.opengex media type. The OpenGEX format is defined by the Open Game Engine Exchange Specification, which is available on the official website opengex.org. Export plugins that write the OpenGEX format are available for Autodesk Maya, 3D Studio Max, and Blender.


  • https://en.wikipedia.org/wiki/GlTF - GL Transmission Format - a file format for 3D scenes and models using the JSON standard. It is described by its creators as the "JPEG of 3D." It is an API-neutral runtime asset delivery format developed by the Khronos Group 3D Formats Working Group and announced at HTML5DevConf 2016. The intention is that glTF be an efficient, interoperable asset delivery format that compresses the size of 3D scenes and minimizes runtime processing by applications using WebGL and other APIs. glTF also defines a common publishing format for 3D content tools and services.
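
  Because glTF is plain JSON (optionally with binary buffers), a minimal asset can be written by hand. A sketch of the top-level structure with a single empty scene node; a real mesh would add buffers, bufferViews and accessors:

    import json

    gltf = {
        "asset": {"version": "2.0"},    # mandatory version marker
        "scene": 0,
        "scenes": [{"nodes": [0]}],
        "nodes": [{"name": "root"}],    # an empty node; meshes would reference
    }                                   # buffer/bufferView/accessor entries
    with open("minimal.gltf", "w") as f:
        json.dump(gltf, f, indent=2)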


  • https://en.wikipedia.org/wiki/COLLADA - COLLAborative Design Activity is an interchange file format for interactive 3D applications. It is managed by the nonprofit technology consortium, the Khronos Group, and has been adopted by ISO as a publicly available specification, ISO/PAS 17506. COLLADA defines an open standard XML schema for exchanging digital assets among various graphics software applications that might otherwise store their assets in incompatible file formats. COLLADA documents that describe digital assets are XML files, usually identified with a .dae (digital asset exchange) filename extension.
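
  Since .dae documents are ordinary XML, they can be inspected with the standard library alone. A sketch that lists geometry ids (the namespace shown is the COLLADA 1.4.1 one; 1.5 files use a different URI):

    import xml.etree.ElementTree as ET

    NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}   # 1.4.1 namespace

    root = ET.parse("model.dae").getroot()
    for geom in root.findall(".//c:library_geometries/c:geometry", NS):
        print(geom.get("id"), geom.get("name"))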


Blender



  • Blend Swap - Open Source 3D models by Blender users for Blender users. Come swap some blends with friends!



Sweet Home 3D

  • Sweet Home 3D - a free interior design application that helps you draw the plan of your house, arrange furniture on it and visit the results in 3D.


Render engines


Addons



Render farms



  • Crowdrender - helps you build a free render farm using your own hardware. Whether you render using GPU or CPU, their network rendering add-on will help you render your scenes amazingly quickly using multiple computers.
  • Golem - a global, open source, decentralized supercomputer that anyone can access. It is made up of the combined power of users’ machines, from PCs to entire data centers. Golem is capable of computing a wide variety of tasks, from CGI rendering through machine learning to scientific computing. Golem’s limitations are only defined by its developer community’s creativity. Golem creates a decentralized sharing economy of computing power and supplies software developers with a flexible, reliable and cheap source of computing power.



Unity

  • Unity - features multiple tools that enable rapid editing and iteration in your development cycles, including Play mode for quick previews of your work in real-time.
      • All-in-one editor: available on Windows, Mac, and Linux, it includes a range of artist-friendly tools for designing immersive experiences and game worlds, as well as a strong suite of developer tools for implementing game logic and high-performance gameplay.
      • 2D & 3D: Unity supports both 2D and 3D development with features and functionality for your specific needs across genres.
      • AI pathfinding tools: Unity includes a navigation system that allows you to create NPCs that can intelligently move around the game world. The system uses navigation meshes that are created automatically from your Scene geometry, or even dynamic obstacles, to alter the navigation of the characters at runtime.
      • Efficient workflows: Unity Prefabs, which are preconfigured Game Objects, provide you with efficient and flexible workflows that enable you to work confidently, without the worry of making time-consuming errors.
      • User interfaces: Our built-in UI system allows you to create user interfaces fast and intuitively.
      • Physics engines: Take advantage of Box2D, the new DOTS-based Physics system and NVIDIA PhysX support for highly realistic and high-performance gameplay.
      • Custom tools: You can extend the Editor with whatever tools you need to match your team’s workflow. Create and add customized extensions or find what you need on our Asset Store, which features thousands of resources, tools and extensions to speed up your projects.






  • https://github.com/keijiro/MidiAnimationTrack - a custom timeline/playables package that provides functionality to control object properties based on sequence data contained in a standard MIDI file (.mid file). This allows you to create musically synchronized animation using a DAW (digital audio workstation), which makes it easier to manage accurately synchronized timings than non-musical timeline editors like Unity's own.
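
  The package itself is C# for Unity, but the underlying idea, pulling note timings out of a standard .mid file and keying animation off them, is easy to sketch in Python with the third-party mido library (an assumption; it is not part of this package):

    import mido   # third-party MIDI parser, not part of MidiAnimationTrack

    def note_times(path):
        """Return (seconds, note_number) pairs for every note-on in the file."""
        events, t = [], 0.0
        for msg in mido.MidiFile(path):   # iteration yields tempo-aware delta times
            t += msg.time
            if msg.type == "note_on" and msg.velocity > 0:
                events.append((t, msg.note))
        return events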



  • https://github.com/aras-p/UnityGaussianSplatting - SIGGRAPH 2023 had a paper "3D Gaussian Splatting for Real-Time Radiance Field Rendering" by Kerbl, Kopanas, Leimkühler, Drettakis that is really cool! Check out their website, source code repository, data sets and so on. I've decided to try to implement the realtime visualization part (i.e. the part that takes an already-produced Gaussian splat "model" file) in Unity.

Unreal





Wings 3D

  • Wings 3D - an advanced subdivision modeler that is both powerful and easy to use. Originally inspired by Nendo and Mirai from Izware, Wings 3D has been developed since 2001, when Björn Gustavsson (bjorng) and Dan Gudmundsson (dgud) first started the project. Richard Jones (optigon) maintained Wings and coded many new features between 2006 and 2011. Wings 3D is currently maintained by Dan and Richard with the help of the great community. Wings 3D offers a wide range of modeling tools, a customizable interface, support for lights and materials, and a built-in AutoUV mapping facility. There is no support in Wings for animation.

SketchUp

  • SketchUp - friendly and forgiving 3D modeling software: we don’t sacrifice usability for the sake of functionality. Start by drawing lines and shapes. Push and pull surfaces to turn them into 3D forms. Stretch, copy, rotate and paint to make anything you like. Windows.


MagicaVoxel

  • MagicaVoxel - A free lightweight GPU-based voxel art editor and interactive path tracing renderer. [8]


  • MagicaVoxel - A lightweight signed distance field editor and path tracing renderer.

PlayCanvas

  • PlayCanvas - a WebGL game engine; the web-first game engine for collaboratively building stunning HTML5 games and visualizations.

Web

VECTARY

  • VECTARY - the free, online 3D modeling software

p3d.in

  • p3d.in - sharing your 3D models online should be an enjoyable experience. p3d.in is simple, real-time and just works. Goodbye screenshots!

LambdaCube 3D

  • LambdaCube 3D - a Haskell-like purely functional domain-specific language for programming the GPU (graphics processing unit). The purpose of LambdaCube 3D is to provide a platform- and host-language-independent graphics API. It allows the programmer to define the rendering pipeline in a single language, which is compiled into shaders and CPU-side setup code. Other parts of the program (e.g. networking or application logic) can be implemented in Haskell, C++ or JavaScript. LambdaCube 3D can be used for desktop and mobile applications, either standalone or in the browser.


WebGL




  • GPU text rendering with vector textures · Will Dobbie - a bezier curve shader that runs for every output pixel. Its goal is to figure out what fraction of the pixel is covered by the glyph and assign this to the pixel's alpha value. If the glyph only partially covers the pixel, the shader outputs an alpha value somewhere between 0 and 1, which is what gives smooth antialiasing. [12]
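
  The article's shader computes that coverage analytically from the glyph's bezier curves stored in a texture; purely to illustrate the coverage-to-alpha idea, here is a brute-force CPU sketch that supersamples a pixel against an arbitrary inside(x, y) test:

    def pixel_alpha(inside, px, py, samples=4):
        """Estimate the fraction of the unit pixel at (px, py) covered by the
        glyph by supersampling; the real shader intersects bezier curves per
        scanline instead of point sampling."""
        step = 1.0 / samples
        hits = sum(inside(px + (i + 0.5) * step, py + (j + 0.5) * step)
                   for i in range(samples) for j in range(samples))
        return hits / (samples * samples)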


  • stack.gl - an open software ecosystem for WebGL, built on top of browserify and npm. Inspired by the Unix philosophy, stackgl modules "do one thing, and do it well". It is easy to use parts of stackgl à la carte, and because it is written from the bottom up, you can always drill down a layer. Unlike many 3D engines, stackgl emphasizes writing shader code, and provides powerful tools like glslify which bring the modularity and productivity of npm to GLSL!


WebGLStudio.js

  • WebGLStudio.js - a 3D development environment for the web. WebGLStudio.js is a platform to create interactive 3D scenes directly from the browser. It allows you to edit the scene visually, code your behaviours and edit the shaders, all directly from within the app. [13]

Clara.io

  • Clara.io - a full-featured cloud-based 3D modeling, animation and rendering software tool that runs in your web browser. With Clara.io you can make complex 3D models, create beautiful photorealistic renderings, and share them without installing any software programs. This is the perfect Three.JS or Babylon.JS editor for creating and tweaking your Web game content.

deck.gl

Models




  • Sketchfab - Your 3D content on web, mobile, AR, and VR.


  • https://github.com/f3d-app/f3d - a fast and minimalist 3D viewer. It supports many file formats, from digital content to scientific datasets (including glTF, STL, STEP, PLY, OBJ, FBX and Alembic), can show animations, and offers many rendering and texturing options including real-time physically based rendering and raytracing. It is fully controllable from the command line and supports configuration files. It can provide thumbnails and supports interactive hotkeys, drag & drop and integration into file managers. F3D also contains libf3d, a simple library to render meshes, with C++ and Python bindings, as well as experimental Java and JavaScript bindings.


Formats

.OBJ

  • https://en.wikipedia.org/wiki/Wavefront_.obj_file - a geometry definition file format first developed by Wavefront Technologies for its Advanced Visualizer animation package. The file format is open and has been adopted by other 3D graphics application vendors. The OBJ file format is a simple data format that represents 3D geometry alone, namely the position of each vertex, the UV position of each texture coordinate vertex, vertex normals, and the faces that make up each polygon, defined as a list of vertices and texture vertices. Vertices are stored in counter-clockwise order by default, making explicit declaration of face normals unnecessary. OBJ coordinates have no units, but OBJ files can contain scale information in a human-readable comment line.
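
  The format is simple enough that a usable reader fits in a few lines. A sketch that handles the v/vt/vn/f records described above (faces are returned as position indices only, ignoring the optional texture/normal references):

    def load_obj(path):
        """Minimal Wavefront OBJ reader: positions, texture coords, normals, faces."""
        v, vt, vn, faces = [], [], [], []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if not parts or parts[0].startswith("#"):
                    continue
                if parts[0] == "v":
                    v.append(tuple(map(float, parts[1:4])))
                elif parts[0] == "vt":
                    vt.append(tuple(map(float, parts[1:3])))
                elif parts[0] == "vn":
                    vn.append(tuple(map(float, parts[1:4])))
                elif parts[0] == "f":
                    # each face vertex is "v", "v/vt", "v//vn" or "v/vt/vn"; OBJ is 1-indexed
                    faces.append([int(p.split("/")[0]) - 1 for p in parts[1:]])
        return v, vt, vn, faces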

Universal Scene Description

  • USD - a high-performance extensible software platform for collaboratively constructing animated 3D scenes, designed to meet the needs of large-scale film and visual effects production. USD enables robust interchange between digital content creation tools with its expanding set of schemas, covering domains like geometry, shading, lighting, and physics. USD’s unique composition ability provides rich and varied ways to combine assets into larger assemblies, enables collaborative workflows so that many creators can work together with ease, and more.
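
  A minimal sketch of that interchange and composition model, assuming the pip-installable usd-core Python bindings: create a stage, define schema-typed prims, and reference a separately authored layer into the assembly (the referenced file name here is hypothetical):

    from pxr import Usd, UsdGeom   # provided by the usd-core package

    stage = Usd.Stage.CreateNew("shot.usda")        # a new USD layer/stage
    UsdGeom.Xform.Define(stage, "/World")           # schema-typed transform prim
    UsdGeom.Sphere.Define(stage, "/World/ball")     # geometry schema
    set_prim = stage.DefinePrim("/World/set")
    set_prim.GetReferences().AddReference("set_dressing.usda")   # composition arc
    stage.GetRootLayer().Save()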



to sort

  • https://github.com/vyv/psn-cpp - an open protocol for on-stage, live 3D position data. Initially developed as a means for VYV's Photon Media Server to internally communicate the position of freely-moving projection surfaces, PosiStageNet became an open standard through a close collaboration between VYV and MA Lighting, makers of the world-renowned GrandMA2 lighting console. The result is a combined positioning and lighting system that allows for effects of an unparalleled scale, where large numbers of moving lights can precisely follow multiple performers on stage. The protocol's applications do not stop at lighting: sound designers can use its data to accurately pan sound effects and music automatically according to the action on stage, and automation operators can obtain another level of feedback on the position of motor-driven stage elements, or even set targets. And that's just the start; the applications of 3D stage positioning systems are only beginning to be explored.








  • Dust3D - a cross-platform open-source modeling software. It helps you create a 3D watertight model in seconds. Use it to speed up your character modeling in game making, 3D printing, and so on. [18]



  • https://github.com/Immersive-Foundation/IMM - an API-neutral runtime immersive media delivery format. IMM provides an efficient, extensible, interoperable format for the transmission and loading of immersive 3D and 2D animated content of mixed media types (geometry, pictures, 360 panoramas, stroke based paintings, etc).


Typography

  • https://github.com/fetisov/ttf2mesh - a standalone library for TrueType font tessellation. It loads a TTF file and converts its glyphs to 2D or 3D mesh objects without rasterization.
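
  ttf2mesh itself is a C library; as a rough illustration of the data it starts from, here is a Python sketch that pulls a glyph's outline segments with fontTools (used here only as a stand-in; the actual triangulation into a mesh, which is ttf2mesh's job, is left out):

    from fontTools.ttLib import TTFont
    from fontTools.pens.recordingPen import RecordingPen

    font = TTFont("some_font.ttf")     # hypothetical font file
    glyphs = font.getGlyphSet()
    pen = RecordingPen()
    glyphs["A"].draw(pen)              # record moveTo/lineTo/qCurveTo/closePath segments
    for op, args in pen.value:         # these contours are what a tessellator such as
        print(op, args)                # ttf2mesh triangulates into a 2D or 3D mesh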

CV


Drone

  • https://github.com/OpenDroneMap/ODM - an open source command line toolkit for processing aerial drone imagery. Typical drones use simple point-and-shoot cameras, so the images from drones, while from a different perspective, are similar to any pictures taken from point-and-shoot cameras, i.e. non-metric imagery. OpenDroneMap turns those simple images into three dimensional geographic data that can be used in combination with other geographic datasets. [19]
  • https://github.com/OpenDroneMap/WebODM - A free, user-friendly, extendable application and API for drone image processing. Generate georeferenced maps, point clouds, elevation models and textured 3D models from aerial images. It uses ODM for processing.


Machine learning

NeRF





  • SMERF - Recent techniques for real-time view synthesis have rapidly advanced in fidelity and speed, and modern methods are capable of rendering near-photorealistic scenes at interactive frame rates. At the same time, a tension has arisen between explicit scene representations amenable to rasterization and neural fields built on ray marching, with state-of-the-art instances of the latter surpassing the former in quality while being prohibitively expensive for real-time applications. In this work, we introduce SMERF, a view synthesis approach that achieves state-of-the-art accuracy among real-time methods on large scenes with footprints up to 300 m^2 at a volumetric resolution of 3.5 mm^3. Our method is built upon two primary contributions: a hierarchical model partitioning scheme, which increases model capacity while constraining compute and memory consumption, and a distillation training strategy that simultaneously yields high fidelity and internal consistency. Our approach enables full six degrees of freedom (6DOF) navigation within a web browser and renders in real-time on commodity smartphones and laptops. Extensive experiments show that our method exceeds the current state-of-the-art in real-time novel view synthesis by 0.78 dB on standard benchmarks and 1.78 dB on large scenes, renders frames three orders of magnitude faster than state-of-the-art radiance field models, and achieves real-time performance across a wide variety of commodity devices, including smartphones. [21]

Gaussian splatting

  • 3D Gaussian Splatting for Real-Time Radiance Field Rendering - We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (≥ 100 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows realtime rendering. We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.
  • Introduction to 3D Gaussian Splatting - 3D Gaussian Splatting is a rasterization technique described in 3D Gaussian Splatting for Real-Time Radiance Field Rendering that allows real-time rendering of photorealistic scenes learned from small samples of images. This article will break down how it works and what it means for the future of graphics.
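
  The rasterization step described above boils down to projecting each anisotropic 3D Gaussian into screen space and alpha-compositing its 2D footprint, sorted by depth. A rough NumPy sketch of that projection under a simplified pinhole model (nothing like the paper's optimized, visibility-aware CUDA renderer):

    import numpy as np

    def project_gaussian(mean, cov3d, view, focal):
        """Project one 3D Gaussian (centre + 3x3 covariance) to a 2D screen-space
        Gaussian using the Jacobian of the perspective projection; 'view' is a
        4x4 world-to-camera matrix."""
        R, t3 = view[:3, :3], view[:3, 3]
        t = R @ mean + t3                                    # camera-space centre
        J = np.array([[focal / t[2], 0.0, -focal * t[0] / t[2] ** 2],
                      [0.0, focal / t[2], -focal * t[1] / t[2] ** 2]])
        cov2d = J @ R @ cov3d @ R.T @ J.T                    # 2x2 screen covariance
        centre = np.array([focal * t[0] / t[2], focal * t[1] / t[2]])
        return centre, cov2d

    def splat_weight(pixel, centre, cov2d):
        d = pixel - centre
        return float(np.exp(-0.5 * d @ np.linalg.inv(cov2d) @ d))   # Gaussian falloff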



  • https://github.com/antimatter15/splat - a WebGL implementation of a real-time renderer for 3D Gaussian Splatting for Real-Time Radiance Field Rendering, a recently developed technique for taking a set of pictures and generating a photorealistic navigable 3D scene out of it. As it is essentially an extension of rendering point clouds, rendering scenes generated with this technique can be done very efficiently on ordinary graphics hardware, unlike prior comparable techniques such as NeRFs.


  • DreamGaussian - Recent advances in 3D content creation mostly leverage optimization-based 3D generation via score distillation sampling (SDS). Though promising results have been exhibited, these methods often suffer from slow per-sample optimization, limiting their practical usage. In this paper, we propose DreamGaussian, a novel 3D content generation framework that achieves both efficiency and quality simultaneously. Our key insight is to design a generative 3D Gaussian Splatting model with companioned mesh extraction and texture refinement in UV space. In contrast to the occupancy pruning used in Neural Radiance Fields, we demonstrate that the progressive densification of 3D Gaussians converges significantly faster for 3D generative tasks. To further enhance the texture quality and facilitate downstream applications, we introduce an efficient algorithm to convert 3D Gaussians into textured meshes and apply a fine-tuning stage to refine the details. Extensive experiments demonstrate the superior efficiency and competitive generation quality of our proposed approach. Notably, DreamGaussian produces high-quality textured meshes in just 2 minutes from a single-view image, achieving approximately 10 times acceleration compared to existing methods.


  • Splatter Image - We introduce the Splatter Image, an ultra-fast approach for monocular 3D object reconstruction which operates at 38 FPS. The Splatter Image is based on Gaussian Splatting, which has recently brought real-time rendering, fast training, and excellent scaling to multi-view reconstruction. For the first time, we apply Gaussian Splatting in a monocular reconstruction setting. Our approach is learning-based, and, at test time, reconstruction only requires the feed-forward evaluation of a neural network. The main innovation of the Splatter Image is its surprisingly straightforward design: it uses a 2D image-to-image network to map the input image to one 3D Gaussian per pixel. The resulting Gaussians thus have the form of an image, the Splatter Image. We further extend the method to incorporate more than one image as input, which we do by adding cross-view attention. Owing to the speed of the renderer (588 FPS), we can easily generate entire images during training to optimize perceptual metrics like LPIPS. Furthermore, we use a single GPU for training. On standard benchmarks, we demonstrate not only fast reconstruction but also better results than recent and much more expensive baselines in terms of PSNR, LPIPS, and other metrics. [22]

VR




WebVR

  • WebVR - Bringing Virtual Reality to the Web - an open specification that makes it possible to experience VR in your browser. The goal is to make it easier for everyone to get into VR experiences, no matter what device you have. You need two things to experience WebVR: a headset and a compatible browser. A JavaScript API for creating immersive 3D, Virtual Reality experiences in your browser. Works with the HTC VIVE, Oculus Rift, Samsung Gear VR, Google Daydream, and Google Cardboard.
  • WebVR Rocks - Your guide to Virtual Reality in the browser


AR / XR

Latk

  • Lightning Artist Toolkit - provides a common file format and suite of tools that help you integrate hand-drawn XR animation into any project. Readable in any popular 3D graphics application and ready to use in real-world production scenarios, its goal is to make creation in 3D as expressive and intuitive as drawing in 2D.

ChatARKit


3DMMs


  • 3D Morphable Models (3DMMs) - Metaphysic.ai - parametric, human-focused CGI models that are increasingly being used as a way to interact with the content of the latent space of trained neural image synthesis networks.
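
  At their core, most 3DMMs are linear statistical models: a face is the mean mesh plus a weighted sum of principal-component basis shapes (and, analogously, textures). A small sketch with hypothetical arrays; real models such as the Basel Face Model ship the mean and basis as data files:

    import numpy as np

    def morphable_shape(mean_shape, shape_basis, coeffs):
        """mean_shape: (3N,) flattened mean mesh; shape_basis: (3N, K) PCA basis;
        coeffs: (K,) identity/expression parameters."""
        return mean_shape + shape_basis @ coeffs   # new vertex positions, shape (3N,)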