
Rayground: An Online Educational Tool for Ray Tracing

Nick Vitsas, Anastasios Gkaravelis, Andreas-Alexandros Vasilakis, Konstantinos Vardis & Georgios Papaioannou
In this paper, we present Rayground, an online, interactive educational tool for richer in-class teaching and gradual self-study, which provides a convenient introduction to practical ray tracing through a standard shader-based programming interface. Setting up a basic ray tracing framework via modern graphics APIs, such as DirectX 12 and Vulkan, results in complex and verbose code that can be intimidating even for very competent students. On the other hand, Rayground aims to demystify ray tracing...

Visualization for Data Scientists: How specific is it?

Beatriz Sousa Santos & Adam Perer
Data Science has been widely used to support activities in diverse domains such as Science, Health, Business, and Sports, to name just a few. Theory and practice have been evolving rapidly, and Data Scientist is currently a position in high demand in the job market. All this creates vast research opportunities, as well as the necessity to better understand how to prepare researchers and professionals with the background and skills to keep active in...

Compression and Real-Time Rendering of Inward Looking Spherical Light Fields

Saghi Hajisharif, Ehsan Miandji, Gabriel Baravadish, Per Larsson & Jonas Unger
Photorealistic rendering is an essential tool for immersive virtual reality. In this regard, the data structure of choice is typically light fields since they contain multidimensional information about the captured environment that can provide motion parallax and view-dependent information such as highlights. There are various ways to acquire light fields depending on the nature of the scene, limitations on the capturing setup, and the application at hand. Our focus in this paper is on full-parallax...

Multisample Anti-aliasing in Deferred Rendering

András Fridvalszky & Balázs Tóth
We propose a novel method for multisample anti-aliasing in deferred shading. Our technique successfully reduces memory and bandwidth usage. The new model uses per-pixel linked lists to store the samples. We also introduce algorithms to construct the new G-Buffer in the geometry pass and to calculate the shading in the lighting pass. The algorithms are designed to enable further optimizations, similar to variable rate shading. We also propose methods to satisfy constraints of memory usage...
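Per-pixel linked lists are a standard way to hold a variable number of fragments or samples per pixel. The sketch below shows the general structure (a head-pointer table plus a flat node pool) in plain Python; it is an illustration of the data structure only, not the paper's actual G-Buffer layout or shader code.

```python
# Illustrative sketch of a per-pixel linked list (A-buffer style):
# a head-pointer table stores the index of each pixel's most recent
# node; each node stores a sample plus a link to the previous node.

class SampleBuffer:
    def __init__(self, width, height):
        self.heads = [[-1] * width for _ in range(height)]  # -1 = empty list
        self.nodes = []  # flat node pool: (sample_data, next_index)

    def push(self, x, y, sample):
        """Prepend a sample to pixel (x, y)'s list, as a fragment shader
        would via an atomic counter and an exchange on the head pointer."""
        node_index = len(self.nodes)
        self.nodes.append((sample, self.heads[y][x]))
        self.heads[y][x] = node_index

    def samples(self, x, y):
        """Walk pixel (x, y)'s list, most recent sample first."""
        i = self.heads[y][x]
        while i != -1:
            sample, i = self.nodes[i]
            yield sample

buf = SampleBuffer(2, 2)
buf.push(0, 0, "normal+depth A")
buf.push(0, 0, "normal+depth B")
print(list(buf.samples(0, 0)))  # → ['normal+depth B', 'normal+depth A']
```

Because nodes live in one shared pool, pixels with few covered samples consume no reserved per-pixel storage, which is the memory advantage the abstract alludes to.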

Learning Body Shape and Pose from Dense Correspondences

Yusuke Yoshiyasu & Lucas Gamez
In this paper, we address the problem of learning 3D human pose and body shape from a 2D image dataset, without having to use 3D supervision (body shape and pose), which is in practice difficult to obtain. The idea is to use dense correspondences between image points and a body surface, which can be annotated on in-the-wild 2D images, to extract, aggregate and learn 3D information such as body shape and pose from them. To do...

Space-Time Blending for Heterogeneous Objects

Alexander Tereshin, Eike Anderson, Alexander Pasko & Valery Adzhiev
Space-time blending (STB) is an established technique for implementing a metamorphosis operation between geometric shapes. In this paper we significantly extend the STB method to make it possible to deal with heterogeneous objects, which are volumetric objects with attributes representing their physical properties. The STB method, used for geometry transformation, is naturally combined with space-time transfinite interpolation, used for attribute (e.g. colour) transformation. Geometry and attribute transformations are interconnected and happen simultaneously in an...

Neural Smoke Stylization with Color Transfer

Fabienne Christen, Byungsoo Kim, Vinicius C. Azevedo & Barbara Solenthaler
Artistically controlling fluid simulations requires a large amount of manual work by an artist. The recently presented transport-based neural style transfer approach simplifies workflows as it transfers the style of arbitrary input images onto 3D smoke simulations. However, the method only modifies the shape of the fluid but omits color information. In this work, we therefore extend the previous approach to obtain a complete pipeline for transferring shape and color information onto 2D and 3D...

EnvirVis 2020: Frontmatter

Soumya Dutta, Kathrin Feige, Karsten Rink & Dirk Zeckzer

SpatialRugs: Enhancing Spatial Awareness of Movement in Dense Pixel Visualizations

Juri F. Buchmüller, Udo Schlegel, Eren Cakmak, Daniel A. Keim & Evanthia Dimara
Compact visual summaries of spatio-temporal movement data often strive to express accurate positions of movers. We present SpatialRugs, a technique to enhance the spatial awareness of movements in dense pixel visualizations. SpatialRugs applies 2D colormaps to encode mover locations in a juxtaposed display. We explore the effect of various colormaps, discussing perceptual limitations, and introduce a custom color-smoothing method to mitigate distorted patterns of collective movement behavior.
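The core idea of a 2D colormap is that every (x, y) location gets a unique color, so a mover's position can be read off from color alone in a pixel display. The bilinear corner-color scheme below is an assumption chosen for illustration; it is not SpatialRugs' actual colormap.

```python
# Minimal sketch of a 2D colormap: bilinearly interpolate four corner
# colors over the unit square, so each (x, y) maps to a distinct color.

def colormap2d(x, y, corners=((0, 0, 255), (0, 255, 0),
                              (255, 0, 0), (255, 255, 0))):
    """corners: bottom-left, bottom-right, top-left, top-right (RGB)."""
    c00, c10, c01, c11 = corners
    return tuple(
        round((c00[i] * (1 - x) + c10[i] * x) * (1 - y)
              + (c01[i] * (1 - x) + c11[i] * x) * y)
        for i in range(3)
    )

print(colormap2d(0.0, 0.0))  # → (0, 0, 255), the bottom-left corner color
print(colormap2d(1.0, 1.0))  # → (255, 255, 0), the top-right corner color
```

The perceptual caveat the abstract raises is visible even here: interpolated RGB corners are not perceptually uniform, so equal spatial distances do not map to equal perceived color differences.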

Enhanced Attribute-Based Explanations of Multidimensional Projections

Daan Van Driel, Xiaorui Zhai, Zonglin Tian & Alexandru Telea
Multidimensional projections (MPs) are established tools for exploring the structure of high-dimensional datasets to reveal groups of similar observations. For optimal usage, MPs can be augmented with mechanisms that explain what such points have in common that makes them similar. We extend the set of such explanatory instruments by two new techniques. First, we compute and encode the local dimensionality of the data in the projection, thereby showing areas where the MP can be well...

Progressive Parameter Space Visualization for Task-Driven SAX Configuration

Sebastian Loeschcke, Marius Hogräfer & Hans-Jörg Schulz
As time series datasets grow in size, data reduction approaches like PAA and SAX are used to keep them storable and analyzable. Yet, finding the right trade-off between data reduction and remaining utility of the data is a challenging problem. So far, it is either done in a user-driven way and offloaded to the analyst, or it is determined in a purely data-driven, automated way. Neither of these approaches takes the analytic task to...
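For readers unfamiliar with the two reduction steps named in the abstract: PAA averages a series into w equal segments, and SAX discretizes those averages into an alphabet using breakpoints derived from the standard normal distribution. The sketch below uses the standard breakpoints for alphabet size 4 and assumes the series length is divisible by w; it illustrates the textbook methods, not the paper's configuration approach.

```python
# Sketch of Piecewise Aggregate Approximation (PAA) and Symbolic
# Aggregate approXimation (SAX) on a (z-normalized) time series.

def paa(series, w):
    """Mean of each of w equal-length segments (len(series) % w == 0)."""
    n = len(series)
    seg = n // w
    return [sum(series[i * seg:(i + 1) * seg]) / seg for i in range(w)]

def sax(series, w, breakpoints=(-0.67, 0.0, 0.67), alphabet="abcd"):
    """Map each PAA mean to a symbol via sorted Gaussian breakpoints
    (the defaults are the standard values for an alphabet of size 4)."""
    symbols = []
    for mean in paa(series, w):
        rank = sum(mean > b for b in breakpoints)  # how many breakpoints below
        symbols.append(alphabet[rank])
    return "".join(symbols)

series = [-2.0, -1.5, -0.1, 0.1, 1.5, 2.0, -0.2, 0.2]
print(sax(series, 4))  # → "abdb": one symbol per pair of values
```

Varying w and the alphabet size spans exactly the parameter space whose task-driven configuration the paper addresses.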

Characterizing Exploratory Behaviors on a Personal Visualization Interface Using Interaction Logs

Poorna Talkad Sukumar, Gonzalo J. Martinez, Ted Grover, Gloria Mark, Sidney K. D'Mello, Nitesh V. Chawla, Stephen M. Mattingly & Aaron D. Striegel
Personal visualizations present a separate class of visualizations where users interact with their own data to draw inferences about themselves. In this paper, we study how a realistic understanding of personal visualizations can be gained from analyzing user interactions. We designed an interface presenting visualizations of the personal data gathered in a prior study and logged interactions from 369 participants as they each explored their own data. We found that the participants spent different amounts...

GaCoVi: a Correlation Visualization to Support Interpretability-Aware Feature Selection for Regression Models

Diego Rojo, Nyi Nyi Htun & Katrien Verbert
The recent growth of interest in explainable artificial intelligence (XAI) has resulted in a large number of research efforts to provide accountable and transparent machine learning systems. Although a large volume of research has focused on algorithm transparency, there are other factors that influence the interpretability of a system, such as end-users' understanding of individual features and the total number of features. Thus, involving end-users in the feature selection process may be key to achieving...

Learning and Teaching in Co-Adaptive Guidance for Mixed-Initiative Visual Analytics

Fabian Sperrle, Astrik Jeitler, Jürgen Bernard, Daniel A. Keim & Mennatallah El-Assady
Guidance processes in visual analytics applications often lack adaptivity. In this position paper, we contribute the concept of co-adaptive guidance, building on the principles of initiation and adaptation. We argue that both the user and the system adapt their data-, task- and user/system-models over time. Based on these principles, we propose reasoning about the guidance design space by introducing the concepts of learning and teaching that complement the existing dimension of implicit and explicit guidance,...

Designing an Adaptive Assisting Interface for Learning Virtual Filmmaking

Qiu-Jie Wu, Chih-Hsuan Kuo, Hui-Yin Wu & Tsai-Yen Li
In this paper, we present an adaptive assisting interface for learning virtual filmmaking. The design of the system is based on scaffolding theory, providing timely guidance to the user in the form of visual and audio messages that are adapted to each person's skill level and performance. The system was developed on an existing virtual filmmaking setup. We conducted a study with 24 participants, who were asked to operate the film set with...

GAZED - Gaze-guided Cinematic Editing of Wide-Angle Monocular Video Recordings

K. L. Bhanu Moorthy, Moneish Kumar, Ramanathan Subramanian & Vineet Gandhi
We present GAZED, eye GAZe-guided EDiting for videos captured by a solitary, static, wide-angle and high-resolution camera. Eye gaze has been effectively employed in computational applications as a cue to capture interesting scene content; we employ gaze as a proxy to select shots for inclusion in the edited video. Given the original video, scene content and user eye-gaze tracks are combined to generate an edited video comprising cinematically valid actor shots and shot transitions to...

Joint Attention for Automated Video Editing

Hui-Yin Wu, Trevor Santarra, Michael Leece, Rolando Vargas & Arnav Jhala
Joint attention refers to the shared focal points of attention for occupants in a space. In this work, we introduce a computational definition of joint attention for the automated editing of meetings in multi-camera environments from the AMI corpus. Using extracted head pose and individual headset amplitude as features, we developed three editing methods: (1) a naive audio-based method that selects the camera using only the headset input, (2) a rule-based edit that selects cameras...
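The naive audio-based baseline described as method (1) reduces to a simple rule: at each time step, cut to the camera covering the occupant with the loudest headset. The sketch below illustrates that rule; the speaker-to-camera assignment and amplitude values are invented for illustration and are not from the AMI corpus.

```python
# Sketch of a naive audio-based edit: per time step, select the camera
# of the speaker with the highest headset amplitude.

def naive_audio_edit(amplitudes, speaker_to_camera):
    """amplitudes: list of per-timestep dicts {speaker: amplitude}.
    Returns the camera selected for each timestep."""
    cut = []
    for frame in amplitudes:
        loudest = max(frame, key=frame.get)
        cut.append(speaker_to_camera[loudest])
    return cut

amps = [{"A": 0.9, "B": 0.2}, {"A": 0.1, "B": 0.8}, {"A": 0.5, "B": 0.4}]
cams = {"A": "cam1", "B": "cam2"}
print(naive_audio_edit(amps, cams))  # → ['cam1', 'cam2', 'cam1']
```

The abstract's rule-based and joint-attention methods refine exactly this baseline, e.g. by also weighing where heads are turned rather than only who is speaking.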

How the Deprecation of Java Applets Affected Online Visualization Frameworks - A Case Study

Martin Skrodzki
The JavaView visualization framework was designed at the end of the 1990s as software that provides, among other services, easy, interactive geometry visualizations on web pages. We discuss how this and other design goals were met and present several applications to highlight the contemporary use cases of the framework. However, as JavaView's easy web export was based on Java Applets, the deprecation of this technology disabled one of the main functionalities of the software. The remainder of the article...

The Vesicle Builder - A Membrane Packing Algorithm for the CELLmicrocosmos MembraneEditor

Beatrice Giuliari, Manuel Kösters, Jan Zhou, Tim Dingersen, André Heissmann, Ralf Rotzoll, Jens Krüger, Alejandro Giorgetti & Björn Sommer
For a long time, the major focus of membrane simulations was on rectangular membrane patches based on the fluid mosaic model. Thanks to the computational performance of today's computer hardware, it is now possible to generate and simulate larger structures, such as vesicles or micelles. Yet, there are no approaches available to generate these partly complex structures in a convenient and interactive way using WYSIWYG methods and to export them to PDB format. The CELLmicrocosmos...

Real-time Monte Carlo Denoising with the Neural Bilateral Grid

Xiaoxu Meng, Quan Zheng, Amitabh Varshney, Gurprit Singh & Matthias Zwicker
Real-time denoising for Monte Carlo rendering remains a critical challenge with regard to the demanding requirements of both high fidelity and low computation time. In this paper, we propose a novel and practical deep learning approach to robustly denoise Monte Carlo images rendered at sampling rates as low as a single sample per pixel (1-spp). This causes severe noise, and previous techniques strongly compromise final quality to maintain real-time denoising speed. We develop an efficient...

Approximate svBRDF Estimation From Mobile Phone Video

Rachel A. Albert, Dorian Yao Chan, Dan B. Goldman & James F. O'Brien
We describe a new technique for obtaining a spatially varying BRDF (svBRDF) of a flat object using printed fiducial markers and a cell phone capable of continuous flash video. Our homography-based video frame alignment method does not require the fiducial markers to be visible in every frame, thereby enabling us to capture larger areas at a closer distance and higher resolution than in previous work. Pixels in the resulting panorama are fit with a BRDF...

A Unified Manifold Framework for Efficient BRDF Sampling based on Parametric Mixture Models

Sebastian Herholz, Oskar Elek, Jens Schindel, Jaroslav Křivánek & Hendrik P. A. Lensch
Virtually all existing analytic BRDF models are built from multiple functional components (e.g., Fresnel term, normal distribution function, etc.). This makes accurate importance sampling of the full model challenging, and so current solutions only cover a subset of the model's components. This leads to sub-optimal or even invalid proposed directional samples, which can negatively impact the efficiency of light transport solvers based on Monte Carlo integration. To overcome this problem, we propose a unified BRDF...

An Improved Multiple Importance Sampling Heuristic for Density Estimates in Light Transport Simulations

Johannes Jendersie & Thorsten Grosch
Vertex connection and merging (VCM) is one of the most robust light transport simulation algorithms developed so far. It combines bidirectional path tracing with photon mapping using multiple importance sampling (MIS). However, there are scene setups where the current weight computation is not optimal. If different merge events on a single path have roughly the same likelihood to be found, but different photon densities, this leads to high variance samples. We show how to improve...
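The MIS weight the abstract improves upon builds on the standard balance heuristic: a sample drawn from strategy i gets weight n_i p_i / Σ_j n_j p_j. The sketch below shows that textbook heuristic only, not the paper's improved density-aware weighting for merge events.

```python
# Sketch of the balance heuristic for multiple importance sampling:
# weight a sample from strategy i by its share of the combined density.

def balance_heuristic(i, pdfs, counts):
    """pdfs[j]: pdf of the sampled point under strategy j;
    counts[j]: number of samples drawn from strategy j."""
    denom = sum(n * p for n, p in zip(counts, pdfs))
    return counts[i] * pdfs[i] / denom

pdfs = [0.8, 0.2]   # density of the sampled direction under each strategy
counts = [4, 16]    # samples drawn per strategy
weights = [balance_heuristic(i, pdfs, counts) for i in range(2)]
print(weights)  # → [0.5, 0.5]; weights always sum to 1 across strategies
```

The failure case the abstract describes arises when two merge events are about equally likely to be found but sit in regions of very different photon density, so these plain pdf-based weights no longer match the estimators' actual variance.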

Deep Hybrid Real and Synthetic Training for Intrinsic Decomposition

Sai Bi, Nima Khademi Kalantari & Ravi Ramamoorthi
Intrinsic image decomposition is the process of separating the reflectance and shading layers of an image, which is a challenging and underdetermined problem. In this paper, we propose to systematically address this problem using a deep convolutional neural network (CNN). Although deep learning (DL) has been recently used to handle this application, the current DL methods train the network only on synthetic images as obtaining ground truth reflectance and shading for real images is difficult....

Primary Sample Space Path Guiding

Jerry Jinfeng Guo, Pablo Bauszat, Jacco Bikker & Elmar Eisemann
Guiding path tracing in light transport simulation has been one of the practical choices for variance reduction in production rendering. For this purpose, structures in the spatial-directional domain are typically built. We present a novel scheme for unbiased path guiding. Different from existing methods, we work in primary sample space. We collect records of primary samples as well as the luminance that the resulting path contributes and build a multidimensional structure, from which we...
