Welcome to PEARC20! PEARC20’s theme is “Catch the Wave.” This year’s theme embodies the community’s drive to stay on pace with, and ahead of, the new waves in technology, analytics, and a globally connected and diverse workforce. We look forward to this year’s PEARC20 virtual meeting, where we can share scientific discovery and craft the future infrastructure.
The conference will be held in Pacific Time (PT), and all times listed below are in Pacific Time.
The connection information for all PEARC20 workshops, tutorials, plenaries, track presentations, BOFs, posters, the Visualization Showcase, and other affiliated events is available in the PEARC20 virtual conference platform, Brella. If you have issues joining Brella, please email pearcinfo@googlegroups.com.
Deep neural networks are now widely applied in computer vision, including medical diagnosis and self-driving cars. However, deep neural networks are threatened by adversarial examples, in which image pixels are perturbed in ways unnoticeable to humans yet sufficient to fool the networks. Compared to 2D image adversarial examples, 3D adversarial models are less invasive in the attack process and thus more realistic. There has been substantial research on generating 3D adversarial examples. In this paper, we study the robustness of 3D adversarial attacks when the victim camera is placed at different viewpoints. In particular, we find a method to create 3D adversarial examples that achieves a 100% attack success rate from all viewpoints with integer spherical coordinates. Our method is simple: we perturb only the texture space. We create 3D models with realistic textures using 3D reconstruction from multiple uncalibrated images. With the help of a differentiable renderer, we then apply gradient-based optimization to compute texture perturbations from a set of rendered images, i.e., the training dataset. Our extensive experiments show that even when only 1% of all possible rendered images are included in training, we still achieve a 99.9% attack success rate with the trained texture perturbations. Furthermore, our thorough experiments show high transferability of the multiview robustness of our 3D adversarial attacks across various state-of-the-art deep neural network models.
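The core idea of the abstract — optimizing a texture perturbation so that renders from many viewpoints all fool a classifier — can be illustrated with a minimal sketch. This is not the paper's implementation: the renderer is reduced to a hypothetical per-pixel view weighting, the classifier to a linear score, and the gradient is written out by hand, but the loop is the same pattern of projected gradient descent on a shared texture, averaged over viewpoints.

```python
def render(texture, view_weights):
    # Toy "differentiable renderer": each viewpoint modulates the
    # texture pixel-wise (a stand-in for rasterization + shading).
    return [t * w for t, w in zip(texture, view_weights)]

def score(image, clf_weights):
    # Toy linear classifier; a positive score means "correct class".
    return sum(p * w for p, w in zip(image, clf_weights))

def texture_attack(texture, views, clf_weights, eps, steps, lr):
    # Projected gradient descent on the classifier score, summed over
    # all training viewpoints, so ONE perturbation works for all views.
    delta = [0.0] * len(texture)
    for _ in range(steps):
        grad = [0.0] * len(texture)
        for vw in views:
            for i in range(len(texture)):
                # Chain rule for this toy model:
                # d score / d texture[i] = vw[i] * clf_weights[i]
                grad[i] += vw[i] * clf_weights[i]
        for i in range(len(texture)):
            delta[i] -= lr * grad[i]               # push the score down
            delta[i] = max(-eps, min(eps, delta[i]))  # L_inf projection
    return [t + d for t, d in zip(texture, delta)]

# Hypothetical setup: two viewpoints, one texture, one linear classifier.
texture = [0.5, 0.5, 0.5, 0.5]
views = [[1.0, 1.0, 1.0, 1.0], [0.8, 1.0, 0.6, 0.9]]
clf_weights = [1.0, 0.5, -0.2, 0.8]

clean_scores = [score(render(texture, v), clf_weights) for v in views]
adv_texture = texture_attack(texture, views, clf_weights,
                             eps=0.6, steps=50, lr=0.02)
adv_scores = [score(render(adv_texture, v), clf_weights) for v in views]
```

In the toy run, both clean renders score positive (correctly classified) while both adversarial renders score negative, mirroring the multiview robustness the paper measures; with a real differentiable renderer, the hand-written gradient would be replaced by automatic differentiation through the rendering pipeline.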