Andreea Dogaru


Publications



Generalizable 3D Scene Reconstruction via Divide and Conquer from a Single View

Andreea Dogaru, Mert Özer, Bernhard Egger

International Conference on 3D Vision - 3DV 2025

Project Page

Abstract

Single-view 3D reconstruction is currently approached from two dominant perspectives: reconstruction of scenes with limited diversity using 3D data supervision or reconstruction of diverse singular objects using large image priors.

However, real-world scenarios are far more complex and exceed the capabilities of these methods. We therefore propose a hybrid method following a divide-and-conquer strategy. We first process the scene holistically, extracting depth and semantic information, and then leverage a single-shot object-level method for the detailed reconstruction of individual components.

By following a compositional processing approach, the overall framework achieves full reconstruction of complex 3D scenes from a single image. We purposely design our pipeline to be highly modular by carefully integrating specific procedures for each processing step, without requiring an end-to-end training of the whole system.

This enables the pipeline to naturally improve as future methods can replace the individual modules. We demonstrate the reconstruction performance of our approach on both synthetic and real-world scenes, comparing favorably against prior works.
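As a rough illustration of this modular composition, the sketch below wires hypothetical stand-in modules into a divide-and-conquer loop (estimate_depth, segment_objects, and reconstruct_object are placeholder names, not the pipeline's actual interfaces); any slot can be swapped for a stronger method without retraining the rest.

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SceneReconstruction:
    objects: List[Dict]   # per-object reconstructions plus placement info
    layout: Dict          # holistic cues (depth, masks) used for composition

def reconstruct_scene(image: np.ndarray,
                      estimate_depth: Callable,
                      segment_objects: Callable,
                      reconstruct_object: Callable) -> SceneReconstruction:
    # Divide: analyze the full view holistically.
    depth = estimate_depth(image)      # dense depth map, one value per pixel
    masks = segment_objects(image)     # one boolean mask per detected object
    # Conquer: detailed single-shot reconstruction of each component.
    objects = []
    for mask in masks:
        obj = reconstruct_object(image, mask)
        # Anchor the object in the scene via the depth under its mask
        # (a crude stand-in for the pipeline's actual placement step).
        obj["depth_anchor"] = float(np.median(depth[mask]))
        objects.append(obj)
    return SceneReconstruction(objects, {"depth": depth, "masks": masks})

# Dummy plug-in modules; each slot can later be filled by a stronger method.
img = np.zeros((4, 4, 3))
recon = reconstruct_scene(
    img,
    estimate_depth=lambda im: np.ones(im.shape[:2]),
    segment_objects=lambda im: [np.ones(im.shape[:2], dtype=bool)],
    reconstruct_object=lambda im, m: {"mesh": None},
)
```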

Paper

Code


RANRAC: Robust Neural Scene Representations via Random Ray Consensus

Benno Buschmann, Andreea Dogaru, Elmar Eisemann, Michael Weinmann, Bernhard Egger

European Conference on Computer Vision - ECCV 2024

Project Page

Abstract

Learning-based scene representations such as neural radiance fields or light field networks, that rely on fitting a scene model to image observations, commonly encounter challenges in the presence of inconsistencies within the images caused by occlusions, inaccurately estimated camera parameters or effects like lens flares.

To address this challenge, we introduce RANdom RAy Consensus (RANRAC), an efficient approach to eliminate the effect of inconsistent data, thereby taking inspiration from classical RANSAC-based outlier detection for model fitting. In contrast to the down-weighting of the effect of outliers based on robust loss formulations, our approach reliably detects and excludes inconsistent perspectives, resulting in clean images without floating artifacts.

For this purpose, we formulate a fuzzy adaptation of the RANSAC paradigm, enabling its application to large scale models. We interpret the minimal number of samples to determine the model parameters as a tunable hyperparameter, investigate the generation of hypotheses with data-driven models, and analyse the validation of hypotheses in noisy environments.

We demonstrate the compatibility and potential of our solution for both photo-realistic robust multi-view reconstruction from real-world images based on neural radiance fields and for single-shot reconstruction based on light-field networks. In particular, the results indicate significant improvements compared to state-of-the-art robust methods for novel-view synthesis on both synthetic and captured scenes with various inconsistencies including occlusions, noisy camera pose estimates, and unfocused perspectives.

The results further indicate significant improvements for single-shot reconstruction from occluded images.
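The consensus loop itself can be sketched generically. The snippet below is a minimal RANSAC-style illustration under simplifying assumptions (the function names, the threshold tau, and the toy mean estimator are ours); the actual method fits neural scene models such as light field networks from the sampled ray observations.

```python
import numpy as np

def ranrac(observations, fit_model, residual,
           n_min=8, n_iters=200, tau=0.05, seed=0):
    # Sample minimal subsets, fit a hypothesis, and keep the one with the
    # largest photometric-inlier consensus; refit on that consensus only,
    # so inconsistent observations are excluded rather than down-weighted.
    rng = np.random.default_rng(seed)
    n = len(observations)
    best_inliers, best = np.zeros(n, dtype=bool), None
    for _ in range(n_iters):
        subset = rng.choice(n, size=n_min, replace=False)
        model = fit_model([observations[i] for i in subset])
        errors = np.array([residual(model, o) for o in observations])
        inliers = errors < tau
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, model
    final = fit_model([o for o, k in zip(observations, best_inliers) if k])
    return final, best_inliers

# Toy usage: recover a constant signal despite gross "occlusion" outliers.
obs = np.concatenate([np.full(40, 1.0), np.full(10, 9.0)])
model, inliers = ranrac(obs, fit_model=np.mean,
                        residual=lambda m, o: abs(o - m),
                        n_min=4, tau=0.5)
```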

Paper


ArCSEM: Artistic Colorization of SEM Images via Gaussian Splatting

Takuma Nishimura, Andreea Dogaru, Martin Oeggerli, Bernhard Egger

AI for Visual Arts Workshop - ECCVW 2024

Project Page

Abstract

Scanning Electron Microscopes (SEMs) are widely renowned for their ability to analyze the surface structures of microscopic objects, offering the capability to capture highly detailed, yet only grayscale, images.

To create more expressive and realistic illustrations, these images are typically manually colorized by an artist with the support of image editing software. This task becomes highly laborious when multiple images of a scanned object require colorization. We propose facilitating this process by using the underlying 3D structure of the microscopic scene to propagate the color information to all the captured images, from as little as one colorized view.

We explore several scene representation techniques and achieve high-quality colorized novel view synthesis of a SEM scene. In contrast to prior work, there is no manual intervention or labelling involved in obtaining the 3D representation. This enables an artist to color a single or few views of a sequence and automatically retrieve a fully colored scene or video.
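As a toy illustration of the propagation idea, the sketch below approximates rendering as a fixed linear blend W (pixels by primitives) obtained from an already-reconstructed grayscale scene; the real system optimizes the colors of a Gaussian splatting representation with a differentiable renderer, and all names here are illustrative.

```python
import numpy as np

def fit_primitive_colors(W, painted_view, reg=1e-6):
    # Least squares: find per-primitive colors C so that W @ C reproduces
    # the artist's single colorized view (ridge term keeps it well-posed).
    A = W.T @ W + reg * np.eye(W.shape[1])
    return np.linalg.solve(A, W.T @ painted_view)   # (primitives, 3)

def render_colored_view(W_other, C):
    # Reuse the lifted colors in any other view via its own blend weights.
    return W_other @ C

rng = np.random.default_rng(0)
W_src = rng.random((100, 20))    # blend weights of the colorized source view
W_tgt = rng.random((80, 20))     # blend weights of a grayscale target view
painted = rng.random((100, 3))   # artist-colorized source view (pixels x RGB)
C = fit_primitive_colors(W_src, painted)
novel = render_colored_view(W_tgt, C)   # colorized novel view, (80, 3)
```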

Paper

Poster


Neural Haircut: Prior-Guided Strand-Based Hair Reconstruction

Vanessa Sklyarova, Jenya Chelishev, Andreea Dogaru, Igor Medvedev, Egor Zakharov, Victor Lempitsky

International Conference on Computer Vision - ICCV 2023

Project Page

Abstract

We propose an approach that can accurately reconstruct hair geometry at a strand level from a monocular video or multi-view images captured in uncontrolled lighting conditions.

Our method has two stages, with the first stage performing joint reconstruction of coarse hair and bust shapes and hair orientation using implicit volumetric representations. The second stage then estimates a strand-level hair reconstruction by reconciling in a single optimization process the coarse volumetric constraints with hair strand and hairstyle priors learned from the synthetic data.

To further increase the reconstruction fidelity, we incorporate image-based losses into the fitting process using a new differentiable renderer. The combined system, named Neural Haircut, achieves high realism and personalization of the reconstructed hairstyles.
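A heavily simplified sketch of such a single joint optimization is shown below; the three losses are illustrative stand-ins (a unit-ball "coarse volume", a smoothness "prior", and a toy photometric term), not the paper's actual volumetric constraints, learned priors, or differentiable renderer.

```python
import torch

# Strand geometry as a free parameter: 100 strands, 32 points each.
strands = torch.randn(100, 32, 3, requires_grad=True)
opt = torch.optim.Adam([strands], lr=1e-2)

def volumetric_loss(s):
    # Stand-in coarse constraint: keep points inside a unit ball "volume".
    return torch.relu(s.norm(dim=-1) - 1.0).mean()

def prior_loss(s):
    # Stand-in for learned strand/hairstyle priors: penalize curvature
    # changes between consecutive segments so strands stay smooth.
    seg = s[:, 1:] - s[:, :-1]
    return (seg[:, 1:] - seg[:, :-1]).pow(2).mean()

def render_loss(s):
    # Stand-in for the image-based term from a differentiable renderer.
    target = torch.tensor([0.0, 0.1, 0.0])
    return (s.mean(dim=(0, 1)) - target).pow(2).sum()

# Single optimization process reconciling all three terms at once.
for step in range(200):
    opt.zero_grad()
    loss = volumetric_loss(strands) + 0.1 * prior_loss(strands) + render_loss(strands)
    loss.backward()
    opt.step()
```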

Paper

Code


Sphere-Guided Training be keen on Neural Implicit Surfaces

Andreea Dogaru, Andrei-Timotei Ardelean, Savva Ignatyev, Egor Zakharov, Evgeny Burnaev

Conference on Computer Vision and Pattern Recognition - CVPR 2023

Project Page

Abstract

In recent years, neural distance functions trained via volumetric ray marching have been widely adopted for multi-view 3D reconstruction.

These methods, however, apply the ray marching procedure to the entire scene volume, leading to reduced sampling efficiency and, as a result, lower reconstruction quality in the areas of high-frequency details. In this work, we address this problem via joint training of the implicit function and our new coarse sphere-based surface reconstruction.

We use the coarse representation to efficiently exclude the empty volume of the scene from the volumetric ray marching procedure without additional forward passes of the neural surface network, which leads to an increased fidelity of the reconstructions compared to the base systems.

We evaluate our approach by incorporating it into the training procedures of several implicit surface modeling methods and observe uniform improvements across both synthetic and real-world datasets.
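The guiding idea can be illustrated with plain ray-sphere intersection tests: sample ray-marching points only inside a coarse cover of spheres and skip empty space entirely. The sketch below assumes unit-length ray directions and a simple union of intervals; the paper's trainable sphere representation is more involved.

```python
import numpy as np

def ray_sphere_interval(o, d, center, radius):
    # Entry/exit distances of ray o + t*d (d unit length) through a sphere;
    # returns None when the ray misses it or the sphere lies behind o.
    oc = o - center
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - radius ** 2)
    if disc < 0:
        return None
    s = np.sqrt(disc)
    t0, t1 = -b - s, -b + s
    return (max(t0, 0.0), t1) if t1 > 0 else None

def sphere_guided_samples(o, d, spheres, n_samples=64):
    # Keep only sample positions covered by some sphere, so no forward
    # passes of the neural surface network are spent on empty space.
    intervals = [iv for c, r in spheres
                 if (iv := ray_sphere_interval(o, d, c, r))]
    if not intervals:
        return np.empty((0, 3))
    t_near = min(t0 for t0, _ in intervals)
    t_far = max(t1 for _, t1 in intervals)
    ts = np.linspace(t_near, t_far, n_samples)
    keep = [t for t in ts if any(t0 <= t <= t1 for t0, t1 in intervals)]
    return o + np.outer(keep, d)     # 3D points fed to the neural SDF

o = np.array([0.0, 0.0, -3.0])
d = np.array([0.0, 0.0, 1.0])                   # unit direction
spheres = [(np.array([0.0, 0.0, 0.0]), 1.0)]    # coarse sphere cover
pts = sphere_guided_samples(o, d, spheres)      # samples only inside spheres
```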

Paper

Code