
[Other] Rendering Synthetic Objects into Legacy Photographs


#1 (OP)
Posted on 2011-12-28 10:16:23
Rendering Synthetic Objects into Legacy Photographs

Kevin Karsch     Varsha Hedau    David Forsyth     Derek Hoiem

University of Illinois at Urbana-Champaign

{karsch1,vhedau2,daf,dhoiem}@uiuc.edu





Abstract

We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene or any additional scene measurements. With a single image and a small amount of annotation, our method creates a physical model of the scene that is suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene. We demonstrate in a user study that synthetic images produced by our method are confusable with real scenes, even for people who believe they are good at telling the difference. Further, our study shows that our method is competitive with other insertion methods while requiring less scene information. We also collected new illumination and reflectance datasets; renderings produced by our system compare well to ground truth. Our system has applications in the movie and gaming industry, as well as home decorating and user content creation, among others.

CR Categories: I.2.10 [Computing Methodologies]: Artificial Intelligence—Vision and Scene Understanding; I.3.6 [Computing Methodologies]: Computer Graphics—Methodology and Techniques

Keywords: image-based rendering, computational photography, light estimation, photo editing

1 Introduction

Many applications require a user to insert 3D meshed characters, props, or other synthetic objects into images and videos. Currently, to insert objects into the scene, some scene geometry must be manually created, and lighting models may be produced by photographing mirrored light probes placed in the scene, taking multiple photographs of the scene, or even modeling the sources manually. Either way, the process is painstaking and requires expertise.

We propose a method to realistically insert synthetic objects into existing photographs without requiring access to the scene, special equipment, multiple photographs, time lapses, or any other aids. Our approach, outlined in Figure 2, is to take advantage of small amounts of annotation to recover a simplistic model of geometry and the position, shape, and intensity of light sources. First, we automatically estimate a rough geometric model of the scene, and ask the user to specify (through image space annotations) any additional geometry that synthetic objects should interact with. Next, the user annotates light sources and light shafts (strongly directed light) in the image. Our system automatically generates a physical model of the scene using these annotations. The models created by our method are suitable for realistically rendering synthetic objects with diffuse, specular, and even glowing materials while accounting for lighting interactions between the objects and the scene.
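
To make the workflow concrete, here is a minimal Python sketch of the annotation-to-model pipeline described above. It is a hypothetical outline, not the authors' implementation: SceneModel, build_scene_model, and estimate_room_geometry are placeholder names chosen for this example, and the automatic layout step is stubbed out.

    from dataclasses import dataclass, field

    @dataclass
    class SceneModel:
        geometry: list = field(default_factory=list)  # coarse planes/boxes plus user-marked surfaces
        lights: list = field(default_factory=list)    # annotated area lights and light shafts

    def estimate_room_geometry(image):
        # Placeholder for an automatic single-image room-layout estimator
        # (in the spirit of Hedau et al. 2009); here it just returns named planes.
        return ["floor", "ceiling", "wall_left", "wall_right", "wall_back"]

    def build_scene_model(image, annotations):
        """Combine automatic layout estimation with sparse user annotations."""
        scene = SceneModel()
        scene.geometry += estimate_room_geometry(image)          # step 1: rough automatic geometry
        scene.geometry += annotations.get("extra_geometry", [])  # step 2: user-marked support surfaces
        scene.lights += annotations.get("light_sources", [])     # step 3: annotated interior lights
        scene.lights += annotations.get("light_shafts", [])      #         and strongly directed shafts
        # Step 4 (not sketched): estimate surface reflectance and refine light intensities
        # so that a rendering of the empty scene matches the photograph, then render the
        # synthetic objects into this physical model.
        return scene

    scene = build_scene_model(image=None,
                              annotations={"light_sources": ["ceiling_lamp"],
                                           "extra_geometry": ["tabletop"]})
    print(scene)

The point of this structure, as the paragraph above describes, is that the only manual inputs are sparse image-space annotations; the layout, reflectance, and light model are estimated from the single photograph.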

In addition to our overall system, our primary technical contribution is a semiautomatic algorithm for estimating a physical lighting model from a single image. Our method can generate a full lighting model that is demonstrated to be physically meaningful through a ground truth evaluation. We also introduce a novel image decomposition algorithm that uses geometry to improve lightness estimates, and we show in another evaluation that it is state-of-the-art for single image reflectance estimation. We demonstrate with a user study that the results of our method are confusable with real scenes, even for people who believe they are good at telling the difference. Our study also shows that our method is competitive with other insertion methods while requiring less scene information. This method has become possible through advances in recent literature. In the past few years, we have learned a great deal about extracting high level information from indoor scenes [Hedau et al. 2009; Lee et al. 2009; Lee et al. 2010], and that detecting shadows in images is relatively straightforward [Guo et al. 2011]. Grosse et al. [2009] have also shown that simple lightness assumptions lead to powerful surface estimation algorithms; Retinex remains among the best methods.
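
Since the paragraph above leans on Retinex-style lightness assumptions (Grosse et al. 2009), the following is a minimal grayscale Retinex decomposition sketched for illustration. It is not the paper's geometry-aware decomposition; it only encodes the classic heuristic that large log-image gradients come from reflectance changes and small ones from smooth shading, with the function names, threshold, and iteration count chosen here for the example.

    import numpy as np

    def poisson_solve(div, iters=5000):
        """Approximately recover r such that the discrete Laplacian of r equals div."""
        r = np.zeros_like(div)
        for _ in range(iters):
            p = np.pad(r, 1, mode="edge")  # replicate padding as a crude Neumann boundary
            neighbors = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            r = (neighbors - div) / 4.0    # Jacobi update
        return r - r.mean()                # solution is only defined up to a constant

    def retinex_decompose(image, threshold=0.1, iters=5000):
        """Split a positive H x W image into (reflectance, shading) with image ~ reflectance * shading."""
        log_i = np.log(np.maximum(image, 1e-6))

        # Forward-difference gradients of the log image.
        gx = np.zeros_like(log_i)
        gx[:, :-1] = np.diff(log_i, axis=1)
        gy = np.zeros_like(log_i)
        gy[:-1, :] = np.diff(log_i, axis=0)

        # Retinex heuristic: keep only large gradients as reflectance edges.
        rx = np.where(np.abs(gx) > threshold, gx, 0.0)
        ry = np.where(np.abs(gy) > threshold, gy, 0.0)

        # Divergence of the kept gradient field (backward differences).
        div = rx.copy()
        div[:, 1:] -= rx[:, :-1]
        div += ry
        div[1:, :] -= ry[:-1, :]

        log_r = poisson_solve(div, iters)            # reintegrate log-reflectance
        reflectance = np.exp(log_r)                  # scale is ambiguous, fixed by the mean above
        shading = image / np.maximum(reflectance, 1e-6)
        return reflectance, shading

    # Toy usage: a smooth shading ramp multiplied by a two-tone albedo pattern.
    h, w = 64, 64
    shade = np.tile(np.linspace(0.3, 1.0, w), (h, 1))
    albedo = np.where(np.tile(np.arange(w), (h, 1)) < w // 2, 0.9, 0.4)
    refl_est, shade_est = retinex_decompose(albedo * shade)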









For the full text, please download the attachment:

#2
Posted on 2012-10-2 23:20:10
Taking another look, and bumping for the OP again.


#3
Posted on 2012-10-24 23:30:06
Nice, bookmarked.


#4
Posted on 2012-11-29 23:22:30
Very classic and practical, learned a lot!


#5
Posted on 2013-1-28 23:18:15
Heh, very good and convenient.


#6
Posted on 2013-2-10 23:21:31
Heh, very good and convenient.

