If I were wearing a little lifestreaming camera, and I post-processed the video data, it would be pretty easy to find all the rectangle-ish things it saw. For each one, the software could correlate candidates across frames, transform the perspective-skewed quadrilaterals back into rectangles, and composite all the low-resolution images into a single, much higher-resolution image. I would end up with a digital gallery of every rectangular sign, poster, painting, photograph, door, wall, and window that was in that video stream.
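The perspective-correction step is the core of this. Here's a rough sketch of what I mean, assuming the rectangle detector has already handed you the four corners of a quadrilateral in a frame: solve for the homography that maps those corners onto an upright rectangle, then inverse-warp the pixels. (The function names and the nearest-neighbor sampling are just for illustration; real software would use bilinear sampling and a library routine.)

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 homography H mapping 4 src corners to 4 dst corners.

    Builds the standard 8x8 direct linear system with H[2][2] fixed at 1.
    src, dst: arrays of shape (4, 2).
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_to_rect(img, quad, w, h):
    """Inverse-warp the quadrilateral `quad` in img onto a w x h rectangle.

    quad: corners in order top-left, top-right, bottom-right, bottom-left.
    Uses nearest-neighbor sampling to keep the sketch short.
    """
    rect = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], float)
    Hinv = np.linalg.inv(homography(np.asarray(quad, float), rect))
    out = np.zeros((h, w), img.dtype)
    for v in range(h):
        for u in range(w):
            # Map each output pixel back into the source frame and sample.
            p = Hinv @ np.array([u, v, 1.0])
            x, y = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
            if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
                out[v, u] = img[y, x]
    return out
```

The compositing step then falls out almost for free: warp the same sign or poster out of many frames to the same target rectangle and average (or median-stack) the results, and noise cancels while detail accumulates, which is where the resolution boost comes from.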
It would be easiest if the video were just a sequence of DNGs or an MJPEG stream, but this would work for MPEG-style video as well.
It gets even better if you get access to lots of people's video lifestreams when they are at the same venue. It wouldn't be too hard at all to correlate them all together.
Making it work for non-rectangles, and building a gallery of every distinct object and image in the lifestream is just a matter of more CPU and slightly smarter software.
This is going to happen. It's going to be ubiquitous. The hardware already exists, and the software I describe is, at best, a thesis for a master's degree in CS.