If the processing software finds anything that looks like a face in any frame, it can correlate it forward and backward through time, through the succeeding and preceding frames, for as long as that face remains visible. Now it has many views of the same face, closely associated in time and at multiple angles. From those, a pretty good 3D model of the face can be built, along with a fairly high-resolution texture map of it.
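The correlate-through-time step can be sketched simply. This is a minimal illustration, not any particular product's algorithm: it assumes a face detector has already produced per-frame bounding boxes, and it follows one detection forward and backward by matching overlapping boxes in adjacent frames.

```python
# Illustrative sketch: track one face detection through neighboring frames.
# Assumes `detections` maps frame_index -> list of (x, y, w, h) boxes;
# the face detector that produces them is out of scope here.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def track_face(detections, start_frame, start_box, threshold=0.3):
    """Follow one face forward and backward from a starting detection,
    for as long as an overlapping box keeps appearing."""
    track = {start_frame: start_box}
    for step in (1, -1):                      # forward pass, then backward
        frame, box = start_frame, start_box
        while (frame + step) in detections:
            frame += step
            candidates = detections[frame]
            if not candidates:
                break
            best = max(candidates, key=lambda c: iou(box, c))
            if iou(box, best) < threshold:
                break                          # the face left the frame
            track[frame] = box = best
    return track                               # frame -> box for this one face
```

The frames collected in `track` are the "lots of views of the same face" that a 3D reconstruction step would then consume.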
From that, recognizing a specific face against a database becomes much easier. Especially if it can cheat, and start with the lifestream owner's address book. Or a public database of faces of people who live in the area. Or, even more interestingly, a database of all the faces picked out of that user's previously processed chunks of lifestream video!
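That "cheat" is just a priority ordering over candidate databases. Here is a hedged sketch of the idea; the embeddings are stand-in feature vectors (a real system would use a learned face embedding), and the threshold is arbitrary.

```python
# Illustrative sketch: identify a face by searching candidate databases
# in priority order -- address book first, then local public faces, then
# faces from the owner's past lifestream. Embeddings and the distance
# threshold are placeholders, not a real face-recognition model.

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify(face_embedding, databases, max_distance=0.6):
    """Return (person, database_name) for the first good match,
    checking the likeliest databases first."""
    for db_name, entries in databases:         # cheapest/likeliest first
        best = None
        for person, embedding in entries:
            d = distance(face_embedding, embedding)
            if best is None or d < best[1]:
                best = (person, d)
        if best and best[1] <= max_distance:
            return best[0], db_name
    return None, None                          # unknown face
```

A caller would pass something like `[("address_book", ...), ("local_public", ...), ("past_lifestream", ...)]`, so most faces resolve in the small, high-prior database before any wide search runs.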
So whenever you meet or encounter anyone, you could query your lifestream agent and find out every time you have ever been within eyeshot of that person before.
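Once each processed chunk has its faces tagged with identities, that query is just an inverted index from person to sighting times. A minimal sketch, with an invented class name and illustrative timestamps:

```python
# Illustrative sketch: an inverted index answering "when have I been in
# eyeshot of this person before?" Names and timestamps are hypothetical;
# a real agent would store richer records (location, clip reference, etc.).

from collections import defaultdict

class LifestreamIndex:
    def __init__(self):
        self.sightings = defaultdict(list)     # person -> [timestamps]

    def record(self, person, timestamp):
        """Called by the nightly batch job for each identified face."""
        self.sightings[person].append(timestamp)

    def all_sightings(self, person):
        """Every past sighting of this person, in time order."""
        return sorted(self.sightings[person])
```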
It will get especially interesting when this can be done in near real time, instead of as a batch job run each night.
Of course, this kind of processing can also be done on the recordings made by public security cameras. In fact, it will be easier with that data, because all the footage from all the cameras in a city or region can be correlated, and the cameras sit at known, fixed locations. It will probably be done first with those cameras by governments (this is probably already in trials, or being pitched as a concept by various security technology contractors to the TSA and its ilk), and only later by individuals on their own personal lifestreams.
It won't be too many more years before you simply live with the fact that every time you go out in public, everyone everywhere will be able to "recognize" you, and "remember" you.
This is inevitable.