Realtime
You know your neighbors have already begun sharing favorite films and such, over headsets, with their AI companions. Yours will watch whatever you watch, quietly, unless you want a realtime AI companion who can do commentary.
Before any Roger Ebert goings-on, you'll want to see your companion get to a place, after it's trained on, say, High Street footage, where it can:
- Go live over your phone through a headset,
- Screen High Street footage with you,
- Identify in realtime what you point at,
- Answer questions about any new storefronts,
- Converse like you're both influencers, and
- Shop. Shop. Shop.
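If you wanted to rough out the "identify what you point at" and "answer questions" steps, one way to think about the loop is: grab frames from the live feed, pair each frame with the viewer's latest question, and hand both to whatever vision-language model is doing the companion's seeing and talking. The sketch below is a minimal, hypothetical version of that loop, not any particular product's API: `ask_companion` is a stand-in for a real model call, and a webcam stands in for the headset feed.

```python
import time
import cv2  # pip install opencv-python


def ask_companion(frame, question):
    """Stand-in for a realtime vision-language model call.

    A real companion would send the frame and the question to a
    multimodal model and return a spoken-style reply; here we just
    acknowledge the frame so the loop runs end to end.
    """
    h, w = frame.shape[:2]
    return f"(placeholder) I see a {w}x{h} frame; you asked: {question!r}"


def live_commentary(source=0, interval_s=2.0):
    """Grab frames from a live feed and narrate at a fixed cadence."""
    cap = cv2.VideoCapture(source)  # webcam stands in for the headset feed
    last = 0.0
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            now = time.time()
            if now - last >= interval_s:
                last = now
                # In a real setup the question would arrive via speech-to-text.
                print(ask_companion(frame, "What's new on this storefront?"))
    finally:
        cap.release()


if __name__ == "__main__":
    live_commentary()
```

The cadence (`interval_s`) is where latency shows up first: tighten it and the commentary starts to feel live; loosen it and the companion lags the walk.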
Even if one of those steps gets fudged a bit (fake it till you make it), those won't be the only entertaining wrinkles. You could run your AI companion through its paces on High Street many times, then mix up the recipe with a different ingredient: "Say hi to my mom."
Perhaps, next, you could go live on the real High Street and pepper your AI companion with questions, or go live on a different street and see how quickly it trains up (or not).
By now you're really looking for surprises, not just how adept your companion can become with its observations. More specifically, you'll try to see how it's associating (or not), visually, in realtime, through its human. You.
As latency drops further, this companion may shift into your point of view, so that it begins to snatch glimpses of what your embodiment means.
At that point, your companion has gone from one that can study still images to one that can not only infer from footage but eventually play a part in the footage itself.
This would be an opportunity to study how or if the companion perceives and acts in realtime, how or if it can react as though it were embodied, how or if it can learn to listen.
But why wait? Pump up the volume, next.