How to rethink 3D and visual search?

Visual search. 3D search. For those of us in design and engineering, this topic has always been fascinating. I have had a chance to discuss it with my industry colleagues and have blogged about PLM and 3D search a couple of times. After all these discussions, here is a simple definition of the 3D search problem: bad input. You cannot make visual search work without an easy way to tell the system what you want to search for. Remember one of the famous observations about search: most of the complaints we get come from the way users search; they use the wrong keywords. On my long flight from Boston to Russia earlier this weekend, I was skimming my social media backlog. One article caught my attention: Google Brings Intelligent Search to Google+ Photos. I have to agree with the authors of the article. This is one of the most under-advertised features of Google search.

On the surface, it looks simple: Google lets you search your own Google+ photo library, and the results are integrated into Google search. Here is a passage from the article explaining how it works.

Google is now making it easier for users to find their own photos using Google Search. On its Inside Search blog, the company explained that users can now search for and through their photos hosted in Google+ Photos. Searching for the query “my photos” offers up this result, personalized with your own Google+ photographs.

But it gets better. The system actually uses machine learning so that you can target your queries to be more specific. Searching for “my photos of food” or “my photos from Orlando” will provide results tailored for those specific instances.

I tried searching for “my photo bikes” using my Google+ account and discovered a quite precise selection of photos of bikes in my photo library, even though I never did any manual tagging. So my guess is that Google pre-processes the Google+ photo library in order to make this kind of search work.

The idea of pre-processing actually resonates with my view of the 3D search problem. This is how we can rethink visual search. By intensively processing geometric objects and their semantic connections to a variety of textual information, we can solve the original problem of user input. 3D models, 2D drawings and other visual objects can be analyzed ahead of time and used as an input to search. Extracting non-geometric data from geometric objects can solve the “input problem” for many visual search situations.
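To make the idea a bit more concrete, here is a minimal sketch of the pre-process-then-index approach, assuming a hypothetical extract_tags() step that pulls non-geometric data (part names, materials, feature types) out of a 3D model file. In a real system that step would come from feature recognition, classification or attached metadata; here it is hard-coded purely for illustration.

```python
# Minimal sketch: pre-process 3D models into textual tags, then run plain keyword search.
from collections import defaultdict

def extract_tags(model_path):
    """Hypothetical pre-processing step. A real implementation would parse the
    3D model and derive textual tags (features, materials, metadata).
    Hard-coded sample data for illustration only."""
    sample = {
        "bracket_v2.step": ["bracket", "sheet metal", "steel", "4 holes"],
        "impeller.step": ["impeller", "blade", "aluminum", "turbomachinery"],
    }
    return sample.get(model_path, [])

def build_index(model_paths):
    """Build a simple inverted index: keyword -> set of model files."""
    index = defaultdict(set)
    for path in model_paths:
        for tag in extract_tags(path):
            for word in tag.lower().split():
                index[word].add(path)
    return index

def search(index, query):
    """Ordinary keyword search over the pre-extracted tags."""
    words = query.lower().split()
    hits = [index.get(w, set()) for w in words]
    return set.intersection(*hits) if hits else set()

if __name__ == "__main__":
    idx = build_index(["bracket_v2.step", "impeller.step"])
    print(search(idx, "steel bracket"))   # -> {'bracket_v2.step'}
```

The point of the sketch is that once the heavy geometric analysis happens offline, the user-facing part of the problem collapses into familiar keyword search, which is exactly the "easy input" we are missing today.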

What is my conclusion? There are two important things when you talk about search: ease of input and precision of results. Keyword search makes the first one obvious and tries to solve the second one gradually. This is how internet search has been improved over the last decade. Visual search is still untapped territory. Several early attempts to solve this problem failed. Maybe it is time to rethink it? Just my thoughts…

Best, Oleg
