Sumit Sharma

Einstein

Hello Everyone

I was going through the Einstein Vision APIs and how they work. It's a really cool application of AI. In one particular use case, I was trying to count the number of objects in an image, but the API returns probability scores, and I could not figure out how to count the number of objects from that output. Could anyone help me with this problem? Thanks in advance.

Thanks and regards,
Sumit Sharma
Mirketa Inc
Michael Machado
Hi Sumit,

Glad you are enjoying the service. The current service offers custom classification. Classification analyzes the image as a whole and provides a probability score for each class, with the scores adding up to 100% across all your classes. To count instances of each class, we would need to enable a new feature called detection. We are currently piloting that feature and will be making it available later this year. If you'd like to tell us more about your use case (directly or on the forum), we may be able to help you figure out whether the classification service could serve your needs in a different way.
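To illustrate the difference Michael describes, here is a small sketch in Python. The response shapes below are illustrative mock data, not official Einstein Vision output: a classification result scores the whole image (probabilities summing to 1 across classes, so there is nothing to count), while a detection-style result returns one entry per found object, so counting is simply counting entries.

```python
# Mock classification response (illustrative shape, not official API output).
# Scores describe the whole image; they sum to 1 across all classes.
classification_response = {
    "probabilities": [
        {"label": "Mountains", "probability": 0.96},
        {"label": "Beaches", "probability": 0.04},
    ]
}

total = sum(p["probability"] for p in classification_response["probabilities"])
best = max(classification_response["probabilities"],
           key=lambda p: p["probability"])
# `total` is ~1.0 and `best` names the single most likely class --
# there is no per-object information here, so no way to count objects.

# Mock detection-style response (hypothetical shape): one entry per
# detected object, each with its own bounding box.
detection_response = {
    "probabilities": [
        {"label": "car", "probability": 0.91,
         "boundingBox": {"minX": 10, "minY": 20, "maxX": 110, "maxY": 95}},
        {"label": "car", "probability": 0.88,
         "boundingBox": {"minX": 150, "minY": 30, "maxX": 240, "maxY": 100}},
    ]
}

# With detection, counting a class is just counting its entries.
car_count = sum(1 for p in detection_response["probabilities"]
                if p["label"] == "car")
```

The key point: classification answers "which one class is this image?", while detection answers "where is each object?", and only the latter supports counting.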

Thanks,
Michael
Sumit Sharma
Hi Michael,

Sorry for the delayed response, and thanks for making me aware of the detection feature; it really helped clear up my doubts. I am really excited for this feature to go live.

We intend to give a demo of Einstein Vision to one of our clients through a small application. For testing purposes, I created a sample dataset of Mountains and Beaches, and for classification I sent a few images to the Vision API, each containing both a mountain and a beach. But after analyzing the responses, I found the probability scores inconsistent across the classes: for some images they were around 0.96 and 0.04, while for others they were 0.64 and 0.36. To me, the latter makes more sense than the former, since each image contained parts of both classes.

Because of this, we are not able to proceed further and will likely put the project on hold.

I am sure your guidance will really help.

Regards
Sumit

Zineb Laraki
Hi Sumit, 

The Mountains and Beaches dataset we provide is a very small dataset intended to get users comfortable with the APIs; it is too small to expect consistent results, especially if the test images are very different from the examples in the dataset. Using a good dataset that has variety and is representative of the images the model will need to make predictions on is key to getting better accuracy. What is the use case you are trying to solve? I would be happy to provide guidance (e.g. setting confidence thresholds, giving feedback to improve the model, etc.).
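One way to handle the uneven probability splits you saw (0.96/0.04 vs. 0.64/0.36) is the confidence-threshold idea mentioned above: only act on a prediction when the top score clears a cutoff. The sketch below is an assumption about how you might wire this into your demo, not official API behavior; the threshold value and the "unknown" fallback are illustrative choices.

```python
# Illustrative confidence-threshold filter; the response shape and the
# 0.80 cutoff are assumptions to be tuned per use case.
THRESHOLD = 0.80

def classify_with_threshold(probabilities, threshold=THRESHOLD):
    """Return the top label only when the model is confident enough,
    otherwise fall back to 'unknown' (e.g. route to a human reviewer)."""
    top = max(probabilities, key=lambda p: p["probability"])
    if top["probability"] >= threshold:
        return top["label"]
    return "unknown"

# A clear-cut image: 0.96 beats the threshold, so we trust the label.
confident = classify_with_threshold(
    [{"label": "Mountains", "probability": 0.96},
     {"label": "Beaches", "probability": 0.04}])

# A mixed mountain-and-beach image: 0.64 is below the threshold,
# so we refuse to commit to a single class.
unsure = classify_with_threshold(
    [{"label": "Mountains", "probability": 0.64},
     {"label": "Beaches", "probability": 0.36}])
```

For images that genuinely contain both classes, a low top score is the model behaving honestly; thresholding lets the application treat those as "needs review" instead of a wrong answer.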

You can find a list of best practices here: https://metamind.readme.io/docs/dataset-and-model-best-practices

Looking forward to hearing back from you!

Best, 
Z