Starting November 20, the site will be set to read-only. On December 4, 2023,
forum discussions will move to the Trailblazer Community.
Thamana Bhatia

Einstein Analytics and Discovery Insights Specialist - Step 5

I'm stuck on step #5 of Einstein Analytics and Discovery Insights Specialist superbadge

Deliver a Solution to Reduce Subscriber Attrition
Note: The requirements for this challenge changed because of new functionality introduced in recent releases. Analyze a dataset, create a story, and add its prediction and prescriptions to a Salesforce object.

Can anyone help in completing this step?

Thanks in advance!
Swetha (Salesforce Developers)
Hi Thamana,
The links below should give some insight into the issue you are facing:
https://success.salesforce.com/answers?id=9063A000000aCcQQAU
https://developer.salesforce.com/forums/?id=9062I000000g5xRQAQ

If you still need assistance, please use the link below to reach out to the Trailhead team so that one of the engineers can get in touch with you.
Support: https://trailhead.salesforce.com/en/help?support=home

Please mark this answer as best if it helps, so that others facing the same issue will find it useful. Thanks
Thamana Bhatia
Hi Swetha,

Thanks for your response. What you have posted is the old link; the challenge for that module has changed.
Swetha (Salesforce Developers)
Hi Thamana,
Have you been able to reach out to the Trailhead team at https://trailhead.salesforce.com/en/help?support=home on this? Thanks
Anthony Torrero Collins 23
Before I ask Trailhead support about this, I want to check with the community.

As Thamana notes above, the challenge requirements have changed since the challenge was first published:
 
"Analyze a dataset, create a story, and add its prediction and prescriptions to a Salesforce object."

The Challenge 5 instructions point you to the Deploy Models topic. In that topic, it notes that if you want to use automated prediction writeback, you have to follow the instructions on Display Einstein Predictions Using Automated Prediction Writeback. That page walks you through field level security, adding the new field to list views, and adding the prediction values and reasons to the Lightning page.

Question #1 -- with the new process, do we only need to write the Predict Tenure value to the subscriber record?
Question #2 -- is there a magic value level for model accuracy? Can there be any warnings? 
Question #3 -- does our writeback only have to write to the Predicted_Tenure__c field? The earlier process had us create three other Discovery fields on the Subscriber object.
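For anyone who wants to sanity-check their own org while we wait for an answer, here is a rough sketch in plain Python for verifying that a Subscriber record (fetched however you like and represented as a dict) has the writeback field populated. Predicted_Tenure__c is the only field named in the challenge text; anything else you put in REQUIRED_FIELDS is an assumption about your own org, and the sample record below is invented.

```python
# Local sanity check: which required Discovery writeback fields are
# missing or empty on a record dict? Predicted_Tenure__c comes from the
# challenge text; extend REQUIRED_FIELDS only per your own org's setup.
REQUIRED_FIELDS = ["Predicted_Tenure__c"]

def missing_writeback_fields(record, required=None):
    """Return the required field names that are absent or empty on the record."""
    required = REQUIRED_FIELDS if required is None else required
    return [f for f in required if record.get(f) in (None, "")]

# Invented sample record for illustration only.
sample = {"Name": "Arvy Holtaway", "Predicted_Tenure__c": 42.0}
print(missing_writeback_fields(sample))  # -> []
```

This obviously doesn't tell you what the challenge checker is looking for, but it makes it quick to confirm the writeback actually landed on the records.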
Anthony Torrero Collins 23
Following this up with more information.

The Trailhead challenge is not well defined. The requirements are vague. The challenge itself is brief:

"Analyze a dataset, create a story, and add its prediction and prescriptions to a Salesforce object"

I have a couple of issues with this requirement.
  1. You are asking that we add a prediction and prescriptions to a Salesforce object. But the instructions in "Deliver a Solution to Reduce Subscriber Attrition" have us only create a field on the Subscriber object for Predicted Tenure. There are no instructions for adding any other information. If you are checking for other information on the Subscriber object, you need to tell us which other fields to add and how to populate them.
  2. There is nothing in this requirement about the quality of the result. However, the error I'm seeing includes an implied standard about the quality of the result: "We can't confirm you mapped the correct fields on the Tenure model. Make sure you only have one model under your prediction, check the model fields, avoid using fields that may cause data leakage, and try again." You're saying 'may cause leakage.' But to what extent? How do we know? I've looked at many of the comments in the community and people get R2 values of between .6 and .8. Does this affect the outcome? Do we need to be within a particular R2, MAE, or RMSE range?
  3. Does the success metric include the data being populated (scored)? How do we know if our scoring of, say, Arvy Holtaway has the right number for you? And what format should it be displayed in? A predicted date, or some number value?
  4. Can we complete the steps if there are any data alerts after building the model, or is only an 'all green' deployment readiness acceptable? And what if the deployment readiness is all green before deployment, but the deployed model has an alert?
  5. I'm assuming you can confirm that fields are mapped. Are you looking for a specific set of fields? A key field?
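For anyone unsure what those quality metrics mean concretely, here is a small, self-contained Python sketch that computes R2, MAE, and RMSE from scratch. The sample numbers are invented for illustration and say nothing about the actual (undocumented) grader thresholds.

```python
from math import sqrt

def regression_metrics(actual, predicted):
    """Compute R^2, MAE, and RMSE for paired actual/predicted values."""
    n = len(actual)
    mean_actual = sum(actual) / n
    # Sum of squared residuals and total sum of squares.
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_actual) ** 2 for a in actual)
    r2 = 1 - ss_res / ss_tot          # fraction of variance explained
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    rmse = sqrt(ss_res / n)           # penalizes large errors more than MAE
    return r2, mae, rmse

# Invented tenure values (months) -- not from the superbadge dataset.
actual = [24, 36, 12, 48, 30]
predicted = [26, 33, 15, 45, 31]
r2, mae, rmse = regression_metrics(actual, predicted)
print(f"R2={r2:.3f} MAE={mae:.2f} RMSE={rmse:.2f}")
```

R2 near 1 means the model explains most of the variance, while MAE and RMSE are in the units of the target (months of tenure here), which is why people quote all three when comparing models.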

It seems to me that there should be two parts to this challenge. The first is just mechanical -- set up the model, build it, deploy it, and show the results on the Subscriber object layout page (this is what the challenge wording is asking for, by the way). The second is qualitative -- we should be generating useful information with a high likelihood of being correct -- but you provide no guidance on what that measure is. The result is that we are all just swapping fields in the model, or deleting models and starting over time and again, hoping that some unknown thing will trigger a win.

Just look at all the comments from people who have completed this final step of the superbadge -- how many finished it, but don't know why. This challenge isn't teaching us anything about how to use Einstein Discovery.
Ashish Khairkar

@Anthony Torrero Collins 23

Can you link me to any forums where this last step has been completed? I am finding it hard to understand the context of this requirement.

Ashish Khairkar
Thanks, I finally completed the last step. Only a certain set of fields works for the model prediction without hitting the error during Check Challenge.
peter bond 5
Hi @Ashish, can you please also help me with the last step? It is really hard to get the context.
Ritesh Gupta 144
Hi @Ashish Khairkar, please share how you got through the last step. I created the model but I am getting the error "We can't confirm you mapped the correct fields on the Tenure model. Make sure you only have one model under your prediction, check the model fields, avoid using fields that may cause data leakage, and try again"
Stephanie Ebert
@Ashish, did you simply try a different mix of fields to get the model prediction to not produce the error when checking the challenge? Since it only lets you keep 13 columns in the dataset, it's hard to know what exactly to remove, if that is indeed why the verification results in the error.