
What are your top scheduled scenarios?

LeonaBelkMSSU
Frequent Contributor

I am working on cleaning up an org with hundreds of thousands of contacts. There is a LOT of dirty data unfortunately. What are some of your favorite scenarios you have scheduled to run each day? I am currently running some that clean up mailing address formats, states, and country codes, but I am sure there are many others we could be running. Reaching out to fellow DT users for recommendations. What has made your life simpler when it comes to good, clean data? 
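To give a feel for what one of these standardization scenarios does under the hood, here is a minimal sketch in Python using the simple-salesforce library. The state/country mappings, field choices, and credentials are illustrative assumptions, not DemandTools itself (DemandTools configures this through its scenario UI), but the logic is the same shape: find non-standard values, map them to a canonical form, and write them back.

```python
# Minimal sketch of a nightly state/country standardization pass,
# assuming the simple-salesforce library. Mappings and credentials
# are placeholders; a real job would cover all states and countries.
from simple_salesforce import Salesforce

# Hypothetical mapping tables for illustration only.
STATE_MAP = {"Kentucky": "KY", "Ky.": "KY", "ky": "KY"}
COUNTRY_MAP = {"USA": "US", "United States": "US", "U.S.A.": "US"}

sf = Salesforce(username="user@example.com", password="...", security_token="...")

# Pull contacts whose state is one of the known non-standard forms.
records = sf.query_all(
    "SELECT Id, MailingState, MailingCountry FROM Contact "
    "WHERE MailingState IN ('Kentucky', 'Ky.', 'ky')"
)["records"]

for rec in records:
    updates = {}
    if rec["MailingState"] in STATE_MAP:
        updates["MailingState"] = STATE_MAP[rec["MailingState"]]
    if rec.get("MailingCountry") in COUNTRY_MAP:
        updates["MailingCountry"] = COUNTRY_MAP[rec["MailingCountry"]]
    if updates:
        sf.Contact.update(rec["Id"], updates)
```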


4 REPLIES

SummerinGreece
Frequent Contributor

@LeonaBelkMSSU I'd recommend starting by standardising, then deduping.
You can start with rigid criteria and relax them as you go. Afterwards, you can run update jobs to fill in the missing information.
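To make the "rigid first, relax later" idea concrete, here is a hedged sketch. The field names, normalization rules, and sample records below are assumptions for illustration, not a prescribed DemandTools configuration; the point is only that a rigid key matches fewer records than a relaxed one.

```python
# Sketch of a "rigid first, relax later" dedupe key, using made-up
# contact dicts; a real pass would read records from Salesforce.
from collections import defaultdict

def rigid_key(contact):
    # Rigid pass: exact email plus exact last name must both match.
    return (contact["Email"].strip().lower(), contact["LastName"].strip().lower())

def relaxed_key(contact):
    # Relaxed pass: email domain plus last name, catching alias addresses.
    domain = contact["Email"].split("@")[-1].strip().lower()
    return (domain, contact["LastName"].strip().lower())

def group_duplicates(contacts, key_fn):
    groups = defaultdict(list)
    for c in contacts:
        groups[key_fn(c)].append(c)
    # Only keys shared by more than one record are duplicate candidates.
    return {k: v for k, v in groups.items() if len(v) > 1}

contacts = [
    {"Email": "a.smith@acme.com", "LastName": "Smith"},
    {"Email": "A.Smith@acme.com ", "LastName": "smith"},
]
print(group_duplicates(contacts, rigid_key))
```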

klabisje
Contributor

Once we have proven our scenario logic is solid & tested it (a very important step), we have nightly jobs that a single owner manages & ensures run (my laptop should never be off, so I personally keep DT V open at all times so the jobs can run locally each night).

The jobs are split up based on what they are doing, to ensure no single job is too large & could cause other issues in SFDC.
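A rough illustration of keeping each job small: chunk the updates into fixed-size batches so no single run touches too many records at once. The 200-record ceiling below matches Salesforce's sObject Collections API limit per request; the helper itself is a generic sketch, not DemandTools' own batching.

```python
# Generic sketch of chunking updates into small batches so a single
# job never touches too many records at once; sizes are illustrative.
def chunked(records, size=200):
    # Salesforce's sObject Collections API caps one request at 200 records.
    for i in range(0, len(records), size):
        yield records[i:i + size]

updates = [{"Id": f"003xx0000000{n:03d}", "MailingCountry": "US"} for n in range(450)]
for batch in chunked(updates):
    # Each batch would be sent as one API call / one scheduled job slice.
    print(f"processing {len(batch)} records")
```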

 


JonG_Validity
Validity Team Member

Hi @LeonaBelkMSSU, it's great to hear you're getting started on cleaning up that dirty data!

It sounds like you've already got the standardization part established, which is a great start - by standardizing your records, you'll be able to identify duplicate records more easily. If you're interested in some additional standardization/cleaning tips, you can review the following playbooks for some suggestions:

If you're in a place where you're comfortable with beginning to identify and merge duplicate records, that would be a great next step for you. Our Dedupe playbook provides some great information on how to identify duplicates, which objects to start with, etc.


Regards,

Jonathan Greenip
Customer Success Manager
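One common way to take a first census of duplicates before any merging, outside of DemandTools itself, is an aggregate SOQL query. Here is a hedged sketch, again assuming simple-salesforce with placeholder credentials; it counts contacts that share an exact email address.

```python
# Sketch: counting exact-email duplicates with an aggregate SOQL query
# via simple-salesforce; credentials are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="...", security_token="...")

result = sf.query(
    "SELECT Email, COUNT(Id) dupes FROM Contact "
    "WHERE Email != null GROUP BY Email HAVING COUNT(Id) > 1"
)
for row in result["records"]:
    print(row["Email"], row["dupes"])
```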

Thank you Jon! This is exactly what I was looking for. I appreciate the links and look forward to exploring them for some scenarios we can use in our org.