Set<SObject> mySet = new Set<SObject>();
List<SObject> result = new List<SObject>();
mySet.addAll(originalList); // duplicates are dropped on insertion
result.addAll(mySet);       // back to a List (order not guaranteed)
Here's a good trick. Sets, by definition, contain no duplicates, so the first addAll() silently discards any dupes as they go into the set. The second addAll() just gets you back to a list; if you don't need the result to be a list, you can omit that step.
Best part: you only get dinged for 4 statements in the governor limits.
I know this is an old topic, but I've used that same technique to eliminate duplicates. The problem I've run into, though, is that Sets are unordered. So, when converted to a Set and then back to a List using addAll(), the order doesn't remain. Is there an efficient technique to remove duplicates while maintaining List order?
To maintain order, you need to iterate over the elements:
Set<SObject> mySet = new Set<SObject>();
List<SObject> result = new List<SObject>();
for (SObject s : originalList) {
    // add() returns false if s is already in the set
    if (mySet.add(s)) {
        result.add(s);
    }
}
Set.add() returns a Boolean: true if the element was actually added to the set, false if it was already there.
Rich
All Answers
What are you going to remove from the list of objects?
duplicates. :D
I can name that tune in 4 notes!
Rich
Hi Rich.
That worked great. Thank you!
-Greg
Works great...
Found another way to achieve this and thought I'd share: use a Map instead of a Set, then use the map's values as your list.
-PM
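A minimal sketch of PM's map approach, assuming sObject records (Account here is just an example) that already have Ids; the Map<Id, SObject> constructor keys each record by its Id, so entries with the same Id collapse to one:

```apex
// Build a map keyed by record Id; duplicate Ids overwrite each other
Map<Id, Account> byId = new Map<Id, Account>(originalList);

// The map's values are the de-duplicated records
List<Account> result = byId.values();
```

Note this only collapses records that share the same Id; unlike the Set approach, it won't catch two unsaved records with identical field values.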
Without adding a lot of code to read through, you can do something as easy as this:
Lists and Sets can take each other as inputs to their constructors:
public static void listMethod() {
    List<Integer> i1 = new List<Integer>{1, 2, 3, 4, 3, 8, 6, 7};
    System.debug(i1);   // 8 elements, with duplicates
    Set<Integer> i3 = new Set<Integer>();
    i3.addAll(i1);      // duplicates dropped
    System.debug(i3);
    i1.clear();
    i1.addAll(i3);      // back into the original list
    i1.sort();          // Sets are unordered, so sort for a stable order
    System.debug(i1);
}
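Using those constructors directly, the same round trip can be sketched in one line (same Integer example as above):

```apex
List<Integer> nums = new List<Integer>{1, 2, 3, 4, 3, 8, 6, 7};

// Set constructor takes a List (dropping dupes);
// List constructor takes a Set
List<Integer> deduped = new List<Integer>(new Set<Integer>(nums));
deduped.sort();         // deduped is now (1, 2, 3, 4, 6, 7, 8)
System.debug(deduped);
```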