• HGS
  • Member since 2008

Hello Community,

 

I have a VF page that displays a list of objects.

 

I am using an apex:dataTable to render the list on the VF page, and a custom component to display each single object. The component has its own logic for building the view of that object.

 

Now, in that component, I have a small button that is supposed to do something in the background and then update the display of an object field on the component itself. I am using an apex:commandLink with the rerender attribute to do that.

 

The issue is that when the component is rendered repeatedly in a VF page, the IDs inside the component are duplicated as-is, resulting in duplicate IDs on the page.

 

For example, if I have an <apex:outputText id="DisplayName" value="{!DisplayName}"/> inside the component, and the component is repeated in the VF page, all of those outputText elements end up with exactly the same ID. When I try to re-render that outputText, the system does nothing at all.
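To make the setup concrete, here is a minimal sketch (hypothetical names throughout — MyListController, MyComponent, MyObject__c, and doWork are illustrative, not our actual code):

```xml
<!-- Page: the component is repeated once per row of the dataTable -->
<apex:page controller="MyListController">
    <apex:dataTable value="{!records}" var="rec">
        <apex:column>
            <c:MyComponent record="{!rec}"/>
        </apex:column>
    </apex:dataTable>
</apex:page>

<!-- Component (separate file): the commandLink should re-render only
     its own outputText, but every repetition carries id="DisplayName" -->
<apex:component controller="MyComponentController">
    <apex:attribute name="record" type="MyObject__c" description="Record to display"/>
    <apex:form>
        <apex:outputText id="DisplayName" value="{!DisplayName}"/>
        <apex:commandLink value="Refresh" action="{!doWork}" rerender="DisplayName"/>
    </apex:form>
</apex:component>
```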

 

Re-render is simply not working in this case.

 

To summarize: when I use a component repeatedly in a VF page, do something inside the component, and re-render from inside the component, it does not work at all!

 

Any solutions? I don't want to go the manual route of writing JavaScript and AJAX methods.

 

 

  • July 08, 2009
  • Like
  • 0

Hello All,

 

I am programming something that requires a lot of data to be kept in memory for a VF page, and I am keeping that data in a Map.

 

A button on the page causes this Map to be re-initialized, i.e., re-filled.

 

Now, when I fill the map the first time, it eats up 90% of my heap space. To fill it again, I first have to empty it, or force it to be garbage collected.
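For reference, "emptying it first" can also be done in place rather than by replacing the reference (a sketch only; MainMap is the map described above, and whether this actually releases heap within the same transaction is exactly what I am unsure about):

```apex
// Remove all entries in place; leaves MainMap usable for re-filling
// without allocating a new Map instance.
MainMap.clear();
```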

 

Here is what I have tried:

 

System.debug('Heap Size 0.1: ' + Limits.getHeapSize() + ' , ' + Limits.getLimitHeapSize());
MainMap = new Map<ID, List<MyClass>>();
System.debug('Heap Size 0.2: ' + Limits.getHeapSize() + ' , ' + Limits.getLimitHeapSize());
MainMap = null;
System.debug('Heap Size 0.3: ' + Limits.getHeapSize() + ' , ' + Limits.getLimitHeapSize());
MainMap = new Map<ID, List<MyClass>>();
System.debug('Heap Size 0.4: ' + Limits.getHeapSize() + ' , ' + Limits.getLimitHeapSize());

 

And, here is the output it produces:

 

 

Heap Size 0.1: 937950 , 1000000
Heap Size 0.2: 929766 , 1000000
Heap Size 0.3: 929758 , 1000000
Heap Size 0.4: 929766 , 1000000

 

 

 

As you can see, the heap size is not being reduced at all, which means I cannot fill the collection again!

 

Any pointers or ideas on how I can force this Map to be garbage collected, or otherwise free up some memory?

 

Help would be greatly appreciated !!

 

     


 

  • July 01, 2009
  • Like
  • 1

Hello All, I am developing a site on the Force.com platform, and after around 6-7 hours of work I am getting a "Limit Exceeded" message on every page I try to access.

 

I checked, and the limits are still not 100% used. Can someone please help me find out what is causing the error? I tried enabling the debug log for the guest user, but it doesn't show anything.

 

Here are the site usage limits:

 

Bandwidth: 31.538 MB out of 1,024 MB = 3% used

Request Time: 21.032 minutes out of 30 minutes = 70% used

 

I also checked the limits under Company Information, and everything is under the limit there as well.

 

 

 

Limit Error Message

  • June 25, 2009
  • Like
  • 0

Hello Community !!

 

We have two custom objects in a master-detail relationship.

 

When a master record is deleted, the associated detail records are deleted automatically as well.

 

We have an after-delete trigger on the detail object.
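For reference, the trigger shape in question looks like this (a minimal sketch; Master__c, Detail__c, and the debug line are hypothetical names, not our real objects):

```apex
// After-delete trigger on the detail side of a master-detail pair.
// It fires on direct deletes of Detail__c records; the question is why
// it does not fire when the parent Master__c is deleted (cascade delete).
trigger DetailAfterDelete on Detail__c (after delete) {
    for (Detail__c d : Trigger.old) {
        System.debug('Detail deleted: ' + d.Id);
    }
}
```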

 

The problem is that although the trigger works fine when we delete the detail records directly, it does not fire when the master is deleted and the detail records are cascade-deleted.

 

Any pointers ?

 

 

 

  • May 08, 2009
  • Like
  • 0

We have a program that requires locking on records. Here is the scenario:

 

Product__c: contains the products.

issue__c: issue transactions.

availability__c: availability (a sort of stock record), used to track how many items are in inventory.

 

Now, when issuing, we want to make sure that two people working on different issue transactions don't use the same products at the same time. Here is what we are doing in the "Save" button:

 

 

1. List<Product__c> tempLockList = [select Id from Product__c where Name = 'xx' for update];

2. /*

3.  All the update logic, which inserts records into issue__c and updates other objects, but never the Product__c object.

4. */

 

 

Now, the same code is called by two people at the same time.

 

The ideal behavior would be: the first call reaches line 2 holding the lock, while the second call waits at line 1 for the first transaction to finish before it can reach line 2.

 

But somehow nothing waits; it seems there is no locking at all. Both calls execute concurrently and cause inconsistent data.

 

To  summarize:

 

1. I am not updating the same sObjects.

2. I need a lock such that while one transaction is running, the other waits.
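The intended pattern, as a minimal sketch (the class, method, and literal 'xx' filter are illustrative, not our real code; FOR UPDATE is supposed to lock the selected Product__c rows until the transaction commits):

```apex
public with sharing class IssueController {
    public void save() {
        // Lock the product rows up front. A concurrent transaction that
        // selects the same Product__c rows FOR UPDATE should block here
        // until this transaction commits or rolls back.
        List<Product__c> locked =
            [SELECT Id FROM Product__c WHERE Name = 'xx' FOR UPDATE];

        // ... all the update logic: insert issue__c records and update
        // availability__c, but never update Product__c itself ...
    }
}
```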

 

Any ideas would be appreciated. 

 

 

 

 

 

 

  • April 30, 2009
  • Like
  • 0
Greetings,

I am developing a VF page with a query that searches Contact records based on criteria the user enters on the page.

Here is the scenario:

  • Contact is the standard Contact Object
  • MyObject is a custom business object.
  • ConnectorObject is a custom junction object between MyObject and Contact (a many-to-many relationship between Contact and MyObject)
Here is the query that is executed as the final search query:

select Id, Name, Email, MobilePhone, Account.Name, OwnerId
from Contact
where Contact_Status__c = 'Active'
  and Id in (select contact__c from ConnectorObject__C where MyOBject__r.Type__c = 'SomeValue')
  and Id not in (select contact__c from MyOBject__c where Type__c = 'SomeValue')
order by Name
limit 1000

The query takes around 0.8 seconds with around 1,000 Contacts, around 200 MyObjects, and around 400 ConnectorObjects.

Another requirement for this search is that it should return only those contacts whose owner is the currently logged-in user.

So I added another condition to the query to check OwnerId (the current user's Id is taken into a variable and inserted into the query), and the query became:

select Id, Name, Email, MobilePhone, Account.Name, OwnerId
from Contact
where Contact_Status__c = 'Active'
  and Id in (select contact__c from ConnectorObject__C where MyOBject__r.Type__c = 'SomeValue')
  and Id not in (select contact__c from MyOBject__c where Type__c = 'SomeValue')
  and OwnerId = '0000xxxx0000xxxx00'
order by Name
limit 1000


Now, when this query is executed, it takes about 32 seconds!

Thinking this might be due to the number of conditions, I tried removing the Contact_Status__c = 'Active' condition, but the result is the same: it still takes 32 seconds. I didn't try removing the nested (semi-join) conditions, as the search isn't possible without them.

I verified the query by running it in SF Explorer and noting the elapsed time.

So the end result: as long as I leave out the OwnerId filter, the query is fine; as soon as I add OwnerId to the condition, the time jumps to 31-33 seconds!

So I ended up removing OwnerId from the filter and running a for loop after the query to eliminate the unwanted records. But this is not a good solution, as I might miss records because of the 1,000-row limit on the query!
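The workaround I describe looks roughly like this (a sketch; the variable names are illustrative, and the semi-join conditions are omitted here for brevity):

```apex
// Workaround sketch: run the query without the OwnerId filter (fast),
// then drop records not owned by the current user in Apex. Note the
// flaw: LIMIT 1000 applies before this filter, so matches can be missed.
List<Contact> results = [
    SELECT Id, Name, Email, MobilePhone, Account.Name, OwnerId
    FROM Contact
    WHERE Contact_Status__c = 'Active'
    ORDER BY Name
    LIMIT 1000
];
List<Contact> owned = new List<Contact>();
for (Contact c : results) {
    if (c.OwnerId == UserInfo.getUserId()) {
        owned.add(c);
    }
}
```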

Has anyone else faced the same problem? The manyfold increase in time is a concern: either I am doing something wrong, there is a flaw in the SFDC query processor, or I am violating some hidden SOQL rule!

Help Appreciated.






 

  • November 28, 2008
  • Like
  • 0
