
I've created a handy utility that drives the Data Loader to export all custom object data in an org, including relationships as proper references to the corresponding external IDs, and then import that data into a fresh org in correct dependency order.  This works great for all of our custom objects because they all have external ID fields.  However, one of our objects has a lookup relationship with the standard User object, and I'm not sure what to do about that.  I know that I could add a custom field to User and mark it as the external ID, but we're an ISV delivering a managed package, so I'd strongly prefer to avoid anything that requires custom fields on standard objects unless they're driven by explicit business requirements rather than a technical workaround.
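
For context, here's a minimal sketch of the mechanism the utility relies on, with hypothetical Parent__c and Child__c objects; on upsert, a lookup can be populated by the target record's external ID instead of its org-specific record ID:

// Parent__c has a custom external ID field ExternalId__c, and Child__c
// has a lookup field Parent__c to it.  Assigning an sObject with only
// the external ID populated lets the platform resolve the org-specific
// record ID at upsert time.
Child__c child = new Child__c(Name = 'Child-001');
child.Parent__r = new Parent__c(ExternalId__c = 'PARENT-001');
upsert child;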

 

Obviously one issue here is that user IDs are tied to specific orgs, so externalizing org-specific information into something that is otherwise completely portable across orgs is probably a bad idea.  Maybe that alone is a reason for a separate external ID field on User that correlates a user in one org with the same user in another org.

 

Has anyone dealt with this specific issue before?  Any thoughts on the best way to address it?

 

Thanks!

I'm in the process of integrating jqGrid into a VisualForce page, ideally using a JSON data source provided by Apex RESTful services.  At this point it keeps telling me that the session is invalid, so I'm trying to figure out what to do.

 

My Apex RESTful service class looks like:

 

@RestResource(urlMapping='/jqGrid/*')
global with sharing class JqGridController
{
    @HttpGet
    global static JqGridResponse doGet()
    {
        ...
    }
}

 

and the referencing VisualForce page contains the following:

 

jQuery('#{!gridId}').jqGrid(
{
    datatype: 'json',
    url: "{!URLFOR('/services/apexrest/jqGrid')}",
    loadBeforeSend: function(jqXHR)
    {
        // Send the raw session ID as a bearer token; URL-encoding it
        // would corrupt characters like '!' in the token value.
        jqXHR.setRequestHeader("Authorization", "Bearer {!$Api.Session_ID}");
    },
    ...
});
...

Does anyone know why this isn't working and, more importantly, what I need to do to get it working?

 

Thanks!

 

Some of our business objects have fields that serve a common purpose, and we have validation rules for those fields that check their values using regular expressions.  The regular expressions aren't terribly complex, but I hate having the same expression repeated in every instance of the validation rule.  What I'd really like to be able to do is something like:

 

NOT(REGEX(MyField__c, ValidationConstants.COMMON_FIELD_PATTERN))

 

However, as far as I know there's no way to access string constants defined in Apex from formula expressions.  I could do this with pre-DML triggers, but then I'd have to register the triggers on all objects with fields that need to be validated this way.  I'm hoping that someone has run into this and has a good idea of how I can do this.
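
For comparison, the trigger-based approach might look like the following sketch; ValidationConstants, Widget__c, MyField__c, and the pattern itself are hypothetical names:

public class ValidationConstants
{
    // Centralized pattern shared by every trigger that needs it
    public static final String COMMON_FIELD_PATTERN = '^[A-Z]{3}-\\d{4}$';
}

trigger ValidateWidgetFields on Widget__c (before insert, before update)
{
    for (Widget__c record : Trigger.new)
    {
        if (record.MyField__c != null &&
            !Pattern.matches(ValidationConstants.COMMON_FIELD_PATTERN, record.MyField__c))
        {
            record.MyField__c.addError('Value does not match the required format.');
        }
    }
}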

 

Oh, and one thought was to use VLOOKUP() and create a custom object whose records hold the patterns, then do something like:

 

NOT(REGEX(MyField__c, VLOOKUP($ObjectType.ValidationConstants__c.Fields.Value__c, $ObjectType.ValidationConstants__c.Fields.Name, "COMMON_FIELD_PATTERN")))

 

and I imagine that would work, but man...needing a whole custom object just to centralize commonly repeated values seems like the tail wagging the dog!

 

Thanks in advance for any tips!

Scott

 

We're in the process of readying our application for security review.  The application was developed as an unmanaged package and was updated to be delivered as two managed packages, one an extension of the other.  As part of that, we've installed the base managed package (essentially a shared library containing custom objects, Apex classes, etc., that will be used by multiple products) as a beta in the org where we're developing the extension managed package (the actual product).  Now several of our VisualForce pages that have always worked fine suddenly fail with errors like:

 

Could not resolve field 'CustomFieldName__c' from <apex:inputField> value binding '{!CustomFieldName__c}' in page productNamespace:visualForcePageName

 

I've searched a bit and found the following links with similar errors:

 

http://boards.developerforce.com/t5/Visualforce-Development/Visualforce-error-since-namespace-added/td-p/169230

http://boards.developerforce.com/t5/forums/forumtopicprintpage/board-id/Visualforce/message-id/38108/print-single-message/false/page/1

http://boards.developerforce.com/t5/Visualforce-Development/Problems-with-namespace-prefix/td-p/89512

http://boards.developerforce.com/t5/Visualforce-Development/New-Build-on-NA1/m-p/89501

 

The first link documents an existing bug with VisualForce pages and managed packages, but we haven't actually uploaded a managed package with the custom objects, pages, or controllers that are failing in this case.  At this point they're still unmanaged assets in an org, albeit one for which we have already registered a namespace.

 

The second link is about the user not having proper authorization for the object or fields, but this happens even when I'm logged in as a System Administrator.

 

The third and fourth links are about what seems to be a short-lived bug in the platform four years ago that only occurred when Developer Mode is on, but I don't have that on when this is happening and I would imagine that was fixed long ago given the age of the posts.

 

Based on this, I'm assuming that our problem is most closely related to the first link, but again, technically the objects, pages, and controllers here aren't in a managed package yet.  I'm hoping someone here has run into this problem and has a solution or workaround for it because this is gating our progress toward submission for security review.

 

For what it's worth, I've already tried qualifying the custom field names in the <apex:inputField> tags with the correct namespace, but they're automatically removed on save because, again, these pages are referencing custom objects in the same package and therefore shouldn't be namespaced.
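
(For concreteness, the qualified form that gets stripped on save looks like this, with a hypothetical binding:)

<apex:inputField value="{!productNamespace__CustomFieldName__c}"/>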

 

Thanks in advance for any help you may be able to provide!

 

We're an ISV developing multiple applications for delivery through the AppExchange, each as a distinct managed package.  Each team member has a dev org per application.  Because of the global uniqueness of user IDs, this means that each team member has to "name mangle" their user IDs in the various orgs to be, for example, <username>-<appname>@<domain> or <username>@<appname>.<domain>.

 

I had hoped that the platform's inherent support for SAML as both an IdP and an SP would help alleviate this.  I created a shared dev org for all of Product Engineering, registered a domain for it and my own dev orgs, and followed these instructions to set up SSO across multiple organizations:

 

http://wiki.developerforce.com/page/Implementing_Single_Sign-On_Across_Multiple_Organizations

 

Unfortunately the all-important Step 7 in that documentation isn't clear about whether user IDs must still be unique across orgs joined through this type of SSO.  It's VERY clear that the Federation IDs must match for the respective users in the two orgs, but the following is ambiguous about whether this lightens the username uniqueness constraint:

 

  1. Create a test user in your Identity Provider Org, and set their Federation ID to a unique value. Make sure you assign the user a Profile which was granted access to your Service Provider in step 6.
  2. Create a test user in your Service Provider Org, and set their Federation ID to the same value as your test user in the Identity Provider. This will effectively bind the two accounts together.
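
To make the ambiguity concrete, the binding described in those steps pairs users by Federation ID while their usernames remain distinct, e.g. with these hypothetical users:

IdP org user:  scott@idp.example.com    (Federation ID: scott)
SP org user:   scott@app1.example.com   (Federation ID: scott)

What the documentation leaves unstated is whether the SP-org user could instead share the exact same username as the IdP-org user.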

Interestingly the VERY first time I tried to implement this, for some reason it did let me create the same user with the same ID in both orgs, but I suspect that's because I'd just updated the username for an existing user and, during that lull between making the change and the change becoming effective, I created the same user in the other org.  Needless to say, this left me in a pretty bad state for a bit, but I've recovered from that now!

 

So, as far as I can tell, this feature allows us to log into one org as a valid user in that org, then access any other orgs in which there are federated users (with usernames that are all distinct from one another), but it doesn't allow us to do what I really want, namely to have a centralized user repository for all of our dev orgs.  I'm assuming I'll need to switch to delegated authentication to get that, and that means I'll need to implement the SOAP callout for delegated authentication in our shared org.  I'm about to start down that path, but I figured I'd post here for feedback as I'm doing so in the hopes that someone will respond with, "Hey, actually that will work if you just do this, this, and this!!!"

 

Thanks in advance for any help here!

I've started to add workflow to our managed package, in particular for outbound message integration.  The resulting metadata XML contains a specific user ID from my dev org, so when other team members try to deploy the metadata to their orgs, it fails.
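
For illustration, the org-specific reference looks roughly like the following in the workflow metadata; the username is hypothetical, and I'm assuming it's the outbound message's integrationUser element that captures it:

<outboundMessages>
    <fullName>Notify_External_System</fullName>
    <endpointUrl>https://example.com/listener</endpointUrl>
    <!-- Org-specific: this username doesn't exist in teammates' orgs -->
    <integrationUser>scott@mydevorg.example.com</integrationUser>
    <name>Notify External System</name>
</outboundMessages>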

 

Obviously I can have our build scripts replace the user ID with one from each developer's own org just before deploying it, but I'm concerned about what this means for our resulting managed package as delivered to our customers.  Is there a way to abstract the specific user ID out of the externalized workflow definition for both team development and end customer deployment purposes?

 

I'm currently looking at the outbound messaging feature of workflow as a way to propagate a limited subset of state to an external system as part of a composite application.  Basically the state that will be propagated is configuration data for a resource-intensive application with a Force.com-based app as the user-facing configuration UI.  I could obviously do this myself using post-DML triggers and callouts, but since outbound messaging already has delivery tracking and retries, I'd prefer to use the features of the platform when possible.

 

This functionality will be rolled into a managed package, and the endpoint URL may vary by customer based on region or other variables.  As far as I can tell at a glance, though, the endpoint URL is fixed as part of the metadata definition of the workflow.  If I were doing this using post-DML triggers and callouts, I'd store the callout URL in a custom setting (and of course establish it as a trusted callout endpoint).
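
For comparison, here's a minimal sketch of the trigger-and-callout alternative with the endpoint externalized, assuming a hypothetical hierarchy custom setting Integration_Settings__c with an Endpoint_URL__c field:

public with sharing class ConfigurationSync
{
    @future(callout=true)
    public static void push(Set<Id> recordIds)
    {
        // The endpoint lives in a custom setting, so it can vary per
        // customer org instead of being baked into packaged metadata.
        Integration_Settings__c settings = Integration_Settings__c.getOrgDefaults();

        HttpRequest request = new HttpRequest();
        request.setEndpoint(settings.Endpoint_URL__c);
        request.setMethod('POST');
        request.setBody(JSON.serialize(new List<Id>(recordIds)));
        new Http().send(request);
    }
}

That per-org configurability is exactly what I'd like to replicate, declaratively, for the outbound message endpoint.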

 

So with all of that explanation and prefacing out of the way, does anyone know of a way to use outbound messaging with an externalized endpoint URL?

 

Thanks!

 

We're currently developing more than one Force.com-based application for delivery through the AppExchange.  We're using the Force.com Migration Toolkit and the Force.com IDE plugin to move metadata back and forth between dev orgs (as well as test, integration, etc., stages) and our SCM tool.  Each application is of course framed by its own package.xml.  Assuming there are no conflicts between packages, it's pretty easy to get multiple applications deployed into a common dev org, but if your package.xml files use any form of wildcarding, you run the risk of cross-contamination if you attempt to round-trip changes through either FMT or the IDE (which drives back to the eternal question of when Force.com will support namespaces/packages!).

 

Is anyone else doing this?  If so, how are you solving this issue?  The naive answer is for each developer to have a dev org per application, but that seems really unwieldy.  It also makes integration testing across multiple applications more complex, likely requiring yet another org that's intended to contain multiple apps but never to be a read/write development sandbox.  I guess you could also have fully-qualified package.xml files, but that sounds even more unwieldy from an ongoing maintenance standpoint.
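
To make the trade-off concrete, compare the wildcard form of package.xml with a fully-qualified one (App1Controller and App1Service are hypothetical class names):

<!-- Wildcard: retrieves every Apex class in the org, including other apps' classes -->
<types>
    <members>*</members>
    <name>ApexClass</name>
</types>

<!-- Fully qualified: no cross-contamination, but every addition must be maintained by hand -->
<types>
    <members>App1Controller</members>
    <members>App1Service</members>
    <name>ApexClass</name>
</types>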

 

Thanks much for any thoughts you might be able to provide on this!

 

We're in the process of building a composite app that needs to synchronize data managed in a Force.com UI with an external data source in near-real time.  Basically Force.com acts as the application's configuration UI but the heavy lifting is done elsewhere based on that configuration.  Guaranteed, well-ordered message delivery is critical for us.  My initial thought was to use post-DML triggers to write entries into a custom object that represents a message queue entry, then have scheduled Apex drain the queue frequently by invoking a simple REST API on the external system with the DML operation, sObjectType, and IDs of the affected objects.
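
Here's a minimal sketch of that queue-entry trigger; Widget__c, Message_Queue__c, and the field names are all hypothetical:

trigger EnqueueWidgetChanges on Widget__c (after insert, after update, after delete)
{
    List<Widget__c> records = Trigger.isDelete ? Trigger.old : Trigger.new;
    List<Message_Queue__c> entries = new List<Message_Queue__c>();
    for (Widget__c record : records)
    {
        entries.add(new Message_Queue__c(
            Operation__c = Trigger.isDelete ? 'DELETE' : (Trigger.isInsert ? 'INSERT' : 'UPDATE'),
            SObject_Type__c = 'Widget__c',
            Record_Id__c = record.Id,
            // Ordering key so the scheduled drain can preserve sequence
            Sequence__c = System.currentTimeMillis()));
    }
    insert entries;
}

The scheduled job would then drain Message_Queue__c in Sequence__c order, invoke the external REST API, and delete entries on success.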

 

Coincidentally I attended a session at DreamForce '11 last week on a very similar topic (REST for Others: Design Patterns for External REST API Integrations) where the presenter showed an evolutionary design for a solution to this problem.  He started with future callouts from triggers and ended up with an approach similar to what I describe above, ultimately using scheduled Apex and the batch API to work around governor limitations on batch sizes, number of outbound calls per process and org/day, number of scheduled processes, etc.  At the end of the session I asked whether anyone knew of other solutions to this problem given how prevalent it must be, and someone suggested I look into workflow outbound messaging.

 

Outbound messaging looks promising, but I have a few reservations.  First, I'm bummed that it uses SOAP, and in particular that it uses SOAP without a dynamic WSDL.  Rather than get into a philosophical debate, though, let me just say that I would love to see a version of outbound messaging that can invoke a RESTful service with a set of JSON objects so that the sObjectType isn't wired into the WSDL.  My second concern is the lack of ordering in messages ("Messages are retried independent of their order in the queue. This may result in messages being delivered out of order.").  Obviously we can set it up to send over the modification date and order things appropriately on the receiver end, though.  Also, retry intervals grow exponentially; how big an issue is this in practice if you have a reliable receiver?  Perhaps it's not at all, and when a real issue exists, the queue can be flushed manually.  Also, is the queue per-user session or per-org?  Hopefully the former because otherwise it seems that an issue with one user might cause a backup for all other users in the org.

 

Does anyone know of other good options for this?  I'd really prefer not to build something if I don't have to (that's one of the major selling points for the Force.com PaaS, right?), but like I said, near-real time, guaranteed, well-ordered message delivery is going to be critically important.

 

Thanks much for any insights you can offer!!!

 

We're developing a managed package for delivery through the AppExchange.  Our custom business objects will be extended by our customers, and we're using custom VisualForce pages to present and edit these objects for a variety of reasons.  This scenario is obviously one of the main reasons for the existence of field sets, allowing us to create the view in a way that's flexible enough to adapt to a customized model in each deployment.  In principle this all sounds great, but in practice we're having a few issues.  I'm hoping someone here can provide some guidance on either a way to solve these issues with VisualForce and/or field sets or a way to work around them without having to create a big framework that we'll then own.

 

The first issue is that currently the field set metadata doesn't seem to be included in the objects downloaded through the Force.com IDE, though it does look like it's now included when using the ant targets.  My fear is that developers using the Force.com IDE will lose the field sets when they check their other changes into SCM if they're not incredibly careful on every check-in.  Does anyone know if the Force.com IDE is going to be updated soon to address this disparity, or is there a better way to manage this, perhaps by having the Force.com IDE not include the object metadata in its package?

 

The second issue, and this is the more serious one, is that while most of our fields can use the standard apex:outputField and apex:inputField components that are metadata-aware, we have to use custom components with some fields.  Since field sets are effectively just a named, ordered list of fields from an object, what's the best way to associate custom components as the default way to display and/or edit a field when it's presented?  I imagine that dynamic VisualForce components might be (part of) the answer to this question, but they're not generally available and are discouraged from use in production or managed packages.  I've considered creating a custom object that acts as auxiliary metadata for field sets, in particular carrying a custom component name, but there's really no way to do something like this (pseudocode):

 

<apex:repeat value="{!$ObjectType.MyObject.FieldSets.HeaderFormFields}" var="field">
    <apex:outputField rendered="{!NOT(hasCustomComponent(field))}" value="{!field}"/>
    <!-- No way to render a custom Apex component by name dynamically (without dynamic VisualForce components?) -->
    <apex:renderComponent rendered="{!hasCustomComponent(field)}" name="{!getCustomComponent(field)}" value="{!field}"/>
</apex:repeat>

 

An example of a custom component's body might be something as simple as:

 

<apex:pageBlockSectionItem>
    <apex:outputLabel value="Select Value" for="someList"/>
    <apex:selectList value="{!CustomObject__c.SomeValue__c}" size="1" id="someList">
        <apex:selectOptions value="{!someListValues}"/>
    </apex:selectList>
</apex:pageBlockSectionItem>

 

or might be as sophisticated as a jQuery in-place editable grid.

 

Anyone have any thoughts here?  Are we basically stuck until/unless we use dynamic VisualForce components?  If that's the case, has anyone producing a managed package done so successfully?  How did you coordinate that with your customers?
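
For context, my understanding is that the dynamic-component route would look roughly like this sketch (this assumes the pilot dynamic VisualForce component API; MyObject__c, HeaderFormFields, and record are hypothetical names):

public Component.Apex.OutputPanel getFieldSetPanel()
{
    Component.Apex.OutputPanel panel = new Component.Apex.OutputPanel();
    for (Schema.FieldSetMember member :
             SObjectType.MyObject__c.FieldSets.HeaderFormFields.getFields())
    {
        // A custom component would be substituted here when our
        // auxiliary metadata names one for this field.
        Component.Apex.OutputField field = new Component.Apex.OutputField();
        field.expressions.value = '{!record.' + member.getFieldPath() + '}';
        panel.childComponents.add(field);
    }
    return panel;
}

The page would then reference it with <apex:dynamicComponent componentValue="{!fieldSetPanel}"/>.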

 

Thanks in advance for all insights!

 

We're in the process of evaluating the Force.com PaaS.  We've happily used Perforce as our SCM tool for many, many years, and of course Eclipse (and most other tools) offer extremely robust support for Perforce.  While I've found that the Force.com IDE plug-in and the Perforce plug-in can co-exist, it doesn't exactly seem to be a harmonious pairing.  Operations in Perforce are much more explicit than in other SCM systems, in particular SVN.

 

For example, by default files synced to the client from Perforce are read-only until explicitly opened for edit, and files added to the filesystem are unknown to Perforce until they are explicitly added.  Files can be added or updated via the Force.com IDE plug-in as a result of changes made through the Web system menu, e.g., in-browser editing of VisualForce pages, changes to custom objects and attributes, etc.  These changes are brought to the client by doing a Refresh from Server or Synchronize with Server.

When this happens, if the files are read-only, the Force.com IDE complains that it can't overwrite a read-only file.  I would hope/expect that it would instead engage any active Team plug-in to ask the user whether the file should be checked out for edit, and similarly for any added files.  Instead I have to go to the top of the tree and explicitly check out everything for edit, mark everything for add, and then revert all unchanged files to see my "real" changelist.  Alternatively I can set my Perforce client spec to "allwrite", but then Perforce still doesn't know which files have been edited or added, so while I don't get complaints about read-only files, I still have to open files for edit or add explicitly.

Hopefully I'm just missing something in the Force.com IDE plug-in, the Perforce plug-in, or Eclipse in general.  I'm very concerned about the potential for human error in the current process...enough so that we're discussing whether a stateless SCM system like SVN might be better, at least for the Force.com portion of the product.

Thanks in advance for any advice!  I may send the same question to Perforce support to see if I can make progress on a solution.

 


Short introduction: we want to split our application into three editions.  One of them will have triggers and some specific pages, and the other editions won't, so we can't use one common application with a flag that identifies the desired edition and changes the functionality (we don't want to add needless elements to the package at all).


a) Is it necessary to have three managed packages in our case, or is there a better way?

 

b) Can we use one core (several Apex classes) for different installed applications?  Say a user has already installed App_1; is it possible to install App_2 so that it uses some of App_1's classes, without creating duplicate classes that differ only by namespace prefix?


Does anyone know if the Ant migration tool supports the new purgeOnDelete attribute (introduced in Summer '11) of the deploy metadata command?  I have downloaded the latest ant-salesforce.jar from my DE org and run a deploy command, and I get the following error:

 

 sf:deploy doesn't support the "purgeOnDelete" attribute

 

purgeOnDelete (boolean): If true, the deleted components in the destructiveChanges.xml manifest file aren't stored in the Recycle Bin. Instead, they become immediately eligible for deletion.

This field is available in API version 22.0 and later.

This option only works in Developer Edition or sandbox organizations; it doesn't work in production organizations.