We have a cron job that runs every half hour to verify the status of our Streaming API client and make sure it's still connected to Salesforce. Last night between 2:30 AM and 4:30 AM (Central time), I received several notifications that the streaming client was not happy.
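
The check itself is simple; here is a minimal sketch of that kind of half-hourly liveness probe (the pidfile path, script name, and crontab entry are made up for illustration, not our actual setup):

#!/usr/bin/env python
# Sketch of a cron-driven liveness check; wired up in crontab as, e.g.:
#   */30 * * * * /usr/local/bin/check_streaming_client.py || <send notification>
import os
import sys

PIDFILE = "/var/run/streaming_listener.pid"  # illustrative path

def listener_alive(pidfile):
    """Return True if the pid recorded in pidfile is a running process."""
    try:
        pid = int(open(pidfile).read().strip())
        os.kill(pid, 0)  # signal 0 checks existence without sending anything
        return True
    except (IOError, ValueError, OSError):
        return False

if __name__ == "__main__":
    sys.exit(0 if listener_alive(PIDFILE) else 1)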

 

I'm investigating this morning, and I see a lot of errors involving empty responses and failed connections to the Streaming API via the Bayeux client. The error events occurred during a SOAP call made as part of an OAuth attempt around 2:18 AM.

 

After several errors and restarts, and after the final restart at 4:18 AM, the Streaming API, our listener, the Bayeux client, and every other piece have been happy and 100% functional. It's been running perfectly for about 5.5 hours now, so whatever took place was transient, and I'd just like to understand it better.

 

The OAuth attempt that started this whole error snowball was part of a REST API call, but when that failed, we shut down the entire process. On restart, the Streaming API connection failed. (Up until that point there had been no errors, and everything had been up and working 100% for around 11 hours, according to our logs.)

2012-04-03 02:17:51,743 - ERROR - Stopping because of error- Traceback (most recent call last):
  File "/usr/local/svn/SalesForceLicensing/RestingDaemon/src/resting_daemon.py", line 560, in <module>
    oauth=authenticate(verbose=False)
  File "/usr/local/svn/SalesForceLicensing/RestingDaemon/src/resting_daemon.py", line 50, in authenticate
    data = urllib2.urlopen(token_url, params).read()
  File "/opt/python-2.7.2/lib/python2.7/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/opt/python-2.7.2/lib/python2.7/urllib2.py", line 394, in open
    response = self._open(req, data)
  File "/opt/python-2.7.2/lib/python2.7/urllib2.py", line 412, in _open
    '_open', req)
  File "/opt/python-2.7.2/lib/python2.7/urllib2.py", line 372, in _call_chain
    result = func(*args)
  File "/opt/python-2.7.2/lib/python2.7/urllib2.py", line 1207, in https_open
    return self.do_open(httplib.HTTPSConnection, req)
  File "/opt/python-2.7.2/lib/python2.7/urllib2.py", line 1174, in do_open
    raise URLError(err)
URLError: <urlopen error [Errno -3] Temporary failure in name resolution>
2012-04-03 02:17:51,744 - INFO - License Signer terminating subprocesses
2012-04-03 02:17:52,760 - DEBUG - Verifying subprocess pid 6104 is dead
2012-04-03 02:17:52,760 - INFO - Shutdown complete.
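
The fatal part is that authenticate() makes a single unguarded urlopen call, so one transient DNS hiccup raised URLError and stopped the whole daemon. Below is a minimal sketch of a retry-wrapped version; the token URL and form fields follow Salesforce's standard username-password OAuth flow, while the function name, credentials, and retry policy are placeholders:

import time
import urllib
import urllib2

# Standard Salesforce OAuth token endpoint (sandbox host, as in our logs)
TOKEN_URL = "https://test.salesforce.com/services/oauth2/token"

def authenticate_with_retry(retries=5, delay=30):
    """POST the OAuth username-password grant, retrying transient failures."""
    params = urllib.urlencode({
        "grant_type": "password",
        "client_id": "CONSUMER_KEY",         # placeholder
        "client_secret": "CONSUMER_SECRET",  # placeholder
        "username": "user@example.com",      # placeholder
        "password": "passwordPlusToken",     # placeholder
    })
    for attempt in range(1, retries + 1):
        try:
            return urllib2.urlopen(TOKEN_URL, params).read()
        except urllib2.URLError:
            # e.g. [Errno -3] Temporary failure in name resolution
            if attempt == retries:
                raise
            time.sleep(delay)  # back off and retry instead of dying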

 

That killed it for good, so it wasn't until 2:30 AM, when the next cron job came along to restart the process, that we saw:

2012-04-03 02:30:01,656 - INFO - SalesForce License Signer Started
2012-04-03 02:30:01,657 - INFO - Version Info: [$Id: resting_daemon.py 35 2012-04-02 19:53:19Z mmagee $]
2012-04-03 02:30:01,671 - DEBUG - Starting Streaming API listener
2012-04-03 02:30:56,043 - DEBUG - Received stream data: 2012-04-03 02:30:55.713:WARN::CONNECTION FAILED ContentExchange@6182315=POST//test.salesforce.com:443//services/Soap/u/22.0/#9
2012-04-03 02:30:56,044 - ERROR - Found CONNECTION FAILED message: 2012-04-03 02:30:55.713:WARN::CONNECTION FAILED ContentExchange@6182315=POST//test.salesforce.com:443//services/Soap/u/22.0/#9

I built the Java/Bayeux example here:

http://www.salesforce.com/us/developer/docs/api_streaming/index_Left.htm#StartTopic=Content/code_sample_java_add_source.htm

 

After adding a short wait to the busy loop and adding my credentials (and after setting up a streaming topic in our Salesforce database), the example works great. It authenticates, connects to the topic, and receives data as normal until it gets to the 71st "heartbeat" connection, at which point it fails with the following message:

 

[CHANNEL:META_CONNECT]: {"id":"71","error":"402::Unknown client","successful":false,"advice":{"interval":500,"reconnect":"handshake"},"channel":"/meta/connect"}

 

The slightly more detailed error output says: "Error during CONNECT: 402::Unknown client"

 

The way we use the Streaming API client, we want it to always be up. Testing it overnight last night showed that after each connection error and restart, the client would get to the 70th heartbeat just fine, but on the 71st it would hit the same error every time.

 

Our backend code processes the output of the streaming client and currently prints any messages received to a log file:

...

2012-03-26 16:32:42,316 - DEBUG - Received stream data: [CHANNEL:META_CONNECT]: {"id":"67","successful":true,"channel":"/meta/connect"}
2012-03-26 16:34:32,117 - DEBUG - Received stream data: [CHANNEL:META_CONNECT]: {"id":"68","successful":true,"channel":"/meta/connect"}
2012-03-26 16:36:22,174 - DEBUG - Received stream data: [CHANNEL:META_CONNECT]: {"id":"69","successful":true,"channel":"/meta/connect"}
2012-03-26 16:36:55,122 - DEBUG - Received stream data: [CHANNEL:META_CONNECT]: {"id":"70","successful":true,"channel":"/meta/connect"}
2012-03-26 16:36:55,630 - DEBUG - Received stream data: [CHANNEL:META_CONNECT]: {"id":"71","error":"402::Unknown client","successful":false,"advice":{"interval":500,"reconnect":"handshake"},"channel":"/meta/connect"}
2012-03-26 16:36:56,143 - DEBUG - Received stream data: Error during CONNECT: 402::Unknown client
2012-03-26 16:36:56,658 - DEBUG - Received stream data: Exiting...
2012-03-26 16:37:04,791 - WARNING - Status poll of StreamingAPI indicates status '1'- not alive.
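
One detail worth noting in that message: the server's advice field, {"interval":500,"reconnect":"handshake"}, tells the client to perform a fresh Bayeux handshake rather than keep reconnecting with the now-stale client ID, but the sample client simply exits. Since our backend already reads the client's output line by line, a supervisor along the following lines could restart it on that error, which forces a new handshake (the launch command and names are made up for illustration):

import subprocess
import time

CLIENT_CMD = ["java", "-jar", "streaming-client.jar"]  # illustrative command

def supervise():
    """Keep the streaming client running; restart it when its session expires.

    Restarting the client forces a fresh Bayeux handshake, which is what the
    server's {"reconnect": "handshake"} advice asks for.
    """
    while True:
        proc = subprocess.Popen(CLIENT_CMD, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        for line in iter(proc.stdout.readline, ""):
            line = line.rstrip()
            print "Received stream data: %s" % line
            if "402::Unknown client" in line:
                proc.terminate()  # session expired server-side
                break
        proc.wait()
        time.sleep(0.5)  # honor the advised 500 ms interval before reconnecting

if __name__ == "__main__":
    supervise()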

Is this normal behavior?  Has anyone run into this before?

 

-Max
