Getting Your Twitter Updates with cURL

Last updated: Aug 12, 2008

I have had some serious issues integrating Twitter into my site using the JavaScript method. Firefox seems to load the Twitter updates every time without a problem; however, Internet Explorer and Opera both have issues.

Opera, for example, will load the Twitter updates the first time the page loads, or whenever the feed has been updated, but not on a subsequent load of the same page. You can see this on a number of sites that use the JavaScript method of gathering Twitter updates: go to the site, hit refresh, and the Twitter updates disappear. Even sites that use the twitterjs script from code.google.com don't load properly after a refresh.

With Internet Explorer it was hit or miss; sometimes the updates would load, sometimes they wouldn't. In my testing Internet Explorer behaved exactly like Opera, although I have seen some sites that have gotten it to work.

After messing with these browsers trying to get it to work, I decided to bag the whole thing and store the Twitter updates in a database using PHP and cURL.

The advantages:

- You can manipulate the data any way you choose; you aren't limited to receiving the data as <li> elements.
- If Twitter is down you can still display your latest tweets.
- Since the code is in PHP and not JavaScript, I can utilize WP-Cache.

The disadvantages:

Since I will be pulling the data from Twitter and storing it periodically, there may be a delay before my site receives the latest tweets.

Getting the Tweets with cURL

This is the function I came up with to grab the tweets from Twitter.

function twitterCapture() {
        // Set your username and password here
        $user = 'YOUREMAILADDRESS';
        $password = 'YOURPASSWORD';

        // Request the authenticated user's timeline as XML
        $ch = curl_init("https://twitter.com/statuses/user_timeline.xml");
        curl_setopt($ch, CURLOPT_HEADER, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);
        curl_setopt($ch, CURLOPT_USERPWD, $user . ":" . $password);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);

        $result = curl_exec($ch);
        curl_close($ch);

        // CURLOPT_HEADER means $result also contains the HTTP headers,
        // so keep only the part starting at the XML declaration
        $data = strstr($result, '<?');

        $xml = new SimpleXMLElement($data);

        return $xml;
}

Code Explanation

The first part of the code is where you set your username and password. Replace these values with your actual Twitter credentials.

// Set your username and password here
$user = 'YOUREMAILADDRESS';
$password = 'YOURPASSWORD';

This next part of the code tells cURL which site to open.

$ch = curl_init("https://twitter.com/statuses/user_timeline.xml");

If you check out the Twitter API you will notice that this URL returns different responses based on the extension at the end of the URL. Possible formats are xml, json, rss, and atom. In this case we used XML.
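If you would rather skip SimpleXML entirely, you could point cURL at the .json variant of the same endpoint and decode the response with json_decode(). This is just a sketch of that idea, not part of the original function; the twitterCaptureJson() name and the option set are my own:

function twitterCaptureJson($user, $password) {
        // Same request as above, but asking Twitter for JSON instead of XML
        $ch = curl_init("https://twitter.com/statuses/user_timeline.json");
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);
        curl_setopt($ch, CURLOPT_USERPWD, $user . ":" . $password);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);

        $json = curl_exec($ch);
        curl_close($ch);

        // json_decode(..., true) returns an array of status arrays
        return json_decode($json, true);
}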

The next part of the code sets the cURL options.

curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
curl_setopt($ch, CURLOPT_USERPWD, $user . ":" . $password);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);

The first option, CURLOPT_HEADER, simply says that we want the HTTP headers included in the response. This is useful for when Twitter throws a 502 or 503 error; it lets you build error reporting into your script to alert you when it can't reach Twitter.
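As a rough sketch of that kind of check (assuming you run it right after curl_exec() and before curl_close(); the error_log() message is only an example):

$result = curl_exec($ch);

// CURLINFO_HTTP_CODE tells us what status Twitter responded with
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
if ($result === false || $status == 502 || $status == 503) {
        // Report the failure however you like: log it, email yourself, etc.
        error_log("Twitter fetch failed with HTTP status " . $status);
}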

The second option, CURLOPT_TIMEOUT, is the time in seconds that we give cURL to complete its task.

The next option, CURLOPT_USERPWD, is the username and password cURL will use if the server asks for authentication, which in our case it does.

CURLOPT_RETURNTRANSFER is used when we want curl_exec() to return the data instead of printing it to the screen.

CURLOPT_SSL_VERIFYPEER and CURLOPT_SSL_VERIFYHOST, when set to 0, tell cURL to ignore SSL errors. Once you get your code working you might want to turn verification back on by setting CURLOPT_SSL_VERIFYPEER to 1 and CURLOPT_SSL_VERIFYHOST to 2. This ensures that you don't accidentally send your username and password to a site pretending to be Twitter.
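Once you trust your setup, the stricter settings would look something like this (you may also need CURLOPT_CAINFO pointed at a CA bundle if your PHP build doesn't include one):

// Strict SSL checking once the script is working
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 1);  // verify Twitter's certificate chain
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);  // verify the certificate actually matches twitter.com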

This part of the code executes cURL and puts the XML portion of the response into the variable $data. We have to use the strstr() function because cURL also returns the HTTP headers, per our CURLOPT_HEADER option, and our parser only wants the part that starts with <?.

$result = curl_exec($ch);
$data = strstr($result, '<?');
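If you want something slightly more robust than strstr() (which could be thrown off by a '<?' appearing inside a header), an alternative I'd sketch is asking cURL how long the header block was and slicing the body off after it:

$result = curl_exec($ch);

// CURLINFO_HEADER_SIZE is the length of the headers cURL returned,
// so everything after that point is the XML body
$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
$data = substr($result, $headerSize);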

Usage

Simply call the function like this:

$xml = twitterCapture();

Then all you have to do is echo the parts of the XML you want.

echo $xml->status[0]->text;
echo $xml->status[1]->text;
echo $xml->status[2]->text;

This would echo the three most recent tweets. You can also pull other information, such as the creation date and profile picture. Just look at the XML response to find out what other values you might want to pull.
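For example, a loop like this would print each tweet with its timestamp, plus the author's avatar from the first status. The created_at, user, and profile_image_url element names are what the user_timeline XML used at the time, so double-check them against your own response:

$xml = twitterCapture();

// Each <status> element carries the tweet text plus metadata
foreach ($xml->status as $status) {
        echo $status->created_at . ": " . $status->text . "<br />\n";
}

// The profile picture lives under <user> inside each status
echo '<img src="' . $xml->status[0]->user->profile_image_url . '" alt="avatar" />';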

I recommend grabbing the updates from Twitter every so often and storing them in a text file or database, rather than running a cURL fetch each time the page loads. Fetching on every load can cause problems if Twitter is down, and your visitors may have to wait up to 30 seconds (or whatever your CURLOPT_TIMEOUT is set to). Then point your blog/website at the text file or database. A simple file-based cache along those lines is sketched below.
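Here is a minimal sketch of that idea using a plain text file as the cache; the tweets.cache path and the 10-minute refresh interval are just examples:

function cachedTweets() {
        $cacheFile = 'tweets.cache';  // example path; put this somewhere writable
        $maxAge    = 600;             // refresh every 10 minutes

        // Only hit Twitter when the cache is missing or stale
        if (!file_exists($cacheFile) || (time() - filemtime($cacheFile)) > $maxAge) {
                $xml = twitterCapture();
                file_put_contents($cacheFile, $xml->asXML());
        }

        return new SimpleXMLElement(file_get_contents($cacheFile));
}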
