Diffstat (limited to 'old/published/Webmonkey/APIs')
-rw-r--r-- | old/published/Webmonkey/APIs/delicious.txt | 45
-rw-r--r-- | old/published/Webmonkey/APIs/disqusapi.txt | 1
-rw-r--r-- | old/published/Webmonkey/APIs/flickr.txt | 66
-rw-r--r-- | old/published/Webmonkey/APIs/gajaxsearch.txt | 87
-rw-r--r-- | old/published/Webmonkey/APIs/gdata_documents_list.txt | 101
-rw-r--r-- | old/published/Webmonkey/APIs/glossary.txt | 38
-rw-r--r-- | old/published/Webmonkey/APIs/gmaps.txt | 83
-rw-r--r-- | old/published/Webmonkey/APIs/oembed.txt | 120
-rw-r--r-- | old/published/Webmonkey/APIs/twitter.txt | 62
-rw-r--r-- | old/published/Webmonkey/APIs/youtube_data_api.txt | 102
-rw-r--r-- | old/published/Webmonkey/APIs/youtube_player_api.txt | 124
11 files changed, 829 insertions, 0 deletions
diff --git a/old/published/Webmonkey/APIs/delicious.txt b/old/published/Webmonkey/APIs/delicious.txt
new file mode 100644
index 0000000..19d699d
--- /dev/null
+++ b/old/published/Webmonkey/APIs/delicious.txt
@@ -0,0 +1,45 @@
Del.icio.us started the social bookmarking revolution and it continues to be a popular way to store and share your bookmarks with the world.

Thanks to its API, it's relatively simple to retrieve your bookmarks and display them on your own site or mash them up somewhere else. And there's no need to limit yourself to your own bookmarks; you can grab bookmarks by tag, by other users, by date and half a dozen other means.

As an example, we'll use the del.icio.us API to grab your recent bookmarks. If you want to grab something else, say a list of all your tags, you'll just need to change one line.

== Getting Started ==

There are a number of client libraries available for those looking to access the del.icio.us API. There's a particularly nice one for [http://code.google.com/p/pydelicious/ Python] and one for [http://delicious-java.sourceforge.net/ Java] as well.

However, we're going to skip the client libraries in favor of writing our own code in PHP. The process is simple: just send your login credentials along with the URL of the method you're trying to access.

Copy this code into your PHP file.

<?php
$dusername = "XXXXXXXXXXXXX";
$dpassword = "XXXXXXXXXXXXX";
$api = "api.del.icio.us/v1";
$apicall = "https://$dusername:$dpassword@$api/posts/recent?&count=30";
function del_connect($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
    curl_setopt($ch, CURLOPT_USERAGENT, 'mycoolname');
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $xml = curl_exec($ch);
    curl_close($ch);
    return $xml;
}
$result = del_connect($apicall);
echo $result;
?>

The first four lines just set up our URL to retrieve the last 30 bookmarks posted to your account. Naturally you can change this to access any of the [http://del.icio.us/help/api/ API methods] that del.icio.us offers.

Once we have the URL in place we just pass it to our <code>del_connect()</code> function, which uses PHP's built-in cURL library to retrieve the XML data from del.icio.us.

== Where to go from here ==

So we have our last 30 bookmarks, but of course you'll need to parse that XML into some usable PHP structure, like an array, for displaying on a webpage. Since there are at least half a dozen XML parsers for PHP we'll leave that for you to decide.

The other thing to keep in mind is that we haven't cached our results at all, so don't use this code in an actual web page or del.icio.us may well ban you. You need to set up some sort of cache -- Apache's mod_cache can fit the bill -- so that you don't pound on the del.icio.us servers every time your page refreshes.

diff --git a/old/published/Webmonkey/APIs/disqusapi.txt b/old/published/Webmonkey/APIs/disqusapi.txt
new file mode 100644
index 0000000..76a968b
--- /dev/null
+++ b/old/published/Webmonkey/APIs/disqusapi.txt
@@ -0,0 +1 @@
Disqus is a very clever and simple way to add comments to just about any page. Thanks to the handy plugins for various blogging platforms (WordPress, Movable Type, Blogger and more) it's easy to integrate into your current publishing system.
For those of us using a custom blogging engine, there's even some nice JavaScript code that can get Disqus comments on your site in no time.
The only problem with using Disqus is that there's no on-site backup -- all your comments are stored and managed through Disqus, so if Disqus is down for some reason, or you decide to stop using it in the future, well, you're screwed.
There's a nicely integrated plugin for WordPress that automatically pushes your Disqus comments to your WordPress database, but those of us not using WordPress are seemingly out of luck.
Luckily for us, the company [http://www.webmonkey.com/blog/Disqus_Poised_to_Rule_the_World_of_Blog_Comments recently launched] a new [http://disqus.com/docs/api/ public API], which offers some ways to retrieve your comment data and store it wherever you like. Grab a cup of joe and we'll dig into how the Disqus API works.
== Getting Started ==
The Disqus API is slightly convoluted, but with a little work we can get what we're after. The first step is grabbing your [http://disqus.com/api/get_my_key/ Disqus API key] (note that you need to be logged into Disqus for that link to work).
Now, if you look over the documentation you'll notice that there are actually two API keys we need. The first is the one linked above, but we'll also need a key for each "forum" that we're going to access. Luckily we can query for the second key.
If the word "forum" is a little confusing, here's the lowdown (the terminology, we suspect, originates from Disqus' infancy as a forum software project).
Each of your sites in Disqus is what Disqus calls a "forum." Within each site or forum, you have message threads. The threads correspond to your blog posts (or the pages where you embed Disqus).
Then within each thread are the "posts," or comments, people have left on your site.
So the API process works like this: first you fetch a list of forums (if you only have one site, there's just one forum_id to worry about), then you fetch a list of "threads" in each forum, and then you can request the actual comments for each thread.
== Dive in ==
To help get you started I've written a quick and dirty Disqus API client for Python. Because the Disqus API is young and subject to change, I didn't mirror every method; instead I created a generic wrapper that can handle all the current (and future) methods.
Here's the code:
<pre>
import urllib
import simplejson

BASE_PATH = 'http://disqus.com/api/'
DEBUG = True

class DisqusError(Exception):
    def __init__(self, code, message):
        self.code, self.message = code, message
    def __str__(self):
        return 'DisqusError %s: %s' % (self.code, self.message)

class DisqusAPIClient(object):
    def __getattr__(self, method):
        # Turn any attribute access (e.g. client.get_forum_list) into an API call
        def method(_self=self, _method=method, **params):
            url = "%s%s/?&%s" % (BASE_PATH, _method, urllib.urlencode(params))
            if DEBUG: print url
            data = _self.fetch(url)
            return data
        return method

    def fetch(self, url):
        # Grab the JSON response and raise an error if Disqus reports a failure
        data = simplejson.load(urllib.urlopen(url))
        if data.get("code", "") != "ok":
            raise DisqusError(data["code"], data["message"])
        return data['message']

    def __repr__(self):
        return "<DisqusClient>"
</pre>
Save that in a new file named disqus.py somewhere on your Python path. Be sure to note that we're using the Python simplejson library, so you'll need to [http://pypi.python.org/pypi/simplejson download and install] that if you haven't already.
== Using the Disqus API ==
Okay, now we have something we can use to access Disqus and return nice, native Python objects. So how do we go about using it?
Here's an example using the Python command line interface:
<pre>
>>> from disqus import DisqusAPIClient
>>> client = DisqusAPIClient()
>>> API_KEY = 'XXXXXXXXXXXXXX'
>>> fkey = client.get_forum_list(user_api_key=API_KEY)
>>> fkey
[{u'created_at': u'2008-08-29 18:33:26.560284', u'shortname': u'luxagraf', u'name': u'luxagraf', u'id': u'00000000'}]
</pre>
So the first thing we do is import and instantiate our client, then define our API key. The next step is to fetch our list of forums using the Disqus method <code>get_forum_list</code>, which requires the parameter <code>user_api_key</code> set to the key we just defined.
As you can see the result is a Python list containing, among other data, our forum id. So now we can plug that into the function that will retrieve our forum key.
Here's how that works:
<pre>
>>> forum_key = client.get_forum_api_key(user_api_key=API_KEY, forum_id=fkey[0]['id'])
>>> forum_key
u'u@E6KnR....'
</pre>
All we've done here is call the <code>get_forum_api_key</code> method, passing it the user API key and the forum id, which we extracted from our earlier call.
Now that we have the forum key we can actually retrieve a list of threads (all the posts where we have Disqus comments running).
Once we have the list of threads, we can then query for all the comments on each thread:
<pre>
>>> comments = []
>>> posts = client.get_thread_list(forum_api_key=forum_key)
>>> for post in posts:
... comments.append(client.get_thread_posts(forum_api_key=forum_key, thread_id=post['id']))
</pre>
What we've done here is create a list object to store all our comments and then query for the list of threads. Once we have the threads, we loop through each one and call <code>get_thread_posts</code>, which returns the actual comments.
Then we just append those to our <code>comments</code> list.
Now we have all the comments that have been posted on each entry in a single Python list. From here all we need to do is loop through the comments and store them in a database that holds all the kinds of data Disqus stores -- comment text, commenter name, avatar and so on.
Because there are any number of ways you can do that (different databases and so on), we'll leave it as an exercise for the reader, but to access the individual comments and associated data you would just need to loop through our comments list like so:
<pre>
>>> for c in comments:
... if c:
... print c[0]['message']... etc
</pre>
Rather than simply printing out the data, just call a function that writes the data to the database and you'll have your local backup of Disqus comments.
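Here's a minimal sketch of that idea using SQLite (the sqlite3 module that ships with Python 2.5). It assumes the nested <code>comments</code> list we built above; <code>message</code> is the field we printed earlier, while the author and date field names are assumptions, so they're pulled with <code>get()</code> and stored as plain strings:

<pre>
import sqlite3

def save_comments(comments, db_path='disqus_backup.db'):
    """Write every comment in the nested comments list to a local SQLite file."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS comments
                    (message TEXT, author TEXT, created_at TEXT)""")
    for thread_posts in comments:
        for post in thread_posts:
            # 'message' is the field we printed above; the other field names
            # are assumptions, so fall back to empty strings if they're absent
            conn.execute("INSERT INTO comments VALUES (?, ?, ?)",
                         (post.get('message', ''),
                          str(post.get('author', '')),
                          str(post.get('created_at', ''))))
    conn.commit()
    conn.close()
</pre>

Call <code>save_comments(comments)</code> after the loop above and you'll have a portable, queryable backup file.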
== Conclusion ==
When it comes to sending comments to Disqus, you'll have to stick with the JavaScript forms that the company provides, at least for now, though the API docs do note that Disqus is looking into other ways of submitting.
Still, despite being one-way, the Disqus API makes it relatively easy to store a local backup of all the comments that have been submitted to your site through the Disqus service.
\ No newline at end of file
diff --git a/old/published/Webmonkey/APIs/flickr.txt b/old/published/Webmonkey/APIs/flickr.txt
new file mode 100644
index 0000000..8c052dd
--- /dev/null
+++ b/old/published/Webmonkey/APIs/flickr.txt
@@ -0,0 +1,66 @@
One of the first and most comprehensive APIs of Web 2.0, the Flickr API was in no small part responsible for the site's success. There were dozens of photo sharing sites clamoring for attention when Flickr first launched, but thanks to the API, developers began building tools and extending the site far beyond the capabilities of its competitors.

The Flickr API exposes just about every piece of data that the site stores and offers near limitless possibilities for mashups, data scraping, tracking friends and just about anything else you can think of.

Some of the more popular applications leveraging the API are the various desktop uploaders available on all platforms, endless mapping mashups and more. Perhaps the most prolific of Flickr API users is John Watson (Flickr user fd), who has an [http://bighugelabs.com/flickr/ extensive collection] of tools and mashups available.

== Getting Started ==

The nice thing about Flickr is that the API is mature and there are already client libraries available for most languages (several libraries in some cases). That means you don't need to sit down and work out the details of every method in the API, you just need to grab the library for your favorite programming language.

For the sake of example, I'm going to use a Python library to retrieve all the photos I've marked as favorites on Flickr.

First grab [http://flickrapi.sourceforge.net/ Beej's Python Flickr API library] and install it on your Python path (instructions on how to do that can be found in the [http://flickrapi.sourceforge.net/flickrapi.html documentation]). I like Beej's library because it handles the XML parsing without being dependent on any other libraries.

== Writing the Code ==

Now let's write some code. Fire up a terminal and start Python. Now import the flickrapi module and set your API key:

import flickrapi
api_key = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

Now we're going to create an instance of the flickrapi client:

flickr = flickrapi.FlickrAPI(api_key)

Okay, now let's grab all the photos we've marked as favorites. Accessing any of the methods of the Flickr API takes the general form:

flickr.method_name(params)

So to grab our favorites we'll use:

favs = flickr.favorites_getPublicList(user_id='85322932@N00')

The favs variable now holds our list of images as parsed XML data. To print it we just loop through and pull out what we want:

for photo in favs.photos[0].photo:
    print photo['title']

To access the actual images, for instance to generate some HTML, we just need to build a URL:

for photo in favs.photos[0].photo:
    print '<img src="' + "http://farm%s.static.flickr.com/%s/%s_%s_m.jpg" % (photo['farm'], photo['server'], photo['id'], photo['secret']) + '" />'


== Mashups ==

If all you want to do is put images on your website there's probably already a widget that can handle the task (of course you *can* DIY if you like). But what if you wanted to plot all the favorites we just retrieved on a Google Map?

That's exactly the sort of mashup that the Flickr API excels at.
To do that, we would just need to add a parameter to our original method call to tell Flickr to include the photos' geo coordinates, for instance:

favs = flickr.favorites_getPublicList(user_id='85322932@N00', extras='geo')

Now we can parse through and grab the coordinates:

for photo in favs.photos[0].photo:
    print photo['latitude'], photo['longitude']

Then we can pass that over to the Google Maps API and plot the images. Note that in this particular case only a couple of the returned photos actually have lat/long info, so it would be a good idea to test for non-zero values before passing the data to the Google Maps API (there's a rough sketch of that at the end of this article).

I should also point out that the Flickr API will return other formats besides XML. For instance we could use this method to get a JSON response:

favs = flickr.favorites_getPublicList(user_id='85322932@N00', extras='geo', format='json')

== Conclusion ==

The Flickr API exposes nearly every aspect of the site, which makes it both limitless and daunting, but thankfully Flickr has excellent documentation. As for what you can do with the Flickr API, the best mashups seem to start with the thought, "you know what would be cool..."
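Here's that filtering idea as a minimal sketch, assuming the geo-enabled <code>favs</code> result from above. Since the article's point is that missing coordinates show up as zero, we simply skip those photos before handing anything to the mapping code:

<pre>
# Collect only the favorites that actually carry geodata (non-zero lat/long)
geotagged = []
for photo in favs.photos[0].photo:
    lat, lon = photo['latitude'], photo['longitude']
    if float(lat) != 0 and float(lon) != 0:
        geotagged.append({'title': photo['title'], 'lat': lat, 'lon': lon})

# geotagged is now a plain list of dicts, easy to serialize and feed to
# whatever map-plotting JavaScript you end up writing
print geotagged
</pre>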
\ No newline at end of file
diff --git a/old/published/Webmonkey/APIs/gajaxsearch.txt b/old/published/Webmonkey/APIs/gajaxsearch.txt
new file mode 100644
index 0000000..69dbb3e
--- /dev/null
+++ b/old/published/Webmonkey/APIs/gajaxsearch.txt
@@ -0,0 +1,87 @@
Building a site search engine is a pain, and unless you're handy at writing algorithms, yours probably isn't going to be that great, even after all your hard work. So why bother? Especially when there's already a reasonably popular search engine by the name of Google that's perfectly willing to handle the job for you.

The [http://code.google.com/apis/ajaxsearch/ Google Search API] is not only really good at searching, since it accesses the Google index, but it's also really easy to use.

The mashup potential here is near limitless, but we'll confine ourselves to a much more common case -- a site-specific search engine for your blog.

== Getting Started ==

The first step is to get a Google Search API key. Just log in to your Google account and head over to the application page. Tell Google the domain where you'll be using the Search API and then copy and paste your key; we'll need it in just a minute.

First, just to ensure there's no confusion, the only search API from Google uses AJAX. There was an older SOAP-based API, but sadly, that's no longer available. You might still run across a few SOAP-based implementations since Google hasn't shut the service down, but it doesn't hand out new keys.

The other thing to keep in mind is that if you're launching a new site, the site-specific results won't exist, since Google probably hasn't crawled the URL yet. If you don't have one, set up a Google Webmaster account and tell Google about your site by creating a sitemap. That should speed up the indexing process, though you will likely still have to wait a few days before a Google search returns anything.

== Implementing The Basic Search Engine ==

The first thing to do is open up your site template and add this line to the head tags:

<code>
 <script src="http://www.google.com/uds/api?file=uds.js&v=1.0&key=YOURKEYHERE" type="text/javascript"></script>
</code>

Paste in that API key you generated earlier and you're ready to go. For now we're going to write all our code in the page head tags, but if you end up with a long and complex script it's a better idea to break it out into its own file.

<code>
<script language="Javascript" type="text/javascript">
    //<![CDATA[

    function OnLoad() {
        // Create a search control
        var searchControl = new GSearchControl();

        // create a search object
        searchControl.addSearcher(new GwebSearch());
        // tell Google where to draw the searchbox
        searchControl.draw(document.getElementById("search-box"));

    }
    GSearch.setOnLoadCallback(OnLoad);

    //]]>
</script>
</code>

What we've done here is create a function that fires when the page loads and creates a new GSearchControl object, which is a text input box and a search button (there's also a little "powered by Google" badge). We then create a searcher; in this case we're just using a normal GwebSearch, which mimics the Google homepage.

Other options include video search, image search, blog search and several more of Google's specialized search engines. For more details check out the [http://code.google.com/apis/ajaxsearch/documentation/reference.html#_intro_GSearch Search Docs].

Once we have the object initialized and the type of search set, we have to tell Google where to draw the object.
In this case we'll use a div with an id of "search-box," so add this code somewhere in the body of your HTML file:

<code>
<div id="search-box">

</div>
</code>

That's all there is to it; your users can now search Google without leaving your page. But that's not exactly what we want, so read on to find out how we can limit the search to just your site.

== Site Specific Search Engine ==

To restrict the results to just your domain we need to create a site restriction. To do that we're going to change this line:

<code>searchControl.addSearcher(new GwebSearch());</code>

To this:

<code>
var siteSearch = new GwebSearch();
siteSearch.setUserDefinedLabel("YourSite");
siteSearch.setUserDefinedClassSuffix("siteSearch");
siteSearch.setSiteRestriction("example.com");
searchControl.addSearcher(siteSearch);
</code>

Just fill in your site name and URL and you're done. Give it a shot and you should see results limited to your domain (assuming Google has indexed it already).

One thing to note: you can string together as many of these site searches as you'd like and use setUserDefinedClassSuffix to add a different class to each domain, which makes it possible to do some fancy CSS work to, for instance, color code your results by domain.

You can also create a search using a custom search engine if you have one defined. See the Search Docs for more details.

== Where to go From Here ==

We've really just scratched the surface of what you can do with the AJAX Search API, so definitely read through the documentation and have a look at some of the examples. Mashup potential abounds, especially using some of the specialized search engines like local search or video.

Other options include the ability to control most of the look and feel via stylesheets, the ability to search Google Books to find quotes for your blog and more.

If even this is just too much code, you can always use the handy [http://code.google.com/apis/ajaxsearch/wizards.html Ajax Search Wizards] to generate some cut and paste code that will perform basic searches.
diff --git a/old/published/Webmonkey/APIs/gdata_documents_list.txt b/old/published/Webmonkey/APIs/gdata_documents_list.txt
new file mode 100644
index 0000000..b654b81
--- /dev/null
+++ b/old/published/Webmonkey/APIs/gdata_documents_list.txt
@@ -0,0 +1,101 @@
Google Documents offers free online storage for a variety of common files -- your MS Office documents, spreadsheets, text files, and more.

Even if you don't actually use Google Documents for editing or creating documents, it can serve as a handy backup for your desktop files. Today we'll take a look at the gData API that allows you to upload files from your local machine and store them in Google Documents.


== Installing the gData API ==

Because most of what we're going to do is shell-based, we'll be using the Python gData library. If you're not a Python fan there are a number of other client libraries available for interacting with Google Docs, including [http://code.google.com/apis/youtube/developers_guide_php.html PHP], [http://code.google.com/apis/youtube/developers_guide_dotnet.html .NET], [http://code.google.com/apis/youtube/developers_guide_java.html Java] and [http://code.google.com/apis/youtube/developers_guide_python.html Python].

To get started go ahead and download the [http://code.google.com/p/gdata-python-client/ Python gData Client Library].
Follow the instructions for installing the library as well as the dependencies (in this case, ElementTree -- only necessary if you aren't running Python 2.5).

Now, just to make sure you've got everything set up correctly, fire up a terminal window, start Python and try importing the modules we need:

<pre>
>>> import gdata.docs
>>> import gdata.docs.service
</pre>

Assuming those work, you're ready to start working with the API.

== Getting Started ==

The first thing we need to get out of the way is what kinds of documents we can upload. There's a handy static member we can access to get a complete list:

<pre>
>>> from gdata.docs.service import SUPPORTED_FILETYPES
>>> SUPPORTED_FILETYPES
</pre>

Running that command will reveal that these are our supported upload options:

#RTF: application/rtf
#PPT: application/vnd.ms-powerpoint
#DOC: application/msword
#HTM: text/html
#ODS: application/x-vnd.oasis.opendocument.spreadsheet
#ODT: application/vnd.oasis.opendocument.text
#TXT: text/plain
#PPS: application/vnd.ms-powerpoint
#HTML: text/html
#TAB: text/tab-separated-values
#SXW: application/vnd.sun.xml.writer
#TSV: text/tab-separated-values
#CSV: text/csv
#XLS: application/vnd.ms-excel

Definitely not everything you might want to upload, but between the MS Office options and good old plain text, you should be able to back up at least the majority of your files.

Now let's take a look at authenticating with the gData API.

Create a new file named gdata_uploader.py and save it somewhere on your Python path. Now open it in your favorite text editor. Paste in this code:

<pre>
from gdata.docs import service

def create_client():
    client = service.DocsService()
    client.email = 'yourname@gmail.com'
    client.password = 'password'
    client.ProgrammaticLogin()
    return client
</pre>

All we've done here is create a wrapper function for easy logins. Now, any time we want to log in, we simply call <code>create_client</code>. To make your code a bit more robust you can pull out those hardcoded <code>email</code> and <code>password</code> attributes and define them elsewhere.

== Uploading a document ==

Now we need to add a function that will actually upload a document. Just below the code we created above, paste in this function:

<pre>
import gdata

def upload_file(file_path, content_type, title=None):
    ms = gdata.MediaSource(file_path=file_path, content_type=content_type)
    client = create_client()
    entry = client.UploadDocument(ms, title)
    print 'Link:', entry.GetAlternateLink().href
</pre>

Now let's play with this stuff in the shell:

>>> import gdata_uploader
>>> gdata_uploader.upload_file('path/to/file.txt', 'text/plain', 'Testing gData File Upload')
Link: http://docs.google.com/Doc?id=<random string of numbers>
>>>

Note that our <code>upload_file</code> takes an optional parameter "title"; if you import Python's datetime module and pass along the date as a string, it's easy to make incremental backups, like: myfile-082908.txt, myfile-083008.txt and so on.

== Where to go from here ==

To automate the backup process you could call the upload_file function from a cron job. For instance I use:

<pre>
0 21 * * * python path/to/backup_docs.py 2>&1
</pre>

In this case backup_docs.py is just a short file that imports our functions from gdata_uploader.py, uses Python's <code>os</code> module to grab a list of files I want backed up and calls the <code>upload_file</code> function for each one -- something along the lines of the sketch below.
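Here's a rough, slightly expanded sketch of what that backup_docs.py could look like. The backup directory, the extension-to-MIME-type map and the date-stamped titles are all assumptions you'd adjust for your own files:

<pre>
import os
import datetime
from gdata_uploader import upload_file

BACKUP_DIR = '/home/you/documents'              # assumption: the folder to back up
CONTENT_TYPES = {'.txt': 'text/plain',          # assumption: map file extensions
                 '.doc': 'application/msword',  # to the MIME types gData expects
                 '.csv': 'text/csv'}

stamp = datetime.date.today().strftime('%m%d%y')
for filename in os.listdir(BACKUP_DIR):
    ext = os.path.splitext(filename)[1].lower()
    if ext in CONTENT_TYPES:
        path = os.path.join(BACKUP_DIR, filename)
        # the date-stamped title gives us the incremental backups mentioned above
        upload_file(path, CONTENT_TYPES[ext], '%s-%s' % (filename, stamp))
</pre>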
While the automated script is a nice extra backup, unfortunately the gData Documents API is somewhat limited. For instance, it would be nice if we could automatically move our document to a specific folder, but that currently isn't possible.

There are some read functions available though; have a look through the [http://code.google.com/apis/documents/reference.html official docs] and if you come up with a cool way to use the API, be sure to add it to this page.
diff --git a/old/published/Webmonkey/APIs/glossary.txt b/old/published/Webmonkey/APIs/glossary.txt
new file mode 100644
index 0000000..54938ab
--- /dev/null
+++ b/old/published/Webmonkey/APIs/glossary.txt
@@ -0,0 +1,38 @@
An Application Programming Interface, better known by its abbreviation, API, is a simple way to interact with websites. Using an API you can extract public data from sites like del.icio.us, Flickr, Digg and more to create mashups, reuse data or just about anything else you can imagine.

APIs are also useful for extracting your own private data from a site so that you can back it up elsewhere or display it on another site.

When talking about APIs you'll hear the following terms quite a bit.

== Common API Related Terms ==

# Web service/API -- These terms are largely interchangeable and simply refer to the ways you can interact with the data on your favorite websites.

# Method -- A method is just one aspect of an API; you might also see methods referred to as functions. For instance, if you're interacting with Flickr, you might want to get your public photos. To do so you would use the get_user_photos method.

# Response -- The information returned by the API method that you've called.

# REST -- Short for Representational State Transfer. REST treats data as a web document that lives at a specific URL. REST APIs use standard HTTP requests such as GET, PUT, HEAD, DELETE and POST to interact with data.

# XML-RPC -- This older API scheme formats method calls and responses as XML documents which are sent over HTTP.

# SOAP -- Simple Object Access Protocol. A W3C standard for passing messages across the network. SOAP is the successor to XML-RPC. Its complexity has led many to disparage SOAP, and with more APIs leaning toward REST, SOAP's future is uncertain.

# AJAX -- Asynchronous JavaScript and XML. Technically it has nothing to do with APIs, but many sites using APIs send their queries out using AJAX, which is partially responsible for the popularity of JSON.

# JSON -- JavaScript Object Notation. JSON's main advantage is that it is easy to convert JSON data into native structures in nearly any programming language. JSON uses key-value pairs and arrays, something common to PHP, Python, Perl, Ruby and most other languages. The portability of JSON has made it an increasingly popular choice for sites developing APIs.

== Popular Web APIs ==

# [http://www.google.com/apis/maps/ Google Maps]
# [http://developer.yahoo.com/maps/ Yahoo Maps]
# [http://www.flickr.com/services/api/ Flickr]
# [http://code.google.com/apis/youtube/overview.html YouTube]
# [http://del.icio.us/help/api/ del.icio.us]
# [http://wiki.ma.gnolia.com/Ma.gnolia_API ma.gnolia]
# [http://twitter.com/help/api Twitter]
# [http://www.yelp.com/developers/documentation/search_api Yelp]
# [http://openid.net/ OpenID]
# [http://www.amazonws.com/ Amazon S3]
# [http://atomenabled.org/ AtomAPI]
# [http://meta.wikimedia.org/wiki/API MediaWiki API]
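To tie a few of these terms together -- REST, method, response and JSON -- here's a minimal Python sketch. It fetches Twitter's public timeline (the same REST URL used in the Twitter article elsewhere in this collection) and decodes the JSON response; the exact fields printed are just the common ones for that feed and may vary:

<pre>
import urllib
import simplejson

# A REST "method" is just a URL; issuing an HTTP GET returns a response
url = 'http://twitter.com/statuses/public_timeline.json'
response = urllib.urlopen(url).read()

# The JSON response maps directly onto native Python lists and dictionaries
statuses = simplejson.loads(response)
for status in statuses:
    print status['user']['screen_name'], '--', status['text']
</pre>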
\ No newline at end of file
diff --git a/old/published/Webmonkey/APIs/gmaps.txt b/old/published/Webmonkey/APIs/gmaps.txt
new file mode 100644
index 0000000..134eab9
--- /dev/null
+++ b/old/published/Webmonkey/APIs/gmaps.txt
@@ -0,0 +1,83 @@
Google Maps is perhaps the biggest and most useful of all the common web APIs, but it's also one of the more complex and can be intimidating for newcomers. It's also somewhat difficult to immediately recognize all the possibilities of the Google Maps API, since there are literally hundreds of ways to use it.

To keep things simple we'll start with a very common use: adding a map to your site and displaying some markers.

== Getting Started ==

The first thing you need to do is apply for a Google Maps key. Just head over to the [http://code.google.com/apis/maps/signup.html API key signup page] and log in to your Google account. Once you have the key, create an HTML file with this basic code:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head>
    <title>My Map</title>
    <script src="http://maps.google.com/maps?file=api&v=2&key=yourkeyhere" type="text/javascript"></script>
  </head>
  <body>
    <h1>My Map</h1>
    <div id="map-canvas"></div>
  </body>
</html>

Remember to paste your map key into the JavaScript tag and you're all set. Well, almost. We need to add one more little thing so that Google will go ahead and initialize the map. Change the body tag to include the following handlers:

<body onload="initialize()" onunload="GUnload()">

== Adding in the Map ==

We've got our script set up and it's loading; now we just need to tell the API where to draw the map. To do that we're going to write a little JavaScript. Let's get started by inserting this code into the head tags of your HTML file:

<style>
div#map-canvas {
    width: 500px;
    height: 300px;
}
</style>
<script type="text/javascript">
    var map = null;

    function initialize() {
        if (GBrowserIsCompatible()) {
            // create a center for our map
            var point = new GLatLng(37.780764, -122.395592);
            // create a new map
            map = new GMap2(document.getElementById("map-canvas"));
            // set the center
            map.setCenter(point, 15, G_NORMAL_MAP);

        }
    }
</script>

Now load your HTML file in your browser and you should see a map centered on the Wired News offices.

== Adding Markers ==

Let's add in a marker so users have something to interact with. To do that we'll extend our initialize function. Add these lines just below the map.setCenter bit:

    var markerOptions = { clickable: true, draggable: false };
    var marker = new GMarker(point, markerOptions);
    map.addOverlay(marker);
    marker.info_window_content = '<h2><strong>Wired News</strong></h2><p>Home of Monkeys</p>';
    marker.bindInfoWindowHtml(marker.info_window_content, {maxWidth: 350});
    GEvent.addListener(marker, "click", function() {
        map.panTo(point, 2);
    });

Reload your page in the browser and you should now see a little red pin, and when you click it, you should see our little info window.

And that's all there is to it.

== Where to go from here ==

Obviously the GMaps API is far more powerful than this simple example. By itself the Google Maps API might not be the most exciting web service, but when you start mashing it together with other data, it can turn boring address tables into map-plotted, location-aware information for your visitors.
Here are a few ideas to get you started exploring some other Google Maps options.

# Include driving directions -- To get the handy "directions to here" links that you'll find on a normal Google map, see the [http://code.google.com/apis/maps/documentation/reference.html#GDirections GDirections class].
# Include map controls -- There are a variety of different [http://code.google.com/apis/maps/documentation/reference.html#GControl GMap controls] for your users to pan and zoom. Try adding this line inside the initialize function, just after the map.setCenter line: map.addControl(new GSmallZoomControl());
# Batch Add Markers -- The best way to add markers is to pull info from your database and loop through it when you output the HTML. Just nestle the code from the "Adding Markers" section inside a loop and make the marker names dynamic (see the sketch below).
# Custom Markers -- There's no need to stick with the default red pin; you can use any image you want, see [http://code.google.com/apis/maps/documentation/reference.html#GMarkerOptions the docs for more details].
# Hide the Google logo and map image credits -- Most definitely against the TOS, but if you're so inclined add this to your stylesheet: img[src="http://maps.google.com/intl/en_us/mapfiles/poweredby.png"],
#map-canvas>div:first-child+div>*,
a[href="http://www.google.com/intl/en_us/help/terms_maps.html"] { display: none; }
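For the "Batch Add Markers" idea above, here's a rough sketch. The <code>locations</code> array is a stand-in for whatever you generate from your database when you output the page; the loop just reuses the marker calls from the "Adding Markers" section:

<pre>
// assumption: an array of point data generated server-side from your database
var locations = [
    { lat: 37.780764, lng: -122.395592, html: '<h2>Wired News</h2><p>Home of Monkeys</p>' },
    { lat: 37.332300, lng: -122.030800, html: '<h2>Another Stop</h2><p>More info here</p>' }
];

function addMarkers(map) {
    for (var i = 0; i < locations.length; i++) {
        var point = new GLatLng(locations[i].lat, locations[i].lng);
        var marker = new GMarker(point, { clickable: true, draggable: false });
        map.addOverlay(marker);
        // each marker gets its own info window content from the array
        marker.bindInfoWindowHtml(locations[i].html, { maxWidth: 350 });
    }
}
</pre>

Call <code>addMarkers(map)</code> from inside the initialize function, after the map.setCenter line.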
\ No newline at end of file
diff --git a/old/published/Webmonkey/APIs/oembed.txt b/old/published/Webmonkey/APIs/oembed.txt
new file mode 100644
index 0000000..f997f3e
--- /dev/null
+++ b/old/published/Webmonkey/APIs/oembed.txt
@@ -0,0 +1,120 @@
Have you ever wished you could get full multimedia embed code from a simple URL? Suppose you're building a social app that allows users to post links to images, videos or songs; wouldn't it be nice if you could turn that simple link into an embedded Flickr image or YouTube video?

Of course you can reverse engineer many of the various embed structures, but what happens when the source site of your multimedia embed changes its format or relocates the actual image? Thousands of broken links suddenly litter your site. There has to be a better way.

That's the thinking behind OEmbed, a new proposed standard for taking a URL and generating an embed link. OEmbed is the brainchild of Pownce developers Leah Culver and Mike Malone, as well as Cal Henderson of Flickr and Richard Crowley of OpenDNS.

OEmbed isn't going to solve all your embedding needs since not every site supports it, but given that some big names -- like Flickr and Viddler -- have already signed on, we think others will soon follow suit.

So grab your coding tools and let's dive in to see how OEmbed can make your life easier.

== What is OEmbed ==

Put simply, OEmbed dictates a standard format where you send a URL and the host site provides the embed code. In the simplest case you would capture the URL your user has entered and then query the originating service's API to get back any additional info you need.

Here's how it works: the user enters a URL, the service (say Pownce) then queries the source of the URL (say, Flickr). The source site then sends back all the necessary information for Pownce to embed the image automatically.

The full OEmbed spec says that all requests sent to the API endpoint (Flickr in our example) must be HTTP GET requests, with any arguments sent as query parameters. Obviously any arguments you send through HTTP should be url-encoded (as per [http://www.faqs.org/rfcs/rfc1738.html RFC 1738] in this case).

The following query parameters are defined as part of the spec:

# url (required) - The URL to retrieve embedding information for.
# maxwidth (optional) - The maximum width of the embedded resource.
# maxheight (optional) - The maximum height of the embedded resource.
# format (optional) - The required response format (i.e. XML or JSON).

The maxwidth and maxheight parameters are nice when you're embedding content into a fixed-width design and you don't want to end up with embeds that turn your carefully designed site into some horrible-looking MySpace page.

As for the response you get back from an OEmbed call, that will depend somewhat on what type of object you're interested in embedding. In general you can expect things like the type of object, the owner of the content, thumbnails and more. For full details check out the [http://OEmbed.com/ OEmbed site].

== Using OEmbed ==

Let's say you've built a content sharing site like FriendFeed. I join your site and want to post [http://www.flickr.com/photos/luxagraf/137254255/ this Flickr image of the Himalayas] using just the URL.
I cut and paste the URL from my browser window into your text field, and then you would query Flickr using this request:

<pre>
http://www.flickr.com/services/OEmbed/?url=http%3A//www.flickr.com/photos/luxagraf/137254255/
</pre>

The XML response you would get back looks like this:

<pre>
<OEmbed>
  <version>1.0</version>
  <type>photo</type>
  <title>Nepal-Sarangkot_12_16_05_31</title>
  <author_name>luxagraf</author_name>
  <author_url>http://www.flickr.com/photos/luxagraf/</author_url>
  <cache_age>3600</cache_age>
  <provider_name>Flickr</provider_name>
  <provider_url>http://www.flickr.com/</provider_url>
  <width>375</width>
  <height>500</height>
  <url>http://farm1.static.flickr.com/50/137254255_008f50c357.jpg</url>
</OEmbed>
</pre>

As you can see, all you need to do is grab the <code>url</code> and plug that into a standard <code>img</code> tag, and my photo will show up without me having to do any extra work at all (there's a short consumer sketch at the end of this article showing how little code this takes).

And I know what you're thinking: if I just provide a text field for the user to paste in a URL, how will I know what service to query? There are two obvious ways you can handle that. One would be to provide a drop-down menu that allows users to specify the source of the link. The other would be to just parse the link with some [http://www.webmonkey.com/tutorial/Regular_Expressions_Tutorial Regular Expressions] [http://www.webmonkey.com/tutorial/Use_Regex_in_Perl magic] and handle it transparently.

If you have other ideas, be sure to add them.

== Getting more complex ==

That's all well and good, but what about more complex examples like video? This is actually where OEmbed shines -- no more filling in some complex embedding template that's liable to break whenever something changes on the source site.

This time we'll use Viddler as an example. Let's say the visitor to our sharing site wants to embed [http://www.viddler.com/explore/RickRoll/videos/2/ this video]... they copy the URL and paste it in our form text field and then we query Viddler's OEmbed URL. But this time we'll add a parameter to make sure the embedded video is 400 pixels wide.
The query code would look like this:

<pre>
http://lab.viddler.com/services/OEmbed/?url=http%3A%2F%2Fwww.viddler.com%2Fexplore%2FRickRoll%2Fvideos%2F2%2F&width=400&format=xml
</pre>

Viddler will then return this XML response (or if you change the <code>format</code> parameter to JSON, you could get a JSON response):

<pre>
<?xml version="1.0" encoding="UTF-8"?>
<OEmbed>
  <version>1.0</version>
  <type>video</type>
  <width>400</width>
  <height>342</height>
  <title>Rick Roll Muppets Version</title>
  <url>http://www.viddler.com/explore/RickRoll/videos/2/</url>
  <author_name>RickRoll</author_name>
  <author_url>http://www.viddler.com/explore/RickRoll/</author_url>
  <provider>Viddler</provider>
  <provider_url>http://www.viddler.com/</provider_url>
  <html><object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="400" height="342" id="viddlerplayer-4310bfba"><param name="movie" value="http://www.viddler.com/player/4310bfba/" /><param name="allowScriptAccess" value="always" /><param name="allowFullScreen" value="true" /><embed src="http://www.viddler.com/player/4310bfba/" width="400" height="342" type="application/x-shockwave-flash" allowScriptAccess="always" allowFullScreen="true" name="viddlerplayer-4310bfba" ></embed></object></html>
</OEmbed>
</pre>

As you can see, the last element of the response, the <code>html</code> node, gives us the embed code and keeps the video constrained to the dimensions we specified. It doesn't get much easier than that.

Viddler has even put together a cool little sample app that [http://lab.viddler.com/services/OEmbed/form.php shows OEmbed in action].

=== Security Note ===


When you're creating a site that's going to display HTML (as with video embeds), there's always the potential for XSS attacks from the site providing the code. At the moment all the sites offering OEmbed are reputable, but that may not always be the case. To avoid opening your site up to XSS attacks, the OEmbed authors recommend displaying the HTML in an iframe, hosted from another domain. This ensures that the HTML cannot access cookies from the consumer domain.


== Sky's the Limit ==

OEmbed isn't just for developers either. At the moment no one has released one, but if you wrapped OEmbed up as a WordPress or Movable Type plugin, even posting content on your own site would be considerably easier.

If you happen to be working with the Django web framework there's already a nice set of [http://code.google.com/p/django-OEmbed/ OEmbed template tags] up on Google Code. The project allows you to do things like this in your Django templates:

{% OEmbed %}
http://www.flickr.com/photos/luxagraf/
{% endOEmbed %}

If you know of other implementations, add them here.

== Why We Love OEmbed ==

There's a bunch of ever-changing social web specs out there promising all sorts of things, from easier logins through OAuth and OpenID to Google and Facebook's widget platforms, but where most of those promises remain unfulfilled, OEmbed is here today and it just works.

Obviously it's missing some key sites like YouTube or Picasa, but hopefully it won't be long before OEmbed becomes a standard part of every successful web API.
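As promised above, here's a minimal consumer sketch in Python. The endpoint and photo URL are the Flickr examples from earlier; the sketch assumes the provider honors the spec's <code>format=json</code> parameter, and it skips error handling entirely:

<pre>
import urllib
import simplejson

def get_oembed(endpoint, url, maxwidth=None):
    # Build the query string the spec describes: url plus optional parameters
    params = {'url': url, 'format': 'json'}
    if maxwidth:
        params['maxwidth'] = maxwidth
    response = urllib.urlopen(endpoint + '?' + urllib.urlencode(params))
    return simplejson.load(response)

data = get_oembed('http://www.flickr.com/services/OEmbed/',
                  'http://www.flickr.com/photos/luxagraf/137254255/')

# For a photo we get a direct image URL; for video we'd use data['html'] instead
print '<img src="%s" />' % data['url']
</pre>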
\ No newline at end of file
diff --git a/old/published/Webmonkey/APIs/twitter.txt b/old/published/Webmonkey/APIs/twitter.txt
new file mode 100644
index 0000000..86e0b4b
--- /dev/null
+++ b/old/published/Webmonkey/APIs/twitter.txt
@@ -0,0 +1,62 @@
It's the hottest web service around, and if you're looking to try out an API for the first time, Twitter is a good place to start.

Like the service itself, the Twitter API is simple and easy to use. The only thing to keep in mind is that Twitter limits you to 70 requests per 60-minute interval, so remember to cache or otherwise store your results or you may find yourself banned.

If you end up building something that needs to make more requests you can always e-mail Twitter and ask for permission.



== Getting Started ==

Twitter's API is REST-based and will return results as either XML or JSON, as well as both RSS and Atom feed formats.

For a very simple look at the data returned, fire up a terminal window and type:

<code>
curl http://twitter.com/statuses/public_timeline.rss
</code>

That line will give you the latest 20 tweets from the public timeline in the form of an RSS feed. To get the same results in JSON just change the extension to ".json".

The public timeline can be accessed by any client, but all other Twitter methods require authentication.


=== Authenticating with Twitter ===

To authenticate yourself to Twitter you need to send a username and password. The basic format is:

curl -u email:password http://twitter.com/statuses/friends_timeline.xml

Most client libraries offer easy ways to pass in a username/password pair, making it very simple to authenticate with Twitter.


== Client Libraries ==

There's no need to re-invent the wheel; there are already some pretty good libraries out there for accessing Twitter, whether you're looking for [http://twitter.com/help/api ActionScript], [http://code.google.com/p/python-twitter/ Python], [http://groups.google.com/group/twitter-development-talk/web/twitter4r-open-source-ruby-library-for-twitter-rest-api Ruby], [http://twitter.pbwiki.com/Scripts#PHP PHP] or [http://twitter.pbwiki.com/Scripts many more].

For the sake of example we'll use the Python library. Download and install the library and fire up a terminal window.

Let's grab your last twenty tweets. Fire up Python and type the following lines, replacing "yourusername" with your username:
<code>
>>> import twitter
>>> client = twitter.Api()
>>> latest_posts = client.GetUserTimeline('yourusername')
>>> print [s.text for s in latest_posts]
</code>
The last line prints out just the text of your posts. To get at things like date posted and other useful methods, have a read through the [http://static.unto.net/python-twitter/0.5/doc/twitter.html Python library documentation].

To go the other direction and post something to Twitter, we'll need to authenticate. Recreate our Twitter client, but this time pass in your username and password:
<code>
>>> client = twitter.Api(username='yourusername', password='yourpassword')
>>> update = client.PostUpdate('The Twitter API is easy')
</code>

Head over to your Twitter page and you should see the update.

== Mashups, Apps and more ==

To get some idea of what you can do with the Twitter API, head over to the fan wiki and check out some of the [http://twitter.pbwiki.com/Apps applications] and [http://twitter.pbwiki.com/Mashups mashups] that Twitterheads have created.

One area of particular usefulness is the hashtags concept.
Hashtags involve inserting a # sign to denote keywords or other data like location. Although not everyone is a fan of hashtags (they tend to make your tweets less readable), parsing Twitter streams for hashtags can yield a wealth of useful data.
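Here's a rough sketch of that kind of parsing, reusing the <code>latest_posts</code> list we fetched with the Python library above. The regular expression is a simplifying assumption that a hashtag is a # followed by word characters:

<pre>
import re

# naive assumption: a hashtag is '#' followed by letters, digits or underscores
HASHTAG = re.compile(r'#(\w+)')

tag_counts = {}
for status in latest_posts:
    for tag in HASHTAG.findall(status.text):
        tag_counts[tag.lower()] = tag_counts.get(tag.lower(), 0) + 1

print tag_counts
</pre>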
\ No newline at end of file diff --git a/old/published/Webmonkey/APIs/youtube_data_api.txt b/old/published/Webmonkey/APIs/youtube_data_api.txt new file mode 100644 index 0000000..2be548b --- /dev/null +++ b/old/published/Webmonkey/APIs/youtube_data_api.txt @@ -0,0 +1,102 @@ +Last time around we looked at the YouTube Player API which allows you to customize, skin and otherwise control the playback of YouTube videos. + +Now it's time to explore the YouTube Data API, which you can use to request and store info about movies you'd like to display on your site. + +There are a variety of client libraries available for the youTube Data API, including [http://code.google.com/apis/youtube/developers_guide_php.html PHP], [http://code.google.com/apis/youtube/developers_guide_dotnet.html .NET], [http://code.google.com/apis/youtube/developers_guide_java.html Java] and [http://code.google.com/apis/youtube/developers_guide_python.html Python]. + +We'll be using the latter, but the general concepts will be the same no matter which language you use. + +== Getting Started == + +Let's say you frequently post movies to YouTube and you're tired of cutting and pasting the embed code to get them to show up on your site. + +Using the YouTube Data API and some quick Python scripts, we can grab all our movies, along with some metadata and automatically add them our database. For instance, if you followed along with a Django tutorial series, this would be handy way to add YouTube to your list of data providers. + +To get started go ahead and download the [http://code.google.com/support/bin/answer.py?answer=75582 Python YouTube Data Client Library]. Follow the instructions for installing the Library as well as the dependancies (in this case, ElementTree -- only necessary if you aren't running Python 2.5) + +Now, just to make sure you've got everything set up correctly, fire up a terminal window, start Python and try importing the modules we need: + +<pre> +>>> import gdata.youtube +>>> import gdata.youtube.service +</pre> + +Assuming those work, you're ready to start grabbing data. + +== Working with the YouTube Data API == + +The first thing we need to do is construct an instance of the YouTube Data service. Entry this code at the prompt: + +<pre> +yt_service = gdata.youtube.service.YouTubeService() +</pre> + +That's a generic object, with no authentication, so we can only retrieve public feeds, but for our purposes that's all we need. First let's write a function that can parse the data we'll be returning. + +Create a new text file named youtube_client.py and paste in this code: + +<pre> +import gdata.youtube +import gdata.youtube.service + +class YoutubeClient: + def __init__(self): + self.yt_service = gdata.youtube.service.YouTubeService() + + def print_items(self, entry): + print 'Video title: %s' % entry.media.title.text + print 'Video published on: %s ' % entry.published.text + print 'Video description: %s' % entry.media.description.text + print 'Video category: %s' % entry.media.category[0].text + print 'Video tags: %s' % entry.media.keywords.text + print 'Video flash player URL: %s' % entry.GetSwfUrl() + print 'Video duration: %s' % entry.media.duration.seconds + print '----------------------------------------' + + def get_items(self, feed): + for entry in feed.entry: + self.print_items(entry) +</pre> + +Now obviously if you want to store the data you're about to grab, you need to rewrite the <code>print_items</code> function to do something other than just print out the data. 
But for the sake of example (and because there are a near infinite number of ways your database could be structured) we'll just stick with a simple print function for now.

So make sure that <code>youtube_client.py</code> is on your Python path, then fire up Python again and input these lines:

<pre>
>>> from youtube_client import YoutubeClient
>>> client = YoutubeClient()
>>> client.get_items(client.yt_service.GetMostLinkedVideoFeed())
</pre>

The last line should produce a barrage of output as the client prints out a list of the most linked videos and all the associated data. To get that list we used one of the YouTube service module's built-in methods, <code>GetMostLinkedVideoFeed()</code>.

Okay, that's all well and good if you want the most linked videos on YouTube, but what about ''our'' uploaded videos?

To do that we're going to use another method of the YouTube service module, this time the <code>GetYouTubeVideoFeed()</code> method.

First, find the video feed URL for your account, which should look something like this:

<pre>
http://gdata.youtube.com/feeds/api/users/YOURUSERNAME/uploads
</pre>

So let's plug that into our already running client with these two lines:

<pre>
url = 'http://gdata.youtube.com/feeds/api/users/YOURUSERNAME/uploads'
client.get_items(client.yt_service.GetYouTubeVideoFeed(url))
</pre>

You should see a list of all your recently uploaded videos, along with all the metadata we plugged into our <code>print_items()</code> function.

== Conclusion ==

Hopefully this has given you some insight into how the Data API works. We've really just scratched the surface; there are dozens of methods available to retrieve all sorts of data -- see the [Python YouTube Data API guide] for more details.

While we've used Python, the methods and techniques are essentially the same for all the client libraries, so you should be able to interact with YouTube via a language you're comfortable with.

Obviously you'll need to adjust the <code>print_items()</code> function to do something better than just printing the results. If you're using Django, create a model to hold all the data and then use the model manager's <code>get_or_create()</code> method to plug the data in via <code>print_items()</code>.

For full automation, write a shell script to call the methods we used above and attach the script to a cron job.

And there you have it -- an easy way to add YouTube videos to your own personal site, with no manual labor on your end.
diff --git a/old/published/Webmonkey/APIs/youtube_player_api.txt b/old/published/Webmonkey/APIs/youtube_player_api.txt
new file mode 100644
index 0000000..737ec20
--- /dev/null
+++ b/old/published/Webmonkey/APIs/youtube_player_api.txt
@@ -0,0 +1,124 @@
While other sites may offer higher quality video, if you want traffic YouTube is the place to be. But thanks to a recent overhaul of the YouTube API, you can do more than just embed your videos on your own site. In fact you could build your own uploading system and simultaneously post videos to YouTube and your site.

The YouTube API has a number of different functions -- there's the Data API for grabbing info about movies, the Player API for skinning your embedded players and more. We're going to take a look at both the Data API and the Player API.

We'll start with the Player API since it's a little bit simpler to interact with.
YouTube recently unveiled a new, improved [http://code.google.com/apis/youtube/getting_started.html#player_apis Player API] that allows developers to do things like re-skin the video player or create their own custom controls. Many of the new functions can be accessed through both JavaScript and ActionScript. We'll take a look at the JavaScript controls, but the ActionScript API is very similar so you can convert this code without too much trouble.


== Getting Started ==

When it comes to embedding Flash, YouTube recommends using SWFObject, which is a JavaScript library for embedding Flash movies. Grab a copy of [http://code.google.com/p/swfobject/ SWFObject] and put it in your public web folder. Now include this line at the top of your page:

<code>
<script type="text/javascript" src="/path/to/swfobject.js"></script>
</code>

Now let's embed a movie and start playing with the new API.

== Using SWFObject ==

If you've never encountered SWFObject you're in for a treat. SWFObject greatly simplifies the process of embedding Flash movies, taking care of various cross-browser issues and other problems.

All you need to do is define a tag for SWFObject to replace with a Flash movie. Here's some example HTML code you can use for this tutorial:

<pre>
<body>
<div id="ytplayer">
<p>You will need Flash 8 or better to view this content.</p>
</div>


<script type="text/javascript">
  var params = { allowScriptAccess: "always" };
  swfobject.embedSWF(
    "http://www.youtube.com/v/tFI7JAybF6E&enablejsapi=1&playerapiid=ytplayer", "ytplayer", "425", "365", "8", null, null, params);
</script>

</body>
</pre>

Okay, here's how it works: first of all we create a <code>div</code> container to hold our embedded movie. If the user doesn't have Flash 8 or better they'll see our plain paragraph text (note that SWFObject offers far more sophisticated ways of handling this, like auto-updating the user's Flash player; see the docs for full details).

The next step is to write the JavaScript and embed the movie. We've defined a params argument to tell Flash that it's okay to let the YouTube domain load scripts and then we call the <code>embedSWF</code> function.

The parameters passed to <code>embedSWF</code> include the location of the .swf file, the id of the tag we want to replace, width, height, the player version to require (the YouTube API requires 8 or above), two params we're not using and finally our params value.

Now let's take a look at the URL we passed to <code>embedSWF</code>. For the most part it's an ordinary YouTube URL, but we've added two additional bits of data: we've told YouTube that we want to use the JavaScript API and we've given our player an API name.

The API name, in this case "ytplayer", is important because if you ever embed two movies on the same page and want to control them separately, each one needs to have a unique name.

== Controlling the Player with JavaScript ==

If you load the above code in a browser you'll notice that you just rickrolled yourself. But more importantly, you'll notice that the movie doesn't look any different than a normal embedded movie.

Let's start adding some outside controls to our page so you can see how the Player API works.
Go ahead and paste these functions into your HTML, just below the SWFObject code. The first one grabs a reference to the player once YouTube tells us it's ready (the JavaScript API calls onYouTubePlayerReady when the player has loaded), and the rest use that reference to control playback:

<pre>
var ytplayer = null;

function onYouTubePlayerReady(playerId) {
  // grab a reference to the embedded player once it has finished loading
  ytplayer = document.getElementById("ytplayer");
}

function play() {
  if (ytplayer) {
    ytplayer.playVideo();
  }
}

function pause() {
  if (ytplayer) {
    ytplayer.pauseVideo();
  }
}

function stop() {
  if (ytplayer) {
    ytplayer.stopVideo();
  }
}

</pre>

Now just below the div element that we're replacing, add this HTML code:

<pre>
  <a href="javascript:void(0);" onclick="play();">Play</a>
  <a href="javascript:void(0);" onclick="pause();">Pause</a>
  <a href="javascript:void(0);" onclick="stop();">Stop</a>
</pre>

If you reload the page you'll see that our HTML links can now control the player.

Now you might be thinking, what's the point? After all, the player already has controls. But if you're trying to make embedded movies more closely match the look and feel of your site's design, these tools make it easy to create your own controls.

So how do you get rid of YouTube's controls? Well, to do that we'll need to use the "chromeless" player. To embed the chromeless player you'll need to sign up for an API key, which you can do at the [http://code.google.com/apis/youtube/dashboard/ YouTube API Dashboard].

To use the chromeless player, the URL parameter inside the <code>embedSWF</code> function becomes something like:

<pre>
http://gdata.youtube.com/apiplayer?dev_key=YOUR_DEV_KEY&enablejsapi=1&playerapiid=ytplayer
</pre>

Notice that we haven't passed an actual movie id in this case. To do that with the chromeless player we use the <code>loadNewVideo</code> function. So add this code below our other JavaScript functions:

<pre>
function loadNewVideo(id, startSeconds) {
  if (ytplayer) {
    ytplayer.loadVideoById(id, startSeconds);
  }
}

</pre>

There are a number of ways we can call this function -- through a drop-down list of options, a text input box, etc. -- but for simplicity let's just add another link button:

<pre>
<a href="javascript:void(0);" onclick="loadNewVideo('tFI7JAybF6E', 0);">load</a>
</pre>

And there you have it: a custom, chromeless movie player that you can control with JavaScript.

== Conclusion ==

Now you know how to control YouTube movie players and hopefully feel comfortable customizing them to fit your own site. If JavaScript isn't your bag, there's also a very similar (in function) ActionScript API that you can use to build your own controls and load chromeless players.

Now you may be wondering, how can I get some YouTube movie data to display on my site? For instance maybe you'd like to grab all the movies you've marked as favorites and display them on your blog? Or maybe you want to grab your own movies for display elsewhere.

Well, read on to our next installment, when we'll tackle the other half of the YouTube API -- the Data API.
\ No newline at end of file