Editing JSON web services in JavaScript

I’ve been working on a utility for testing mobile apps.

The first stage is testing the web services directly.  I used python’s urllib2 & httplib2 to make requests, parse the results and do verification.  It was my first real foray into python since playing with pickle circa 2001 with raw sockets.  That didn’t last long and I went back to using expat and perl.

This time around I found a really cool XML library for python called amara, which is kind of like SimpleXML for PHP, but way better.  It provides an object tree that is so intuitive to work with you’ll forget you ever worked with Java libraries like Xerces, JDOM, and Castor (from worst to better).

The second stage was building a mock web service for testing the mobile app without the webservices.  I started by hand-crafting XML files and placing them as static files (and later as PHP templates of XML plus headers) on a webserver.  This worked okay, but it was limited, laborious, and I kept creating false states from not-really-valid responses.

Once the web services matured, I was able to take actual valid responses and modify them to test states that are difficult to simulate with backend services (which happen to include a test SABRE environment) and legacy airline services.

Then I had an epiphany.  I could put all this together and test the live mobile app against the live web services by intercepting requests from the app, querying the web services, and injecting values into the responses before passing the results back to the device.  That way I can simulate difficult-to-achieve states like back end error scenarios, cancelled flights, and no seats available — without having to swing an axe around the datacenter, make a bomb threat at an airport, or purchase every ticket on a plane to Hawaii.  All it would take is an app build that points to my test machine running Apache + mod_wsgi.
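The intercept-and-inject idea can be sketched as a small WSGI app: take the backend's real response, overwrite one value, and hand the result back to the device.  (A sketch only — the element names, the hardcoded response, and the injected status are all made up, and a real proxy would first forward the device's request to the live service rather than using a canned string.)

```python
import xml.etree.ElementTree as ET

def inject_state(raw_xml, tag, new_text):
    # Parse a real backend response and overwrite one element's value to
    # simulate a hard-to-reproduce state (cancelled flight, no seats, ...)
    root = ET.fromstring(raw_xml)
    node = root.find(tag)
    if node is not None:
        node.text = new_text
    return ET.tostring(root)

def application(environ, start_response):
    # In the real proxy this string would come from forwarding the request
    # to the live web service; hardcoded here to keep the sketch runnable
    backend_response = "<flight><number>AS859</number><status>on-time</status></flight>"
    body = inject_state(backend_response, "status", "cancelled")
    start_response("200 OK", [("Content-Type", "text/xml")])
    return [body]
```

Point Apache + mod_wsgi (or any WSGI server) at `application` and the app build at the test machine, and the device sees a cancelled flight.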

Only it turns out that the production app only uses JSON, not XML.

Now, I’d recently become convinced (in my blindered little world) that JSON can do everything XML does, only better, and in less space.  But XML has a DOM.  And you can parse an XML document, manipulate it, and spit out a reasonable facsimile of the original XML (with exceptions for whitespace, sibling element ordering, and attribute sequence) — none of which matter.

With JSON, you can get pretty close most of the time, but it turns out that every JSON parser has a different definition of close.  It might be due to the squishiness of the spec or just the relative lack of maturity of the tools, but I don’t know of a tool that will guarantee your json_encode() output looks just like your json_decode() input — or one that is as nice to work with as Amara.
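Python’s own json module is a quick illustration of the round-trip problem: the data survives, but the document doesn’t.  (The payload below is made up.)

```python
import json

# A hand-formatted response, the way a server might pretty-print it
original = '''{
  "flight": "AS859",
  "status": "on-time"
}'''

round_tripped = json.dumps(json.loads(original))

# The values are intact, but the whitespace is gone: a facsimile, not a copy
print(round_tripped)              # {"flight": "AS859", "status": "on-time"}
print(round_tripped == original)  # False
print(json.loads(round_tripped) == json.loads(original))  # True
```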

To be honest I was a bit surprised, but not shocked.  And I’m hoping I’ll get plenty of flames telling me how wrong I am, complete with links and examples.

But then I had a second epiphany.  There is a JSON parser that always leaves your JSON documents the way you want them, because there is no encoding or decoding.  As I’m sure you all have known all along, that parser is JavaScript.  It doesn’t have problems with a squishy spec or immature implementations because it is fully documented in ECMA-262.  (Don’t get me started on the scurrilous and spurious claims that it’s actually called “ECMAScript” — as if C were really called ANSI/ISOScript, or as if there were an association of computer manufacturers in Europe developing and marketing a single browser-based scripting language instead of registering hundreds of hardware and software standards.)

But now it’s implementation time.  I could switch to Jython and import Rhino, but that smells really bad.  It would mean losing my simple webserver solution and trusting Tomcat and Jython, and it would take a lot more work than I want to do.  Despite its Rube Goldberg appeal, I decided against it.  There might be an easier way to invoke a JavaScript parser, but why bother?

The great thing about my python solution is the ease with which I’m doing it.  Adding complexity takes away from my desire to use python.  So, as nice as my flirtation with the snake has been (don’t worry, I’ll be back), it’s time to try something new.  As a matter of fact, I was already thinking of ditching my Apache + mod_wsgi solution for a simple python web server like web.py, which would eliminate webserver setup entirely.

Enter node.js.  I’ve only played with it a little, but I read about how LinkedIn is using node.js for their mobile web services.  My one concern is whether the tooling is good enough to intercept, modify, and spit out the webservices with less effort than just building my own mapping classes to marshal and unmarshal the half-dozen or so REST services I have to deal with.

I’ll let you know how it turns out.


Testing REST web services with Python

I’m messing around with using python for testing REST web services.

Python 2

I quickly found that Python 2 has a messy tangle of URL and HTTP libraries.

There are several low-level libraries:

  1. urllib
  2. urllib2
  3. httplib
  4. httplib2

You’ll probably end up using all or most of them together.  There is some overlapping functionality, along with several bugs and unimplemented features.  So while you might be able to do some things with urllib.urlopen(), you won’t be able to do others unless you use urllib2.urlopen().  But you still need urllib for urlencode(), for instance.

See this post on stackoverflow.com about urllib vs urllib2.

Here’s an example GET request for JSON with Basic Auth over HTTPS using urllib2:

import urllib
import urllib2
import base64

url = "https://example.com/rest/resource"
authKey = base64.b64encode("username:password")
headers = {"Content-Type":"application/json", "Authorization":"Basic " + authKey}
data = { "param":"value"}

request = urllib2.Request(url)

# post form data
# request.add_data(urllib.urlencode(data))

for key,value in headers.items():
  request.add_header(key,value)

response = urllib2.urlopen(request)

print response.info().headers
print response.read()

Here’s an example using httplib:

import httplib
import urlparse

# url and headers carry over from the urllib2 example above
domain = urlparse.urlparse(url).netloc
connection = httplib.HTTPSConnection(domain)
connection.request("GET", url, headers=headers)
response = connection.getresponse()

print "status: " + str(response.status), response.reason
print response.getheaders()
print response.read()

And here’s an example using httplib2:

import httplib2

http = httplib2.Http(disable_ssl_certificate_validation=True)
headers, content = http.request(url, "GET", headers=headers)

print headers
print content

httplib2 is actually a third-party module, and on Python 2.6 & 2.7 you need to manually upgrade it to get it to work.  If you get the message “TypeError: a float is required”, see the comments in the examples for the fix; the full traceback is included below.  I was able to get it working with a simple pip install --upgrade httplib2.

An issue you might run into with httplib2 when testing is an invalid SSL cert (or a perfectly valid self-signed cert).  You need to disable certificate verification (the equivalent of OpenSSL’s VERIFY_NONE).  You can do this by specifying it in the constructor:

http = httplib2.Http(disable_ssl_certificate_validation=True)

See the following for reference:

http://forums.ocsinventory-ng.org/viewtopic.php?id=980
http://viraj-workstuff.blogspot.com/2011/07/python-httplib2-certificate-verify.html

Python 3

Python 3, in an attempt to fix this, reorganized these modules: http.client replaces httplib, and urllib.request combines the features of Python 2’s urllib and urllib2 on top of http.client.  But chances are you’ll still need to juggle the old seams.  (See “Dive Into Python 3”, Appendix A.)

diveintopython3.org also has good documentation on using web services.

But then Python3 goes and completely screws up strings all in the name of Unicode political correctness.
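Concretely, the Python 2 one-liner breaks because b64encode now wants bytes, not str, which is why the encode/decode dance is needed:

```python
import base64

# In Python 3, b64encode takes bytes, not str
try:
    base64.b64encode("username:password")  # worked fine in Python 2
except TypeError:
    print("b64encode no longer accepts str")

# So you encode to bytes going in, and decode the bytes coming out
authKey = base64.b64encode("username:password".encode("utf-8")).decode("utf-8")
print(authKey)  # dXNlcm5hbWU6cGFzc3dvcmQ=
```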

Here’s the example using python 3 and http.client:

import base64
from urllib.parse import urlparse

authKey = base64.b64encode("username:password".encode('utf-8')).decode('utf-8')
headers = {"Content-Type":"application/json", "Authorization":"Basic " + authKey}
domain = urlparse(url).netloc  # url carries over from the earlier examples

import http.client

connection = http.client.HTTPSConnection(domain)
connection.request("GET", url, headers=headers)
response = connection.getresponse()

print("status: " + str(response.status), response.reason)
print(response.getheaders())
print(response.read().decode())

And with urllib.request in python3:

import urllib.request

request = urllib.request.Request(url)

for key, value in headers.items():
  request.add_header(key, value)

response = urllib.request.urlopen(request)

print(response.getheaders())
print(response.read().decode())

Finally, with httplib2 on Python 3:

import httplib2

# url, headers, username, and password carry over from the earlier examples
http = httplib2.Http(disable_ssl_certificate_validation=True)
http.add_credentials(username, password)
response, content = http.request(url, "GET", headers=headers)

print(response)
print(content)

There is actually a bug (logged only a few days ago) that prevents httplib2 on Python 3 from working over HTTPS with a self-signed cert.

There are several “REST” libraries worth a closer look that I’ll be trying next:

  1. python-rest-client
  2. siesta
  3. requests

python-rest-client uses urllib2 and httplib2 and has good examples of how to use them.  It also includes a Google App Engine version which wraps the GAE URL fetch functionality.
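As a preview of requests, here is a sketch of what the earlier urllib2 GET might look like with it, using the same made-up URL and credentials (prepare() builds the request without sending it, so you can inspect exactly what would go on the wire):

```python
import requests

# Build (but don't send) the same GET with Basic Auth over HTTPS
prepared = requests.Request(
    "GET",
    "https://example.com/rest/resource",
    auth=("username", "password"),
    headers={"Content-Type": "application/json"},
).prepare()

print(prepared.headers["Authorization"])  # Basic dXNlcm5hbWU6cGFzc3dvcmQ=

# Sending it, with verify=False to tolerate a self-signed cert:
# response = requests.Session().send(prepared, verify=False)
# print(response.status_code, response.text)
```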

You can download the full source code for my python REST client examples and the python 3 examples at

http://www.one-shore.com/aaron/examples/python/webservices


Error Traces

Error in httplib2 for Python 2.6 & 2.7:

Traceback (most recent call last):
  File "C:\dev\projects\pythonwebservices\CheckFlightStatus.py", line 21, in
    main()
  File "C:\dev\projects\pythonwebservices\CheckFlightStatus.py", line 17, in main
    fs.check("AS", "859", str(date.today()))
  File "C:\dev\projects\pythonwebservices\FlightStatus.py", line 36, in check
    self.getResponseUsingHttplib2(url)
  File "C:\dev\projects\pythonwebservices\FlightStatus.py", line 68, in getResponseUsingHttplib2
    headers, content = h.request(url, "GET", headers=self.headers)
  File "C:\dev\Python\2.7.2\lib\site-packages\httplib2\__init__.py", line 1050, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "C:\dev\Python\2.7.2\lib\site-packages\httplib2\__init__.py", line 854, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "C:\dev\Python\2.7.2\lib\site-packages\httplib2\__init__.py", line 823, in _conn_request
    conn.request(method, request_uri, body, headers)
  File "C:\dev\Python\2.7.2\lib\httplib.py", line 955, in request
    self._send_request(method, url, body, headers)
  File "C:\dev\Python\2.7.2\lib\httplib.py", line 989, in _send_request
    self.endheaders(body)
  File "C:\dev\Python\2.7.2\lib\httplib.py", line 951, in endheaders
    self._send_output(message_body)
  File "C:\dev\Python\2.7.2\lib\httplib.py", line 811, in _send_output
    self.send(msg)
  File "C:\dev\Python\2.7.2\lib\httplib.py", line 773, in send
    self.connect()
  File "C:\dev\Python\2.7.2\lib\site-packages\httplib2\__init__.py", line 736, in connect
    sock.settimeout(self.timeout)
  File "C:\dev\Python\2.7.2\lib\socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
TypeError: a float is required

Self-signed SSL certificate validation problem in httplib2:

Traceback (most recent call last):
  File "C:\dev\projects\pythonwebservices\CheckFlightStatus.py", line 35, in
    response, content = http.request(url, "GET", headers={"Authorization":token})
  File "C:\dev\Python\2.7.2\lib\site-packages\httplib2\__init__.py", line 1436, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "C:\dev\Python\2.7.2\lib\site-packages\httplib2\__init__.py", line 1188, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "C:\dev\Python\2.7.2\lib\site-packages\httplib2\__init__.py", line 1123, in _conn_request
    conn.connect()
  File "C:\dev\Python\2.7.2\lib\site-packages\httplib2\__init__.py", line 911, in connect
    raise SSLHandshakeError(e)
SSLHandshakeError: [Errno 1] _ssl.c:503: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

Error using httplib2 on Python 3 with self signed certificate:

    File "C:\dev\projects\python3webservices\src\example\usinghttplib2.py", line 28, in
    response, content = http.request(url, "GET", headers=headers)
  File "C:\dev\Python\3.2.2\lib\site-packages\httplib2\__init__.py", line 1059, in request
    self.disable_ssl_certificate_validation)
  File "C:\dev\Python\3.2.2\lib\site-packages\httplib2\__init__.py", line 775, in __init__
    check_hostname=True)
  File "C:\dev\Python\3.2.2\lib\http\client.py", line 1086, in __init__
    raise ValueError("check_hostname needs a SSL context with "
ValueError: check_hostname needs a SSL context with either CERT_OPTIONAL or CERT_REQUIRED