Transcript of Py Mag 2007 10

Page 1: Py Mag 2007 10
Page 2: Py Mag 2007 10

WRITE FOR US!

If you want to bring a Python-related topic to the attention of the professional Python community, whether it is personal research, company software, or anything else, why not write an article for Python Magazine? If you would like to contribute, contact us and one of our editors will be happy to help you hone your idea and turn it into a beautiful article for our magazine. Visit www.pythonmagazine.com/c/p/write_for_us or contact our editorial team at [email protected] and get started!

CONTENTS

Download this month's code at: http://code.pythonmagazine.com/1/10

FEATURES

9 | Creating custom PyGTK widgets with Cairo, by Sayamindu Dasgupta
Empower your users with the perfect widget.

18 | Working with IMAP and iCalendar, by Doug Hellmann
Integrate iCalendar functionality into your Exchange-like groupware service.

26 | Processing Web Forms Using Anonymous Functions & WSGI, by Kevin T. Ryan
Lambdas are dandy, and WSGI is quicker.

33 | Extending Python, by John Berninger
Using C to teach an old Python new tricks, by hand!

COLUMNS

3 | Import This
Welcome to the debut of Python Magazine

5 | And Now For Something Completely Different
Modern process management modules alleviate GIL woes

40 | Welcome to Python
XML Processing with the (now built-in!) ElementTree module

45 | Random Hits
The Python Community

Page 3: Py Mag 2007 10

EDITORIAL

Volume 1, Issue 10

Publisher

Marco Tabini

Editor-in-Chief

Brian Jones

Technical Editor

Doug Hellmann

Contributing Editor

Steve Holden

Columnist

Mark Mruss

Graphics & Layout

Arbi Arzoumani

Managing Editor

Emanuela Corso

Authors

John Berninger, Sayamindu Dasgupta,

Doug Hellmann, Steve Holden,

Mark Mruss, Kevin T. Ryan

Python Magazine is published twelve times a year by Marco Tabini & Associates, Inc., 28 Bombay Ave., Toronto, ON M3H1B7, Canada.

Although every possible care has been taken to ensure the accuracy of the contents of this magazine, including all associated source code, listings and figures, the publisher assumes no responsibility for the use of the information contained herein or in any associated material.

Python Magazine, PyMag, the Python Magazine logo, Marco Tabini & Associates, Inc. and the MTA logo are trademarks of Marco Tabini & Associates, Inc.

For all enquiries, visit

http://pythonmagazine.com/c/p/contact_us

Printed in Canada

Copyright © 2003-2007

Marco Tabini & Associates, Inc.

All Rights Reserved

>>> import this

Welcome to the premier issue of Python Magazine. Projects like a new magazine tend to feel like a huge party. You put all kinds of time and effort into it, you call decorators, caterers, chair rentals, balloon blowers, clowns, and you get everything coordinated. Then, once everything is in place, you pray that people actually show up.

I'm excited (and relieved!) to see that the community has figured out a couple of things that have made them willing to embrace the idea of a magazine devoted to Python.

First, we've done this before. Python Magazine is not the first magazine I've helped launch, nor is it the first one that MTA (the publisher) has launched. We are already intimately familiar with the problems inherent in trying to produce a magazine that is timely, accurate, thorough, and even entertaining, on a monthly basis, to a global audience. Further, we understand that the magazine has to be seen as a value to the readership, or there won't be one.

The second thing the community seems to have figured out is that our intention is to help further the use of the language by helping to support the community and advocate the language using whatever resources we can make available. Ideas are constantly flowing in. Talks are ongoing. Things are moving forward, and it's exciting to see all of this taking place.

Why this is happening

I guess I'm a bit of a workaholic, maybe. Truth is, I'm an infrastructure services architect (a fancy sort of sysadmin) by day. Part of my job is to write code to do various things and touch various services that I maintain. I had wanted to give Python a try for a while, and an opportunity presented itself. I took the plunge, and fell in love. However, I found that one of my favorite learning resources was unavailable in the Python world: the venerable how-to magazine!

These magazines exist for lots of other topics. There are magazines that'll tell you how to brew beer, how to work with wood, how to play pool, how to cook, how to stay in shape, how to take pictures, how to write, and how to use your computer in various ways (or how to use various computers in one particular way, as the case may be). Heck, there's even a magazine on how to code in PHP! For crying out loud, where's the Python mag?!

There wasn't one. It wasn't for lack of trying. Various attempts had failed to gain momentum for whatever reason, but I wasn't going to let something silly like a path littered with the corpses of past failed attempts get in the way of having a magazine I could read to glean inspiration and knowledge from about my new favorite programming language!

And so, I went to the publisher - the one I had worked with on php|architect. I told him that I wanted to learn Python better, and so did lots of other people, and that there was no magazine for them, and there was no magazine for me. I told him that people who already knew Python didn't know everything they wanted to know, and there was no magazine for them, either. I shed a tear for effect. Now, here we are, only 4 months after the initial "ok, go for it!", and Python Magazine is a reality.

So, in a way, it's happening because I want to know Python better. But it's


3 • Python Magazine • October 2007

Page 4: Py Mag 2007 10


also happening because there are lots of people at various experience levels who would like to know how to do something new, or something old in a better way, with Python. It's happening because the community wants it to happen as well. I got lots of positive feedback from various IRC channels, emails to community members, and even Guido himself.

The Premier Issue

This first issue dives right in, and is made to look like we've done this before, rather than spending time dwelling on the fact that this is issue 1 of a new magazine. It's less interesting to people who've shown up for the code to hear us big-headed editors patting ourselves on the back.

However, being that this is a highly specialized magazine, it's likely that I have a built-in, captive audience of people with more than a passing interest in the language. To you I'd like to extend an invitation to get involved in the community, the language, and the magazine. Join (or help start) a Python user group in your area, on your college campus, or in your high school. Help others on the Python mailing lists. Further your skill by helping out an open source project. And, of course, write articles for Python Magazine!

Let us know what you're doing with Python. Tell us your story. We'd like to help share it with everyone! Drop us a line at [email protected], or go to http://www.pythonmagazine.com/c/p/write_for_us to send us an article proposal.

And if you're a reader and want to ask questions or report a bug in an article or column you've seen in Python Magazine, you can send that to [email protected], or join us in our IRC channel, #pymag, on irc.freenode.net.

Enjoy this first issue, and welcome to Python Magazine!

Meet the Rest of the Staff at PyMag

Doug Hellmann is a Senior Software Engineer at Racemi. He has been programming in Python since version 1.4 on a variety of Unix and non-Unix platforms. He has worked on projects ranging from mapping to medical news publishing, with a little banking thrown in for good measure.

Steve Holden is a consultant, instructor and author active in networking and security technologies. He is a Director of the Python Software Foundation and a recipient of the Frank Willison Memorial Award for services to the Python community.

For the last seven years Mark Mruss has worked as a software developer, programming in the much maligned C++. In 2005 Mark decided it was time to add another language to his arsenal. After reading Eric Raymond's well known article "Why Python?" he set his sights on the inviting world of Python.

Since co-founding (and, some say, printing) php|architect magazine on the back of a napkin in 2002, Arbi Arzoumani has been responsible for making sure that letters are dotted and crossed in the appropriate places in all of MTA's publications and web properties. Purveyor of fine food, sights and sounds at our conferences, his ultimate goal in life is to invent a non-lethal instrument that prevents the technical staff from playing designer without causing (too much) permanent damage to their central nervous systems.

Brian Jones is a system/network/database administrator who writes a good bit of Perl, PHP, and Python. He's the co-author of Linux Server Hacks, Volume Two from O'Reilly publishing, founder of Linuxlaboratory.org, contributing editor at Linux.com, and, in a past life, worked as Editor in Chief of php|architect Magazine. In his spare time, he enjoys brewing beer, playing guitar and piano, writing, cooking, and billiards.


Page 5: Py Mag 2007 10

COLUMN

REQUIREMENTS

PYTHON: 2.5

Other Software:

• Richard Oudkerk's "processing" package, version 0.34 or higher: http://pypi.python.org/pypi/processing

• Vitalii Vanovschi's "parallel python" package: http://www.parallelpython.com/

Useful/Related Links:

• "It isn't Easy to Remove the GIL": http://www.artima.com/weblogs/viewpost.jsp?thread=214235

• "Can't we get rid of the Global Interpreter Lock?": http://www.python.org/doc/faq/library/#can-t-we-get-rid-of-the-global-interpreter-lock

by Doug Hellmann

There is no predefined theme for this column, so I plan to cover a different, likely unrelated, subject every month. The topics will range anywhere from open source packages in the Python Package Index (formerly The Cheese Shop, now PyPI) to new developments from around the Python community, and anything that looks interesting in between. If there is something you would like for me to cover, send a note with the details to [email protected] and let me know, or add the link to your del.icio.us account with the tag "pymagdifferent".

I will make one stipulation for my own sake: any open source libraries must be registered with PyPI and configured so that I can install them with distutils. Creating a login at http://pypi.python.org/ and registering your project is easy, and only takes a few minutes. Go on, you know you want to.
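For readers who have never done it, a minimal setup.py is all the registration step needs; every name and value below is a placeholder, not a real project. With this in place, running "python setup.py register" publishes the metadata to PyPI.

```python
from distutils.core import setup

# All metadata here is illustrative; substitute your own project's details.
setup(
    name='example-package',
    version='0.1',
    description='A short summary shown on the PyPI index page',
    author='Your Name',
    author_email='[email protected]',
    url='http://example.com/example-package',
    py_modules=['example'],
)
```
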

And Now For Something Completely Different

Has your multi-threaded application grown GILs? Take a look at these packages for easy-to-use process management and interprocess communication tools.

Page 6: Py Mag 2007 10

Scaling Python: Threads vs. Processes

In the ongoing discussion of performance and scaling issues with Python, one persistent theme is the Global Interpreter Lock (GIL). While the GIL has the advantage of simplifying the implementation of CPython internals and extension modules, it prevents users from achieving true multi-threaded parallelism by limiting the interpreter to executing byte-codes in one thread at a time on a single processor. Threads which block on I/O or use extension modules written in another language can release the GIL to allow other threads to take over control, of course. But if my application is written entirely in Python, only a limited number of statements will be executed before one thread is suspended and another is started.

Eliminating the GIL has been on the wish lists of many Python developers for a long time – I have been working with Python since 1998 and it was a hotly debated topic even then. Around that time, Greg Stein produced a set of patches for Python 1.5 that eliminated the GIL entirely, replacing it with a whole set of individual locks for the mutable data structures (dictionaries, lists, etc.) that had been protected by the GIL. The result was an interpreter that ran at roughly half the normal speed, a side-effect of acquiring and releasing the individual locks used to replace the GIL.

The GIL issue is unique to the C implementation of the interpreter. The Java implementation of Python, Jython, supports true threading by taking advantage of the underlying JVM. The IronPython port, running on Microsoft's CLR, also has better threading. On the other hand, those platforms are always playing catch-up with new language or library features, so if you're hot to use the latest and greatest, like I am, the C reference implementation is still your best option.

Dropping the GIL from the C implementation remains a low priority for a variety of reasons. The scope of the changes involved is beyond the level of anything the current developers are interested in tackling. Recently, Guido has said he would entertain patches contributed by the Python community to remove the GIL, as long as performance of single-threaded applications was not adversely affected. As far as I know, no one has announced any plans to do so.

Even though there is a FAQ entry on the subject as part of the standard documentation set for Python, from time to time a request pops up on comp.lang.python or one of the Python-related mailing lists to rewrite the interpreter so the lock can be removed. Each time it happens, the answer is clear: use processes instead of threads.

That response does have some merit. Extension modules become more complicated without the safety of the GIL. Processes typically have fewer inherent deadlocking issues than threads. They can be distributed between the CPUs on a host, and even more importantly, an application that uses multiple processes is not limited by the size of a single server, as a multi-threaded application would be.

Since the GIL is still present in Python 3.0, it seems unlikely that it will be removed from a future version any time soon. This may disappoint some people, but it is not the end of the world. There are, after all, strategies for working with multiple processes to scale large applications. I'm not talking about the well worn, established techniques from the last millennium that use a different collection of tools on every platform, nor the time-consuming and error-prone practices that lead to solving the same problem time and again. Techniques using low-level, operating system-specific libraries for process management are as passé as using compiled languages for CGI programming. I don't have time for this low-level stuff any more, and neither do you. Let's look at some modern alternatives.

The subprocess module

Version 2.4 of Python introduced the subprocess module and finally unified the disparate process management interfaces available in other standard library packages to provide cross-platform support for creating new processes. While subprocess solved some of my process creation problems, it still primarily relies on pipes for interprocess communication. Pipes are workable, but fairly low-level as far as communication channels go, and using them for two-way message passing while avoiding I/O deadlocks can be tricky (don't forget to flush()). Passing data through pipes is definitely not as transparent to the application developer as sharing objects natively between threads. And pipes don't help when the processes need to scale beyond a single server.
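A minimal sketch of that pipe-based style is below; the child command is illustrative (it just upper-cases its input). communicate() writes the input, closes the pipe, and reads all of the output in one step, which sidesteps the classic read/write deadlock mentioned above.

```python
import subprocess
import sys

# Spawn a child Python process and exchange data over its stdin/stdout pipes.
child = subprocess.Popen(
    [sys.executable, '-c',
     'import sys; sys.stdout.write(sys.stdin.read().upper())'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,  # exchange text rather than raw bytes
)

# communicate() sends the input, flushes and closes the pipe, then
# collects everything the child writes before it exits.
out, _ = child.communicate('hello from the parent')
print(out)
```

Anything more conversational than this one-shot exchange is where the manual flush() calls and deadlock worries start to creep in.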

Parallel Python

Vitalii Vanovschi's Parallel Python package (pp) is a more complete distributed processing package that takes a centralized approach. Jobs are managed from

"Parallel Python is impressive, but it is not the only option for managing parallel jobs."


Page 7: Py Mag 2007 10


a "job server", and pushed out to individual processing "nodes".

Those worker nodes are separate processes, and can be running on the same server or other servers accessible over the network. And when I say that pp pushes jobs out to the processing nodes, I mean just that – the code and data are both distributed from the central server to the remote worker node when the job starts. I don't even have to install my application code on each machine that will run the jobs.

Here's an example, taken right from the Parallel Python Quick Start guide:

import pp

job_server = pp.Server()

# Start tasks
f1 = job_server.submit(func1, args1, depfuncs1, modules1)
f2 = job_server.submit(func1, args2, depfuncs1, modules1)
f3 = job_server.submit(func2, args3, depfuncs2, modules2)

# Retrieve the results
r1 = f1()
r2 = f2()
r3 = f3()

When the pp worker starts, it detects the number of CPUs in the system and starts one process per CPU automatically, allowing me to take full advantage of the computing resources available. Jobs are started asynchronously, and run in parallel on an available node. The callable object returned when the job is submitted blocks until the response is ready, so response sets can be computed asynchronously, then merged synchronously. Load distribution is transparent, making pp excellent for clustered environments.

One drawback to using pp is that I have to do a little more work up front to identify the functions and modules on which each job depends, so all of the code can be sent to the processing node. That's easy (or at least straightforward) when all of the jobs are identical, or use a consistent set of libraries. If I don't know everything about the job in advance, though, I'm stuck. It would be nice if pp could automatically detect dependencies at runtime. Maybe it will, in a future version.

The processing Package

Parallel Python is impressive, but it is not the only option for managing parallel jobs. The processing package from Richard Oudkerk aims to solve the issues of creating and communicating with multiple processes in a portable, Pythonic way. Whereas Parallel Python is designed around a "push" style distribution model, the processing package is set up to make it easy to create producer/consumer style systems where worker processes pull jobs from a queue.

The package hides most of the details of selecting an appropriate communication technique for the platform by choosing reasonable default behaviors at runtime. The API does include a way to explicitly select the communication mechanism, in case I need that level of control to meet specific performance or compatibility requirements. As a result, I end up with the best of both worlds: usable default settings that I can tweak later to improve performance.

To make life even easier, the processing.Process class was purposely designed to match the threading.Thread class API. Since the processing package is almost a drop-in replacement for the standard library's threading module, many of my existing multi-threaded applications can be converted to use processes simply by changing a few import statements. That's the sort of upgrade path I like.
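As a rough sketch of what that upgrade path means, here is a trivial worker written against the standard library's threading module; moving it to the processing package is intended to be little more than swapping the import. One caveat, noted in the comment: shared state like the list below only works because threads share memory, so a real process-based version would move it into a managed object.

```python
import threading

results = []

def worker(name):
    # With threads this list is shared memory. With processes it would
    # need to become a manager-backed object, since each process gets
    # its own address space.
    results.append('Hello, ' + name)

# The Thread API shape: target callable plus an argument list,
# then start() and join() -- the same calls processing.Process uses.
t = threading.Thread(target=worker, args=['world'])
t.start()
t.join()
print(results)
```
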

Listing 1 contains a simple example, based on the examples found in the processing documentation, which passes a string value between processes as an argument to the Process instance and shows the similarity between processing and threading. How much easier could it be?

In a few cases, I'll have more work to do to convert existing code that was sharing objects which cannot easily be passed from one process to another (file or database handles, etc.). Occasionally, a performance-sensitive application needs more control over the communication channel. In these situations, I might still have to get my hands dirty with the lower-level APIs in the processing.connection module. When that time comes, they are all exposed and ready to be used directly.

Sharing State and Passing Data

For basic state handling, the processing package lets me share data between processes by using shared objects, similar to the way I might with threads. There are two types of "managers" for passing objects between processes. The LocalManager uses shared memory, but the types of objects that can be shared are limited by a low-level interface which constrains the data types and

LISTING 1

#!/usr/bin/env python
# Simple processing example

import os
from processing import Process, currentProcess

def f(name):
    print 'Hello,', name, currentProcess()

if __name__ == '__main__':
    print 'Parent process:', currentProcess()
    p = Process(target=f, args=[os.environ.get('USER', 'Unknown user')])
    p.start()
    p.join()


Page 8: Py Mag 2007 10


sizes. LocalManager is interesting, but it's not what has me excited. The SyncManager is the real story.

SyncManager implements tools for synchronizing interprocess communication in the style of threaded programming. Locks, semaphores, condition variables, and events are all there. Special implementations of Queue, dict, and list that can be used between processes safely are included as well (Listing 2). Since I'm already comfortable with these APIs, there is almost no learning curve for converting to the versions provided by the processing module.

For basic state sharing with SyncManager, using a Namespace is about as simple as I could hope. A namespace can hold arbitrary attributes, and any attribute attached to a namespace instance is available in all client processes which have a proxy for that namespace. That's extremely useful for sharing status information,

especially since I don't have to decide up front what information to share or how big the values can be. Any process can change existing values or add new values to the namespace, as illustrated in Listing 3. Changes to the contents of the namespace are reflected in the other processes the next time the values are accessed.

Remote Servers

Configuring a SyncManager to listen on a network socket gives me even more interesting options. I can start processes on separate hosts, and they can share data using all of the same high-level mechanisms described above. Once they are connected, there is no difference in the way the client programs use the shared resources remotely or locally.

The objects are passed between client and server using pickles, which introduces a security hole: because unpacking a pickle may cause code to be executed, it is risky to trust pickles from an unknown source. To mitigate this risk, all communication in the processing package can be secured with digest authentication using the hmac module from the standard library. Callers can pass authentication keys to the manager explicitly, but default values are generated if no key is given. Once the connection is established, the authentication and digest calculation are handled transparently for me.
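The underlying idea is ordinary HMAC challenge/response, which can be sketched with the standard library's hmac module alone. The key, challenge, and digest algorithm below are illustrative only, not the processing package's actual wire format.

```python
import hashlib
import hmac

# Challenge/response in miniature: the server sends a random challenge,
# and each side proves knowledge of the shared key by computing
# HMAC(key, challenge). The key itself never crosses the wire.
secret_key = b'shared-secret'               # illustrative key
challenge = b'random-bytes-from-server'     # illustrative challenge

def make_digest(key, message):
    # Both ends run the same computation and compare the results.
    return hmac.new(key, message, hashlib.md5).hexdigest()

server_side = make_digest(secret_key, challenge)
client_side = make_digest(secret_key, challenge)
print(server_side == client_side)  # a matching digest authenticates the peer
```

A client that does not know secret_key cannot produce a matching digest, which is all the manager needs to reject the connection.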

Conclusion

The GIL is a fact of life for Python programmers, and we need to consider it along with all of the other factors that go into planning large scale programs. Both the processing package and Parallel Python tackle the issues of multi-processing in Python head on, from different directions. Where the processing package tries to fit itself into existing threading designs, pp uses a more explicit distributed job model. Each approach has benefits and drawbacks, and neither is suitable for every situation. Both, however, save you a lot of time over the alternative of writing everything yourself with low-level libraries. What an age to be alive!

LISTING 3

#!/usr/bin/env python
# Using a shared namespace.

import processing

def f(ns):
    print ns
    ns.old_coords = (ns.x, ns.y)
    ns.x += 10
    ns.y += 10

if __name__ == '__main__':
    # Initialize the namespace
    manager = processing.Manager()
    ns = manager.Namespace()
    ns.x = 10
    ns.y = 20

    # Use the namespace in another process
    p = processing.Process(target=f, args=(ns,))
    p.start()
    p.join()

    # Show the resulting changes in this process
    print ns

Doug Hellmann is a Senior Software Engineer at Racemi. He has been programming in Python since version 1.4 on a variety of Unix and non-Unix platforms. He has worked on projects ranging from mapping to medical news publishing, with a little banking thrown in for good measure.

LISTING 2

#!/usr/bin/env python
# Pass an object through a queue to another process.

from processing import Process, Queue, currentProcess

class Example:
    def __init__(self, name):
        self.name = name
    def __str__(self):
        return '%s (%s)' % (self.name, currentProcess())

def f(q):
    print 'In child:', q.get()

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=[q])
    p.start()
    o = Example('tester')
    print 'In parent:', o
    q.put(o)
    p.join()


Page 9: Py Mag 2007 10

FEATURE

Creating custom PyGTK widgets with Cairo

by Sayamindu Dasgupta

REQUIREMENTS

PYTHON: 2.x

Other Software: PyGTK 2.10 or above

Useful/Related Links:
http://www.pygtk.org
http://www.cairographics.org

PyGTK, a set of Python bindings for the popular GTK+ graphical toolkit, provides a rich collection of commonly used windows, dialog boxes, buttons, layout elements, and other 'widgets'. However, often a programmer has needs which go beyond the functionality provided by the built-in widgets in PyGTK. This article explains how to create new widgets using the Python bindings for Cairo – the vector graphics library used by GTK+ to perform most of its drawing operations.

About GTK+

GTK+ is one of the most popular free/open source Graphical User Interface (GUI) toolkits around. Though best known as the basic building block of GNOME, a popular free/open source desktop, GTK+ was originally written for the GIMP image editing program (in fact, 'GTK' actually stands for 'Gimp Tool Kit'). Currently, apart from its role in the GNOME and GIMP projects, it is also used to create the GUIs for the XFCE and Rox desktops. In addition, it is also used in embedded devices such as the Nokia N800/N770 (as a part of the Hildon desktop), and the FIC Neo1973 (as a part of the OpenMoko framework).


Page 10: Py Mag 2007 10


Though written in C, GTK+ supports object oriented features using GObject, and it also has an excellent set of bindings for Python, known as PyGTK. In fact, Python is fast becoming one of the primary languages of choice for upcoming GNOME applications, as more and more developers grow to love the language's simplicity and ease of use. Some of the upcoming GNOME applications written in Python include Sabayon (a user profile editor), Jokosher (a multi-track audio editor), and Pitivi (a video editor), to name just a few. Apart from GTK+, all other major components of the GNOME Development Platform have Python bindings as well, a factor that also contributes to the adoption of Python within GNOME.

About Cairo

Another crucial building block of the GNOME development platform is Cairo, a 2D graphics library with an API similar to the drawing operators offered by PostScript or PDF. Cairo was originally called Xr, though later the name was changed to reflect the fact that the library was not tied to the X windowing system only and supported multiple output "backends" (PDF, PS, X Window System, image buffers, SVG, Win32 GDI, etc.).

Though the library itself is written in C, excellent Python bindings for Cairo exist as well. Moreover, it is almost trivial to convert a Cairo program written in C to Python. As documented on the Cairo website (http://www.cairographics.org), in most cases, you can convert a Cairo-based program written in C to a Python program with just a couple of trivial steps that even beginners would have no problem with. There are a few corner cases which make converting a little less straightforward, but those peculiar cases are usually few and far between.

How Cairo fits into GTK+

From version 2.8 onwards, GTK+ includes Cairo support, making it possible for developers to access the Cairo drawing API directly from within GTK+. This means that GTK+ developers can draw their widgets using the Cairo API instead of the GDK (GTK+ Drawing Kit) drawing functions. In fact, at present, most of the stock GTK+ widgets and theme engines use Cairo to do the rendering and drawing operations.

Cairo basics

PyGTK/GTK+ is an oft used library, and there are lots of examples and documentation available, like the excellent tutorial for PyGTK at http://pygtk.org/pygtk2tutorial/. Cairo, on the other hand, is somewhat newer, so here's a small introduction to Cairo before moving on to the PyGTK+Cairo part.

Cairo terminology

Cairo draws on a surface (or a destination), which can be your X Window System, an SVG file, a PDF file, or any of the output targets supported by Cairo. You can visualise the surface as a canvas on which you can paint using the Cairo drawing methods. Once you have the surface, you can get a CairoContext object from that surface, which keeps track of all drawing related variables and resources as you progress. To do the actual "painting", you will also need to set a source, which is like the paint for your work. The source can be a single color, a gradient, or even a previously created surface. The source can also contain alpha channel values (used to set the transparency).

To "transfer" the source to a surface, the fill() method is used. During this transfer, you may have a mask which specifies exactly which areas the paint() method will affect. The boundaries of the "holes" in the mask are specified by paths which you can draw before calling the fill() method. If you want to draw your paths only (for example, if you have a triangular path and you want to draw the outline of that triangle), you can use the stroke() method to do so.

Drawing lines, curves and basic shapes with Cairo

Assuming that ctx is your CairoContext object, the following code snippet will draw a straight line segment

FIGURE 1


Page 11: Py Mag 2007 10


from the coordinate points (10, 10) to (120, 130). Note that the origin point (0,0) of the surface is the top left corner. The value of the X coordinate increases as you move from left to right, while the Y coordinate increases downwards.

ctx.move_to(10, 10)
ctx.line_to(120, 130)
ctx.stroke()

The first line (ctx.move_to(10, 10)) begins a new sub-path and sets the current point to (10, 10). The second line (ctx.line_to(120, 130)) draws a line from the current point (10, 10) to the point (120, 130). The third line (ctx.stroke()) actually makes the line visible (you can think of the stroke() method as something that drags a virtual pen over your path). However, something that should be kept in mind while using the stroke() method: the path information is reset after the stroke operation is completed. To avoid this, the stroke_preserve() method can be used instead. See Figure 1 for the result of the above code.

Once you have a way to draw straight lines, drawing any kind of polygon becomes an easy task. For example, to draw a triangle, you only need to draw three straight lines between the right coordinates:

ctx.move_to(50, 10)
ctx.line_to(125, 150)
ctx.line_to(175, 150)
ctx.close_path()
ctx.stroke()

close_path() is the only new method in the above code snippet. It is a shortcut which draws a line from the current point (175, 150) to the beginning of the current sub-path, defined as the point passed to the last invocation of move_to(). In this case, that's (50, 10). See Figure 2 for the output of the above code. Drawing any other polygon is a similar process (with the exception of a rectangle, which has its own convenience method rectangle(x0, y0, width, height), where (x0, y0) is the top left corner of the rectangle).

Cairo lets you draw cubic Bézier curves with the method curve_to(x0, y0, x1, y1, x2, y2) where (x0, y0) and (x1, y1) are the two control points, and (x2, y2) is the point where the curve ends. To draw arcs, use the method arc(x, y, radius, angle1, angle2) where (x, y) is the center of the arc, radius is the radius, and angle1 and angle2 are the starting and end angles of the arc to be drawn. The angles are measured in radians, with an angle of 0 signifying the direction of the positive X-axis, and an angle of math.pi/2 (90 degrees) signifying the direction of the positive Y-axis. The angle increases in a clockwise direction. So, for example, to draw only the lower half of a circle centered at (50, 50) with a radius of 20, you would call ctx.arc(50, 50, 20, 0, math.pi). To draw the full circle instead, the call would be ctx.arc(50, 50, 20, 0, 2*math.pi).
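Since arc() thinks in radians, a tiny conversion helper can keep degree-based reasoning honest. The helper below is a sketch of mine, not part of the cairo API:

```python
import math

def deg_to_rad(degrees):
    # cairo's arc() measures angles in radians, clockwise from the
    # positive X-axis; convert from the more familiar degrees.
    return degrees * math.pi / 180.0

# The half-circle example from the text would then read:
#   ctx.arc(50, 50, 20, deg_to_rad(0), deg_to_rad(180))
```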

Colors and text with Cairo

As I mentioned before, the "paint" you'll be using with cairo is represented by a source. There are quite a few methods that you can use to set the source before applying it. The one you'll probably be using most of the time is set_source_rgb() which lets you specify an RGB color value for your source (the value of each color component

FIGURE 2

FIGURE 3

ranging from 0 to 1). So if you want to use the color red, you can call set_source_rgb(1, 0, 0) (since the 8-bit RGB value for red is 255, 0, 0).

You may also use the method set_source_rgba() to set the alpha channel value for transparency. So if you want fully transparent (invisible) red, you would use set_source_rgba(1, 0, 0, 0); for a red which is 75% opaque, you would use set_source_rgba(1, 0, 0, .75); and for a fully opaque red, you would use set_source_rgba(1, 0, 0, 1). To actually put the source on your destination surface, you will need to use the fill() method, which fills up the area enclosed by your path with the source. So if you want a red triangle (Figure 3), your code would be:

ctx.move_to(50, 10)
ctx.line_to(125, 150)
ctx.line_to(175, 150)
ctx.close_path()
ctx.set_source_rgb(1, 0, 0)
ctx.fill()

Note that the path information gets reset after fill() is called. To avoid this behaviour, you may call fill_preserve() instead. If you want to fill your entire surface instead of the area within your path, you can use the paint() method, which will transfer the entire source to the surface, regardless of any path that you may have created earlier.
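Because cairo expects each color component as a float between 0 and 1, a small scaling helper saves repeated mental arithmetic when you start from familiar 0-255 values. The function name here is mine, purely for illustration:

```python
def rgb255_to_cairo(r, g, b, a=255):
    # Scale 0-255 colour components down to the 0-1 floats that
    # set_source_rgb()/set_source_rgba() expect.
    return tuple(c / 255.0 for c in (r, g, b, a))

# Fully opaque red, ready for ctx.set_source_rgba(*rgb255_to_cairo(255, 0, 0))
```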

For drawing basic text, show_text() is the method you should use. However, for most text operations you should probably create a PangoLayout instead. To display it, use the update_layout() method followed by show_layout() to actually show the text. Using PangoLayout will give you a lot more flexibility in terms of text appearance and formatting, and moreover, it will also let you support complex scripts like Arabic or Indic in your rendering.

The anatomy of a PyGTK widget

Moving on from Cairo (for the time being) to PyGTK, let us start off with an analysis of the internals of a typical

custom PyGTK widget. Almost any custom widget is created as a subclass of a standard widget (a gtk.TextView or a gtk.Window, for instance). So the first line of a custom widget's code looks like class MyWidget(gtk.Window):.

The method you'll probably want to override is the do_expose_event() method, which is the event handler for an expose event. The expose event occurs when the widget that received the signal needs to be redrawn for some reason. However, when you are writing a widget from scratch (a subclass of gtk.Widget), some other methods need to be taken care of. The most important of these methods are:

• do_realize(): This method takes care of creating the window (and related resources) for the widget, if required. The do_unrealize() method does just the opposite and frees the window.

• do_size_request(): This method handles GTK+'s request asking the widget for its preferred size. Note that it isn't guaranteed that the size requested will be granted.

• do_size_allocate(): This method is called when the actual size allocated to the widget is known. Apart from saving the size allocated to the widget and computing the size of components, this method also takes care of moving the widget's window to the new position (and resizing it if required).

• do_draw(): This method is called when the widget is drawn on the screen for the first time. By default this method generates artificial expose events, and normally there is no need to change that behaviour (that is, override the method) unless you are doing something really complicated.

Apart from these, a widget may have several custom signals which are emitted when a particular event occurs, and it almost always has extra properties as well. Both of these are managed via GObject. Using GObject in Python provides quite a few advantages, including support for signals, type checking for properties, monitoring of properties for changes of value, etc.

An example of a typical custom PyGTK widget

Let us now look at a typical example of a PyGTK widget to understand how the various parts fit together. We will look into an OSD (On Screen Display) widget which pops up on the desktop when the user presses the volume

FIGURE 4

control buttons on her keyboard (Figure 4). It supports a semi-transparent background, which is one of those things that has become very common and very easy to use since the advent of Cairo and other Xorg technologies such as the COMPOSITE extension. However, it also gracefully degrades to a solid black background if the user is running it in an environment where translucency is not possible. Note that, if you wanted, it is possible to make an entire gtk.Window translucent with the set_opacity() method, but in this case, only the background needs to be translucent. The widget also shows an icon for a speaker and a white bar to indicate the volume level. Since the widget is a special form of GTK window

(one without any decoration, and with a custom background), we'll make it a subclass of gtk.Window. Since we want to change the look and feel of the widget, we will be overriding the signal handler for the expose event (the do_expose_event() method). The widget will also have a property called level which will be used to set the length of the volume level indicator bar. Moreover, contrary to the behaviour of a normal gtk.Window, our window will also emit a 'clicked' signal if someone clicks on it. This will allow the developer who is using the widget to close it (or do something even fancier) when the user clicks on the window.

A walkthrough of the code

import random
import pygtk
pygtk.require('2.0')
import gtk
from gtk import gdk
import cairo
import gobject

if gtk.pygtk_version < (2,10,0):
    print 'PyGtk 2.10.0 or later required'
    raise SystemExit

So, without any more boring theory, let us dive straight

into the code. The first few lines are usual Python stuff. Apart from gtk, gdk, cairo and gobject, we also import the random module, which we will use for a demo of the widget once we are done. We will also be using a few features specific to PyGTK 2.10 and above, so we check the version we are using (if gtk.pygtk_version < (2,10,0):). We want our OsdWindow widget to be a subclass of gtk.Window, so we declare our class with class OsdWindow(gtk.Window):.

Dealing with signals

__gsignals__ = {
    'expose-event': 'override',
    'screen-changed': 'override',
    'clicked' : (gobject.SIGNAL_RUN_LAST,
                 gobject.TYPE_NONE,
                 ())
}

The __gsignals__ dictionary has all the signals we want to deal with. The key of each pair in the dictionary is the name of the signal. We override the handlers for the built-in class-specific callbacks for expose-event and screen-changed, so we set the value of the relevant pairs to 'override'.

Note: In most of the PyGTK related documentation, you may see the above process referred to as "overriding the class closures". In very simplified terms, a closure is an abstraction of the callback concept; it contains, along with the callback function, related data such as the user data supplied to the callback. When a signal is emitted, a series of closures is invoked, one of them being specific to the class and hence known as the class closure.
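The closure idea maps naturally onto plain Python closures. The sketch below is purely an analogy of mine, not actual GObject machinery: a callback is bundled with its user data, and invoking the bundle later supplies both:

```python
def make_closure(callback, user_data):
    # Bundle a callback with its user data, loosely mirroring what a
    # GObject closure carries when a signal is emitted.
    def closure(*signal_args):
        return callback(user_data, *signal_args)
    return closure

def on_clicked(user_data, widget):
    # A toy handler that just echoes what it was given.
    return ('clicked', user_data, widget)

cb = make_closure(on_clicked, {'count': 1})
```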

Finally, the third signal (clicked) is something that we define on our own, and the value part of the pair is a tuple containing the following members:

• gobject.SIGNAL_RUN_LAST: This is the signal flag, determining when the class closure for the signal will be invoked. SIGNAL_RUN_LAST indicates that the invocation should happen during the third stage of signal emission. For more information, see the signals section of the GObject manual at http://developer.gnome.org/doc/API/2.0/gobject/signal.html.

• gobject.TYPE_NONE: This signal does not return anything, so the second value is set to gobject.TYPE_NONE.

• The third value is an empty tuple. This tuple is supposed to contain all the parameters to the signal. We have none, so the tuple is empty.

"Cairo draws on a surface which can be your X Window System, a SVG file, a PDF file, or any of the output target supported by Cairo. "

Once you define a custom signal, you can emit it whenever you want with the emit() method (we will come back to this later in the code).

Dealing with properties

__gproperties__ = {
    'level': (gobject.TYPE_FLOAT,
              'OSD level',
              'value for the OSD level indicator',
              0, 1, 0.5,
              gobject.PARAM_READWRITE)
}

The properties of our widget are specified via the __gproperties__ dictionary. We need only one property, called level, which is specified as the key of the first (and only) pair in the dictionary. The value is a tuple containing the following members:

• gobject.TYPE_FLOAT: This specifies the type of the property. Other types can be TYPE_STRING, TYPE_INT, TYPE_BOOLEAN, TYPE_PYOBJECT, etc.

• The second value is a short string describing the property.

• The third value is a larger description of the property.

• The fourth value specifies the minimum value of the property. Note that this is only valid for certain types of properties such as TYPE_INT, TYPE_FLOAT, etc. In other cases, it may instead refer to the default value of the property or the property flags (explained later).

• The fifth value specifies the maximum value of the property. Again, as above, this is applicable only for certain types of properties.

• The sixth value specifies the default value of the property (0.5 in our case).

• The seventh value specifies the property flags, which are set to gobject.PARAM_READWRITE. Other flags include PARAM_CONSTRUCT, indicating that the property will be set during object construction; PARAM_CONSTRUCT_ONLY, indicating that the property can be set during object construction only; PARAM_LAX_VALIDATION, indicating that strict validation is not required when handling the values; PARAM_READABLE, which indicates that the property is readable; and PARAM_WRITABLE, which should be used if the property is writeable. Note that property flags can be combined (eg: gobject.PARAM_CONSTRUCT | gobject.PARAM_WRITABLE indicates that the property is writeable and is set during object construction).
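The bitwise combination of flags can be tried out with plain integers. The values below are illustrative stand-ins of mine, not the real constants from the gobject module:

```python
# Illustrative stand-in values -- the real constants live in gobject
PARAM_READABLE  = 1 << 0
PARAM_WRITABLE  = 1 << 1
PARAM_CONSTRUCT = 1 << 2

# Read/write access is simply the OR of the two access flags
PARAM_READWRITE = PARAM_READABLE | PARAM_WRITABLE

# A property that is writeable and set during object construction
flags = PARAM_CONSTRUCT | PARAM_WRITABLE
```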

GObject requires that two methods, do_get_property() and do_set_property(), be defined. These are called whenever someone tries to access the properties. For our example, the methods are shown in Listing 1.

Note: When you have property names with more than one word, GObject translates the - (hyphen) to _ (underscore) and vice versa. So a property representing a Python variable "update_speed" would be translated into "update-speed" by GObject. This is something that you should keep in mind while working on your code.
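The translation itself is mechanical, as this little sketch shows (the helper names are mine, for illustration only):

```python
def canonical_name(python_name):
    # GObject's canonical form of a property name uses hyphens
    # where the Python attribute name uses underscores.
    return python_name.replace('_', '-')

def attribute_name(gobject_name):
    # ...and the reverse mapping, back to a valid Python identifier.
    return gobject_name.replace('-', '_')
```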

The constructor

The constructor for our widget (Listing 2) takes the initial OSD level as an argument. It calls the constructor of the widget's superclass (gtk.Window) and sets the type to gtk.WINDOW_POPUP so that the window manager will not register our window. Thus the window will remain undecorated, it will not appear in the panel, and users will not be able to resize or move it. We also call set_app_paintable(True) so that we can draw the window background ourselves, instead of GTK+ painting it in the opaque color specified in our theme. The position of the window is set to gtk.WIN_POS_CENTER, so that it appears in the middle of the screen. We also call self.do_screen_changed(), which sets up the widget for the current screen. To make the widget emit the "clicked" signal when clicked, we first add BUTTON_PRESS_MASK to the event mask for our widget and then set up the widget to emit the signal when a button press is received. Both steps are done via the following code:

self.add_events(gtk.gdk.BUTTON_PRESS_MASK)
self.connect('button-press-event',
             lambda x, y: self.emit('clicked'))

def do_get_property(self, property):
    if property.name in self.__gproperties__:
        return getattr(self, property.name)
    else:
        raise AttributeError, 'unknown property %s' % property.name

def do_set_property(self, property, value):
    if property.name in self.__gproperties__:
        return setattr(self, property.name, value)
    else:
        raise AttributeError, 'unknown property %s' % property.name

LISTING 1

def __init__(self, level=0.5):
    gtk.Window.__init__(self, type=gtk.WINDOW_POPUP)
    self.set_app_paintable(True)
    self.set_position(gtk.WIN_POS_CENTER)
    self.do_screen_changed()

    self.add_events(gtk.gdk.BUTTON_PRESS_MASK)
    self.connect('button-press-event',
                 lambda x, y: self.emit('clicked'))

    self.level = level

LISTING 2

And in the final step of our constructor, we set the level property of our widget.

The expose event handler

The do_expose_event() method (Listing 3) is called when the widget gets an 'expose-event' (i.e., when all or part of the widget needs to be redrawn for some reason). Before anything else, we get a CairoContext for our widget by invoking self.window.cairo_create(), and we will be using this CairoContext for all of our draw operations. The ctx.rectangle() method is used to specify the exact region that will be affected by our drawing operations. Once this region is determined, the ctx.clip() call masks out all other parts of the surface that sit outside this region. For our drawing, we want to operate in normal "source" mode, and not in "over" or "xor" mode (where drawing the same thing twice will have the same effect as deleting it), so we set the compositing operator to cairo.OPERATOR_SOURCE.

Although we want translucency in our widget, there are a lot of systems out there which do not have translucency (due to older software, or sometimes because the user has chosen to disable it). Hence, we need to gracefully handle (or at least try to handle) systems which do not support translucency. In order to do that, we perform some checks in our do_screen_changed() method to figure out if our X server supports RGBA visuals and set the value of supports_alpha accordingly. However, even if the X server supports RGBA visuals, a compositing manager may not be running on the system, and in such a situation we cannot rely on the alpha channel being drawn correctly on the screen. For such a situation, the gtk.Widget class in PyGTK 2.10 and above provides a method called is_composited() which returns True if a compositing manager is running for the screen on which the widget is being displayed. We use both the value of supports_alpha and that returned by is_composited() to figure out whether we should be using alpha channels, and set the source accordingly.

if self.supports_alpha and self.is_composited():
    ctx.set_source_rgba(1.0, 1.0, 1.0, 0.0)
else:
    ctx.set_source_rgb(1.0, 1.0, 1.0)
ctx.paint()

If the alpha channel is supported, we paint the entire background of the widget to be transparent (with an alpha value of 0.0).

Next, we move on to draw the base image for our widget, a translucent rectangle. However, we cannot use the rectangle() method here, since we want our rectangle to have rounded corners. So we create a path for such a rectangle using successive calls to line_to() and curve_to()

def clicked_cb(window):
    print "Exiting..."
    gtk.main_quit()

def update(window):
    window.level = random.random()
    return True

if __name__ == '__main__':
    window = OsdWindow(level=0.75)
    window.connect('delete-event', gtk.main_quit)
    window.connect('clicked', clicked_cb)
    window.show()

    gobject.timeout_add(1000, update, window)

    gtk.main()

LISTING 4

def do_expose_event(self, event):
    ctx = self.window.cairo_create()
    alpha = self.supports_alpha and self.is_composited()

    ctx.rectangle(event.area.x, event.area.y,
                  event.area.width, event.area.height)
    ctx.clip()
    ctx.set_operator(cairo.OPERATOR_SOURCE)
    if alpha:
        ctx.set_source_rgba(1.0, 1.0, 1.0, 0.0)
    else:
        ctx.set_source_rgb(1.0, 1.0, 1.0)
    ctx.paint()

    x0 = event.area.x
    y0 = event.area.y
    width = event.area.width
    height = event.area.height
    x1 = x0 + width
    y1 = y0 + height
    radius = 40
    ctx.move_to(x0, y0+radius)
    ctx.curve_to(x0, y0+radius, x0, y0, x0+radius, y0)
    ctx.line_to(x1-radius, y0) # Top line segment
    ctx.curve_to(x1-radius, y0, x1, y0, x1, y0+radius)
    ctx.line_to(x1, y1-radius) # Right line segment
    ctx.curve_to(x1, y1-radius, x1, y1, x1-radius, y1)
    ctx.line_to(x0+radius, y1) # Bottom line segment
    ctx.curve_to(x0+radius, y1, x0, y1, x0, y1-radius)
    ctx.close_path() # Left line segment
    if alpha:
        ctx.set_source_rgba(0, 0, 0, 0.5)
    else:
        ctx.set_source_rgb(0, 0, 0)
    ctx.fill()

    x0 = event.area.width/4
    y0 = event.area.height/3
    width = event.area.width/2
    height = event.area.height/2
    ctx.set_line_width(5)
    ctx.move_to(x0, y0)
    ctx.rel_line_to(width/2, 0)
    ctx.rel_line_to(width/3, -height/4)
    ctx.rel_curve_to(0, 0, height/3,
                     height/2, 0, height)
    ctx.rel_line_to(-width/3, -height/4)
    ctx.rel_line_to(-width/2, 0)
    ctx.close_path()
    ctx.set_source_rgb(1, 1, 1)
    ctx.stroke()

    x0 = event.area.width/8
    y0 = event.area.height - event.area.height/8
    length = event.area.width - event.area.width/8 - x0

    ctx.set_line_width(10)
    ctx.set_dash((10, 5), 0)
    ctx.move_to(x0, y0)
    ctx.line_to(length*self.level+x0, y0)
    ctx.stroke()

LISTING 3

#!/usr/bin/env python

# Demonstration of custom PyGTK widgets
# Author: Sayamindu Dasgupta

import random

import pygtk
pygtk.require('2.0')
import gtk
from gtk import gdk
import cairo
import gobject


if gtk.pygtk_version < (2,10,0):
    print 'PyGtk 2.10.0 or later required'
    raise SystemExit

class OsdWindow(gtk.Window):
    __gsignals__ = {
        'expose-event': 'override',
        'screen-changed': 'override',
        'clicked' : (gobject.SIGNAL_RUN_LAST,
                     gobject.TYPE_NONE,
                     ())
    }

    __gproperties__ = {
        'level': (gobject.TYPE_FLOAT,
                  'OSD level', 'value for the OSD level indicator',
                  0, 1, 0.5, gobject.PARAM_READWRITE)
    }

    def __init__(self, level=0.5):
        gtk.Window.__init__(self, type=gtk.WINDOW_POPUP)
        self.set_app_paintable(True)
        self.set_position(gtk.WIN_POS_CENTER)
        self.do_screen_changed()

        self.add_events(gtk.gdk.BUTTON_PRESS_MASK)
        self.connect('button-press-event',
                     lambda x, y: self.emit('clicked'))

        self.level = level

    def do_get_property(self, property):
        if property.name in self.__gproperties__:
            return getattr(self, property.name)
        else:
            raise AttributeError, 'unknown property %s' % property.name

    def do_set_property(self, property, value):
        if property.name in self.__gproperties__:
            return setattr(self, property.name, value)
        else:
            raise AttributeError, 'unknown property %s' % property.name

    def _set_level(self, level):
        self._level = level
        if self.window:
            alloc = self.get_allocation()
            rect = gdk.Rectangle(alloc.x, alloc.y,
                                 alloc.width, alloc.height)
            self.window.invalidate_rect(rect, True)
            self.window.process_updates(True)
    # Enforce how the level is set, so it doesn't change without updating the UI
    level = property(lambda self: self._level, _set_level)

    def do_expose_event(self, event):
        ctx = self.window.cairo_create()
        alpha = self.supports_alpha and self.is_composited()

        ctx.rectangle(event.area.x, event.area.y,
                      event.area.width, event.area.height)
        ctx.clip()
        ctx.set_operator(cairo.OPERATOR_SOURCE)
        if alpha:
            ctx.set_source_rgba(1.0, 1.0, 1.0, 0.0)
        else:
            ctx.set_source_rgb(1.0, 1.0, 1.0)
        ctx.paint()

        x0 = event.area.x
        y0 = event.area.y
        width = event.area.width
        height = event.area.height
        x1 = x0 + width
        y1 = y0 + height
        radius = 40
        ctx.move_to(x0, y0+radius)
        ctx.curve_to(x0, y0+radius, x0, y0, x0+radius, y0)
        ctx.line_to(x1-radius, y0) # Top line segment
        ctx.curve_to(x1-radius, y0, x1, y0, x1, y0+radius)
        ctx.line_to(x1, y1-radius) # Right line segment
        ctx.curve_to(x1, y1-radius, x1, y1, x1-radius, y1)
        ctx.line_to(x0+radius, y1) # Bottom line segment
        ctx.curve_to(x0+radius, y1, x0, y1, x0, y1-radius)
        ctx.close_path() # Left line segment
        if alpha:
            ctx.set_source_rgba(0, 0, 0, 0.5)
        else:
            ctx.set_source_rgb(0, 0, 0)
        ctx.fill()

        x0 = event.area.width/4
        y0 = event.area.height/3
        width = event.area.width/2
        height = event.area.height/2
        ctx.set_line_width(5)
        ctx.move_to(x0, y0)
        ctx.rel_line_to(width/2, 0)
        ctx.rel_line_to(width/3, -height/4)
        ctx.rel_curve_to(0, 0, height/3,
                         height/2, 0, height)
        ctx.rel_line_to(-width/3, -height/4)
        ctx.rel_line_to(-width/2, 0)
        ctx.close_path()
        ctx.set_source_rgb(1, 1, 1)
        ctx.stroke()

        x0 = event.area.width/8
        y0 = event.area.height - event.area.height/8
        length = event.area.width - event.area.width/8 - x0

        ctx.set_line_width(10)
        ctx.set_dash((10, 5), 0)
        ctx.move_to(x0, y0)
        ctx.line_to(length*self.level+x0, y0)
        ctx.stroke()

    def do_screen_changed(self, old_screen=None):
        screen = self.get_screen()
        colormap = screen.get_rgba_colormap()
        if colormap:
            self.supports_alpha = True
        else:
            colormap = screen.get_rgb_colormap()
            self.supports_alpha = False
        self.set_colormap(colormap)


def clicked_cb(window):
    print "Exiting..."
    gtk.main_quit()

def update(window):
    window.level = random.random()
    return True

if __name__ == '__main__':
    window = OsdWindow(level=0.75)
    window.connect('delete-event', gtk.main_quit)
    window.connect('clicked', clicked_cb)
    window.show()

    gobject.timeout_add(1000, update, window)

    gtk.main()

LISTING 5

(we calculate the originating point for the rectangle, as well as its dimensions, based on the dimensions of the widget). Once the path is created, we set the source to black with 50% transparency and call the fill() method.

Once the rectangle has been drawn, we follow a similar pattern of calculating our coordinates based on the size of the widget for drawing the icon for the speaker. Note the use of relative coordinate versions of the drawing methods while creating the speaker icon.

For the volume level indicator bar, we use a hack. We essentially draw a thick dashed line which looks like a bar.

ctx.set_line_width(10)
ctx.set_dash((10, 5), 0)
ctx.move_to(x0, y0)
ctx.line_to(length*self.level+x0, y0)
ctx.stroke()

The first line sets the line width to 10 pixels. The second line sets the line to be dashed from the very beginning, with each dash being 10 pixels wide, and the gap between two dashes being 5 pixels. The last three lines actually draw the line (bar).
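The coordinate arithmetic for the bar can be checked in isolation. This sketch mirrors the expressions used in the expose handler (with explicit float division, since the original code relies on Python 2 integer division for layout rounding):

```python
def level_bar(area_width, area_height, level):
    # Mirror the handler's maths: the bar starts one eighth of the way
    # in, sits one eighth up from the bottom edge, and its drawn length
    # scales linearly with the current level.
    x0 = area_width / 8.0
    y0 = area_height - area_height / 8.0
    length = area_width - area_width / 8.0 - x0
    return x0, y0, length * level
```

For a 400x240 widget at half volume, the bar starts at (50, 210) and extends 150 pixels.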

The screen change event handler

def do_screen_changed(self, old_screen=None):
    screen = self.get_screen()
    colormap = screen.get_rgba_colormap()
    if colormap:
        self.supports_alpha = True
    else:
        colormap = screen.get_rgb_colormap()
        self.supports_alpha = False
    self.set_colormap(colormap)

This event handler is there to make sure that the widget does not act strangely if its screen changes. It sets the value of the supports_alpha variable to False if the screen does not support RGBA visuals, so the expose event handler knows that it should not draw using alpha channels until the screen changes again.

The _set_level() method

def _set_level(self, level):
    self._level = level
    if self.window:
        alloc = self.get_allocation()
        rect = gdk.Rectangle(alloc.x, alloc.y,
                             alloc.width, alloc.height)
        self.window.invalidate_rect(rect, True)
        self.window.process_updates(True)

This method lets the developer set the value of the widget's level property. After a new value is received, it also

forces the widget to redraw so that the level indicator bar gets updated accordingly. This is done via the invalidate_rect() and process_updates() methods, which send a synthetic expose event to the widget.

And in the end, the demonstration

The demonstration (Listing 4) is fairly straightforward. The widget is displayed on screen, and its level property value is changed every second (via gobject.timeout_add()) to some random value. On clicking, gtk.main_quit() is called, which terminates the demo.

Final words and conclusion

This was a small demo (for the entire program's code, see Listing 5) of what is possible by combining PyCairo and PyGTK. Almost every week, cool new widgets with really fancy effects are created by developers all over the world, and I hope that my article will serve as the "initial push" for my readers in that direction. Readers who want to know more can look at the PyGTK website (http://www.pygtk.org). Be sure to read the FAQ for in-depth information on just about every aspect of the toolkit. Cairo is a comparatively newer technology, but it is being adopted at a tremendous pace, and examples, tutorials, etc. are available all over the Internet. Most of the documents and tutorials are listed at the Cairo website (http://www.cairographics.org), and I would encourage you to look at the Samples section of the website (http://cairographics.org/samples/), where a large number of code snippets illustrate the various renderings possible via Cairo.

Editor's note: The code for this article was tested under Linux, but if you develop with GTK under Microsoft Windows, the same concepts apply and most, if not all, of the code should be reusable.

Sayamindu Dasgupta is an engineering student from Kolkata, India. Apart from coordinating the bn_IN (Bengali, India) translations for GNOME, he is involved with the Sabayon project and the Exaile Media Player project. When not around computers, he likes to play with his pet cat or fiddle with his digital camera.

REQUIREMENTS

PYTHON: 2.x

Other Software: Max M's icalendar library, from http://codespeak.net/icalendar/

Useful/Related Links:
• Source for this program: http://www.doughellmann.com/projects/mailbox2ics/
• RFC 2445 - iCalendar specification: http://www.ietf.org/rfc/rfc2445.txt
• IMAP specification: http://www.ietf.org/rfc/rfc3501.txt
• Python standard library imaplib documentation: http://docs.python.org/lib/module-imaplib.html

What can you do to access group calendar information if your Exchange-like mail and calendaring server does not provide iCalendar feeds, and you do not, or cannot, use Outlook? Use Python to extract the calendar data and generate your own feed, of course! This article discusses a surprisingly short program to perform what seems like a complex operation: scan IMAP folders, extract iCalendar attachments, and merge the contained events together into a single calendar.

by Doug Hellmann

Working with IMAP and iCalendar

I recently needed to access shared schedule information stored on an Exchange-like mail and calendaring server. In this article, I will discuss how I combined

an existing third-party open source library with the tools in the Python standard library to create a command line program called mailbox2ics for converting the calendar data into a format I could bring into my desktop client directly. The final product is just under 140 lines long, including command line switch handling, some error processing, and debug statements; far shorter than I had anticipated. The output file produced can be consumed by any scheduling client which supports the iCalendar standard.

Using Exchange, or a compatible replacement, for email and scheduling makes sense for many environments. The client program, Microsoft Outlook, is usually familiar to non-technical staff members, who are able to hit the ground running instead of trying to figure out how to accomplish their basic communication tasks. However, my laptop runs Mac OS X and I do not have Outlook. Purchasing a copy of Outlook at my own expense, not to mention inflicting further software bloat on my already crowded computer, seemed like a suboptimal solution.

Changing the server software was also not an option. A majority of the users already had Outlook and were accustomed to using it for their scheduling, and I did not want to have to support a different server platform. What I needed, then, was a way to pull the data out of the existing server so I could convert it to a format that I could use with my usual tools: Apple's iCal and Mail.

With iCal, as with many other standards-compliant calendar tools, it is possible to subscribe to calendar data feeds. Unfortunately, the server we were using did not have the ability to export the schedule data in a standard format using a single file or URL. However, the server did provide access to the calendar data via IMAP using shared public folders. I decided to write a Python program to extract the data from the server and convert it into a usable feed. The feed could then be passed to iCal, which would merge the group schedule with the rest of my calendar information so I could see the group events alongside my other meetings, deadlines, and reminders about when the recycling is picked up on our street.

IMAP Basics

The calendar data with which I was working was only accessible as attachments on email messages on an IMAP server. The messages were grouped into several folders, with each folder representing a separate public calendar used for a different purpose (meeting room schedules, event planning, holiday and vacation schedules, etc). I had read-only access to all of the messages in the public calendar folders. Each email message typically had one attachment describing a single event. To produce the merged calendar, I needed to scan several folders, read each message in the folder, find and parse the calendar data in the attachments, and identify the calendar events. Once I had identified the events to include in the output, I needed to add them to an output file in a format iCal understands.
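The "find the calendar data in the attachments" step can be sketched with the standard library's email package. The function name below is mine for illustration; the actual mailbox2ics code may structure this differently:

```python
import email

def calendar_payloads(raw_message):
    # Walk every MIME part of one downloaded message and keep only the
    # iCalendar attachments (content type text/calendar).
    msg = email.message_from_string(raw_message)
    return [part.get_payload(decode=True)
            for part in msg.walk()
            if part.get_content_type() == 'text/calendar']
```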

Python's standard library includes the imaplib module for working with IMAP servers. The IMAP4 and IMAP4_SSL classes provide a high level interface to all of the features I needed: connecting to the server securely, accessing mailboxes, finding messages, and then downloading them. To experiment with retrieving data from the IMAP server, I started by establishing a secure connection to the server on the standard port for IMAP over SSL, and logging in using my regular account. This would not be a desirable way to run the final program on a regular basis, but it worked fine for development and testing.

mail_server = imaplib.IMAP4_SSL(hostname)
mail_server.login(username, password)

It is also possible to use IMAP over a non-standard port, when necessary. In that case, the caller can pass port as an additional option to imaplib.IMAP4_SSL(). To work with an IMAP server without the SSL encryption layer, you can use the IMAP4 class, but using SSL is definitely preferred whenever possible.

mail_server = imaplib.IMAP4_SSL(hostname, port)
mail_server.login(username, password)

The connection to the IMAP server is "stateful". The client remembers which methods have been called on it, and changes its internal state to reflect those calls. The internal state is used to detect logical errors in the sequence of method calls without a round-trip to the server.

On an IMAP server, messages are organized into "mailboxes". Each mailbox has a name and, since mailboxes might be nested, the full name of the mailbox is the path to that mailbox. Mailbox paths work just like paths to directories or folders in a filesystem. The paths are single strings, with levels usually separated by forward slash (/) or period (.). The actual separator value used depends on the configuration of your IMAP server; one of my servers uses a slash, while the other uses period. If you do not already know how your server is set up, you will need to experiment to determine the correct folder names.

Once I had my client connected to the server, the next step was to call select() to set the mailbox context to be used when searching for and downloading messages.

mail_server.select('Public Folders/EventCalendar')
# or
mail_server.select('Public Folders.EventCalendar')

After a mailbox is selected, it is possible to retrieve messages from the mailbox using search(). The IMAP method search() supports filtering to identify only the messages you need. You can search for messages based on the content of the message headers, with the rules evaluated on the server instead of your client, thus reducing the amount of information the server has to transmit to the client. Refer to RFC 3501 ("Internet Message Access Protocol") for details about the types of queries which can be performed and the syntax for passing the query arguments.

In order to implement mailbox2ics, I needed to look at all of the messages in every mailbox for the user named on the command line, so I simply used the filter "ALL" with each mailbox. The return value from search() includes a response code and a string with the message numbers separated by spaces. A separate call is required to retrieve more details about an individual message,

October 2007 • Python Magazine • 19


FEATURE: Working with IMAP and iCalendar

such as the headers or body.

(typ, [message_ids]) = mail_server.search(None, 'ALL')
message_ids = message_ids.split()

Individual messages are retrieved via fetch(). If only part of the message is desired (size, envelope, body), that part can be fetched to limit bandwidth. I could not predict which subset of the message body might include the attachments I wanted, so it was simplest for me to download the entire message. Calling fetch("(RFC822)") returns a string containing the MIME-encoded version of the message with all headers intact.

typ, message_parts = mail_server.fetch(
    message_ids[0], '(RFC822)')
message_body = message_parts[0][1]

Once the message body had been downloaded, the next step was to parse it to find the attachments with calendar data. Beginning with version 2.2.3, the Python standard library has included the email package for working with standards-compliant email messages. There is a straightforward factory for converting message text to Message objects. To parse the text representation of an email and create a Message instance from it, use email.message_from_string().

msg = email.message_from_string(message_body)

Message objects are almost always made up of multiple parts. The parts of the message are organized in a tree structure, with message attachments supporting nested attachments. Subparts or attachments can even include entire email messages, such as when you forward a message which already contains an attachment to someone else. To iterate over all of the parts of the Message tree recursively, use the walk() method.

for part in msg.walk():
    print part.get_content_type()

Having access to the email package saved an enormous amount of time on this project. Parsing multi-part email messages reliably is tricky, even with (or perhaps because of) the many standards involved. With the email package, in just a few lines of Python, you can parse and traverse all of the parts of even the most complex standards-compliant multi-part email message, giving you access to the type and content of each part.
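To make that flow concrete, here is a small, self-contained sketch (written for a current Python interpreter, with contrived message content rather than real data from the calendar server) showing how message_from_string() and walk() combine to pull calendar attachments out of a multipart message:

```python
import email

# Contrived sample data; a real message from the server would carry
# more headers and a complete VCALENDAR body.
raw_message = """\
From: alice@example.com
To: bob@example.com
Subject: Meeting invitation
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BOUNDARY"

--BOUNDARY
Content-Type: text/plain

You are invited to a meeting.
--BOUNDARY
Content-Type: text/calendar; method=PUBLISH

BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Staff meeting
END:VEVENT
END:VCALENDAR
--BOUNDARY--
"""

msg = email.message_from_string(raw_message)

# walk() yields the multipart container and then each part, so
# filtering on the content type skips everything but the ICS data.
ics_parts = [part.get_payload(decode=True)
             for part in msg.walk()
             if part.get_content_type() == 'text/calendar']
```

The plain text part and the container itself are ignored; only the text/calendar payload survives the filter.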

Accessing Calendar Data

The "Internet Calendaring and Scheduling Core Object Specification", or iCalendar, is defined in RFC 2445. iCalendar is a data format for sharing scheduling and other date-oriented information. One typical way to receive an iCalendar event notification is via an email attachment. Most standard calendaring tools, such as iCal and Outlook, generate these email messages when you initially "invite" another participant to a meeting, or update an existing meeting description. The iCalendar standard says the file should have filename extension ICS and mime-type text/calendar. The input data for mailbox2ics came from email attachments of this type.

The iCalendar format is text-based. A simple example of an ICS file with a single event is provided in Listing 1. Calendar events have properties to indicate who was invited to an event, who originated it, where and when it will be held, and all of the other expected bits of information important for a scheduled event. Each property of the event is encoded on its own line, with long values wrapped onto multiple lines in a well-defined way to allow the original content to be reconstructed by a client receiving the iCalendar representation of the data. Some properties also can be repeated, to handle cases such as meetings with multiple invitees.

In addition to having a variety of single or multi-value properties, calendar elements can be nested, much like email messages with attachments. An ICS file is made up of a VCALENDAR component, which usually includes one or more VEVENT components. A VCALENDAR might also include VTODO components (for tasks on a to-do list). A VEVENT may contain a VALARM, which specifies the time and means by which the user should be reminded of the event. The complete description of the iCalendar format, including valid component types and property names, and the types of values which are legal for each property, is available in the RFC.

This sounds complex, but luckily, I did not have to worry about parsing the ICS data at all. Instead of doing the work myself, I took advantage of an open source Python library for working with iCalendar data released by Max M. ([email protected]). His iCalendar library (available from codespeak.net) makes parsing ICS data sources very simple. The API for the library was designed based

BEGIN:VCALENDAR
CALSCALE:GREGORIAN
PRODID:-//Big Calendar Corp//Server Version X.Y.Z//EN
VERSION:2.0
METHOD:PUBLISH
BEGIN:VEVENT
UID:20379258.1177945519186.JavaMail.root(a)imap.example.com
LAST-MODIFIED:20070519T000650Z
DTSTAMP:20070519T000650Z
DTSTART;VALUE=DATE:20070508
DTEND;VALUE=DATE:20070509
PRIORITY:5
TRANSP:OPAQUE
SEQUENCE:0
SUMMARY:Day off
LOCATION:
CLASS:PUBLIC
END:VEVENT
END:VCALENDAR

LISTING 1

20 • Python Magazine • October 2007

Page 21: Py Mag 2007 10

FEATURE Working with IMAP and iCalendar

on the email package discussed previously, so working with Calendar instances and email.Message instances is similar. Use the class method Calendar.from_string() to parse the text representation of the calendar data to create a Calendar instance populated with all of the properties and subcomponents described in the input data.

from icalendar import Calendar, Event

cal_data = Calendar.from_string(
    open('sample.ics', 'rb').read())

Once you have instantiated the Calendar object, there are two different ways to iterate through its components: via the walk() method or the subcomponents attribute. Using walk() will traverse the entire tree and let you process each component in the tree individually. Accessing the subcomponents list directly lets you work with a larger portion of the calendar data tree at one time. Properties of an individual component, such as the summary or start date, are accessed via the __getitem__() API, just as

LISTING 2

  1. #!/usr/bin/env python
  2. # mailbox2ics.py
  3.
  4. """Convert the contents of an imap mailbox to an ICS file.
  5.
  6. This program scans an IMAP mailbox, reads in any messages with ICS
  7. files attached, and merges them into a single ICS file as output.
  8. """
  9.
 10. # Import system modules
 11. import imaplib
 12. import email
 13. import getpass
 14. import optparse
 15. import sys
 16.
 17. # Import Local modules
 18. from icalendar import Calendar, Event
 19.
 20. # Module
 21.
 22. def main():
 23.     # Set up our options
 24.     option_parser = optparse.OptionParser(
 25.         usage='usage: %prog [options] hostname username mailbox [mailbox...]'
 26.     )
 27.     option_parser.add_option('-p', '--password', dest='password',
 28.         default='',
 29.         help='Password for username',
 30.     )
 31.     option_parser.add_option('--port', dest='port',
 32.         help='Port for IMAP server',
 33.         type="int",
 34.     )
 35.     option_parser.add_option('-v', '--verbose',
 36.         dest="verbose",
 37.         action="store_true",
 38.         default=True,
 39.         help='Show progress',
 40.     )
 41.     option_parser.add_option('-q', '--quiet',
 42.         dest="verbose",
 43.         action="store_false",
 44.         help='Do not show progress',
 45.     )
 46.     option_parser.add_option('-o', '--output', dest="output",
 47.         help="Output file",
 48.         default=None,
 49.     )
 50.
 51.     (options, args) = option_parser.parse_args()
 52.     if len(args) < 3:
 53.         option_parser.print_help()
 54.         print >>sys.stderr, '\nERROR: Please specify a username, hostname, and mailbox.'
 55.         return 1
 56.     hostname = args[0]
 57.     username = args[1]
 58.     mailboxes = args[2:]
 59.
 60.     # Make sure we have the credentials to login to the IMAP server.
 61.     password = options.password or getpass.getpass(stream=sys.stderr)
 62.
 63.     # Initialize a calendar to hold the merged data
 64.     merged_calendar = Calendar()
 65.     merged_calendar.add('prodid', '-//mailbox2ics//doughellmann.com//')
 66.     merged_calendar.add('calscale', 'GREGORIAN')
 67.
 68.     if options.verbose:
 69.         print >>sys.stderr, 'Logging in to "%s" as %s' % (hostname, username)
 70.
 71.     # Connect to the mail server
 72.     if options.port is not None:
 73.         mail_server = imaplib.IMAP4_SSL(hostname, options.port)
 74.     else:
 75.         mail_server = imaplib.IMAP4_SSL(hostname)
 76.     (typ, [login_response]) = mail_server.login(username, password)
 77.     try:
 78.         # Process the mailboxes
 79.         for mailbox in mailboxes:
 80.             if options.verbose: print >>sys.stderr, 'Scanning %s ...' % mailbox
 81.             (typ, [num_messages]) = mail_server.select(mailbox)
 82.             if typ == 'NO':
 83.                 raise RuntimeError('Could not find mailbox %s: %s' %
 84.                                    (mailbox, num_messages))
 85.             num_messages = int(num_messages)
 86.             if not num_messages:
 87.                 if options.verbose: print >>sys.stderr, ' empty'
 88.                 continue
 89.
 90.             # Find all messages
 91.             (typ, [message_ids]) = mail_server.search(None, 'ALL')
 92.             for num in message_ids.split():
 93.
 94.                 # Get a Message object
 95.                 typ, message_parts = mail_server.fetch(num, '(RFC822)')
 96.                 msg = email.message_from_string(message_parts[0][1])
 97.
 98.                 # Look for calendar attachments
 99.                 for part in msg.walk():
100.                     if part.get_content_type() == 'text/calendar':
101.                         # Parse the calendar attachment
102.                         ics_text = part.get_payload(decode=1)
103.                         importing = Calendar.from_string(ics_text)
104.
105.                         # Add events from the calendar to our merge calendar
106.                         for event in importing.subcomponents:
107.                             if event.name != 'VEVENT':
108.                                 continue
109.                             if options.verbose:
110.                                 print >>sys.stderr, 'Found: %s' % event['SUMMARY']
111.                             merged_calendar.add_component(event)
112.     finally:
113.         # Disconnect from the IMAP server
114.         if mail_server.state != 'AUTH':
115.             mail_server.close()
116.         mail_server.logout()
117.
118.     # Dump the merged calendar to our output destination
119.     if options.output:
120.         output = open(options.output, 'wt')
121.         try:
122.             output.write(str(merged_calendar))
123.         finally:
124.             output.close()
125.     else:
126.         print str(merged_calendar)
127.     return 0
128.
129. if __name__ == '__main__':
130.     try:
131.         exit_code = main()
132.     except Exception, err:
133.         print >>sys.stderr, 'ERROR: %s' % str(err)
134.         exit_code = 1
135.     sys.exit(exit_code)
136.


with a standard Python dictionary. The property names are not case sensitive.

For example, to print the "SUMMARY" field values from all top level events in a calendar, you would first iterate over the subcomponents, then check the name attribute to determine the component type. If the type is VEVENT, then the summary can be accessed and printed.

for event in cal_data.subcomponents:
    if event.name == 'VEVENT':
        print 'EVENT:', event['SUMMARY']

While most of the ICS attachments in my input data would be made up of one VCALENDAR component with one VEVENT subcomponent, I did not want to require this limitation. The calendars are writable by anyone in the organization, so while it was unlikely that anyone would have added a VTODO or VJOURNAL to public data, I could not count on it. Checking for VEVENT as I scanned each component let me ignore components with types that I did not want to include in the output.

Writing ICS data to a file is as simple as reading it, and only takes a few lines of code. The Calendar class handles the difficult tasks of encoding and formatting the data as needed to produce a fully formatted ICS representation, so I only needed to write the formatted text to a file.

ics_output = open('output.ics', 'wb')
try:
    ics_output.write(str(cal_data))
finally:
    ics_output.close()

Finding Max M's iCalendar library saved me a lot of time and effort, and demonstrates clearly the value of Python and open source in general. The API is concise and, since it is patterned off of another library I was already using, the idioms were familiar. I had not embarked on this project eager to write parsers for the input data, so I was glad to have libraries available to do that part of the work for me.

Putting It All Together

At this point, I had the pieces to build a program to do what I needed. I could read the email messages from the server via IMAP, parse each message looking for the ICS attachments, parse them to produce another ICS file, and import that file into my calendar client. All that remained was to tie the pieces together and give it a user interface. The source for the resulting program, mailbox2ics.py, is provided in Listing 2.

Since I wanted to set up the export job to run on a regular basis via cron, I chose a command line interface. The main() function for mailbox2ics.py starts out at line 24 with the usual sort of configuration for command line option processing via the optparse module. Listing 3 shows the help output produced when the program is run with the -h option.

The --password option can be used to specify the IMAP account password on the command line, but if you choose to use it consider the security implications of embedding a password in the command line for a cron task or shell script. No matter how you specify the password, I recommend creating a separate mailbox2ics account on the IMAP server and limiting the rights it has so no data can be created or deleted and only public folders can be accessed. If --password is not specified on the command line, the user is prompted for a password when they run the program. While less useful with cron, providing the password interactively can be a solution if you are unable, or not allowed, to create a separate restricted account on the IMAP server. The account name used to connect to the server is required on the command line.

There is also a separate option for writing the ICS output data to a file. The default is to print the sequence of events to standard output in ICS format. Though it is easy enough to redirect standard output to a file, the -o option can be useful if you are using the -v option to enable verbose progress tracking and debugging.

The program uses a separate Calendar instance, merged_calendar, to hold all of the ICS information to be included in the output. All of the VEVENT components from the input are copied to merged_calendar in memory, and the entire calendar is written to the output location at the end of the program. After initialization (line 64), merged_calendar is configured with some basic properties. PRODID is required and specifies the name of the product which produced the ICS file. CALSCALE defines the date system, or scale, used for the calendar.

After setting up merged_calendar, mailbox2ics connects to the IMAP server. It tests whether the user has specified a network port using --port and only passes a port number to imaplib if the user includes the option. The optparse library converts the option value to an integer based on the option configuration, so options.port is either an integer or None.

The names of all mailboxes to be scanned are passed as arguments to mailbox2ics on the command line after

Usage: mailbox2ics.py [options] hostname username mailbox [mailbox...]

Options:
  -h, --help            show this help message and exit
  -p PASSWORD, --password=PASSWORD
                        Password for username
  --port=PORT           Port for IMAP server
  -v, --verbose         Show progress
  -q, --quiet           Do not show progress
  -o OUTPUT, --output=OUTPUT
                        Output file

LISTING 3


the rest of the option switches. Each mailbox name is processed one at a time, in the for loop starting on line 79. After calling select() to change the IMAP context, the message ids of all of the messages in the mailbox are retrieved via a call to search(). The full content of each message in the mailbox is fetched in turn, and parsed with email.message_from_string(). Once the message has been parsed, the msg variable refers to an instance of email.Message.

Each message may have multiple parts containing different MIME encodings of the same data, as well as any additional message information or attachments included in the email which generated the event. For event notification messages, there is typically at least one human-readable representation of the event, and frequently both HTML and plain text are included. The message also includes the actual ICS file, of course. For my purposes, only the ICS attachments were important, but there is no way to predict where they will appear in the sequence of attachments on the email message. To find the ICS attachments, mailbox2ics walks through all of the parts of the message recursively looking for attachments with mime-type text/calendar (as specified in the iCalendar standard) and ignoring everything else. Attachment names are ignored, since mime-type is a more reliable way to identify the calendar data accurately.

for part in msg.walk():
    if part.get_content_type() == 'text/calendar':
        # Parse the calendar attachment
        ics_text = part.get_payload(decode=1)
        importing = Calendar.from_string(ics_text)

When it finds an ICS attachment, mailbox2ics parses the text of the attachment to create a new Calendar instance, then copies the VEVENT components from the parsed Calendar to merged_calendar. The events do not need to be sorted into any particular order when they are added to merged_calendar, since the client reading the ICS file will filter and reorder them as necessary. It was important to take the entire event, including any subcomponents, to ensure that all alarms are included. Instead of traversing the entire calendar and accessing each component individually, I simply iterated over the subcomponents of the top-level VCALENDAR node. Most of the ICS files only included one VEVENT anyway, but I did not want to miss anything important if that ever turned out not to be the case.

for event in importing.subcomponents:
    if event.name != 'VEVENT':
        continue
    merged_calendar.add_component(event)

Once all of the mailboxes, messages, and calendars are processed, merged_calendar refers to a Calendar instance containing all of the events discovered. The last step in the process, starting at line 119, is for mailbox2ics to create the output. The event data is formatted with str(merged_calendar), just as in the example above, and written to the output destination selected by the user (standard output or file).

Example

Listing 4 includes sample output from running mailbox2ics to merge two calendars for a couple of telecommuting workers, Alice and Bob. Both Alice and Bob have placed their calendars online at imap.example.com. In the output of mailbox2ics, you can see that Alice has 2 events in her calendar indicating the days when she will be in the office. Bob has one event for the day: a meeting scheduled with Alice.

The output file created by mailbox2ics containing the merged calendar data from Alice and Bob's calendars is shown in Listing 5. You can see that it includes all 3 events as VEVENT components nested inside a single VCALENDAR. There were no alarms or other types of components in the input data.

Mailbox2ics In Production

To solve my original problem of merging the events into a sharable calendar to which I could subscribe in iCal, I scheduled mailbox2ics to run regularly via cron. With some experimentation, I found that running it every 10 minutes caught most of the updates quickly enough for my needs. The program runs locally on a web server which has access to the IMAP server. For better security, it connects to the IMAP server as a user with restricted permissions. The ICS output file produced is written to a directory accessible to the web server software. This lets me serve the ICS file as static content on the web server to multiple subscribers. Access to the file through the web is protected by a password, to prevent unauthorized access.

Thoughts About Future Enhancements

Mailbox2ics does everything I need it to do, for now. There are a few obvious areas where it could be enhanced to make it more generally useful to other users with different needs, though. Input and output filtering for events could be added. Incremental update support would help it scale to manage larger calendars. Handling non-event data in the calendar could also prove useful. And using a configuration file to hold the IMAP password would be more secure than passing it on the command line.

At the time of this writing, mailbox2ics does not offer any way to filter the input or output data other than by controlling which mailboxes are scanned. Adding finer-grained filtering support could be useful. The input data could be filtered at two different points, based on IMAP rules or the content of the calendar entries themselves.

IMAP filter rules (based on sender, recipient, subject line, message contents, or other headers) would use the capabilities of IMAP4.search() and the IMAP server without much effort on my part. All that would be needed are a few command line options to pass the filtering rules, or code to read a configuration file. The only difference in the processing by mailbox2ics would be to convert the input rules to the syntax understood by the IMAP server and pass them to search().
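As a sketch of what that conversion might look like, the hypothetical helper below (not part of mailbox2ics) composes an RFC 3501 criteria string from a few optional rules:

```python
import datetime

def build_search_criteria(sender=None, subject=None, since=None):
    """Compose an IMAP SEARCH criteria string in RFC 3501 syntax
    from a few optional filter rules. Hypothetical helper, not part
    of mailbox2ics."""
    parts = []
    if sender:
        parts.append('FROM "%s"' % sender)
    if subject:
        parts.append('SUBJECT "%s"' % subject)
    if since:
        # IMAP dates use the DD-Mon-YYYY format, e.g. 01-Oct-2007
        parts.append('SINCE %s' % since.strftime('%d-%b-%Y'))
    if not parts:
        return 'ALL'
    return '(%s)' % ' '.join(parts)

# The resulting string is what would be handed to search(), in the
# same position where mailbox2ics currently passes 'ALL'.
criteria = build_search_criteria(sender='alice@example.com',
                                 since=datetime.date(2007, 10, 1))
```

The server then evaluates the rules, so only matching message ids come back over the wire.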

Filtering based on VEVENT properties would require a little more work. The event data must be downloaded and checked locally, since the IMAP server will not look inside the attachments to check the contents. Filtering using date ranges for the event start or stop date could be very useful, and not hard to implement. The Calendar class already converts dates to datetime instances. The datetime package makes it easy to test dates against rules such as "events in the next 7 days" or "events since Jan 1, 2007".
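A date-range rule of that sort might look something like the sketch below, with the reference date passed in explicitly so the check stays easy to test (the helper name is made up for the example):

```python
import datetime

def in_next_days(event_start, days=7, today=None):
    """Return True if an event's start date falls within the next
    `days` days. `today` can be overridden for testing; by default
    the current date is used."""
    if today is None:
        today = datetime.date.today()
    return today <= event_start <= today + datetime.timedelta(days=days)
```

A rule like "events since Jan 1, 2007" is just the one-sided version of the same comparison.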

Another simple addition would be pattern matching against other property values such as the event summary, organizer, location, or attendees. The patterns could be regular expressions, or a simpler syntax such as globbing. The event properties, when present in the input, are readily available through the __getitem__() API of the Calendar instance and it would be simple to compare them against the pattern(s).

If a large amount of data is involved, either spread across several calendars or because there are a lot of events, it might also be useful to be able to update an existing cached file, rather than building the whole ICS file from scratch each time. Looking only at unread messages in the folder, for example, would let mailbox2ics skip downloading old events that are no longer relevant or already appear in the local ICS file. It could then initialize merged_calendar by reading from the local file before updating it with new events and rewriting the file. Caching some of the results in this way would place less load on the IMAP server, so the export could easily

BEGIN:VCALENDAR
CALSCALE:GREGORIAN
PRODID:-//mailbox2ics//doughellmann.com//
BEGIN:VEVENT
CLASS:PUBLIC
DTEND;VALUE=DATE:20070704
DTSTAMP:20070705T180246Z
DTSTART;VALUE=DATE:20070703
LAST-MODIFIED:20070705T180246Z
LOCATION:
PRIORITY:5
SEQUENCE:0
SUMMARY:In the office to work with Bob on project proposal
TRANSP:TRANSPARENT
UID:9628812.1182888943029.JavaMail.root(a)imap.example.com
END:VEVENT
BEGIN:VEVENT
CLASS:PUBLIC
DTEND;VALUE=DATE:20070627
DTSTAMP:20070625T154856Z
DTSTART;VALUE=DATE:20070626
LAST-MODIFIED:20070625T154856Z
LOCATION:Atlanta
PRIORITY:5
SEQUENCE:0
SUMMARY:In the office
TRANSP:TRANSPARENT
UID:11588018.1182542267385.JavaMail.root(a)imap.example.com
END:VEVENT
BEGIN:VEVENT
CLASS:PUBLIC
DTEND;VALUE=DATE:20070704
DTSTAMP:20070705T180246Z
DTSTART;VALUE=DATE:20070703
LAST-MODIFIED:20070705T180246Z
LOCATION:
PRIORITY:5
SEQUENCE:0
SUMMARY:In the office to work with Alice on project proposal
TRANSP:TRANSPARENT
UID:9628812.1182888943029.JavaMail.root(a)imap.example.com
END:VEVENT
END:VCALENDAR

LISTING 5

$ mailbox2ics.py -o group_schedule.ics imap.example.com mailbox2ics "Calendars.Alice" "Calendars.Bob"
Password:
Logging in to "imap.example.com" as mailbox2ics
Scanning Calendars.Alice ...
Found: In the office to work with Bob on project proposal
Found: In the office
Scanning Calendars.Bob ...
Found: In the office to work with Alice on project proposal

LISTING 4


be run more frequently than once every 10 minutes.

In addition to filtering to reduce the information included in the output, it might also prove useful to add extra information by including component types other than VEVENT. For example, including VTODO would allow users to include a group action list in the group calendar. Most scheduling clients support filtering the to-do items and alarms out of calendars to which you subscribe, so if the values are included in a feed, individual users can always ignore the ones they choose.
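The incremental-update idea mentioned above could merge freshly downloaded events into the cached set by treating the UID property as an event's identity. The sketch below uses plain dictionaries in place of icalendar components; it is an illustration of the approach, not code from mailbox2ics:

```python
def merge_events(cached_events, new_events):
    """Merge newly fetched events into a cached list, keyed on the
    UID property, so a newer version of an event replaces the cached
    one instead of appearing twice."""
    by_uid = dict((event['UID'], event) for event in cached_events)
    for event in new_events:
        by_uid[event['UID']] = event
    return list(by_uid.values())

cached = [{'UID': 'a@example.com', 'SUMMARY': 'Old title'}]
fresh = [{'UID': 'a@example.com', 'SUMMARY': 'New title'},
         {'UID': 'b@example.com', 'SUMMARY': 'Second event'}]
merged = merge_events(cached, fresh)
```

The same keying would work with icalendar components, since the UID value is available through the component's dictionary-style API.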

As mentioned earlier, using the --password option to provide the password to the IMAP server is convenient, but not secure. For example, on some systems it is possible to see the arguments to programs using ps. This allows any user on the system to watch for mailbox2ics to run and observe the password used. A more secure way to provide the password is through a configuration file. The file can have filesystem permissions set so that only the owner can access it. It could also, potentially, be encrypted, though that might be overkill for this type of program. It should not be necessary to run mailbox2ics on a server where there is a high risk that the password file might be exposed.
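A minimal version of that idea might read the password from a file only after checking its permissions. This is a sketch under stated assumptions: read_password_file() is a hypothetical helper, not part of mailbox2ics, and a real tool might prefer a richer configuration-file format.

```python
import os
import stat

def read_password_file(path):
    """Read an IMAP password from a file, refusing to use the file
    if group or others have any access to it. Hypothetical helper."""
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise RuntimeError(
            '%s must be accessible only by its owner' % path)
    with open(path) as password_file:
        return password_file.read().strip()
```

With the file created as, say, mode 0600, the program can pick up the password without it ever appearing in the process listing.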

Conclusion

Mailbox2ics was a fun project that took me just a few hours over a weekend to implement and test. This project illustrates two reasons why I enjoy developing with Python. First, difficult tasks are made easier through the power of the "batteries included" nature of Python's standard distribution. And second, coupling Python with the wide array of other open source libraries available lets you get the job done, even when you encounter those times when the Python standard library lacks the exact tool you need. Using the ICS file produced by mailbox2ics, I am now able to access the calendar data I need using my familiar tools, even though iCalendar is not supported directly by the group's calendar server.


Doug Hellmann is a Senior Software Engineer at Racemi. He has been programming in Python since version 1.4 on a variety of Unix and non-Unix platforms. He has worked on projects ranging from mapping to medical news publishing, with a little banking thrown in for good measure.


FEATURE: Processing Web Forms Using Anonymous Functions & WSGI

by Kevin T. Ryan

If you're a web developer, you're well aware of the importance of forms in web development. Not only are they a valuable tool in gathering information from your users, but they can also be used for thousands of other purposes (e.g., running a survey to see what your users think of your site). This article will demonstrate how to use anonymous functions (commonly known as "lambda" functions) to assist in the creation of SQL statements based on the values received from web forms. We will demonstrate this in the context of a WSGI compliant framework or component. Though WSGI by now has become well known throughout the Python community, there still seems to be a cloud of mystery over parts of the spec. We'll discuss some of the details of the spec that relate to processing form submissions in the hopes of providing a better understanding of how WSGI fits into the bigger picture.

REQUIREMENTS

PYTHON: 2.x

Useful/Related Links:
WSGI specification
http://www.python.org/dev/peps/pep-0333/

WSGI - One of Python's Greatest Strengths

Maybe you already have an idea of what WSGI is, but what exactly does it have to do with processing forms? Everything. WSGI allows us to create "middleware" fairly easily to assist with anything from url mapping to authentication to form processing to <insert idea here>. That's what WSGI is all about, and that's why it should be an important part of your repertoire. What do I mean by that statement? Let me give you an example:

Suppose you're building a web application that collects information from users – but only from registered users. And let's further assume that you want to be able to maintain internal state across HTTP requests (e.g., so people don't have to keep on logging in to use the site). To meet these fundamental needs, you will probably need the following:

• something to map urls to internal functions
• an authentication mechanism
• something to do form processing
• and probably a lot more!

If you are using components that are WSGI compliant, then you're in great shape! Why? Well, because every WSGI component can be called through the same interface: function_name(<environ>, <start_response>).

If you'd like more background on WSGI, check out http://www.python.org/dev/peps/pep-0333/ which contains the PEP describing WSGI.
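To make that interface concrete, here is a minimal, self-contained WSGI application (a sketch with hypothetical names, not code from the article; note that PEP 333 also spells out details such as the exact types of the response iterable):

```python
def simple_app(environ, start_response):
    # Any callable with this (environ, start_response) signature is a
    # WSGI application, and can be wrapped by middleware or served directly.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    # The application returns an iterable of strings (the response body).
    return ['Hello from %s\n' % environ.get('PATH_INFO', '/')]
```

A server (or a piece of middleware) supplies the environ dictionary and the start_response callable; the application never creates them itself.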

With that bit of information in hand, it follows that you can integrate the various pieces of your components by building them one on top of another:

• The url mapper will accept <environ>, <start_response> and figure out what function to call based on the environment it is given as its first argument (more on that later). So the url mapper calls the function you have designated as accepting requests for this url.

• Since you want only authenticated users to be able to access this page, you can simply get a WSGI Authentication Middleware agent (or create one yourself - more on that later as well) and authenticate. If the user passes the test, continue on. Otherwise, send them to a log-in screen.

• Now that you've authenticated your user, you can continue to process the form they've submitted. Again, if this particular part of the framework is WSGI compliant, those functions will be accepting the same arguments as before, and the webpage from step 2 can pass this information along and let the "Forms Middleware" do its thing.

Moving forward, we will begin to build this form middleware we just spoke about in step 3 above, and we'll finish up by talking about using the information provided by the middleware with anonymous functions to build SQL statements that can be used in your web application.
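The stacking idea described in the steps above can be sketched as a chain of callables. This is a minimal illustration with hypothetical names (a real authentication middleware would of course do far more):

```python
def require_user(app):
    # Middleware factory: wraps a WSGI app in an authentication check.
    def middleware(environ, start_response):
        if environ.get('REMOTE_USER'):
            # Authenticated: hand the same two arguments to the inner app.
            return app(environ, start_response)
        start_response('401 Unauthorized', [('Content-Type', 'text/plain')])
        return ['Please log in\n']
    return middleware

def form_page(environ, start_response):
    # The innermost application; it would do the actual form processing.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['form goes here\n']

# Because every layer shares one interface, stacking is just composition.
protected_form_page = require_user(form_page)
```

Each layer accepts and forwards the same (environ, start_response) pair, which is exactly why the components are interchangeable.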

Building Middleware - Forms

So, now that we have a good understanding of what we're trying to accomplish and why, let's get on with it. To begin, we know that, to be WSGI compliant, we have to accept 2 arguments: an environment variable, and a start_response variable. The only one we'll need to be concerned with at this point is the former. The latter is used when you are all finished and ready to complete the request of the user, which we are not ready to do at this point. Remember, we are building a middleware component here and some other function will have to complete the request later on.

To begin, we are going to build our middleware out of some simple components:

• data types
• fields
• forms

Each piece will be built on top of (or out of) the previous pieces. For example, fields are built with the help of data types, and forms are built out of fields. So let's start with the simplest component: data types. Since we are assuming that your site is SQL based, this part of the puzzle should be fairly straightforward. Essentially, we want each data type to do two things:

• Validate values provided against values allowed.
• Provide a helpful error message if the value provided was no good.

Pretty easy, eh? Let's define a base abstract class (you can place it in Forms.py, but see below for the actual code):

class DataType(object):
    def __init__(self):
        pass
    def validates(self, value):
        raise NotImplementedError("Must subclass DataType")

Why define a base abstract class? Well, it helps ensure that all of our data types comply with the standard interface (e.g., if a subclass tried to ignore the "validates" method, our users wouldn't be able to ensure a value is valid). To see some standard data types that will be useful going forward, see Listing 1.

class Integer(DataType):
    def validates(self, value):
        try:
            int(value)
            return value
        except ValueError:
            raise ValueError("Must be whole number (eg, 100)")

class Varchar(DataType):
    def __init__(self, length=255):
        self.length = length
    def validates(self, value):
        if len(value) <= self.length:
            return value
        else:
            raise ValueError("Must be no longer than %d characters" % self.length)

LISTING 1

These are pretty simple, but you get the idea. You may even want to provide better error checking - for example, the Integer type will allow you to pass floats without complaining, but you might not want that. It all depends on what you'll be using the data for, but I'll leave it up to you to define more types and perhaps better error checking.

Essentially, what these two examples do is provide a service: they make sure that values passed to the form comply with certain data standards. Also, note that their 'validates' method will raise an exception if there is an error in processing the form data. That is, the value is untouched and we know the field validates or we get some kind of error that we can pass back up the chain on invalid data which we can't handle. This will become useful when we develop our fields and forms. The next step is to use the data types in a field. To see how that's defined, see Listing 2.

This simple (but very functional) class allows us to define a new field that can be used in a form. Once the form information is sent back to us, this class will do a lot of the legwork in validating that the information given to us is good. It will perform the following checks:

• If a field was required, was there a value provided?

• If a value was provided, does it comply with the expected type?

Note that checking for required fields is as simple as checking 'if self.required and not value_provided'. This is because Python is very flexible and adept at understanding True/False values in this type of environment. If a blank string was sent (''), 'not value_provided' will return True, thus kicking off an exception about the lack of an entry. The error message can be later used in returning feedback to the web user or can be used for internal purposes. Now you can begin to see why it was convenient to develop our datatype interface for the 'validates' method in a similar fashion. To verify the appropriateness of the value (after we've checked that it has been provided), all we have to do is run the validates method of the datatype and return that, as both methods subscribe to the same interface.
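That truthiness behavior is easy to verify on its own (a standalone demo, not article code):

```python
# Every value a browser submits for an untouched field is "falsy" in
# Python, so 'required and not value_provided' flags all of them at once.
for missing in ('', None):
    assert not missing
required = True
assert required and not ''         # blank: would raise "Can not be left blank"
# A real entry -- even the string '0' -- is truthy and passes the check.
assert not (required and not '0')
```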

Now we are beginning to see things come together. The final step in this puzzle is to develop a 'Form' object that can link the fields together, validate the form as a whole and maybe provide some nice helpers as well. Also, this is where we'll look into the "mysterious environ" variable that I've been referring to throughout this article. But first, you can define the class as shown in Listing 3.

And that's basically it! The class boils down to:

• Create the form with a list of fields, which we assume (although we do not check for it above explicitly) are instances of our Field object.

• Populate the form with the values provided in the form.

• Validate the values given.

Step 2 probably requires a bit more discussion, so we'll do that in the next section. However, but for that one exception, the class should be pretty easy to follow. We ensure that we are WSGI compliant with our 'validates' method by accepting both 'environ' and 'start_response' arguments and returning an iterable of strings (the errors dictionary). The validates method will check that appropriate values were provided throughout the form. We rely heavily on our 'Field' class as well as our 'DataType' class (although the latter is not self-evident from above, we know it to be the case). Note that we use a trick to return our values to the user: we rely on the fact that the caller of our function can test for errors simply by determining if they received the empty dictionary.

class Field(object):
    def __init__(self, name, datatype, required=False):
        self.name = name
        self.datatype = datatype
        self.required = required
    def validates(self, value_provided):
        if self.required and not value_provided:
            raise ValueError("Can not be left blank")
        elif value_provided:
            return self.datatype.validates(value_provided)
        else:
            return value_provided

LISTING 2

# Forms.py

import cgi

class Form(object):
    def __init__(self, fields):
        self._fields = fields
        self._values = None  # to be provided later

    def validates(self, environ, start_response=None):
        self._values = cgi.FieldStorage(fp=environ['wsgi.input'], environ=environ)
        errors = {}
        for field in self._fields:
            value = self.getvalue(field.name)
            try:
                field.validates(value)
            except Exception, e:
                errors[field.name] = e.args[0]
        return errors

    # The following 2 functions will be helpful later

    def fields(self):
        return self._fields

    def getvalue(self, fieldname, default=None):
        try:
            return self._values.getvalue(fieldname, default)
        except AttributeError:
            raise TypeError("Must populate the form before you can get values")

LISTING 3

So, the interface is simple: we return a mapping of the field names to any errors encountered from that field. If no errors are encountered, an empty dict is returned. Furthermore, each error encountered contains vital information to the caller: which field contained an error (i.e., the key of the dictionary), and a descriptive message (provided by the underlying classes) telling the end user what the problem with the field was.

You may not think that being WSGI compliant above is terribly important, but what if you are writing a giant framework or website instead of just looking at this one example? Knowing that each and every component you deal with complies with the same interface, enabling you to "just use the component for the purpose it serves", is compelling. Also, since we've made the 2nd parameter optional, anyone who knows about our interface can just call the function with the first argument and leave the 2nd blank. Those who would like to call it blindly without knowing the interface specifically can call it with the default WSGI arguments and all is well. Now that we've discussed the framework, let's move on to discuss in a little more detail the environ variable we so cleverly used with Python's built-in cgi module.

WSGI - The Environ Variable & CGI

Above, we created a form class that we'll be using later to process data received from a web user. Within that class we used Python's cgi module to give our Form class legs in terms of getting at the data the user sent to us via the form. So what exactly is in that 'environ' variable that is sent as part of every WSGI call? Well, as Ben Bangert (http://groovie.org/) so aptly put it:

"environ is merely a dict that's loaded with some common CGI environment variables"

So it's as simple as thinking of it as a dictionary with some predefined and available keys. To find out precisely which keys must be available (if it is truly WSGI compliant) see: http://www.python.org/peps/pep-0333.html#environ-variables. You'll note that as part of that list of required keys, there are keys specific to WSGI that must also be present. Of note is the 'wsgi.input' key, which should contain:

"An input stream (file-like object) from which the HTTP request body can be read. (The server or gateway may perform reads on-demand as requested by the application, or it may pre-read the client's request body and buffer it in-memory or on disk, or use any other technique for providing such an input stream, according to its preference.)"

(Taken directly from the WSGI PEP)

We also know from reading the cgi module's source that the FieldStorage class can be instantiated with a file pointer (fp, which defaults to sys.stdin) as well as an environment (environ, which defaults to os.environ). Since the environment we're given as part of the WSGI protocol contains such a file pointer and an environment variable, we pass them explicitly to the FieldStorage call. The cgi module takes over from there, and graciously provides us with a dictionary-like object that contains all the values sent by the user via the form!

import re

class Email(Varchar):
    email_pattern = re.compile('^([a-zA-Z0-9_.\-+])+@(([a-zA-Z0-9-])+\.)+([a-zA-Z0-9]{2,4})+$')

    def validates(self, value):
        value = super(Email, self).validates(value)

        # Further error checking specific to emails

        if self.email_pattern.match(value):
            return value
        else:
            raise ValueError("Must be a valid email (eg, '[email protected]')")

LISTING 4
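To see those two ingredients in isolation, here is a synthetic environ for a form POST. The decoding that cgi.FieldStorage performs is approximated with urllib.parse so the sketch stays self-contained (shown in Python 3 syntax, with made-up field values; under Python 2, the cgi module does this work for you):

```python
import io
import urllib.parse

# A minimal environ such as a WSGI server would build for a POST request.
body = b'first_name=Kevin&email=kevin%40example.com'
environ = {
    'REQUEST_METHOD': 'POST',
    'CONTENT_TYPE': 'application/x-www-form-urlencoded',
    'CONTENT_LENGTH': str(len(body)),
    'wsgi.input': io.BytesIO(body),  # the file-like object the PEP requires
}

# Read exactly CONTENT_LENGTH bytes from the stream, then decode the
# urlencoded pairs -- the same legwork FieldStorage automates for us.
raw = environ['wsgi.input'].read(int(environ['CONTENT_LENGTH']))
fields = urllib.parse.parse_qs(raw.decode('ascii'))
```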

How To Use the Form Class & Anonymous Functions to Process Data

We've now come to the point in this article where we have a WSGI component that can process and validate forms in a WSGI-compliant way. Note that if you wanted to, you could just as easily use another WSGI component that acted as middleware to process form submissions - again, that's the beauty of WSGI! But we'll use our own classes here because they are simple to use, easy to extend and well within the context of this article.

So, how do we use the middleware? Easy: let's assume that you are using a url mapper (as discussed above) such as Selector (http://lukearno.com/projects/selector/) and that you've mapped http://localhost:8080/my_form to a function in your application called 'process_my_form'. Let's further assume that on the 'my_form' HTML page, you are gathering information from your users (e.g., first name, last name and email so that you can send personalized email to everyone who visits your site). So the form portion of the HTML page might look like the following:

<form method="post" action="/my_form">
  <input type="text" name="first_name" />
  <input type="text" name="last_name" />
  <input type="text" name="email" />
  <input type="submit" />
</form>

Simple enough. Now, within the module that contains the 'process_my_form' function definition, you may want better error checking for email submissions than what is provided by the 'Varchar' datatype class we've defined above. Let's go ahead and extend the Varchar as shown in Listing 4.

You can see how easy it would be to do the same for phone numbers, unique fields, etc. Going into the explanation behind the regular expression above would be outside the scope of this article, but I can refer you to http://www.dustindiaz.com/update-your-email-regexp (which is where I think I grabbed it from in the 1st place). Now we have a data type that extends our original specification to check for valid emails. We'll use that to develop our Form instance as shown in Listing 5.

It is clear that defining our framework made things a lot easier (although we haven't built the 'process_updates' function yet - but we will). You'll notice that if there are errors, errors will evaluate truthfully (i.e., things are *not* okay) and will map the problematic field names to their error messages. For example, if a user were to provide an email similar to the following:

bad@hostcom

and all other fields (first and last name in our example) were fine, the resulting error dictionary would look like:

{'email': "Must be a valid email (eg, '[email protected]')"}

You could then use this to regenerate the form, letting them know that the email field contained a bad value and they need to fix it. Helpful error messages can go a long way in making things go as smoothly as possible. But we are still left to our own devices to generate the SQL used to take the data from the user and put it into the database. This is where we will begin to use anonymous functions (or "lambdas") to help us again. It might be overkill for the current example, but we'll move onto something more substantial once you've seen the technique in action.

import Forms

# Here is the function that will process every request coming to the
# '/my_form' url:
def process_my_form(environ, start_response):
    # You may want to make this a global variable so that it is computed only
    # once, instead of every time the function is called to respond to a url
    # request from the user.
    f = Forms.Form([
        Forms.Field('first_name', Forms.Varchar(50), required=True),
        Forms.Field('last_name', Forms.Varchar(75), required=True),
        Forms.Field('email', Email(), required=True)
    ])
    errors = f.validates(environ)
    if errors:
        # We'll assume you have defined a function that will show the form
        # with errors (and maybe re-populate the form with values the user has
        # already provided) in another function
        show_form_with_errors(errors)
    else:
        process_updates(f)
        # We'll also assume you have built a function to tell the user you
        # have succeeded in gathering the information.
        show_successful_submission_form()  # Success!

LISTING 5

def process_updates(form):
    curs = CONNECTION.cursor()
    sql_map = {
        'first_name': lambda value: ('first_name', value),
        'last_name': lambda value: ('last_name', value),
        'email': lambda value: ('email', value),
    }
    sql = "INSERT INTO user_table (%s) VALUES (%s)"
    columns = []
    values = []
    for field in form.fields():
        # Check if the value was provided by the user and add it to our lists
        # if it was
        value = form.getvalue(field.name)
        if value:
            name, val = sql_map[field.name](value)
            columns.append(name)
            values.append(val)
    sql = sql % (", ".join(columns), ", ".join(["%s"] * len(values)))
    # sql now equals "INSERT INTO user_table (first_name, last_name, email)
    # VALUES (%s, %s, %s)"
    curs.execute(sql, values)
    CONNECTION.commit()

LISTING 6

def search_for_contractor(environ, start_response):
    curs = CONNECTION.cursor()
    form = Forms.Form([
        Forms.Field('name', Forms.Varchar(50)),
        Forms.Field('city', Forms.Varchar(75)),
        Forms.Field('state', Forms.Varchar(20))
    ])
    errors = form.validates(environ)
    if errors:
        # tell user what to do
        return show_form_with_errors(errors)
    filters = []
    values = []
    for filterable_field in filter_map:
        val = form.getvalue(filterable_field)
        if val:
            what_to_do, val = filter_map[filterable_field](val)
            filters.append(what_to_do)
            values.append(val)
    # Now that we've processed all the values, let's build the SQL statement
    # ('sql' is the module-level base query shown in the article text)
    query = sql % (" And ".join(filters))
    # I'll also assume you know how to set up a cursor to execute the stmt
    curs.execute(query, values)
    results = curs.fetchall()
    start_response("200 OK", [('Content-Type', "text/html")])
    # Here we would return an iterable (e.g., a Cheetah Template) with the
    # values filled in from the "results" variable.
    return render_template(name='search_for_contractor', results=results)

LISTING 7

Lambda functions are Python's way of representing anonymous functions (of single expressions, at least). We also know that there is a standard protocol for entering values into our database (at least, there should be a standard protocol for entering values into your database). So one easy way to enter the information into the database is shown in Listing 6.
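For readers who haven't used them, a lambda is just a nameless function built from a single expression, and the dictionary-of-lambdas pattern is a dispatch table of such functions (a standalone demo, not article code):

```python
# These two definitions are equivalent:
pair = lambda value: ('first_name', value)

def pair_def(value):
    return ('first_name', value)

# Stored in a dict, lambdas become a dispatch table keyed by field name.
sql_map = {
    'first_name': lambda value: ('first_name', value),
    'email': lambda value: ('email', value),
}
column, bound_value = sql_map['email']('kevin')  # dispatch by field name
```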

Now, that's quite a mouthful! Essentially, what we tried to do was link everything together so that the only thing we would need to change if our table were to change is the 'sql_map' dictionary. In that dictionary, we stored a list of columns and where we would like them to go in the insert statement. So, if we decided to later add 'phone number' to our database table, all we would have to do is add:

'phone_number' : lambda value : ('phone_number', value),

to the end of our dictionary, and our code is updated automatically! As I mentioned before, this might be overkill for this example because it is somewhat trivial. But to see how it might work in a larger example, consider searching through your records to find something based on input provided by the end user. Let's say that you were creating a search form that the users could use to search for ratings left by other users (for an example, see http://www.portss.com/searchform). You have several search fields that each might or might not be provided by the end user (e.g., they may want to search for contractors who operate in Pennsylvania but they don't care about anything else). Your standard search SQL might look like the following (adapted, yet slightly modified from my work on the portss.com website):

sql = '''Select Distinct name, service, ... etc.
From contractors
Where %s'''

Pretty straightforward. But now you get into each of the additional filters that need to be applied, depending on the query sent by the end user. So we might set up a filter map as follows:

filter_map = {
    'name': lambda val: ('company LIKE %s', '%' + val + '%'),
    'city': lambda val: ('city LIKE %s', '%' + val + '%'),
    'state': lambda val: ('state = %s', val),
}

The 'LIKE' for name and city is an easy way of searching for values across the entire database for cases where the user may only know part of the name (e.g., they might know only the last name of the person they're looking for). The one current challenge is that our search is case-sensitive, but we'll come up with a way to deal with that in a minute. So how do we link these two items together? Just like we did before - see Listing 7 for how to do it.
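The linking step itself is small enough to try in isolation. This sketch (with made-up submitted values) shows how the (clause, value) pairs returned by the lambdas become a WHERE clause:

```python
filter_map = {
    'name':  lambda val: ('company LIKE %s', '%' + val + '%'),
    'city':  lambda val: ('city LIKE %s', '%' + val + '%'),
    'state': lambda val: ('state = %s', val),
}

submitted = {'name': 'smith', 'state': 'PA'}  # fields the user filled in
filters, values = [], []
for field in sorted(submitted):               # sorted for a stable order
    clause, bound = filter_map[field](submitted[field])
    filters.append(clause)
    values.append(bound)

sql = 'Select Distinct name From contractors Where %s' % ' And '.join(filters)
# The %s placeholders are left for the database driver to bind 'values'.
```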

Now you can begin to see how this might be more extensible than putting in a ton of 'if this_value exists, add this clause to the sql statement' and figuring out where to put the 'AND'. For example, if you checked everything manually you would need to figure out if something had been added, then add the 'and' onto the front and continue with the other filter, otherwise just show the filter. Furthermore, if you add additional filters to the HTML form, the only thing you need to change (again!) is the filter_map. It turns out to be very convenient to do this when building these kinds of applications. You can also see why we return the string and the 'val' as part of each of the lambda functions shown above - it makes it easy to transform the value before putting it into our SQL. In the example above, we were able to transform the string to utilize SQL's "LIKE" statement. But we still had the problem of case-sensitive searches. What if instead you decided to store all names in your database (e.g., first names, last names, etc.) in lowercase? Maybe you decided this after your search didn't seem to be working for certain cases (e.g., the name 'Kevin' was in the database, but the user was searching for 'kevin'). Well, in that case you can update your database, changing all values to lower case. For example, in PostgreSQL this can be accomplished as follows:

Update my_table
Set first_name = lower(first_name),
    last_name = lower(last_name),
    ... etc.

Then you could change your lambdas to the following:

'first_name' : lambda value : ('first_name = %s', value.lower()),

And everything would be set! Now the query will return correct results even when the user types something in all lowercase (or uppercase, etc.) as all values provided by the user will be converted in your anonymous function and the query will update itself. Again, the only thing you would have to change is the filter_map and you'd be good to go. You still might have to figure out how to present the information, but that's the subject of another discussion.
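The normalizing lambda can be exercised on its own (a tiny demo with a made-up value):

```python
# Lowercasing inside the lambda keeps the fix in exactly one place: the
# value is normalized on its way into the query, matching the lowercased
# column data.
filter_map = {
    'first_name': lambda value: ('first_name = %s', value.lower()),
}
clause, bound = filter_map['first_name']('Kevin')
```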

Conclusion

To conclude, let's briefly recap what we've seen:

• abstracting out the form functionality is useful in increasing code reuse;

• doing so in a WSGI compliant fashion is easy and smart, because it is now interchangeable with other WSGI compliant form components without changing any of your code;

• using anonymous functions to process forms is easy and makes your code very maintainable. It also helps keep the logic all in one place, so it's very easy to update as well.

I hope this article gave you a flavor of what it means to be WSGI-compliant middleware and that it will help you in developing future websites. Good luck!

Kevin T. Ryan is a CPA by day, programmer by night. He has successfully been able to integrate his passion (programming) into his work (accounting) by using data mining as the bridge. He has also created a website to help people find contractors (e.g., plumbers, electricians, etc.) at http://www.portss.com.


Page 33: Py Mag 2007 10

So, the stock distribution of Python isn't good enough for you, hmm? Well, that's not too surprising - it wasn't good enough for me, either. Naturally, I decided to Do Something about it - I taught Python a few new tricks by writing a new module specifically for what I wanted to do - and it did indeed make my life easier!

I'm going to create a new module that duplicates functionality already available in Python modules as an example, so please forgive the seeming duplication of effort. It's easier to make sense of things using "Hello, World!" examples.

Basic Module Requirements and Setting up the Environment

First, some basic requirements for writing any new Python module: you need a compiler available, and Python development headers that you can compile against. In the Fedora Core Linux world, which is the world I live in, that consists of standard development tools like GCC and make, and the python-devel package.

Specifically, the environment I've set up is using a default installation of Fedora 7. Getting the right environment after the base OS install does take some work, but I'll show you the commands I used to get there. Once the default OS is installed, I added the "rpmdevtools" package with the command "yum install rpmdevtools", which I use for Fedora packaging. This package required the 'fakeroot' package be installed for dependencies, and also required updates on the following packages (again for dependencies):

• elfutils
• elfutils-libelf
• elfutils-libelf-devel
• elfutils-libs
• popt
• rpm
• rpm-build
• rpm-devel
• rpm-libs
• rpm-python

FEATURE

Extending Python: Using C to Make Python Smarter

by John Berninger

So you need to do something in Python, but all you have available is a C library API to deal with the actual data? Not to worry - Python can easily be extended to work with that API. Just goes to show you, sometimes you can teach an old dog new tricks!

REQUIREMENTS

PYTHON: 2.5

Other Software:

• gcc, make, and standard build environment tools
• Python header files for the version you're building against

Now that these packages are installed, I removed a series of superfluous -devel packages. This was mostly to ensure a clean RPM build environment, and is not directly related to Python extension development, so it's not absolutely required. I suggest you remove as many -devel packages as you feel comfortable removing, however. When all was said and done, I had only the following -devel packages left:

• libstdc++-devel-4.1.2-12
• python-devel-2.5-12.fc7
• glibc-devel-2.6-3
• perl-devel-5.8.8-18.fc7

The libstdc++ and glibc devel packages are required by gcc, so we can't remove those. The python-devel package is the one we're interested in, so removing it would defeat our purpose here. We could probably remove the perl devel package and its dependencies, but I chose not to simply because I tend to leave perl alone - the OS is too dependent on perl and python for me to be completely comfortable removing packages that I'm not sure of.

The last part of the process will be installing libraries or -devel packages required for the extensions you will be writing. Since our examples here will be very simplistic (i.e., they won't be making any library calls), we won't need any additional -devel packages installed (or reinstalled, as the case may be).

Starting Development

Once you have all those prerequisites installed, you can start developing your module using C code. You'll need to pull in the Python.h header file to get the Python module definitions, like so:

#include <Python.h>

One of the first things you'll want to do then is to declare a static pointer to a Python error object. As everyone reading this knows, programmers make mistakes. They call functions with incorrect parameters, or the wrong number of parameters, or whatever. We need a nice way to tell them they made a mistake, and that's done with the error object:

static PyObject *ErrorObject;

This will ensure that this error/exception object is unique to your module; although not strictly necessary, it is considered impolite to pollute someone else's error space.

Aside: Why not use a binding generator?

There are many programs out there that are designed to take a library interface in one language and create bindings for it in another language. Python is no exception, so you might be wondering why we're going about doing this "the hard way" by writing all of our binding code ourselves, and not letting a generator do all the heavy lifting.

To be perfectly honest, for many purposes, a generator will work just fine. Most of them are designed to give you as close to an identical interface in the target language as was found in the source language, and most of them do exactly that with a minimum of fuss and bother. Just because you can use a generator and give yourself Python bindings, however, doesn't always mean you'll understand what goes into the bindings.

I'm a firm believer in understanding what's happening instead of relying on other people who try to tell me "Don't think about this, just let me work my magic and you'll be fine." Well, I'm like that with computer things anyway - my car's engine is a black box to me, and will likely stay that way until the heat death of the universe. But with computers, and programming in particular, I want to know what's inside that black box.

I also think it's important to understand how to write your own bindings in case the generator either doesn't work the way you expect it to, or you need to make your bindings do something the generator doesn't understand. In either of those cases, you'll need to be able to get down and dirty in the C code itself and figure out what makes the module tick, and what to turn sideways to get it to tock, as well as tick.

"Passing parameters to functions in a Python script is done in the same way as in a C program..."

So, are generators or translators useful? Absolutely. Are they applicable to your needs? Probably, but they might not always be - and it's those few times when they're not applicable that you're going to desperately need something that will let you finish the project by noon tomorrow for a presentation to the Board of Directors just before the big company-wide rollout announcement. So by all means, find the generators and translators and what-nots. But please, first understand what they're doing to you and for you.

Making A Python-callable function

All Python-callable functions are declared static and return a pointer to a PyObject. They take two parameters, both pointers to a PyObject, with the first being a reference to the module (or object) the routine is attached to and the second being the argument list. Condensing all of that down to the actual function declaration, we get this:

static PyObject *myfunc(PyObject *self, PyObject *args)

You should always name your routines something descriptive - this is just as true in C as in Python (or in any other language). The "myfunc" name above should generally not be used unless (like here) you're just giving examples. Always pick names that make sense for the module you're writing and for what the given function is doing.

Now for a quick diversion into 'self-documenting' code. We all know there's no such thing as true self-documenting code, but there is a really easy way to make documentation strings available to Python interpreters for your new module. For any given routine, creating a documentation string is as easy as declaring another variable with a specially-formatted name, like so:

static char myfunc__doc__[] = "This is the documentation string for the Python function myfunc()\n";

So not only do you have information available for the end user making use of your module inside of a Python interpreter, you have some documentation to remind you of just what the heck you were smoking when you initially wrote this function. This is especially helpful when you have to go back and rewrite a module 2 years after forgetting all about it.

Most modules will place the documentation string immediately above the function it's describing, but this is just convention - the string definition can be anywhere in relation to the function definition except inside the function itself.

For Our Next Trick, A Function That Does Work

So now that we understand how to define functions, let's write one that actually does something, then examine it in detail. See Listing 1 for the fully functional code.

The first thing we see is the documentation string. This tells us what the function will be doing: determining if a given number is even or odd. Yes, this is a really trivial function that's already available in Python - I did this deliberately so I could teach concepts and not have to worry about teaching behavior.

Next, we see the function definition exactly as above, save for the function name. After that we have declarations for a couple of variables we'll use inside the function. Pretty generic C code so far. The next line is where things get interesting. Since we're being called from Python, our parameter list is a pointer to a Python object. We need an integer to work with, so we need to do some translation work - this is what the PyArg_ParseTuple routine does for us. The three parameters to it are, in order, the Python object that we are parsing, the expected format of the argument (the single 'i' representing a single integer in this case), and the location of the variable we want the parsed object stored in.

If parsing is successful, the function returns a non-zero value; if unsuccessful, it returns zero. The negated if traps the failure, allowing us to handle the argument-parsing error gracefully. In this case, we set an error string on the appropriate exception object (PyExc_ValueError, which translates into a Python ValueError exception), giving a descriptor string that the interpreter will display with the exception. We signal the error by returning NULL to our caller, which the interpreter handles as an exception.

static char isEven__doc__[] = "Determines if a number is even - if so, returns '1' (for TRUE), else returns 0 (for FALSE)\n";

static PyObject *isEven(PyObject *self, PyObject *args) {
    int inputValue;
    int returnValue;

    if (!PyArg_ParseTuple(args, "i", &inputValue)) {
        PyErr_SetString(PyExc_ValueError, "Argument parsing error");
        return NULL;
    }

    if ( 0 == inputValue%2 ) {
        returnValue=1;
    } else {
        returnValue=0;
    }

    return Py_BuildValue("i", returnValue);
}

LISTING 1

Once we've successfully parsed the argument, it's time to do the actual work. Determining if a number is even is a simple matter, so we perform the test and store the result that we'll want to hand back to the calling Python interpreter in the returnValue variable.

The last line of the function is also very Python-esque. We want to return a Python object, not a simple integer, so we have to create that object via the Py_BuildValue() function. The first parameter is the format of the object, again a single integer in this case, then we see a list of sufficient variables to build the described object. This works much like a printf() or scanf() call - the number of parameters after the object structure must be exactly equal to the number of items within the object structure.
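For comparison, the behavior of Listing 1 can be mirrored in a few lines of pure Python. This is only a sketch of the same logic for reference, not part of the C module:

```python
def is_even(number):
    # Mirrors the C isEven(): returns 1 for an even input, 0 for odd
    if number % 2 == 0:
        return 1
    return 0

print(is_even(4))  # 1
print(is_even(7))  # 0
```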

Aside: Py Arg Parse What?

Passing parameters to functions in a Python script is done in the same way as in a C program - in C, you could have something like this:

val = getFuncVal(42);

This would be passing the integer value "42" to the function getFuncVal, and returning another value to be placed in the variable 'val'. Likewise, we see virtually the same call in Python, with the exception of the semicolon:

val = getFuncVal(42)

So why, exactly, do we need to parse a tuple when we want to look at parameters in our new function? Aren't we just passing integers and strings?

As it turns out, we're passing in something a bit more complex - we're passing in a Python object, which has Python information wrapped around the actual integer value we want to work with. In the case of integers, the extra information is limited to a reference count, which the Python interpreter uses to determine when to garbage collect a given object. If the reference count is 1 or greater, the object is considered in use and not garbage collected. If the reference count is 0, it is considered 'free' and is garbage collected, and the memory it was using is returned to the interpreter for later use. In the case of strings, not only is there a reference count, but there's also a string length. Unlike C, Python strings are not null-terminated, so without that string length information there's no reliable way to determine where the string is supposed to end.
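The wrapping described above can be observed from Python itself. A quick illustration - the exact count reported varies by interpreter version and isn't meaningful beyond "greater than zero":

```python
import sys

value = 123456789  # a fresh integer object
# sys.getrefcount() always reports at least one reference - the one
# held by its own argument - so a live object shows a positive count
print(sys.getrefcount(value) > 0)

text = "hello"
# Python strings carry an explicit length instead of a null terminator
print(len(text))
```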

The PyArg_ParseTuple() function is what strips off the extra Python information and makes the actual parameter available in its "bare" C (or C++) form - a plain integer, or a plain null-terminated string. This function becomes especially important when we start passing Python tuples, lists, and dictionaries to our C functions - we need that function to help us tell the C DSO (dynamic shared object) how to translate the list into an array, or the dictionary into a struct, or how to simply disassemble the tuple into its component strings, integers, and floating point numbers.

Aside Two: How Does Py_BuildValue Do That?

The Py_BuildValue() function is a fairly complex beast - it has to convert C or C++ objects into Python objects. In the Python source code, this routine works by calling a series of helper routines - it takes the parameters passed and uses them as a format string and value list as mentioned previously. It then looks at each item of the format string, creates a PyObject from the corresponding item on the value list, and appends it to the main PyObject being built. It does so recursively over the format string, figuring out what sort of object to build, and building it. The critical function at the bottom of the recursion stack, which gets called for each "singleton" member of the object being built, is do_mkvalue(). This function is effectively a gargantuan switch statement which decides what low-level converter function to use at a given point in the format string, such as PyString_FromStringAndSize(), PyFloat_FromDouble(), or PyInt_FromLong(). This isn't an exhaustive list of the low-level converter functions by any means - that list can be found in the online Python documentation.

Each of those low-level converter functions, in turn, takes the original object, initializes a Python object, copies the original item's value into the new Python object's value, increments the reference count on the new object, and then returns the new PyObject to the caller. These low-level converter functions are also accessible from your module, if you wish to simply return a single value and not use the Py_BuildValue() call.

A More Complex Function - Or Two

In Listing 2, we see the definition of another basic math function, one that takes a single number and returns the factorial of that number. The actual function call available to the Python interpreter is just as simple as our first function, but instead of a single yes/no test, we see a call to a helper function. We also see a return using PyInt_FromLong() versus Py_BuildValue(), but that is merely a cosmetic difference, as we've seen above.

The important question here is why we used a helper function for the recursive call versus simply re-calling the getFactorial() function. The answer is remarkably simple - re-calling getFactorial would involve creating new Python objects from the interim results prior to each recursive call, and would also involve all the additional computation and memory overhead of storing and parsing those Python objects. Since we don't want to waste valuable computing resources, we simply made a helper function that deals with the C objects and variables natively.
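For reference, the helper's logic corresponds to this pure-Python version, mirroring the C code in Listing 2 exactly - including its choice to return 0 for non-positive input, even though mathematically 0! is 1:

```python
def factorial_helper(factor):
    # Mirrors the C factorialHelper(): 0 for non-positive input,
    # otherwise the usual recursive factorial
    if factor <= 0:
        return 0
    if factor == 1:
        return 1
    return factor * factorial_helper(factor - 1)

print(factorial_helper(5))  # 120
```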

One Plus One Equals...

Now we have to tell the main Python interpreter what our module can do. We do this by creating a method definition table, which is exactly what the name would seem to imply - a table listing all the methods that we want to make available to the interpreter.

The table is a static struct PyMethodDef, so for the previous examples to be available we would have the following method table:

static struct PyMethodDef testmodule_methods[] = {
    { "isEven", isEven, METH_VARARGS, "Determine odd/even of a number" },
    { "getFactorial", getFactorial, METH_VARARGS, "Calculate the factorial value of a number" },
    { NULL, NULL, 0, NULL }
};

Let's look at that a bit more closely. We have two functions we're making available, but three entries in the table. The last entry is a sentinel entry, and it must consist of {NULL, NULL, 0, NULL} to properly terminate the table for the Python interpreter.

Each entry in this table consists of four items. The first is the name of the function as it will be called inside the Python interpreter. The second is the name of the C function as defined in the C source for the module. The third entry tells us how parameters will be passed - the possible values are METH_VARARGS, METH_KEYWORDS, METH_VARARGS | METH_KEYWORDS, and 0. You should always use METH_VARARGS or METH_VARARGS | METH_KEYWORDS unless you really know what you're doing. The fourth parameter is simply a description of the function.

Aside: Varargs? Keywords? Whazzat?

When determining how to pass parameters to your module functions, you will most often use the METH_VARARGS flag in the function table. This means that the parameters will be passed as a Python tuple, which can be parsed with the PyArg_ParseTuple() function. A flag of 0 for this parameter means that an obsolete version of the PyArg_ParseTuple() function is used - this should be avoided if for no other reason than to ensure your module complies with current best practices for Python modules.

Using the "METH_VARARGS | METH_KEYWORDS" flag makes things much more interesting - you get vastly increased flexibility in how you call the module function, at the price of a more complex function call on the Python side. In this case, the C function should accept a third argument, again a pointer to a PyObject, which will be a dictionary of keywords. Additionally, you will have to parse the arguments with the PyArg_ParseTupleAndKeywords() function as opposed to the simpler PyArg_ParseTuple() function.

... Four?!?

There's one final routine we have to write to finish out the module - the initialization routine. This is the only non-static function in the entire module, so it'll look a bit different. Let's look at it now:

PyMODINIT_FUNC inittestmodule(void) {
    PyObject *m, *d;

    m = Py_InitModule("testmodule", testmodule_methods);

    d = PyModule_GetDict(m);
    ErrorObject = Py_BuildValue("s", "testmodule module error");
    PyDict_SetItemString(d, "error", ErrorObject);

    if (PyErr_Occurred())
        Py_FatalError("Can't initialize module testmodule!");
}

static char getFactorial__doc__[] = "This module takes a number as parameter and returns the factorial of that number\n";

static PyObject *getFactorial(PyObject *self, PyObject *args) {
    int inputValue;
    int resultValue;

    if (!PyArg_ParseTuple(args, "i", &inputValue)) {
        PyErr_SetString(PyExc_ValueError, "Argument parsing error");
        return NULL;
    }

    resultValue = factorialHelper(inputValue);
    return PyInt_FromLong(resultValue);
}

int factorialHelper(int factor) {
    if ( factor <= 0 ) {
        return 0;
    }
    if ( factor == 1 ) {
        return 1;
    }
    return factor*factorialHelper(factor-1);
}

LISTING 2


Okay, so there's a lot of stuff we haven't seen yet in there. Basically, what we're doing is initializing the module, and handling any error that may have occurred (hopefully, no error occurred!).

The PyMODINIT_FUNC is another way of saying "void" for C, adding any special linkages required by the platform we're going to compile under, and in C++ making it 'extern "C"'. You could probably just use "void" as the return type of the init function, but let's be thorough and use PyMODINIT_FUNC. The Py_InitModule call takes the name of the module, testmodule, and the method table definition, testmodule_methods, as parameters. It does all the black magic of making the member functions available to the Python interpreter. The rest of the code above is strictly optional - all it does is look to see if there was a problem initializing the module - but I tend to include it since I like error checking.

Error Checking: A Closer Look

So you want to do the Right Thing and do error checking at module initialization time. Excellent - a good habit to be in. Now you're wondering just what all that extra stuff is doing and when I'll get around to explaining it - the answer to the second part is "right now".

The first statement assigns a pointer to a PyObject with the result of calling PyModule_GetDict(). In the Python interpreter, each loaded module has an associated dictionary of function names and meta-information about that module. What we're doing here is grabbing a handle on that dictionary. Next, we build up a Python object using Py_BuildValue, and assign it to the ErrorObject object we declared way back at the beginning of the module. This is the object that will hold the text string that will get sent to the STDOUT of the interpreter if there was an error initializing the module.

static struct PyMethodDef testmodule_methods[] = {
    { "isEven", isEven, METH_VARARGS, "Determine odd/even of a number" },
    { "getFactorial", getFactorial, METH_VARARGS, "Calculate the factorial value of a number" },
    { NULL, NULL, 0, NULL }
};

void inittestmodule() {
    PyObject *m, *d;

    m = Py_InitModule("testmodule", testmodule_methods);

    d = PyModule_GetDict(m);
    ErrorObject = Py_BuildValue("s", "testmodule module error");
    PyDict_SetItemString(d, "error", ErrorObject);

    if (PyErr_Occurred())
        Py_FatalError("Can't initialize module testmodule!");
}

LISTING 3

#include <Python.h>
#include <unistd.h>
#include <stdlib.h>
#include <sys/types.h>

static PyObject *ErrorObject;

static char isEven__doc__[] = "Determines if a number is even - if so, returns '1' (for TRUE), else returns 0 (for FALSE)\n";

static PyObject *isEven(PyObject *self, PyObject *args) {
    int inputValue;
    int returnValue;

    if (!PyArg_ParseTuple(args, "i", &inputValue)) {
        PyErr_SetString(PyExc_ValueError, "Argument parsing error");
        return NULL;
    }

    if ( 0 == inputValue%2 ) {
        returnValue = 1;
    } else {
        returnValue = 0;
    }

    return Py_BuildValue("i", returnValue);
}

static char getFactorial__doc__[] = "This module takes a number as parameter and returns the factorial of that number\n";

static PyObject *getFactorial(PyObject *self, PyObject *args) {
    int inputValue;
    int resultValue;

    if (!PyArg_ParseTuple(args, "i", &inputValue)) {
        PyErr_SetString(PyExc_ValueError, "Argument parsing error");
        return NULL;
    }

    resultValue = factorialHelper(inputValue);
    return PyInt_FromLong(resultValue);
}

int factorialHelper(int factor) {
    if ( factor <= 0 ) {
        return 0;
    }
    if ( factor == 1 ) {
        return 1;
    }
    return factor*factorialHelper(factor-1);
}

static struct PyMethodDef testmodule_methods[] = {
    { "isEven", isEven, METH_VARARGS, "Determine odd/even of a number" },
    { "getFactorial", getFactorial, METH_VARARGS, "Calculate the factorial value of a number" },
    { NULL, NULL, 0, NULL }
};

void inittestmodule() {
    PyObject *m, *d;

    m = Py_InitModule("testmodule", testmodule_methods);

    d = PyModule_GetDict(m);
    ErrorObject = Py_BuildValue("s", "testmodule module error");
    PyDict_SetItemString(d, "error", ErrorObject);

    if (PyErr_Occurred())
        Py_FatalError("Can't initialize module testmodule!");
}

LISTING 4

The next call associates the ErrorObject with the module by setting the ErrorObject to be the value of the item 'error' in the module's dictionary. Since that might be hard to follow (I know it was hard for me to write out), I'll try to explain by using code-like variable representations. Initially, we can imagine the module dictionary as being in the following form:

testmodule: {
    'name' => 'testmodule';
    'size' => '4 functions';
    'author' => 'jwb';
}

The actual dictionary wouldn't look anything like that, but that will serve the purposes of this illustration. After we return from the PyDict_SetItemString() call, our dictionary would look like this:

testmodule: {
    'name' => 'testmodule';
    'size' => '4 functions';
    'author' => 'jwb';
    'error' => ErrorObject;
}

Once we've associated the ErrorObject with the module dictionary, we simply check to see if the Py_InitModule() call generated an error. To do so, we call the PyErr_Occurred() function, which returns zero if no error occurred, and non-zero if there was an error. If there was, we issue a call to Py_FatalError(), which causes the interpreter to remove the module from its current namespace and prints the message we passed to that function along with the error string we associated with the ErrorObject.

Mix thoroughly, bake at 350, allow to cool, and serve

Now we just have to put all the pieces together into a single file such as in Listing 4. Once this is done, we can compile the module into a .so, drop it into the proper directory for the distribution you're using (for Fedora 7, this would be /usr/lib/python2.5/site-packages/), and start using the module. It's just that simple!

Of course, we first have to know how to compile the .so. In its most basic form, this is two commands - the first one compiles the source to an object (.o) file using GCC. For our example, we would do the following:

$ gcc -I /usr/include/python2.5 -c listing4.c

This causes the listing4.c program to be compiled to object format in the listing4.o file. The -I tells the compiler to search in /usr/include/python2.5 for included header files, which we need in order to find the Python.h file and include its definitions. The second command turns it into a shared object suitable for dynamic loading via a dlopen() call. Again with our example, we do the following:

$ ld -shared -lpython2.5 listing4.o -o listing4.so

The -shared option tells the linker to create a shared library as opposed to an executable, the -lpython2.5 tells the linker to also link in the libpython2.5.so shared library, and the -o tells the linker what filename to write - the default is a.out, which is usually not ideal.

Once you have that .so, that's what you drop into the /usr/lib/python2.5/site-packages directory. Using autotools, or even just an RPM spec file, will involve a slightly more complex compilation process, but ultimately the added complexity is just window dressing to what really needs to happen.

John Berninger is a senior Linux systems administrator at Gilbarco Veeder-Root in Greensboro, NC. He's been doing Linux and Unix for far too long to want to be reminded of that number of years, including serving hard time as a Red Hat Consultant on Wall Street. He enjoys getting away from computers via photography and SCUBA diving.



REQUIREMENTS

PYTHON: 2.2+

Other Software: Python 2.5 or ElementTree Module

Useful/Related Links:

• http://effbot.org/zone/element-index.htm

• http://effbot.org/zone/element-index.htm#installation

• http://docs.python.org/dev/whatsnew/whatsnew25.html

• http://effbot.org/zone/pythondoc-elementtree-ElementTree.htm#elementtree.ElementTree.XML-function

• http://effbot.org/zone/pythondoc-elementtree-ElementTree.htm#elementtree.ElementTree.parse-function

• http://docs.python.org/lib/module-xml.etree.ElementTree.html

XML is everywhere. It seems you can't do much these days unless you utilize XML in one way or another. Fortunately, Python developers have a new tool in their standard arsenal: the ElementTree module. This article aims to introduce you to reading, writing, saving, and loading XML using the ElementTree module.

by Mark Mruss

Welcome to Python

Elegant XML parsing using the ElementTree Module

Almost everyone needs to parse XML these days. They're either saving their own information in XML or loading someone else's data. This is why I was glad to learn that as of Python 2.5, the ElementTree XML package has been added to the standard library in the xml module.

What I like about the ElementTree module is that it just seems to make sense. This might seem like a strange thing to say about an XML module, but I've had to parse enough XML in my time to know that if an XML module makes sense the first time you use it, it's probably a keeper. The ElementTree module allows me to work with XML data in a way that is similar to how I think about XML data.

A subset of the full ElementTree module is available in the Python 2.5 standard library as xml.etree, but you don't have to use Python 2.5 in order to use the ElementTree module. If you are still using an older version of Python (1.5.2 or later) you can simply download the module from its website and manually install it on your system. The website also has very easy to follow installation instructions, which you should consult to avoid issues while installing ElementTree.

In general, the ElementTree module treats XML data as a list of lists. All XML has a root element with zero or more child elements. Each of those subelements may in turn have subelements of their own. Let's look at a brief example.

Here's a look at some sample XML data:

<root><child>One</child><child>Two</child></root>

Here we have a root element with two child elements. Each child element has some text associated with it, seen here as "One" and "Two". Visualizing the XML as a list of lists, or a multidimensional array, you'll see that we have a "root" list, which contains a "child" list. Not too complicated so far, is it?

Reading XML data

Now let's use the ElementTree package to parse this XML and print the text data associated with each child element. To start, we'll create a Python file with the contents shown in Listing 1.

This is basically a template that I use for many of my simple "*.py" files. It doesn't actually do anything except set up the script so that when the file is run, the main method will be executed. Some people like to use the Python interactive interpreter for simple hacking like this. Personally, I prefer having my code stored in a handy file so I can make simple changes and re-run the entire script when I am just playing around.

The first thing that we need to do in our Python code is import the ElementTree module:

from xml.etree import ElementTree as ET

Note: If you are not using Python 2.5 and have installed the ElementTree module on your own, you should import the ElementTree module as follows:

from elementtree import ElementTree as ET

This will import the ElementTree section of the module into your program aliased as ET. However, you don't have to import ElementTree using an alias; you can simply import it and access it as ElementTree. Using ET is demonstrated in the Python 2.5 "What's new" documentation[1] and I think it's a great way to eliminate some keystrokes.

Now we'll begin writing code in the main method. The first step is to load the XML data described above. Normally you will be working with a file or URL; for now we want to keep this simple and load the XML data directly from the text:

element = ET.XML("<root><child>One</child><child>Two</child></root>")

The XML function is described in the ElementTree documentation as follows: "Parses an XML document from a string constant. This function can be used to embed "XML literals" in Python code"[2].

Be careful here! The XML function returns an Element object, and not an ElementTree object as one might expect. Element objects are used to represent XML elements, whereas the ElementTree object is used to represent the entire XML document. Element objects may represent the entire XML document if they are the root element but will not if they are a subelement. ElementTree objects also add "some extra support for serialization to and from standard XML."[3] The Element object that is returned represents the <root> element in our XML data.
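The distinction is easy to check interactively. A small sketch using the standard xml.etree package:

```python
from xml.etree import ElementTree as ET

element = ET.XML("<root><child>One</child></root>")
print(type(element).__name__)    # Element, not ElementTree

# Wrapping the root Element produces a document-level ElementTree object
tree = ET.ElementTree(element)
print(type(tree).__name__)       # ElementTree
```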

Thankfully, the Element object is iterable, so we can use a for loop to loop through its child elements:

for subelement in element:

This will give us all of the child elements in the root element. As mentioned earlier, each element in the XML tree is represented as an Element object, so as we iterate through the root element's child elements we are getting more Element objects. Each iteration will give us the next child element as an Element object until there are no more children left. To print out the text associated with an Element object we simply have to access the Element object's text attribute:

for subelement in element:
    print subelement.text

To recap, have a look at the code in Listing 2. Running the code should produce the following output:

One
Two

#!/usr/bin/env python

def main():
    pass

if __name__ == "__main__":
    main()

LISTING 1

#!/usr/bin/env python

from xml.etree import ElementTree as ET

def main():
    element = ET.XML("<root><child>One</child><child>Two</child></root>")
    for subelement in element:
        print subelement.text

if __name__ == "__main__":
    # Someone is launching this directly
    main()

LISTING 2


If an XML element does not have any text associated with it, like our root element, the Element object's text attribute will be set to None. If you want to check whether an element has any text associated with it, you can do the following:

if element.text is not None:
    print element.text
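Element objects also behave like the lists they model: they support len() and indexing, which is another way to reach the child elements. A small sketch:

```python
from xml.etree import ElementTree as ET

root = ET.XML("<root><child>One</child><child>Two</child></root>")
print(len(root))       # number of child elements: 2
print(root[0].tag)     # child
print(root[1].text)    # Two
```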

Reading XML Attributes

Let's alter the XML that we are working with to add attributes to the elements and look at how we would parse that information.

If the XML uses attributes in addition to (or instead of) inner text, they can be accessed using the Element object's attrib attribute. The attrib attribute is a Python dictionary and is relatively easy to use:

def main():
    element = ET.XML('<root><child val="One"/><child val="Two"/></root>')
    for subelement in element:
        print subelement.attrib

When you run the code you get the following output:

{'val': 'One'}
{'val': 'Two'}

These are the attributes for each child element stored in a dictionary. Being able to work with an XML element's attributes as a Python dictionary is a great feature and fits well with the dynamic nature of XML attributes.
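Individual attributes can be read with normal dictionary operations, and Element also offers a get() shortcut. A quick sketch:

```python
from xml.etree import ElementTree as ET

element = ET.XML('<root><child val="One"/><child val="Two"/></root>')
for subelement in element:
    # attrib["val"] raises KeyError if absent; get() returns None instead
    print(subelement.attrib["val"], subelement.get("val"))
```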

Writing XML

Now that we've tried our hand at reading XML, let's try creating some. If you understand the reading process, you should have no trouble understanding the creation process because it works in much the same manner. What we are going to do in this example is recreate the XML data that we were working with above.

The first step is to create our <root> element:

#create the root <root>
root_element = ET.Element("root")

After this code is executed, the variable root_element is an Element object, just like the Element objects that we used earlier to parse the XML.

The next step is to create the two child elements. There are two ways to do this.

In the first method, if you know exactly what you are creating, it's easiest to use the SubElement method, which creates an Element object that is a subelement (or child) of another Element object:

#create the first child <child>One</child>
child = ET.SubElement(root_element, "child")

This will create a <child></child> Element that is a child of root_element. We then need to set the text associated with that element. To do this we use the same text attribute that we used in the first parsing example. However, instead of simply reading the text attribute we set its value:

child.text = "One"

The second approach to creating a child element is to create an Element object separately (rather than as a subelement) and append it to a parent Element object. The results are exactly the same - this is simply a different approach that may come in handy when creating your XML, or working with two sets of XML data.

First we create an Element object in the same way that we created the root element:

#create the second child <child>Two</child>
child = ET.Element("child")
child.text = "Two"

This creates the child Element object and sets its text to "Two". We then append it to the root element:

#now append
root_element.append(child)

Pretty simple! Now, if we want to look at the contents of our root_element (or any other Element object for that matter) we can use the handy tostring function. It does exactly what its name suggests: it converts an Element object into a human readable string.

#Let's see the results
print ET.tostring(root_element)

To recap, have a look at the code in Listing 3. When you run this code you will get the following output:

<root><child>One</child><child>Two</child></root>

Writing XML attributes

If you want to create the XML with attributes (as illustrated in the second reading example), you can use the Element object's set method. To add the val attribute to the first element, use the following:

child.set("val","One")

You may also set attributes when you create Element objects:

child = ET.Element("child", val="One")
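Both styles produce the same result. Here's a small, self-contained sketch showing the two approaches side by side, along with the two ways to read an attribute back (the names child_a and child_b are ours, for illustration):

```python
from xml.etree import ElementTree as ET

# Two equivalent ways of attaching a "val" attribute
child_a = ET.Element("child", val="One")   # set at creation time
child_b = ET.Element("child")
child_b.set("val", "Two")                  # set after creation

# Attributes can be read back with get(), or via the attrib dictionary
value_a = child_a.get("val")
value_b = child_b.attrib["val"]
```

Running this through tostring would show the attribute serialized into the tag, e.g. a child element carrying val="One".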

42 • Python Magazine • October 2007


COLUMN Elegant XML parsing using the ElementTree Module

Reading XML files

Many times you won't be working with XML data that you explicitly create in your code. Instead, you will usually read the XML data in from a data source, work with it, and then save it back out when you are done. Fortunately, configuring ElementTree to work with different data sources is very easy. For example, let's take the XML data that we first used and save it to a file named our.xml in the same location as our Python file.

There are a few methods that we can use to load XML data from a file. We are going to use the parse function. This function is nice because it will accept, as a parameter, the path to a file or a "file-like" object. The term "file-like" is used on purpose: the object does not have to be a file object per se, it simply has to behave like one, sharing many (if not all) of the file object's methods. If an object is "file-like", that fact will usually be prominently mentioned in its documentation.
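An in-memory string buffer is a handy example of a "file-like" object. The sketch below (using the StringIO class from the io module, available in recent Pythons; older code used the standalone StringIO module) feeds one straight into parse:

```python
import io
from xml.etree import ElementTree as ET

# An in-memory text buffer behaves like an open file,
# so parse accepts it just as it would a real file object
buffer = io.StringIO("<root><child>One</child><child>Two</child></root>")
tree = ET.parse(buffer)

# Iterate the root's children to prove the data came through
texts = [child.text for child in tree.getroot()]
```

Anything else with a working read method, such as a socket wrapper or an open URL, works the same way.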

The first thing that we need in order to load the XML data is the full path to the our.xml file. To calculate this, we determine the full path of our Python source file, strip the filename from it, and then append our.xml to the path. This is rather simple given that the __file__ attribute (available in Python 2.2 and later) is the relative path and filename of our Python source file. Although the __file__ attribute will be a relative path, we can use it to calculate the absolute path using the standard os module:

import os

We then call the abspath function to get the absolute path:

xml_file = os.path.abspath(__file__)

However, since we only want the directory name (not the full path and filename of our Python source file) we have to strip off the filename:

xml_file = os.path.dirname(xml_file)

Now that we have the directory in which the our.xml file resides, all we have to do is append the our.xml filename to the xml_file variable. However, instead of just doing something like:

xml_file += "/our.xml"

we will use the os module to join the two paths so that the resulting path is always correct regardless of what operating system our code is executed on:

xml_file = os.path.join(xml_file, "our.xml")

Note: If you have any trouble understanding what any of the code used to determine the path of our.xml is doing, try printing out xml_file after each of the above lines and it should become clear.
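To make the three steps concrete, here is the same calculation run on a hypothetical source path standing in for __file__ (the path itself is made up for illustration):

```python
import os

# A hypothetical source-file path standing in for __file__
source = "/home/user/project/read_xml.py"

step1 = os.path.abspath(source)         # absolute path to the script
step2 = os.path.dirname(step1)          # just the directory portion
step3 = os.path.join(step2, "our.xml")  # directory plus the filename
```

After these steps, step3 points at our.xml inside the same directory as the script, with the path separators correct for whatever platform the code runs on.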

We now have the full path to the our.xml file. In order to load its XML data we simply pass the path to the parse function:

tree = ET.parse(xml_file)

We now have an ElementTree object instance that repre-sents our XML file.

Since we are working with files, we should watch out for incorrect paths, I/O errors, or the parse function failing for any other reason. If you wish to be extra careful, you can wrap the parse function in a try/except block in order to catch any exceptions that may be thrown:

try:
    tree = ET.parse(xml_file)
except Exception, inst:
    print "Unexpected error opening %s: %s" % (xml_file, inst)
    return

In the except block, I catch the Exception base class so that I catch any and all exceptions that may be thrown (in the case of a missing file it will most likely be an IOError exception).
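If you prefer to be more specific, you can catch IOError directly. A minimal sketch, assuming the path below does not exist:

```python
from xml.etree import ElementTree as ET

# A path we assume does not exist, to show the failure mode
missing_file = "no_such_directory/our.xml"

try:
    ET.parse(missing_file)
    outcome = "parsed"
except IOError:
    # A missing file surfaces as an IOError, not a parsing error
    outcome = "missing"
```

Catching the narrower exception lets genuinely unexpected errors (say, malformed XML) propagate instead of being silently swallowed.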

Writing XML data to a file

Now that we know how to read in XML data, we should look at how one writes XML data out to a file. Let's assume that after reading in the our.xml file we want to add another item to the XML data that we just read in:

child = ET.SubElement(tree.getroot(), "child")
child.text = "Three"

#!/usr/bin/env python

from xml.etree import ElementTree as ET

def main():
    #create the root <root>
    root_element = ET.Element("root")
    #create the first child <child>One</child>
    child = ET.SubElement(root_element, "child")
    child.text = "One"
    #create the second child <child>Two</child>
    child = ET.Element("child")
    child.text = "Two"
    #now append
    root_element.append(child)
    #Let's see the results
    print ET.tostring(root_element)

if __name__ == "__main__":
    # Someone is launching this directly
    main()

LISTING 3





Notice that in order to add a child to the root element we used the ElementTree object's getroot function. The getroot function simply returns the root Element object of the XML data.
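The same pattern works on any ElementTree instance. A short sketch, building the tree from a string (rather than a file) so it stands on its own:

```python
from xml.etree import ElementTree as ET

# Build a tree from a string so the example is self-contained
root = ET.fromstring("<root><child>One</child></root>")
tree = ET.ElementTree(root)

# getroot hands back the root Element, which we can grow as usual
child = ET.SubElement(tree.getroot(), "child")
child.text = "Two"

# Elements are iterable, yielding their direct children in order
texts = [c.text for c in tree.getroot()]
```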

Now that we have a third child element, let's write the XML data back out to our.xml. Thanks to ElementTree this is a painless experience:

tree.write(xml_file)

That's it!

If we want to be really careful when writing the XML data out to a file, we'll watch out for exceptions. However, most of the time the write method will succeed without throwing an exception; it is more important to be sure that the path used is correct. Oftentimes, instead of getting the exception that you want, you end up with an XML file stored in some far off and strange location on your hard drive because your path was incorrect or you did not specify the full path. But, as is often the case when programming, better safe than sorry:

try:
    tree.write(xml_file)
except Exception, inst:
    print "Unexpected error writing to file %s: %s" % (xml_file, inst)
    return

To recap, you can find all of the code from this section in Listing 4. When you run the code and look at the our.xml file, you should see that the third child element has been added:

<root><child>One</child><child>Two</child><child>Three</child></root>

Reading from the Web

Working with a local file is very useful, but you might also be in a situation where you will have to work with an XML file that is located on the Internet, perhaps an RSS feed. Fortunately, since the parse function explained above works with file-like objects, loading a URL is very easy.

First off, you need to import the urllib module. It's a standard module that allows you to open URLs in a manner similar to opening local files:

import urllib

In order to open a URL we use:

feed = urllib.urlopen("http://pythonmagazine.com/c/news/atom")
tree = ET.parse(feed)
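Once the feed is parsed, you work with it like any other tree. One wrinkle worth knowing: Atom elements live in an XML namespace, so tag names must be qualified. The sketch below uses a tiny inline stand-in for a live feed so it runs without a network connection (the sample entries are invented for illustration):

```python
from xml.etree import ElementTree as ET

# A tiny inline stand-in for a live Atom feed, so this runs offline
SAMPLE_FEED = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>First post</title></entry>
  <entry><title>Second post</title></entry>
</feed>"""

# Atom elements are namespace-qualified, so tags need the full prefix
ATOM = "{http://www.w3.org/2005/Atom}"
feed_root = ET.fromstring(SAMPLE_FEED)
titles = [entry.find(ATOM + "title").text
          for entry in feed_root.findall(ATOM + "entry")]
```

Swap the inline string for the tree parsed from urlopen and the same findall/find calls pull the entry titles out of the real feed.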

And that's that! This concludes our brief introduction to XML parsing using the ElementTree module. Hopefully throughout this article you have seen how easy it is to create and manipulate XML using ElementTree ...and I've only scratched the surface. For more information take a look at the official Python documentation and some of the great examples on the effbot website. I'm sure you'll be an XML wizard in no time.

#!/usr/bin/env python

from xml.etree import ElementTree as ET
import os

def main():
    xml_file = os.path.abspath(__file__)
    xml_file = os.path.dirname(xml_file)
    xml_file = os.path.join(xml_file, "our.xml")

    try:
        tree = ET.parse(xml_file)
    except Exception, inst:
        print "Unexpected error opening %s: %s" % (xml_file, inst)
        return

    child = ET.SubElement(tree.getroot(), "child")
    child.text = "Three"

    try:
        tree.write(xml_file)
    except Exception, inst:
        print "Unexpected error writing to file %s: %s" % (xml_file, inst)
        return

if __name__ == "__main__":
    # Someone is launching this directly
    main()

LISTING 4

For the last seven years Mark Mruss has worked as a software developer, programming in the much-maligned C++. In 2005 Mark decided it was time to add another language to his arsenal. After reading Eric Raymond's well-known article "Why Python?" he set his sights on the inviting world of Python.






by Steve Holden

Random Hits
The Python Community

I've always been fairly community-minded. In the 1970s I was Treasurer of DECUS in the UK, and in the 1980s I was Chairman of the Sun UK User Group. I accepted those positions because of a belief in the value of communities bound by a common interest in solving problems using specific technologies. This might seem a bit dangerous: the old saying that if the only tool you have is a hammer then all problems look like nails is very true, but the technologies I have been interested in all my professional life are much more flexible than hammers. Which can be a good thing or a bad thing: there are many different types of nail too.

I suppose this focus on community has to an extent structured my career, such as it has been. If anyone can claim to have started PyCon I suppose it's me, and the primary impetus behind the action was my attendance at my first International Python Conference. This was a typical commercial affair costing around six hundred dollars (plus travel and hotel for those who weren't local to the event), and my initial response to it was "I bet there are a lot of people who would like to go to Python conferences but can't afford this".

So I became more involved with the affairs of the Python Software Foundation, and then Guido van Rossum (the inventor of Python) asked me to chair the first PyCon in 2003. We could have gone in for extensive planning sessions, but we might still be planning the first PyCon had we done that - no "big design up front" for the agile community! As Win Borden wrote, "If you wait to do everything until you're sure it's right, you'll probably never do much of anything."

That first PyCon had an atmosphere I shall never forget. It was almost as though the convicts had taken over the prison, and people were alight with the tangible sense of new possibilities. This was inevitably followed by PyCon DC 2004 and 2005, and now we've had PyCon TX 2006 and 2007, with a change of venue as we were victims of our own success: we attracted around 250 people in the first year, and by the third year had clearly outgrown our original home at George Washington University. The delegate count was almost 600 in 2007, so by most reasonable standards I guess the idea can be considered a success.

After two years I decided it was time to give up the PyCon chair. I believe there is a danger that these things can become personal fiefdoms, which leads to stagnation and loses the delightful spontaneity. I had started to feel a little uncomfortable because there were signs that, while "the Python community" enjoyed these conferences, there were many delegates who would have been prepared to help (and indeed who would have loved to help) but whose skills and energy weren't tapped for one reason or another.

Part of the problem was that things hadn't been terribly organized. There wasn't much of a history of non-commercial Python conferences, so people didn't really know what to expect, and I deliberately took a somewhat freewheeling approach rather than try to stamp too many of my own ideas on the nascent conference. Community events can tend towards chaos, but over the first three years PyCon delegates seemed to have been empowered enough to use PyCon to get together and talk about topics of mutual interest.

After my third year I managed to pass on the torch to someone else. Andrew Kuchling, assisted by Jeff Rush, brought a more organized approach to the event and managed to bring in more volunteers as we moved to Texas for 2006 and 2007. David Goodger will be chairing the 2008 event in Chicago and as PyCon enters its sixth year it appears to be *the* established Python community event.

I believe the main achievement of my three years as chair was getting the Python community to realize that it can organize better conferences than professional conference organizers can. This is a practical demonstration of the truth that individuals can and do make a difference, which goes hand in hand with my "roll up your sleeves and get on with it" philosophy. It is also a further demonstration of the effectiveness of the open source approach, and PyCon planning has always been a fairly open process.

How is PyCon "better" than the old International Python Conference? Well, for a start, it is way more affordable. Although I have at times worked in the proprietary systems world, I have never felt entirely comfortable with the idea that you should sell your products for the maximum possible amount. In the world of open source, where a lot of people aren't in it for the money, high prices can exclude the best talent. That isn't really in anyone's interest. The initial ethic was that everyone would pay the same, and even as Chairman I cheerfully forked over my registration fee.

More than that, though, I think that PyCon is more inclusive, allowing a wider range of contributions and a broader perspective of what Python is actually being used for. I hope in the long term that will be good for Python's development, and will help to keep the developers in touch with their user base. This will in turn maintain Python's relevance to contemporary problems.

Now that the attendance has grown, I am interested to see that the organizers are starting to talk about using conference funds to help those who make a positive contribution (particularly speakers) to attend, and to offer commercial delegates the opportunity to pay a higher fee. Believe it or not, PyCon's low price can act as a disincentive, and some people have difficulty persuading their corporate sponsors that a sub-$200 conference can be worthwhile.

"There wasn't much of a history of non-commercial

Python conferences, so people didn't really know what

to expect..."

Steve Holden is a consultant, instructor and author active in networking and security technologies. He is Director of the Python Software Foundation and a recipient of the Frank Willison Memorial Award for services to the Python community.
