I agree that adding tagging support (particularly for multiple contexts) would be interesting. However, I'm still not convinced that tagging really lives up to all the hype, at least not as a replacement for a hierarchy.
I've tried tagging in several other apps -- there are some excellent implementations. But I still prefer to see a spatial representation of my data and its structure, especially for the kind of information stored in OmniFocus. I think the best way to do that is still with a hierarchy.
I have no research to back this up, but in my personal experience, tagging requires a greater degree of high-level cognitive reasoning, which can be distracting from the task at hand. When organizing or browsing data, I have to pause and think about which tags apply to an item before I can continue.
With a hierarchical system, on the other hand, I can immediately visualize the data structure in my mind and locate items without having to think about their names or metadata (i.e., tags).
It reminds me of Tog's research in the '80s, which showed that using a mouse was often more efficient than the abstract symbols of a keyboard:
Deciding among abstract symbols is a high-level cognitive function. Not only is this decision not boring, the user actually experiences amnesia! Real amnesia! The time-slice spent making the decision simply ceases to exist.
While the keyboard users in this case feel as though they have gained two seconds over the mouse users, the opposite is really the case. Because while the keyboard users have been engaged in a process so fascinating that they have experienced amnesia, the mouse users have been so disengaged that they have been able to continue thinking about the task they are trying to accomplish. They have not had to set their task aside to think about or remember abstract symbols.
Hence, users achieve a significant productivity increase with the mouse in spite of their subjective experience.
You can read more at AskTog, or see a more recent mention at Daring Fireball.