A journal that focuses on Flash Platform development, and a little bit about what I am up to on any given day.

Monday, February 27, 2006

Identity Revisited

Jesse always gets me thinking.

I posted a little while back on Identity 2.0. I watched an awesome presentation by Dick Hardt that really made me take a fresh look at the way the World Wide Web has been implemented.

Jesse states that because search engines can basically sum up everything you've ever done online, and you are not in control of that information, it is likely that in the future companies may make you pay to hide it from public view. I agree that this is a big concern.

When he states that the answer is to have as many identities as possible to water down the value of each one, I must say I disagree. I believe the Identity 2.0 approach makes much more sense, and that having many identities is what led to this mess in the first place.

What is the Identity 2.0 approach? Well, if you watch the presentation you will understand, but let me try to summarize with some examples:

As the web evolves into more of a social enabler (as blogs have already shown), it is likely that most people will want to expose their identity in social circles. This is much like real life.

If you are in a crowded cafeteria and scream that you want to kill someone, it is likely that your identity will be established quickly, and you will face consequences for that action. The web, I believe, should have that same paradigm, because in the same light, if I say something brilliant and that brilliant idea ends up making money somewhere, I want to be able to assert that it was my idea. Reputation is something the internet should reflect; it brings value and credibility.

Right now I could go over to Leo Laporte's blog, call him an idiot, and sign the comment as Jesse Warden. There is not really any effective way to prove that he didn't post it, other than establishing yet another identity silo.

On the other hand: I have worked for years establishing my eBay reputation, and I want to leverage it at Craigslist. Right now you can't, because there is no universal identity. If I could, I could easily jump right into a new service and take my hard-earned online reputation with me.

Now, in some cases, you don't want your identity to be available to the public. Say some guy likes to rent porno flicks, but he is also a Supreme Court judge. Is it possible that an online service could expose that information, or even worse, have him pay to keep it secret? Is it required that the service know he is a Supreme Court judge, or should he be able to control what information is disclosed where? Whatever the case, the truth is that the problem lies at the core of the internet and the fact that we have not implemented identity the way the real world does.

For example, if I am buying beer at the beer store, does the guy behind the counter ask me for my name, address, email, phone number, and so on? Or does he just ask me for proof that I am over 19? He asks for proof. There is no reason for him to expect or record any information other than the fact that I proved the claim that I am 19 or older.

This is the problem with the web today, and it is explained in Kim Cameron's Laws of Identity as the law of minimal disclosure. Because we have no effective identity management on the web, every application is forced to maintain a silo of information about us in order to serve us. There is currently no other way for an application to validate our credentials.

If there were, then I could have a Flickr account, and all Flickr would need to know about me is that I have a credit card and I have authorized them to bill it for the pro account I have requested. No other information is needed for them to run their service for me. The only way this can occur is if I have some way of presenting credentials to Flickr that are trusted by Flickr.

Obviously this would send a shockwave through the web world though. Most of the successful companies that provide services over the net have done so by monetizing your identity. That has to stop, and application providers need to start behaving the way the real world behaves.

Without that, imagine the web being the operating system of the future. Imagine what it would be like if you had to log into every single DLL that ran on your machine.

I believe the web should revolve around me; I don't want to revolve around it. I want to leverage my identity when it is appropriate, and remain anonymous when there is no need for me to be named.

Facets of User Experience

Here is an interesting perspective on user experience design. I have always believed that there is a lot more to building an experience for users than simply the concept of usability. I have taken a little time out to try to expand on some of these concepts, because as a designer/developer/architect I believe that a clear understanding will help:

a - Keep me on track with regard to the overall goal of any project.
b - Help me convey the value of user experience more effectively to our clients.

So after a bit of pondering the diagram, I decided to rework it just a little:



It seemed like a better idea to stack the two concepts of Utility and Affordance rather than have them on a slant (I will explain why later). I also altered some of the names of the facets, as my description of them would be slightly different from the original blogger's. But hey, this is all just an exercise in understanding really.

You could use this honeycomb as an axis graph to try to associate values with the different facets. While doing this may not yield 100% measurable results, it would be a good way to benchmark priorities for developing a user interface.

After more thought I realized that the end-user experience is really only half the battle for the applications we build for our clients. I see there being a second honeycomb:


The experience of offering a user experience to the end-user is an experience in itself that can define the success or failure of an application. This diagram is an attempt to put that experience into the same context.

Great, so all of this brings a bit more clarity to any conversation on the value of experience. But I felt that there was another factor missing.

As I began playing with the idea of charting numbers, I realized that the values I would come up with would have absolutely no meaning without another relative measurement to bring it all together. What would justify the scale on which you would measure those facets, and what would then measure how well an application actually delivered on them?

Well, the answer is target audience. "Who is your target audience?" Hmm, that just doesn't sound right. A traditional marketing background kind of coaxes you to think that way, but that is certainly not the right term for the relative measure I am looking for. The term assumes you are aiming at one group, when really there may be no way to group people together other than as "people who are interested in a service".

Take Flickr, for example. What is its target audience?

Anyways, I have been thinking about a better way to go about things. Rather than focus on demographics, let's assume that anyone in the world, no matter who they are, may use an application, and that their expectations of the application are really the most important factor.

Expectation of Service - I define it as the preconception of how a potential user envisions themselves being served by a service. This expectation is based on what they already consider the standard level of service, drawn from past experiences.

Intent to Serve - The actual claim that an application makes as to the level of service it offers to the end user.

I took a quick stab at making a diagram to convey the relationship:


If you consider both measures to be perfect circles for the time being, you could imagine the above scenarios.

Power User: Someone whose expectation of service has almost filled out the application's intent to serve.

Short Term Positive: The user has a relatively high expectation of service, yet the application provides enough depth to the end user that they must invest some time to explore the possibilities.

Long Term Positive: The application's intent to serve will encompass the user's expectation for quite some time. It will take the user quite a while to know it inside out. Think of pro desktop software, for example. Long release cycles mean that the app must continue to be a great experience for quite some time.

It is important to note, though, that just because the intent is way beyond the expectation, that does not mean the result will always be a good one. It may actually lead to intimidation. If you sat my grandma down in front of Outlook for the first time and asked her to use it for email, she might be overwhelmed.

Negative: Obviously, if the expectation stretches beyond the intent, then the experience is sub-par. It is less than what is considered standard.

If a user over time fills in the intent, it is important to note that the experience has not lost its value at that point. Instead, that experience has become what that user would consider the standard level of service.

Using this view, it becomes easier to set priorities, because as you look at different target audiences, decision making gets simpler when you measure their likely expectation of service in relation to the intent to serve.

You know, I am not positive about the next one, but I will bet that certain facets assume higher priority as users move from experiencing an application for the first time toward becoming power users:



If you consider the above, I would bet (it is obvious) that new users are concerned with Affordance, where established users are more concerned with Utility. Filling out the facets of Affordance quickly is what, in my mind, leads to a successful application, one where users can quickly move on to exploring the features offered.

So, one last concept I have been thinking about: Experience Resolution. I stated above that just because the intent extends way beyond the expectation, that does not mean the experience will be a good one. The reason for that is what I am calling resolution:




Resolution to me in this context is the relationship between Useful and Desirable. If the application leans heavily on Useful with Desirable being low, then the application is over-delivering on its stated Intent to Serve.

This would be like me helping my grandmother set up Outlook to do her email, when Gmail or Hotmail would have done a better job for her.

The intent was "manage your email," but we all know that Outlook is not really for entry-level users. The experience is way too thick for the user to realize the stated intent.

Raising the level of desirability is what counteracts this thickness.

Apple is obviously a master of this balance. Unix without the OS X interface is certainly not for everyone, but with the desirability layer that is Aqua, it becomes very approachable and less dense.

So feel free to comment, whether you agree or think I'm full of BS 2.0. I am continuing to explore some of these concepts and see if I can draw any parallels with Design Maturity (that should be pretty straightforward).

Thursday, February 23, 2006

Test Clip Extension For Flash

I haven't done any JSFL in a little while, but today I whipped something up for our designers at their request. The best Flash extensions, for the most part, are those that increase productivity by smoothing over the little annoyances in Flash.

Our designers were working on a big Flash animation project that utilized a lot of child movie clips. They wanted a way to do a "Test Movie" that would test just the clip they were working on.

I built an extension for them called Test Clip that allows them to select an instance of a symbol on stage, or a symbol in the library, and run the command. It will export the clip as a SWF and launch it in the test environment. Really simple, but according to them, very helpful. It is another extension that is great when set as a keyboard shortcut.

If you want to try it out, here it is.

Tuesday, February 21, 2006

Chiming In: Stop Bashing Web 2.0

I just wanted to add to the comments over at 5 1/2 and try to add a little bit here. Some to support, some to challenge:


The current web sucks

No it doesn't; the current web is awesome. If it wasn't, this conversation would not be happening. If all of these great new solutions like Flickr and Delicious exist right now, then it is not the web that sucks, it is the approach to web development that sucks.

You know, Delicious would probably almost validate as HTML 1.0, but the value in the Delicious experience is not actually in the quality of the browser interface as much as it is in the quality of the service. I use the Delicious extensions for Firefox, so I rarely ever visit the site itself; I just leverage the service, all the time. That is really what the Web 2.0 you are talking about is all about.

Right now I am able to achieve the majority of what I want to achieve on the web very easily. That makes the web very effective for me. Like you, I want more, but the reason I want more is that I know more is possible; the majority of people do not feel that way yet.

My girlfriend's mom and dad were over tonight, and her mom works for the federal government of Canada. They were talking about how just recently they got the go-ahead to start using Google at work. Can you believe that? They were saying how awesome it was and how much it improved their digital experience. To me it is web 101; I couldn't imagine not having access to it.

So the web does not suck right now at all for most people; we are just way ahead of the pack. With Web 2.0 we just want to lay out where we go from here; we are not trying to redefine everything that has been done up until now. That is a really important point to understand. Nothing is wrong, the future just can be built better from here on in, as the majority catches up to us.


Web 1.0 was about money, web 2.0 is going to be about money AND data

Well, the truth is, everything is always about money; that's what gives us the ability to sit and spend time thinking about this kind of stuff. So mentioning it when both Web 1.0 and 2.0 are about it is kind of silly. Web 1.0 is about data, and Web 2.0 is also about data. Web 1.0 was about delivering data with presentation rules, whereas Web 2.0 seems focused on delivering data without the presentation, a focus on service rather than publishing.

I think there are two issues really being summed up here in regards to data:

Folksonomy: Having the users drive, contribute, and enrich content, thereby creating reputational value around data. That reputational value is what drives transactions to occur (eBay). Yes, there is money to be made here, as it appears that peers trust each other's opinions more than they trust those of the man (Digg).

Long Tail: Now that geography is not an issue thanks to the internet, more money can be made from the cumulative sum of selling everything that is possibly available than from putting all the money into marketing the top 100 items and hoping enough will be sold (Amazon, eBay).


Web 1.0 was not about money to me. Web 1.0 was about publishing. In Web 2.0 nothing will be published per se. Everything will revolve in a perpetual beta cycle of service. Works will never be considered a whole, but rather part of a cumulative social conglomeration. For example, the original blogger and I are having this conversation across two different sites (websites in Web 1.0 terms, blogs in Web 2.0 terms). In the future users will be concerned with contributing just as much as they will be with consuming.

So it is not really about data, because the data has always been there. It is more about facilitating transactions of data as well as the consumption of data.


Web 2.0 is going to be the .com bubble all over again

Perhaps this is true, but that's what you get when people really don't think things out. We are not sitting here discussing how the future is going to be. We are sitting here discussing how the future may be if we take a certain approach. So I agree that all of this may be BS.

As a matter of fact, while I was listening to the following MP3 today I was disturbed by what I heard from one of the founders of Flickr.

He said, and I am paraphrasing here, that "Typically people do not start a business to make money, they do it because they want to do something" (2:50 in the mp3). While they have seen success, and I do believe that people should be passionate about what they do, that perspective is in direct opposition to the fundamentals of business. Good business looks for demand and generates a supply to fill that demand. Creating supply with no demand is a recipe for failure.

When we are floating in an R&D period like the one the web world had around 2000 and is in again now, it seems possible to get away with statements like this, but really there is no success in this approach outside of R&D, and yes, it will lead to a crash.

In the end, very few people who invested in web 2.0 are going to gain financially; only users will win

Maybe. The more I think about this concept, the more I see Google, Yahoo, and Microsoft winning. They will gain by charging for access to the APIs of the services offered. I can see all of the APIs that we are growing to love (Flickr, Google everything, Delicious, and everything else coming down the pipe) charging us to utilize them. It just makes sense. It is at that time that all of the "beta" labels will start to come off these web services.

Those who control the data will profit from it, and that is truly who stands to win here. We as end users may win in a way, but we will end up paying to do so, as our favourite interfaces charge a premium on top of whatever Yahoo wants me to pay for utilizing the Flickr API. Banks already do this with Interac. Interac is probably the best peek into the future of web service APIs and the business models that will arise.


Open-source is not going to save the world

I don't think that open source is really relevant in the context it has been given in the Web 2.0 argument. In the Web 2.0 world, open source does not mean PHP over ASP or ColdFusion; it means exposing public APIs to be leveraged by other systems.

In that world it makes no difference what you use to implement your solution, as long as it talks to others in a standard way.

Open source in the Web 2.0 sense means something totally different to me than what open source traditionally means. Interoperability via web services is built on the fact that the underlying technology is irrelevant, because the communication protocol is universal.

So you are right: it will not save the world. The world does not need saving.


Web 2.0 is not about Flash

You are right, Web 2.0 is not a suggestion of technical implementation, because again, based on the definition, the implementation is irrelevant if we all follow the Web 2.0 mandate. While standards evangelists will go on and on about AJAX, the technology used to implement an interface will no longer need to be standard. AJAX and Flash will not be the only players here; many other applications will get involved as browser divergence continues to occur, and on a larger scale.

However, Flash is about Rich Internet Applications, which are a manifestation of Web 2.0. Sure there is AJAX, but who cares; use what you are comfortable with and what achieves results for end users.

I have to put this in here: we built a Rich Internet Application for investment bankers called the SNL Merger Model. Flash enabled us to build this web application and make it fit right in with the desktop experience of using Excel. We built what would usually be desktop software on the web, and Flash got us there.

While it is not everything, it sure facilitates many of the goals of the Web 2.0 vision. Don't overestimate its power, but don't underestimate it either.

This hammer won't measure how long I have to cut this wood, but it will certainly hold the whole thing together nicely when I drive some nails in with this baby!

You know, I like to eat at this place in Ottawa called The Works. They have a crappy website that does not use Flash advantageously, but they do have an awesome menu.

They basically have about 30-40 different burgers. Some have BBQ sauce, some have Brie cheese, and some even have peanut butter. All of these different combinations of toppings, though, are delivered to you on the same burger, and that is their business. Imagine if a year from now The Works stopped dressing the burgers and just cooked the patties up. Maybe a few businesses would open up right in front that would take the same patties and serve them up with different toppings and different side dishes. If they did, then that would help most people understand why discussing which technology an interface for a Web 2.0 application should be built on is totally irrelevant.

Hopefully it is able to be built on all of them.


Web 2.0 is about obvious things, web 3.0 is about complicated things

I don't agree here. Web 2.0 is about simple things now, because those are the only examples we have to work from. Managing social bookmarks (Digg, Delicious), photos (Flickr), making calls over the internet (Skype), explaining everything that exists (Wikipedia), etc. A few years from now, the seriously complicated things that we will be using Web 2.0 concepts to achieve will hopefully lead to yet another breakthrough.

What we are doing now though is a gut check. The web development community is reviewing its own progress and discussing future direction. Nothing is wrong, but new possibilities are arising.

Web 3.0, no way, don't try it. We can barely convey a web 2.0 world from a 1.0 perspective. A 3.0 world will only be visible from a qualified 2.0 perspective.

In terms of making money solving obvious problems as opposed to hard problems, well, obvious problems need solutions now, and most people will pay money for a solution to an obvious problem. Hard problems aren't a priority yet, so we can forget about them for now. Or wait! Have we even thought them up yet? Or are we trying to create future business cases around problems that don't exist yet? Maybe that is why bubbles happen, I think. Too far ahead of the demand, by years in fact.

Monday, February 20, 2006

I could have lost it all, but all I lost was a word doc

This morning when I got into work, I unpacked my laptop bag and set up my computer. When I hit the power button I quickly realized that my hard drive had died, and the data on it was pretty much lost. I have had the laptop in its current configuration for almost 3 years, and while it was still able to perform for me, it was really time for an upgrade.

Well obviously now is the time, but when the thought set in that my drive was gone, and I had had this thing for almost 3 years, a small panic started to build up. What did I lose?

Well, all of my photos and music are stored on my iPod and backed up on my computer at home; I also keep quite a few photos on my Flickr account. All of my work projects, both experimental and client work, are managed in source control at our office and backed up daily. I check out things that I want to take home, and if I build something at home I take it to work and check it in.

All of my contacts, calendar, and email are managed on my Pocket PC, which is partnered with both my laptop and my computer at home, so once again all good there.

So really all I lost was a Word document I was working on over the weekend. I felt awesome, and so digitally free. This sounds stupid, but I remember back in the day (4 years ago) when my old G4 died and I pretty much lost my entire digital life.

Anyways, I have been doing a lot of research lately on the web of the future and the internet as a platform, and when my computer died today the value of that vision truly resonated through my mind.

As I built up a new computer to work on at the office, I realized where computing still has a ways to go: reinstall all of the desktop software I use to do my job (Flash, Dreamweaver, Fireworks, Flex Builder, MS Office, Suitcase), copy over the font library, download Firefox, reinstall my extensions, try to rebuild my bookmarks, etc.

Anyways, the day my computer dies, and I am up and running doing my job on a new computer lickety split, is the day that web 2.0 will be fully realized.

After the experience of rebuilding a machine I can imagine quite a few valid Web 2.0 business cases. Looking for venture capital? Just kill your production (or personal) machine. It is enlightening.

Browser Divergence

I wanted to post a response to John Dowdell's recent post, but I thought it would be of more value to respond as a full blog post as opposed to a comment. Here is the original post on Browser Divergence.

I had this exact experience today with an audio player, yet it was more from the perspective of a combined experience rather than one offering. While I was working today, I was trying to keep up with the Olympics. The Canadian Women's hockey team was playing in the gold medal game, and I wanted to know how it was playing out.

I put on our local sports radio stream, The Team 1200. They were not broadcasting the game live, just giving updates here and there. So to try to keep up I also had CBC's live online Olympic coverage up. It is done as a blog that gets updated whenever something of significance happens. (As an aside, the live Olympic coverage online has been less than satisfactory from all the content providers I have visited, but I will save that issue for another post.)

While following this coverage, I was also working on a Flash web application called PermissionTV. I was adding and debugging features for most of the day, and when my radio coverage of the game got boring or they started to talk about baseball trades (I'm Canadian, hockey first!) or what have you, I switched over to listening to the latest TWiT podcast in iTunes.

All of this activity was taking place in my browser of choice, Firefox (except for the podcast), all at the same time.

My stream was running in one popup window that I allowed, while iTunes ran in another window. The Olympic page was in a tab in my main browser window, and every time I wanted to see if anything had occurred in the last minute I would flip to the page, press refresh, and sigh with disgust at such a futile system for something so simple. Next was an instance of the PermissionTV web player running from a local staging server, and sometimes a second instance of the application in another tab, running from a live staging server.

So keep in mind that what I am actually doing is trying to develop a Flash application in and amongst all this. I have Flash open with about 5 ActionScript files open for editing, I have the source control application VSS open as well, and to top it all off good old Outlook is running in the mix, because you have to keep on top of your clients and in touch with project managers.

So basically it is tempered chaos running on my machine throughout the day. This is one reason I hate popups; if something takes up slots on my taskbar when it doesn't need to, I usually get agitated.

Tabbed browsing was an awesome innovation. Firefox does it nicely, and I can easily have a few sites or a few pieces of media going at once and save valuable space in my taskbar. However, when I was trying to listen to and maintain the content in the midst of my whirlwind of compiles and tests of the app I was working on, I kept closing browser windows by mistake, duplicating them sometimes, etc.

I actually did get frustrated today with all this, and that is why I decided to write this long explanation of something so silly. I didn't really feel that I had control of my experience the way I wanted it. While many modern browsers today offer a lot of customizability through extensions and the like, experiences on the web are locked into a document-centric paradigm, and documents don't really play very nicely with each other.

I am a power user of the web, and I am more concerned with the cumulative experience of many offerings than I am with individual offerings most of the time.

Browser divergence is an issue that is related to the original poster's problem, but the bigger issue at play is the fact that browsers do not facilitate a combined experience very well.

The reason he wants to put his audio in a popup is simply because he is putting himself in the user's shoes and imagining a likely use case. That use case is an experience that will most definitely be a combined one. Not many people sit at a screen staring at nothing while listening to audio, and that is definitely truer the longer the audio.

He wants to give the user the ability to listen and at the same time go about their usual business. Because so many people have abused the only mechanism available to achieve this, throwing unwanted Viagra and poker ads at us, we have lost the ability to really facilitate a credible combined experience.

So now you see efforts like Songbird. An interesting idea to mix web browsing with listening, and a good shot at the combined experience. I support the effort, yet in this case I think there is a little too much effort on being "a lot like iTunes" (way too much alike).

How about efforts like Flock? Now that is more like it. Full on browsers being built to focus on the need for combined experience and targeted at a specific user type. This to me is brilliant, viewing web browsing vertically rather than horizontally.

So again, the top of the pile for me is Adobe/Macromedia Central. Divergence? How about redefining the browsing experience. I have stuck hard behind this product and its vision, as many people who read my posts know. I still think that when Tim O'Reilly is done defining the Web 2.0 movement, and the browser vendors start piecing it all together in their software, they will realize that it looks a lot like this little application browser released in 2002-2003(?) that everybody crapped on.

It is really what I want though. I want an environment where I don't have to rely on some up-and-coming web developer to remix the Olympic radio coverage (however scarce) with the Olympic scorecard into something that makes usable sense, because my browser will easily facilitate it for me. My browser will hopefully be focused on delivering me a service and not a document. As a developer I can focus on building my application as well as offering its functionality up to play in a combined experience.

So bring on the divergence, the more divergence that occurs the faster we will get to the result that we really actually want.

The reason that divergence feels so bad is because the document paradigm has beaten us all into submission. The majority of web content is published as documents and not as "content" and the end user has very limited control over their web browsing experience. We as web developers have always felt like the entire experience was under our control.

Browser vendors are working for their users to give them what they want and that is more control. Right now that begins to make life hard for traditional web developers. More and more developers have to accept divergence and design accordingly, slowly letting go of the control they once had over the experience.


--I keep adding to this as I go --

Divergence is also at the heart of the remixable web. What about RSS, or deeper even, XML? RSS/XML does not suffer any setback from divergence. Web services (SOAP/XML-RPC) also do not lose value in a world of divergence. In fact they thrive and grow as it takes focus, and promote more divergence to occur.

Pointing back to efforts like Songbird and Flock, I must say that I am excited about these efforts because they will help push traditional web publishing into the Web 2.0 future as is. These efforts will quickly highlight all of the major flaws in the HTML "document" approach and prompt a change in best practices that will hopefully fix many of the problems I was experiencing earlier today.

Sunday, February 19, 2006

Design Maturity

I am the type of guy who has believed for quite a long time that design drives the development of most of the things Teknision builds, most definitely applications. Whenever I speak publicly, post on mailing lists, or even blog here, I make this claim.

There have been those around the community who have chastised my opinion, stating that architects should drive development, and that to have it any other way is wrong.

On the other end of this argument, I went in to speak at Algonquin College to the graphic design students. I told them I started my career as a designer, and still am one today, yet I haven't done any graphic work myself in 2-3 years. My point, obviously, was that there is a lot more to design than graphics.

I just wanted to point you to something interesting, an attempt at defining a concept called Design Maturity. I came across it with Gabor Vida, as we have been doing some research on ROX (Return on Experience).

It can be quite sobering for those on the design, development, and strategy sides when you honestly evaluate yourself or your own organization and its position on the continuum.

Thursday, February 16, 2006

Multi-Touch Screen

This went around the news world yesterday but I was head down in some work and didn't get a chance to peek at it until today:

Multi-Touch Screen Video


Remember Minority Report? I can't wait to be designing applications that utilize technology like this. Technically you could do this in Flash right now:

Macrumors Article

If you read the description of the hardware they used to build this demo, I could see something similar being done with Flash.

Monday, February 13, 2006

Identity 2.0 made it all come together for me

I find identity discussions in relation to Web 2.0 fascinating. Today I watched a video by Dick Hardt, founder of SXIP, and was totally impressed by the clarity of his presentation compared to most Web 2.0 discussions, which are generally vague. This topic is what glues the whole Web 2.0 concept together for me, and it is where the next generation web experience that we dream of will either succeed or fail.

If you watch the Adobe Analyst Meeting video on the engagement platform, particularly the part where Kevin Lynch explains a future app that mashes up a calendar app with a travel app, you will quickly come to the realization that this will be impossible without effective identity management. It is at the core of Web 2.0.

If you dig deeper, beyond the presentation and into Dick's blog, you will find a wealth of interesting discussion on the topic of identity, and will certainly come across Kim Cameron's Laws of Identity, and that is where the real meat of the topic for developers begins.

Identity first really appeared on my radar when Macromedia Central first debuted. It was neat how you had personal data that you could manage within the "browser" that applications could utilize with permission. This is really the direction of Identity 2.0, except current discussions are focused on a solution that would be timeless, open, and scalable. An identity implementation is even being considered for Firefox 2.0.

Identity 2.0 is based around the concept of having web applications revolve around the user's identity and data rather than forcing every application to maintain data on the user individually. It would facilitate interactions between web applications that would not require you to authenticate yourself with each remote service, and allow you to expose personal data to applications with permission.

I found it most interesting how the Laws of Identity explain what an identity really means. It is more than just your name, address, and credit card number. In fact, most people would have many identities: personal, employee, citizen, etc., and they would all be provided by different sources. Having all of these identities and using them where appropriate would facilitate a level of service never seen before on the web. Why is that? Well, because you eliminate the need for every service or application that is developed to maintain "silos" of account information.

I am guessing that a personal identity could carry some kind of key that could allow other applications to interface with your Flickr account for example without having to reveal your account name and password to another web app (which is what you do now). Kevin Lynch's Calendar/Travel app takes it one step further by demonstrating the ability to have an identity that represents a group, where a group of friends all check the reservations against their calendars at the same time.

While many will see this as already possible, don't forget that the new concept is that identities become a portable asset owned by the user, as opposed to data technically owned by the service provider.

Right now if I closed my Flickr account, is Flickr at liberty to destroy the data about me that they have? Maybe, but likely not. Is that information theirs to keep? Probably not. I should be able to leave and take my information with me, and the best way to do that is to never give it to Flickr to keep in the first place. A good Web 2.0 app of the future should be totally free from the need to store a user's personal information, it should just simply focus on the job it was built to do.

The best example is Xbox Live. You log into Xbox Live and play any game using that profile. Imagine if every game demanded that you create a personal profile. Would you bother?

While there is a lot of debate around web identity and whether people would actually accept such a concept (MS Passport failed for the most part), remember that it will ultimately lead to more anonymity, not less. Many services, Flickr for example, already have a tonne of information about you, and they are in control of it. The vision of Identity 2.0 is not to track your behaviour; it is to make the web focus around your rights, and to put you back in control of your data.

New Ottawa Adobe Building

Just wondering if anyone knows the details behind the new Adobe building being built in Ottawa. Is it going to just be the new home for the people from the acquired Accelio (formerly JetForm), or are there going to be new Adobe departments located in Ottawa?

I walked by the buildings under construction on Saturday after skating the whole canal. They are looking pretty cool so far.

Anyways, if anyone knows anything, let me know in the comments.

Thursday, February 09, 2006

I knew Ninjas were behind it!

I am glad that the ninja community finally came forward and spoke up for Metal.




Ask a Ninja

I am a big metal fan myself, and I knew all along that metal and ninjas must have common roots.

This has got to be the best episode of this podcast yet! Lovin it.

Wednesday, February 08, 2006

Flash Team > no need to focus on developers anymore

The Flash Team is calling for wishlist features for Blaze, aka Flash 9. In particular they are asking about additions to the code editing environment for ActionScript. I think at this stage, most developers that would be asking for major additions have already got their wish, and that wish is Flex Builder 2.

With all of the beautiful features in Flex Builder 2 built solely for developers, there is really no need to focus development of Flash on developers anymore. I am positive that in the next generation, almost any Flasher that writes serious code will opt for Flex Builder to manage and write that code. That does not mean that the Flash IDE is deprecated though; our design team, and designers all around the world, still use it every day.

I would implore the Flash team to focus there. Continue to give them the ability to build the most beautiful experiences out there, and leave the coding to a tool built to handle it well. I am sure that with the Adobe merger there are Flash IDE possibilities related to After Effects (for example) that can be pursued that would be way more advantageous to the Flash community as a whole.

What's the best approach? Well, even though I am a full blown "developer" these days, I still work in Flash as well. I have to tweak assets that the designers prepare and do optimizations all the time. I would focus on making the two tools sing together, and work on providing a wonderful way that designers and developers can collaborate on projects where both tools are in play.

In order to accomplish this there has to be a way that both a developer and a designer can compile projects from their respective tools. People burned me in the past when I suggested the idea that Flex Builder 2 be able to compile projects in which an FLA, rather than an MXML file, was the root file. That concept still makes sense to me however, and I am willing to re-light that debate with whoever wants to get into it.

I don't see Flex Builder being able to open the FLA for editing or anything, just having the ability to compile a SWF from that FLA.

Designers would have to be able to do the same, but that should be OK, seeing as technically things wouldn't be much different for them than they are now.

The only hangup is binding classes to movie clips and the root. As the programmer, I would have to go into the FLA file in Flash just to bind movie clips to their classes. This will end up being unavoidable I think, because most of the time there are things I have to tweak in order for the resulting component to work the way I want it to. Our designers do a great graphical job on everything they make, but in every project I will have to pull out a mask they made and apply it with ActionScript, or do other tasks up that alley.

It would be cool though to have some way that a developer could manage those class bindings without needing to open the FLA file.
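For what it's worth, ActionScript 2 already lets you make the class association from code with Object.registerClass(), as long as the symbol has been exported with a linkage identifier in the FLA. A minimal sketch, where the "HeaderClip" linkage ID and the ui.HeaderClip class are hypothetical:

// The symbol must already be exported for ActionScript with the linkage ID "HeaderClip".
// ui.HeaderClip is a hypothetical class that extends MovieClip.
Object.registerClass("HeaderClip", ui.HeaderClip);

// Instances attached from here on are constructed as ui.HeaderClip.
var header:MovieClip = this.attachMovie("HeaderClip", "header_mc", this.getNextHighestDepth());

It only covers symbols that already have linkage identifiers, so the designer still touches the linkage dialog once, but at least the class side of the binding can live in code.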

To conclude here, I think that in less than 2 product cycles from right now, major integration between Flex Builder 2 and Flash will be paramount. I foresee a lot of projects happening where the products will be used in tandem, so I say:

Focus on integration of the two products, coders will all eventually opt for Flex Builder 2 for code.

I would even go as far as saying:

"Flex Builder 2 is not a good name for the product it actually is"

The reason behind that is that I would use Flex Builder 2 every day (I do now, actually), but I will likely use it more for editing ActionScript for Flash projects than for developing Flex projects. That is, until Flex projects become the majority of the work we do for our clients.

Tuesday, February 07, 2006

Waiting impatiently for Apollo

I have had so many different perspectives of the product Macromedia Central. A few months ago I blogged my thoughts on Central and where I thought it should go in the future.

At the time I was coming off the SNL Merger Model project, and my perspective was focused on the experiences of executing that particular project.

Now though, I feel my tune starting to change. I do think that many of my priorities stand, but I keep reconsidering Central functionality in a different light with all of the Web 2.0 momentum that has been building up lately.

I was listening to an episode of the MAKE magazine podcast that featured an interview with Tim O'Reilly. It was called: Distributing the Future - Data for Web 2.0

The interview really just focused on all the priorities for Web 2.0 that Tim has laid out in the past. When you hear him speak it, though, it almost sounds like a Macromedia Central feature list.

In the past I shot down the concept of applications co-existing and sharing data. While it is a cool feature, it just wasn't applicable to the priorities of most of the client work we had been doing over the last 3 years, even in the RIA space. With current changes of direction in the industry however, it really starts to make a lot more sense.

I think about Flex 2.0, the fact that the framework is going to be free, and how robust the toolset around it is. In retrospect, I think it was the missing part of the Central puzzle the whole time. Applications can be developed a lot faster now, and with the current component set for Flex, consistency would be pretty easy to achieve.

I already have a tonne of little applications lying around that I have prototyped in Flex 2. Many of them are for real client work we are doing right now. One project has involved a lot of XML flat files, and I have built a few utility apps that our client has been using to manage the process of creating this content.

Members of our experience design team that do use cases and wireframing have expressed interest in learning just enough MXML to quickly mock up ideas. When we were working on our Timetracker application internally, they were totally stoked as I built the frame of the UI right in front of them while we were discussing it.

I feel that I could convince clients to use something like Central a lot more easily now than I could before, due to the existence of Flex 2, and most of the time it may not even be for the actual project at hand, but rather for all of the supporting work built around a large project.

"Apollo", the "Universal Client" is coming, and the "Adobe Engagement Platform" is starting to be explained for all to see. I am really excited about all this, and I think it is time for Central to be reborn and succeed.

It's a bit of a bummer that we are going to have to wait for Apollo to bring AJAX and PDF into the environment, but then again there is no reason to exclude the other approaches to developing RIAs. I just wish I had a new Flash Player 8.5 Central now. I could put that bad boy to work lickety-split.

Just ditch the licensing...

Monday, February 06, 2006

Flex Timetracking

Internally here at Teknision we have been beta testing a Flex app that we started developing upon the release of the Flex Alpha. It is a Flex UI for our internal TimeTracking application that all of our employees use:



Everyone in the shop uses the Timetracker every day, so we thought it would be a great project for getting everyone up to speed on Flex 2. So far in the testing I have been doing with it, I am quite impressed with the results. Mike and Stacey, who did most of the work on it, did a great job.

While we haven't focused on a custom skin yet, the application UI feels nice with the addition of drag and drop, favourites, notes, etc. Also the fact that it is built in Flex makes it easier to build upon in the future.

Our older version was designed in Flash a year ago and has worked really well for us:




But more and more, the power of the Flex framework has been impressing me, and more and more I see it being at the center of my toolset.

Saturday, February 04, 2006

XHTML part 2: Rummaging through the garbage!

Yesterday I did a big post on XHTML. I claimed that XHTML was possibly one of the best innovations on the web, as it would allow levels of syndication and remixing never before seen. I also claimed that XHTML has not been explained well to most people. Some people seem to understand the requirement that it be HTML that can be parsed by an XML user agent (most don't even get this though), but almost no one quite gets the future-proofing concept.

So I decided to build a little tool to rummage through people's garbage. A crystal ball for XHTML if you will:

XHTML Future Proofer


I built this application with Flex 2.0, so you are going to need the newest beta Flash Player 8.5 to use it.



To use it, just enter a URL to any XHTML page. The likelihood is it will not even parse correctly, but if it does you will see the DOM exposed by the document based on the XHTML rules for XML user agents.

I still have work to do, and more research to do around this. I am trying to determine the best way to use XHTML for future proofing.

But using this tool you can see why my previous post is valid. We are just shovelling garbage into the future.


////EDIT/////


I have just been playing with my little viewer a little bit, testing out as many sites as I can out there. I finally came across one that has a DOM that makes sense when you parse the XHTML.

http://www.digg.com

It is still not perfect, but it is by far the best yet.

Friday, February 03, 2006

This morning at Algonquin College

This morning Steve and I went to speak to students of Graphic Design at Algonquin College. We were invited to speak about interactive design and what they can expect out there in the industry.

We went and did a pretty full talk on our history in building up Teknision, then shared our vision of the graphic designer's role in the web of the future.

It was a great visit. The class was very interactive and very well educated on topics even way outside the realm of design. I had a side talk with one student about the validity of open source software in business...? I'm serious!

Anyways, many of the students kept taking us aside, saying that they were having trouble making contact with the design community here in Ottawa. According to them, many firms in the city won't even give them the time of day.

We told them to focus on getting themselves out there with great websites, and to focus on the global scene. Get involved in design/technology communities online, and start blogging. Put your personality and your experiences related to your future career out there on the net, and get talking with the people who are going to be your new peers.

After all the discussion we sat and watched video show reels that all of the students had been working on. I was really, really impressed! Most of the work was very well done, and more professional than most student work I have seen in a while. Some of the Flash work we saw was also awesome.

Algonquin actually picked up James Acres this semester to help teach interactive design in the course. His influence and experience can really be seen rubbing off on the students in the work they showed us. Very progressive, beautiful stuff.

Anyways, if anyone from the design community in Ottawa is reading this, give these guys an interview. You will be surprised at what they show you.

Looking forward to portfolio review at the end of the year =)

XHTML, future proofing our crappy markup.

XHTML was probably one of the best innovations of the web I can think of. XHTML was designed to allow content to be marked up and delivered to a browser (including old ones) and have it render correctly as HTML, but also to be passed to an XML user agent and be consumed there as well.

This was achieved by taking HTML and making it comply with the parsing rules of XML. Tags must always be closed, attributes must always be in quotes, etc.

Here is the W3C link to the spec.


The concepts that drive XHTML play nicely with the concepts that drive CSS, which in a standards-oriented world is the natural companion of XHTML.

The goal was to give the markup a minimal role in presentation, mainly just providing the document structure, and then let CSS take over and pretty it up. This all makes a lot of sense and I think it was a great idea.

Most of our blogs today are obviously driven by this technology... Well, to web professionals reading this that is obvious, but to the average user, the change in technology behind the scenes really has no impact on their web experience. So really this innovation was focused on simplifying the code we had to write, and making it easier to read from a human's point of view.

More than that, standards evangelists claim that the real point of using XHTML is to future-proof your markup, and this is really the concept that resonates in my mind.

In my eyes XHTML offers the opportunity to be the ultimate syndication format, one in which any data in any format can be shared across the web. This is much more useful than RSS, which is limited to distributing lists. XHTML is approachable by anyone, and easily rendered within any client that supports HTML.

Unfortunately though, while this is an awesome idea and worth pursuing, the way we are going forward with it now needs a little more focus on the future. I would argue that there is little "future proofing" left to do. The future is now, and we are missing a huge opportunity.

Let me back up that point, and make a further claim:

Most of the XHTML that we are producing is just future-proofing badly structured documents that will end up as very badly formatted data in the future.

Last night I blogged about my experience of trying to consume an XHTML document using Flash and render the contents of the blog post in a Flash text field. What I saved for today was my frustration in interpreting the markup using Flash.

Now, to set this off on the right foot, refer to the following claim in the XHTML specification:



3.2. User Agent Conformance

A conforming user agent must meet all of the following criteria:

3. When a user agent processes an XHTML document as generic XML, it shall only recognize attributes of type ID (i.e. the id attribute on most XHTML elements) as fragment identifiers.



This is the critical part of the spec for me. It tells me that as a developer of XML user agents, when I parse an XHTML document I am expected to only pay attention to the tags that are flagged with an ID. Cool, that makes a lot of sense, and should be easy to handle.
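To make that concrete, here is a minimal ActionScript 2 sketch of the kind of id lookup that rule suggests. It is illustrative only (the function name and the URL are made up), not the exact helpers I ended up writing:

// Walk an XHTML document parsed with the built-in AS2 XML class and collect
// every element node that carries an id attribute.
function collectIdentifiedNodes(node:XMLNode, found:Object):Void {
    if (node.nodeType == 1 && node.attributes.id != undefined) {
        found[node.attributes.id] = node; // index the element by its id
    }
    for (var i:Number = 0; i < node.childNodes.length; i++) {
        collectIdentifiedNodes(node.childNodes[i], found);
    }
}

var doc:XML = new XML();
doc.ignoreWhite = true;
doc.onLoad = function(success:Boolean):Void {
    if (!success) return;
    var nodesById:Object = {};
    collectIdentifiedNodes(this.firstChild, nodesById);
    trace(nodesById["content"]); // the div my blog template calls 'content'
};
doc.load("http://example.com/some-xhtml-page.html"); // hypothetical URL, cross-domain policy permitting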

So I extend XML in Flash and add some methods that easily allow me to find the tags I am supposed to be able to consume. Done. But then I discover something that just isn't right: when I look for the tag that is built to house my post content, I come across a div tag called:

id="content"

within that there are more div tags:

id="main"
id="main2"

An h2 tag wraps the date, which is apparently a child of 'content' here. Note that the date had no id. I then get to another div, which is unnamed, that contains the actual post contents.


So my issue becomes the fact that there is no rhyme or reason to what the contents of the tag named 'content' actually are. Its children are a combination of presentation markup mixed with the actual data I need.

The fix would be to actually label the tags whose values do contain specific, relevant data. They could keep their div called 'content', but to an XML user agent the rest of the presentation markup within that tag is useless, unless those children have ids as well.

Label the specific tag that wraps the node I am looking for with an id that makes it easy and predictable for a machine to consume.

Label it so that the XML user agent does not need a high-level understanding of HTML to be able to make sense of the contents. If the contents contain a lot of HTML, then in the future we will be holding back technology by forcing it to understand very old, deprecated techniques of organizing content.

If I did want that date out of the 'content' tag, I would have trouble finding it. It is included as a child tag named 'h2', which does not describe its content at all, and it has no id! The machine cannot make concrete decisions about what this content might or might not be.

If content creators actually focused on being meticulous about identifying data within XHTML, you would eventually see a wave of best practices emerge that could really take the concept of syndication and remixing to a whole new level with the masses.

I fear that XHTML has not been explained correctly to most people, and that the term "future proof" needs clarification. I would say we want to future-proof the information, not future-proof the layout of the document.

The focus of XHTML should really be to describe the contents of a document so a browser can render it, but also to structure it so that a machine can consume it without having to treat it as a document, treating it instead as structured data. Good XHTML should contain indicators that allow the machine to easily drop all the crappy HTML carry-over and just strip out the good stuff.

Thursday, February 02, 2006

Main Event : Text Reflow vs Copy and Paste

I have been doing a little playing tonight where I am using Flash to read my XHTML pages from my blog hosted on Blogger. I am disappointed with Flash text fields. Take a look at the image below:



I am just extracting the body of a blog post, and dumping the contents into the text field. The contents usually contain images, and luckily Flash text fields support the use of img tags for loading all of the media types that Flash can load via loadMovie().

As you can see, the results are subpar. The reflow is totally screwed up. Not only that, but images end up overlapping each other for some reason.

So I refer to the docs:




About specifying height and width values


If you specify width and height attributes for an img tag, space is reserved in the text field for the image file, SWF file, or movie clip. After an image or SWF file is downloaded completely, it appears in the reserved space. Flash scales the media up or down, according to the values you specify for height and width. You must enter values for both the height and width attributes to scale the image.

If you don't specify values for height and width, no space is reserved for the embedded media. After an image or SWF file has downloaded completely, Flash inserts it into the text field at full size and rebreaks text around it. More than that, it seems that no matter what markup you add, the text always wraps around the images.
You cannot specify a line break so the text will reflow to the next line.



NOTE
If you are dynamically loading your images into a text field containing text, it is good practice to specify the width and height of the original image so the text properly wraps around the space you reserve for your image.




Ah man! So in order for text wrapping to work correctly I have to specify the sizes. That really sucks big time. All I want is for the images to appear at the right size, use a resizing text field, and not have the text wrap around the image, but rather have line breaks between the images and the text blocks. I try for a moment to specify the size explicitly, and while the image appears at the right size, and the text wraps expecting the space the image will use, there are still major issues. The wrap still looks crappy, multiple images will still end up overlapping, and if you make the text field one that resizes, the rewrap ends up looking awful.
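
To be clear, specifying the sizes just means something like this (a sketch; the dimensions are made up):

// Reserve space for the image by declaring width and height up front.
tf.htmlText = "<p>Some text before the image.</p>"
            + "<img src='photo.jpg' width='320' height='240'>"
            + "<p>Some text after the image, which still wraps around the reserved space.</p>";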

So I tried something else:



The docs say that you can place a symbol from your library in a text field by using its linkage name, so I try whipping up a component that I can load in there to load an image and handle proportional sizing within the image. My goal is to try and make the component watch the size of the text field and adapt to its width, forcing the text to reflow underneath.
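
Roughly the shape of what I tried looks like this. It's only a sketch; ImageScaler is a hypothetical linkage name for a clip exported for ActionScript, and the rest is just the documented img tag and getImageReference() usage:

// Drop a library symbol into the text field via its linkage name.
tf.htmlText = "<p>Some text</p>"
            + "<img src='ImageScaler' id='post_image'>"
            + "<p>More text that should reflow below the image</p>";

// The embedded clip can be grabbed and poked at afterwards.
var embedded:MovieClip = tf.getImageReference("post_image");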

Well, it doesn't work. The text goes over it, not even paying attention to my dynamic sizing.

So at this point I realize that I am going to have to create a full-bore component that will render things the way I want them to look. I am going to have to load and position my own images and text fields. A bit of a pain in the butt, but definitely doable. As a matter of fact, it will probably be better when it comes to font handling as well, as TextFormat objects are much more powerful than trying to rely on Flash's text rendering engine for HTML.
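
The rough idea would be something along these lines. This is only a sketch of the direction, assuming the post has already been broken into an array of text and image blocks (renderPost and the block structure are my own invention here):

// Lay out alternating text blocks and images manually, stacking them vertically.
function renderPost(container:MovieClip, blocks:Array):Void {
    var fmt:TextFormat = new TextFormat("Arial", 12);
    var y:Number = 0;
    for (var i:Number = 0; i < blocks.length; i++) {
        var block:Object = blocks[i];
        if (block.type == "text") {
            container.createTextField("txt" + i, container.getNextHighestDepth(), 0, y, 400, 20);
            var field:TextField = container["txt" + i];
            field.multiline = true;
            field.wordWrap = true;
            field.autoSize = "left";
            field.text = block.value;
            field.setTextFormat(fmt);
            y += field._height + 10;
        } else {
            // Image block: load it into its own clip and stack it below the text.
            var holder:MovieClip = container.createEmptyMovieClip("img" + i, container.getNextHighestDepth());
            holder._y = y;
            holder.loadMovie(block.value);
            // The real height isn't known until the image finishes loading;
            // a proper version would listen for that and re-stack what follows.
            y += block.height + 10;
        }
    }
}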

But then I remember the big bummer. One that haunted me in the past when designing the SNL Merger Model, and one that I hear many other Flash/Flex RIA developers complain about:

Copy/Paste.

Can't do it... A selection can only span one text field at a time. While I can still create a really nice rendition of my blog post in Flash, I still have to compromise on one of the most fundamental/basic features of computing. With the Merger Model, we constantly had to beat around the bush on why you cannot select all the items in a "Table" (really just multiple text fields).

Ok so, I'll just deal with it for now, but I am hoping that this could be listed as a priority for the next Flash Player release.

This is what I want to see:


1/ Text reflow consistent with HTML browser rendering. If I put a line break after the image, the text should begin on the line below the image.

2/ Selections that span multiple text fields.

3/ Text field linking. The ability to cause overflow to reflow to another text field.

Wednesday, February 01, 2006

Pondering Web 2.0

Over the last year there has been so much talk about Web 2.0. Many people are spending a lot of time and effort creating definitions, writing manifestos, hosting conferences, and trying to prop up a new "Boom".

I have been listening carefully, as I am sure many in the Flash community are as well. I have gotten my hands on as many conference tapings as I can, downloaded related podcasts, and read as much as I can online.

While there are surely great ideas and great debate out there (some examples):

Articles By:
Tim O'Reilly
Nicholas Carr

Podcasts:
Web 2.0 Essentials
Web 2.0 Show

I still find myself uninspired by all the talk. The more I listen, the more I realize that many of these concepts are what caused Steve and me to start Teknision in 2001.

Now I know that sounds egotistical in a way, but I do not mean that we had anything to do with starting a movement; I just mean that most of these concepts are old to me. Most of the things these conferences refer to as the goal are things that I, and certainly the Flash community as a whole, achieved long ago.

Ajax, for example, the coined term for using JavaScript to request and manipulate XML without refreshing the page, facilitating the development of rich interfaces. People are so excited about it, and in these conferences it seems to be described as the flag bearer of Web 2.0. When I hear all this chatter, it takes me back to Flash 4, released in 1998-99 (please correct me with the exact date).

loadVariables was a beautiful thing. I remember when I first tried loading some data into Flash asynchronously from an ASP page, I sat back in awe. Then in 2000 I set off on a huge mission to learn all of the fundamentals of XML because it was introduced into the Flash toolset. I remember telling people that we could build Flash sites in which the content could be populated from a database, and that XML was the future of the web.

I remember getting really good at implementing this, and I had a great partner at my side (Steve) who is an awesome designer. We left the companies we were working for at the time because we wanted to set off on a mission to build a better web. So we started Teknision in 2001. Our pitch was: "We can build rich interfaces for websites or applications in which the content or data can be driven by a datasource." We had many an opportunity to do so over the years and have tonnes of examples to showcase.

So where am I going with this? Well, the Ajax chatter bores me. While it is an alternative to Flash, and alternatives are great, asynchronous XML requests have been defining the web for quite some time, and those who deem this a new concept leave me wondering if they have actually been paying attention to what has been happening over the last 5+ years.

The story should not be:

"Ajax is redefining the way we develop for the web"

It should be:

"Unfortunately it took 5 years for the standards movement to realize that XML could be used asynchronsly to improve web development"

Again, I know that sounds harsh, but it makes me cry when I hear newborn Ajax promoters tout this ability. It leads me to believe that all of the super innovative design and development that has been accomplished by the Flash community has gone unnoticed by the rest of our industry. That most people are still hung up on skip intros and crappy banner ads.

That sounds even more harsh, but listen to this if you have the time:

In this podcast:
Web 2.0 Essentials

There is an episode called "Beyond Usability", and at time code 40:18:

He talks about the Broadmoor Hotel Reservation application as he describes usability in Rich Internet Applications. He does state that it is one of his favourite examples to show, but then says sarcastically:

"But it's a Flash interface.. of all things"

Hmm, so Flash is in there. It's the first time I heard it mentioned in the whole conference, and while they are using it as a favourite example, they still manage to bash Flash at the same time.

I wonder why that is, or why Flash is deemed not a good way to go by these Web 2.0 advocates. Does he even know that the example he cited is an application that has been around for, what, 2-3 years? Or that the term Rich Internet Application was coined by Jeremy Allaire, CTO of Macromedia, upon the release of the MX family in 2002?

Anyways, I think it is because of this mentality that posts like this are happening:

Flash Player Adoption Significance
Why Flex Matters

(Sorry John, didn't mean to pick on you)

It is starting to sound like there are a few nerves around the Flash Platform and its competitors going into the Web 2.0 era. I really think this is because the competitors are stealing the Flash story and telling it better (or just louder) than the Flash Platform community to the right people.

Some would argue that the Web 2.0 manifestos are more about ideas than they are about technology. That is true in a way (remixing, the Long Tail, openness, etc.), but if that is true I would say again that there have been so many examples of this behaviour in web applications that are 5+ years old, so once again it is not a new concept.

Most message board systems, or even mailing lists, follow these same trains of thought, and they are technologies that seem to be fading away. There is a local community site here in Ottawa created by a local DJ called http://techno.xvi.com. This is a seriously old school site, but it is one of the most "Web 2.0 ish" examples I can think of. It is totally Web 2.0 in concept, not because of implementation.

Many of the elements of current Web 2.0 definitions really do not apply to all businesses. I'll give an example: SNL Financial, who we developed the SNL Merger Model for, have no interest in making their data available to the general public, as collecting it is the core of their business. While there are communities around the product and data, the users are competitors in business and aren't interested in sharing their findings with each other. That being said, the Web 2.0 community enrichment definitions don't really apply well here. However, we developed a beautiful rich interface on top of an existing service-oriented architecture, and the result was a web application that behaves and feels like a desktop application. Very Web 2.0.

SNL's core product, the SNL DataSource, is subscribed to by many banks as a series of web services that they can use to build their own applications around. While this is extremely Web 2.0, they have been doing it online for many years now, and again it isn't really something that justifies renaming the web.

So then, would our SNL Merger Model application be considered Web 2.0? It certainly implements a few of the features of Web 2.0, but certainly does not implement others. Techno XVI doesn't really have what I would consider a rich interface, but it definitely has all of the community and openness features that Web 2.0 talks about. Would our chat on teknision.com fit in? I mean, it is way beyond asynchronous XML; it uses a real-time binary protocol (RTMP) and leverages a remote translation web service at the same time to act as a universal translator.

To me, Web 2.0 should define a much deeper level of change in experience and platform. I love the statement:

"The Web is the operating system of the future"

Well, to me Web 2.0 has very little to do with a browser; the browser is a very small piece of the puzzle. I would claim (I will get roasted by some for this one) that an application I would deem as Web 2.0 is:

XBox Live

That to me is a totally new experience designed around the concept of the web. Maybe some would say that it is more a use of the internet than of the web, but to me that is an example of a Web 2.0 experience. You are leveraging and enriching data, building community around it, giving users control of their experience, and delivering it in a whole new way, not just focused on the browser. Users can interact with their accounts from the web, though. Remember Halo 2 game RSS feeds, or postgame carnage maps?

The evolution of mobile and its direction seem very Web 2.0 to me as well. As technology in that space evolves, users are receiving radical new experiences focused around a digital life to go.

A dream out there is household appliances that interact with applications. Turn your lights off when you're not home. Is the stove on? That to me really represents a new experience for an end user. There is no community around it, and no real need for openness, but it is still the evolution of the web in my mind.


All of the above are examples of breaking out of the browser and breaking out of the document. Which is why...

...it keeps coming back to one thing for me. Web 2.0, by most definitions, seems to me to be a fancy way of saying an application designed on top of a service-oriented architecture in which your audience is the prime contributor. So at the end of the day, this definition leaves me frustrated, because it is a mishmash of architecture and social behaviour that does not seem to apply to the web in general, but rather suggests a great way to solve certain problems. Is it an architecture, or is it a concept? Which one? A bit of both doesn't mean anything (to me anyway).

I think Web 2.0 is too broad a term to describe what they are trying to describe. A "Rich Internet Application" seems better to me. That term really focuses on the concept that there is a rich UI and that there is an application feeding it data. How that application is supposed to work, and how it is supposed to serve its users, is left undefined. Web 2.0 definitions that include a reference to Ajax/Flash corrode the idea of Web 2.0. You are suggesting architecture for something you are struggling to define. Whereas Flash or Ajax used within the context of Rich Internet Applications makes a lot of sense.

A Rich Internet Application is a component of Web 2.0 right? If so, what does Web 2.0 encompass that is outside the realm of a Rich Internet Application?

While I think there are great developments and ideas arising in the web world, creating a blanket name "Web 2.0" that suggests the whole web is changing just seems like hype for luring investors to me. Again, I prefer the idea that users will start to see new things on the internet, but that there is not an across-the-board change happening.

Again, I certainly haven't put this all into perspective yet, as many out there trying to figure it out say as well. The biggest question we should all ask ourselves is why we feel the need to make such a big deal about it. Is there really any value in defining Web 2.0? And if so, who is benefiting from the existence and importance of that definition?