Charles Chen

"Although I do not have any visual impairments, I discovered that Fire Vox could make the web more accessible to me."

[Photo: Charles in front of the Golden Gate Bridge]

I am someone who has always loved computers. I worked as a freelance computer technician throughout high school, and I still do that job today for family and friends. When it was time to go to college, I was not completely sure what I wanted to do, but I knew that it would be something that involved my passion for technology. I decided to major in Electrical and Computer Engineering at the University of Texas at Austin, and college gave me my first taste of programming. Although I had never programmed before, I found that programming fit me like a glove, and as a result I chose software engineering as my focus area.

What got me involved with accessibility was a software engineering course that focused on giving students real-world experience by putting them into teams, assigning each team a client, and then having them build software that meets the client's requirements. My team's client was the late Dr. John Slatin. Dr. Slatin was an English professor at UT who was blind and also served as the director of the UT Accessibility Institute. Since Firefox was rapidly gaining popularity, he wanted his screen reader to work with Firefox so that he could try it out. This project gave me an appreciation for the challenges of crafting a good auditory user interface.

[Photo: Charles presenting]

After working on this project, I became interested in creating a cross-platform text-to-speech library for Firefox. I wanted it to be free and open source so that anyone could use it to create speech-enabled extensions. I created the Core Library Components for Text-To-Speech, aka CLC-4-TTS. My original intent was not to make Fire Vox; in fact, I did not even call it Fire Vox. Instead, I just called it the screen-reading demo extension, and its entire purpose was to prove that CLC-4-TTS worked as a library. As time went by, I kept building on this demo, and after a few months, I decided that it had enough features to be considered more than a demo. The rest is history.

Although I do not have any visual impairments, I discovered that Fire Vox could make the web more accessible to me. I am Chinese and can speak Mandarin fluently. However, growing up in the US, I was not exposed to Chinese text on a constant basis. Although I can read some of the more commonly occurring characters, I still have great difficulty getting through articles. After I bought a Chinese voice, I found that a whole new world of Chinese articles opened up for me. Instead of spending hours scratching my head and looking up words in a dictionary, I could now effortlessly get through entire articles by having Fire Vox speak them to me.

Now I work at Google and share a project and cube with T.V. Raman. We are working on the Google-AxsJAX project, an open source JavaScript framework for enhancing the accessibility of AJAX web applications. AxsJAX provides a clean, easy-to-use API that enables web developers to prompt the user's assistive technology to speak specified text, to define the trails that users can take to navigate through the different sections of the content, and to apply a magic lens that can enlarge text, change colors, and perform other style modifications on a particular piece of content. These abstractions allow web developers to focus on what the user interaction model for their application should be rather than worry about how to use web markup to make the various assistive technologies do the right thing. Since AxsJAX scripts are just JavaScript, they can be served by the web application itself (as in the case of Google Reader) or injected into the page by other means.
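To make those three abstractions concrete, here is a minimal sketch in plain JavaScript of how such a framework can work. This is not the actual AxsJAX API; the names speak(), makeTrail(), and applyLens() are hypothetical, and the speech trick relies on the standard ARIA live-region mechanism, which is how scripts can prompt a screen reader to announce text.

// Hypothetical sketch of the three abstractions described above
// (speaking, trails, a lens), using only standard DOM APIs.
// Function names are illustrative, not the real AxsJAX API.

// 1. Speaking: an off-screen ARIA live region. Writing text into
// it prompts the user's screen reader to announce that text.
const liveRegion = document.createElement('div');
liveRegion.setAttribute('aria-live', 'polite');
liveRegion.style.position = 'absolute';
liveRegion.style.left = '-9999px'; // visually hidden, still announced
document.body.appendChild(liveRegion);

function speak(text) {
  liveRegion.textContent = text; // screen reader announces this change
}

// 2. A trail: an ordered list of elements the user can step
// through with next()/prev(), wrapping around at the ends.
function makeTrail(elements) {
  let i = -1;
  return {
    next() { i = (i + 1) % elements.length; return elements[i]; },
    prev() { i = (i - 1 + elements.length) % elements.length; return elements[i]; },
  };
}

// 3. A lens: enlarge the current element and raise its contrast
// so users can focus their attention on it.
function applyLens(elem) {
  elem.style.fontSize = '150%';
  elem.style.backgroundColor = 'yellow';
  elem.style.color = 'black';
}

The key design point is that all three pieces live entirely in script and standard markup, which is why an AxsJAX-style enhancement can be served with the page or injected afterwards without any changes to the browser or the assistive technology.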

While AxsJAX was developed with the goal of making Web 2.0 applications more accessible, we have found that it can be used to improve usability in general. For example, the AxsJAX script for Google Search provides keyboard access to move through the results, goes to the next page of results when the end of the current page is reached, magnifies the current result, and causes the user's assistive technology to read the current result. This is great for blind and visually impaired users, but it is also helpful for users without visual impairments, as keyboard navigation (especially loading the next page automatically) can make it much faster to go through results, and magnifying the current result makes it easier for users to focus their attention. Another interesting use of AxsJAX is applying it to the chat system inside Gmail so that messages are spoken automatically when they are received. This helps accessibility, but it also produces a neat effect when combined with Google's translation chat bots. These chat bots will translate the user's message and send it back in the chat window; thus, by using the AxsJAX script with an assistive technology that can handle multiple languages, we were able to create a talking translation bot.
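The search example can also be sketched briefly, building on the speak() and applyLens() helpers from the sketch above. Everything here is an assumption for illustration: the '.result' selector, the 'j' key binding, and the 'a.next-page' link do not reflect Google Search's real markup or the real AxsJAX script, but the control flow (step through results, announce and magnify each one, auto-load the next page at the end) matches the behavior described.

// Hypothetical keyboard navigation over a page of search results,
// reusing speak() and applyLens() from the previous sketch.
const results = Array.from(document.querySelectorAll('.result'));
let position = -1;

document.addEventListener('keydown', (event) => {
  if (event.key !== 'j') return; // 'j' = move to the next result
  position++;
  if (position >= results.length) {
    // End of the current page: follow the "next page" link
    // instead of stopping, so browsing continues seamlessly.
    const nextPage = document.querySelector('a.next-page');
    if (nextPage) window.location.href = nextPage.href;
    return;
  }
  const current = results[position];
  applyLens(current);                          // magnify for sighted users
  current.scrollIntoView({ block: 'center' }); // keep it in view
  speak(current.textContent);                  // announce via live region
});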

Web 2.0 holds many challenges for accessibility, but also many opportunities. Web 2.0 applications have stretched the capabilities of the browser via AJAX for the mainstream user; now let's apply that same innovation to accessibility and create a more usable web for everyone.


The DAISY Consortium would like to express its sincere thanks to Charles L. Chen for his story, which will help broaden understanding of Web 2.0 accessibility challenges. Visit his personal Web site at CLC World.