Introducing Tiny Ears

Last week I met with Ellen De Vries of The Copy House for a naming session. A naming session is where uncreative people like me get to sit down with a creative person who is good with words to try and come up with a name for a project/business/enterprise. And, lo and behold, it works! During the session we came up with a few names that we were happy with, then systematically eliminated a bunch of them. Names were eliminated for a number of reasons: a lack of impact, trademarks already taken, companies already formed under the name, or simply because the more we thought about them, or said them out loud, the less appropriate they seemed.

After all this I was left with 3 names to take away and ‘try on’ for a few days. After talking with a number of people, and mostly because it just fit, we are now ready to introduce the world to Tiny Ears, the technology that will, hopefully, be powering your educational applications very soon.

If you’re struggling to name your projects or company, I highly recommend organising a naming session with someone. I would also recommend that you use Ellen as she made the whole process fun as well as useful.

Storybook Project Overview

I recently wrote a basic overview for the Storybook project that I’m taking to Chile, and I’ve realised that here is probably the best place to put it, so here it is.

Project Overview of Interactive Storybook for iPad
from Radical Robot

The Interactive Storybook for iPad project is designed to create an educational storybook for children between the ages of 4 and 7 that will help them learn to read sentences while providing an engaging and entertaining experience. The age range targets children moving from reading single words to reading sentences.

Many interactive storybooks are designed to entertain and tell a story, but provide little to assist the child in reading the story for themselves. In most cases, this assistance is provided by parents or learning professionals; however, these people are not always available when the child wants them. This project will utilise speech recognition technology to listen to the child as they read the story out loud, providing encouragement, feedback, assistance and rewards at the point at which they are needed. Face detection will be used to determine whether the child is reading from the app rather than talking to someone off screen, which will make the speech recognition more accurate. Speech recognition can be disabled for adult-led enjoyment.

As the child reads the story, the app will listen to their progress. When the child reads a word incorrectly, stumbles on pronunciation or takes a long time to read a word (the app only ‘listens’ when the child is looking at it), the app will step in and prompt the child with assistance using the Phonics learning system. When the child correctly pronounces the word, audible (‘Well done!’) and visual (animations) feedback is used to provide rewards.

The app will monitor the child’s progress over time (between readings of the story as well as within a reading session) so that feedback can be adjusted to the child’s progress. This means that words that are consistently mispronounced will receive more intervention and greater rewards for success than words that are more often correctly read. The actual form that the feedback/reward system will take is currently in development.
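
To make the adaptive feedback idea concrete, the per-word tracking could look something like the following minimal JavaScript sketch. Everything here, the function names, the thresholds and the intervention levels, is a hypothetical illustration, not the app’s actual design, which is still in development.

```javascript
// Hypothetical sketch of per-word progress tracking: words that are
// missed more often earn more intervention and bigger rewards.
// Names and thresholds are illustrative, not the real app's design.
function createWordTracker() {
  const stats = {}; // word -> { attempts, misses }

  return {
    // Record one reading attempt for a word.
    record(word, correct) {
      const s = stats[word] || (stats[word] = { attempts: 0, misses: 0 });
      s.attempts += 1;
      if (!correct) s.misses += 1;
    },
    // Decide how much help a word needs based on its miss rate.
    interventionLevel(word) {
      const s = stats[word];
      if (!s || s.attempts === 0) return "normal"; // never seen: default prompts
      const missRate = s.misses / s.attempts;
      if (missRate > 0.5) return "high";   // consistently mispronounced
      if (missRate > 0.2) return "medium";
      return "low";                        // usually read correctly
    },
  };
}
```

A tracker like this could be serialised between sessions so that progress persists between readings of the story.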

As additional rewards, at the end of every page, the story so far can be animated and interacted with. These animations can be expanded into games that will assist the child with learning the words that they have been struggling with in a fun and interactive way.

The app is designed to be a fun experience whether read together with a parent or alone. When read with a parent, the speech recognition and face detection can be disabled so that mum or dad can provide the learning value and assist their child in playing the games. However, whenever the child wants to play and mum’s busy, the speech recognition can be activated to provide the required assistance and encouragement during self-directed play.

This app is still in development, with a working prototype expected around March. Development at this time is focused on creating the speech recognition technology for children.

A Guide for my Friends: Things I Won’t Eat

It turns out that I’m quite a fussy eater, although I think I eat quite a lot of things. In order to assist my friends whenever they cook for me I thought I would write a list of things that I won’t eat.

I want to make clear that I am very texture driven with foods, and 90% of the things I won’t eat are down to texture. In many cases, excluding pulses and peas, if you were to liquidise these things I would eat them happily.

This may not be a definitive list, but it’s all I can think of for now.

Things I won’t touch

  • Bananas
  • Pulses of any kind (beans, chickpeas etc)
  • Cucumber
  • Celery
  • Peas
  • Squashes (pumpkin, butternut etc)
  • Beetroot
  • Brussels sprouts
  • Chicory (and anything else that has an aniseed flavour)
  • Peaches
  • Kiwi fruit
  • Turnip
  • Parsnip

Things that I’d rather not have to eat

  • Seafood/Fish
  • Nuts
  • Radish
  • Lentils


***Update: Since I started my worldwide travels I can amend this list slightly. It turns out that it’s not avocado that I can’t abide, but British avocados. Elsewhere in the world they are much, much nicer.

***Update 2: Turns out I like hummus now. Not sure when that changed. I have since attempted to see whether my opinion of all pulses has changed and can sadly report that no, most pulses are still out. I am no longer freaked out by lentils though, although they’re not going on the favourites list either!

Creating an Intelligent, Interactive Storybook for Kids

Apparently, 70% of all parents with iPads let their kids use them. I’ve seen kids as young as 2 pick an iPad up and start using it with little or no direction. It seems that with this tablet there is a device that kids can use, regardless of their current educational stage or coordination. iPad developers seem to have caught on to this and the kids’ app market is a steadily growing one, with educational and gaming apps appearing all the time.

I first encountered kids using iPads when I spent time with Ian’s sister and family in Canada earlier this year. I’d taken my iPad with me and downloaded a bunch of kids’ games for Olivia (3) and Chloe (1) to play with. One of the first things I noticed, however, was the poor quality of many of the apps, even those with big names like Peppa Pig behind them. Storybooks especially seemed to be nothing more than a digital version of the analogue book, with the added disadvantage of not being chewable or usable for hitting your little sister with.

I also looked at how the girls interacted with the educational apps that I downloaded. Often they were not engaging enough to retain interest for more than a minute or two at a time, or did not offer enough incentive to continue when things got hard. Apart from one game, a memory card game, the average time that Olivia engaged with an app was about 2 minutes. I started to wonder if I could do better and came up with the idea for an interactive storybook that helped kids between the ages of 4 and 7 learn to read.

I saw two problems with trying to create an educational storybook app: one of entertainment and the other of feedback. While watching Olivia use the educational apps, I noticed that her engagement increased when an adult was there with her, giving constant feedback and assistance when she needed it. However, the adults were not always available to give her the attention she needed, and so she would put the iPad down and do something else instead.

I wondered if we could create the sense of feedback and support that an adult would normally provide by using speech recognition to listen to the child as they read the storybook, providing the feedback, encouragement, rewards and assistance at the moment they are required. We could use face detection on the front-facing camera of the iPad 2 to determine whether or not the child was reading the book, and use that as a guide to whether to listen to or ignore what the app heard. Combine this with fun interactions, animations, multiple paths through the story and embedded games, and I felt we could create an engaging, interactive learning experience.

Working with the Strong Steam team we are creating the Speech Recognition and Face Detection technology to use within the app and are in the process of putting together a prototype app. We’re looking for an early years learning professional with an interest in technology to help us out with how we make the app as engaging as possible while retaining the educational element. If you are one, or you know of one who might be interested, please do get in touch.

Radical Robot is off to Santiago for Startup Chile

Two weeks ago, Radical Robot received some fantastic news – we got accepted into the Startup Chile accelerator programme. This is very exciting, not only as it means that we get to spend 6 months in beautiful Chile, but it also means that we get a period of funded client-free time to work on our fantastic new product – an interactive educational children’s iPad storybook with speech recognition.

Since Radical Robot started a year ago we have become a successful mobile application agency, but we have always wanted to be more of a product house than a client-led agency. Social Ties was our first attempt at developing a product, but although we are very pleased with the app and are happy with our downloads since it went live in both the Android and iTunes app stores, we have realised that monetization of the app would lead us down a path that we were not that interested in. This realisation has allowed us to pursue a number of other product ideas, and we are happy that our favourite idea has been accepted into Startup Chile.

The idea is to create an interactive storybook for children aged 4-7, kids just learning to read sentences. This app will utilise speech recognition to listen to the child as they read the story out loud, enabling focused feedback, assistance and support at the moment it is required. Along with story animations, positive feedback, reward systems and embedded games, this creates an educational and entertaining tool for early years language learning. The speech recognition can be disabled to provide a fun experience that parent and child can enjoy together, or left on to be used as a self-directed learning tool for when mum and dad are busy.

Right now, the aim is to concentrate on developing and perfecting the speech recognition technology. We are using OpenEars / PocketSphinx as a basis and training it to deal with kids’ voices. Then we shall be looking for a story to form the basis of our first app. We would dearly love to meet someone with speech recognition experience to join our AI team as a consultant, and learning professionals who would like to get involved in creating this exciting, cutting-edge learning experience. If you are interested, please contact me at, on Twitter @fluffyemily, or call 07768646287.

Also, we need to come up with a cool name for this new product. If you have any suggestions, please do let me know via the email address or twitter handle above. We’re very excited here and are looking forward to our adventure and getting stuck into this fascinating challenge.

Adventures at Open MIC 8

A couple of weeks ago, a friend of mine in Bath, Julian Cheal, pointed a friend of his, Chris Book my way on Twitter. Chris runs OpenMIC (Mobile Innovation Camp) and was looking for Brighton based mobile developers to meet up with to promote the upcoming OpenMIC 8, held at The Grand Hotel in Brighton on 4th November.

Naturally I invited him down to Brighton to attend the monthly meeting of the Brighton iPhone Creators group and get royally drunk with some fellow mobile devs in The Basketmakers, and, equally naturally, Chris agreed.

The next day, slightly worse for wear, Chris and I met for a coffee to chat about life, business and mobile development. I mentioned that I was currently investigating PhoneGap, and was intrigued by Titanium, as alternatives to native development. I knew very little about them at the time and wanted to learn more. Obviously whatever I said must have inspired Chris, for later that day he asked whether I would like to present the results of my investigations at OpenMIC. After some consideration I decided that I would be a fool to pass up the opportunity to speak at a conference for the first time, and accepted. I then realised that I had less than two weeks to pull everything together.

In reality, I had even less time than that. With other work commitments and the existing commitment of talking at the Five Pound App that Tuesday, presenting an app that Ian and I had been working on, it ended up boiling down to one full day and 2 evenings. I had decided to write the same app twice, once in each technology, and compare the results. I wrote the Titanium app on the Sunday afternoon, the PhoneGap app on the Monday evening after a full day’s work, and the presentation on the Wednesday.

Luckily for me it all came together (quite late on Wednesday night) and Ian and I rocked up on the Thursday morning at The Grand raring to go. There were 4 talks in the morning session, followed by a developer panel, then after lunch 2 ‘barcamp’ style sessions before the evening’s socialising (partly funded by Microsoft).

Up first was Tom Hume of Future Platforms talking about how he felt that using web technologies in mobile app dev was the future. He explained how HTML and CSS were mature technologies that had already solved a lot of the UI problems that were still incredibly difficult to achieve when developing natively and proposed that mixing the two, using HTML where most suitable, and native when tackling more difficult problems, was his ideal.

Then came Stuart Scott, CEO of infohand, who had a lot of commercial experience and knowledge of the mobile world. He talked about the economics of mobile experiences and of developing mobile applications, and how using the web in mobile could perhaps help reduce the costs of creating engaging applications.

Next up was Mike Ormond, a developer evangelist at Microsoft. Mike showed us Windows Mobile 7, a handset and operating system that had so far gone unmentioned when talking about cross-platform development. Windows Mobile 7 looks fantastic, and his argument was that in order to keep the experience fantastic you had to develop specifically for the device, and that meant natively. Despite the talk being centered purely around WM7, I felt that the point stood in relation to all devices.

Finally it was my turn. Mine was the only purely technical talk, showing code. I felt rather embarrassed when we got to the first page of code and discovered that the colours had washed out totally on the projection, making it very hard to read, but subsequent feedback was that this made me explain the code in a detail I might otherwise have missed, and so made the talk more useful. I learned a valuable lesson about displaying code snippets on projectors. You can view my presentation on SlideShare, download the code from GitHub and read more about it in this blog post.

I got asked to stay up on ‘stage’ for the developer panel, which I gladly did. The panel was asked to explain why a client should pay multiple times for an application written natively for many devices, rather than pay once for a mobile website to cover all devices. The general conclusion was that it all came down to that last 5% of polish that makes a good mobile app a fantastic mobile app, and that last 5% is far harder to achieve with the mobile web than with native apps.

Somehow I got volunteered to run a barcamp session about PhoneGap and Titanium in the afternoon, and also managed to get a good play in with a Galaxy Tab that one of the delegates brought along. After that it was beers and more beers and then curry with beers. All of the delegates that I met were fantastic. So many knowledgeable and friendly people, and I had lots and lots of fun.

I am now trying to organise my time so that I can make it to the next OpenMIC on 2nd December in Oxford. Chris, Pinar and co have developed a great conference and I highly recommend that you find the time to go if one pops up in your area in future, or make the effort to attend one somewhere else in the country.

Experiments in PhoneGap and Titanium

I talked recently at OpenMIC8 at the Grand Hotel on 4th November 2010 and the theme was native vs web. I decided to talk about the investigations I had recently been doing in cross platform development tools such as Titanium and PhoneGap.

I decided to write a simple app twice, once in each technology, replicating the first proper iPhone app that I wrote. That was a simple app I wrote for Ian for a presentation he was giving about the uses of Artificial Intelligence in the modern world. The app used a webservice that he had created that used Optical Character Recognition to read text in images and return the transcribed text. This meant that the app needed to do 4 main things:

  1. Take a photo with the camera (or choose from library)
  2. Upload an image to an external API
  3. Parse JSON response from API and display
  4. Play streaming Audio of transcribed text
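
As a rough illustration of step 3, extracting the transcription from the webservice’s reply might look like the sketch below. The response shape ({"text": ...}) is an assumption for illustration, not a record of the actual API.

```javascript
// Hypothetical sketch of step 3: pull the transcribed text out of the
// OCR webservice's JSON reply. The { "text": ... } shape is assumed.
function extractTranscription(jsonString) {
  const response = JSON.parse(jsonString);
  if (!response.text) {
    throw new Error("no transcription in response");
  }
  return response.text.trim();
}
```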

As the first native iPhone app that I had written, having only implemented tutorial examples before, this app took me approx 14 hours to create. At that point, all I had done in PhoneGap was some toy examples, and I had not got further than downloading and installing Titanium. I thought this would make a good comparison point for the two technologies.

PhoneGap is the most versatile of the cross-platform techniques. You write HTML, CSS and JavaScript which gets executed from inside an app using WebKit. It is basically a mobile website embedded inside your app. PhoneGap works on iOS, Android, Symbian, BlackBerry and Palm devices, but you must download and install each SDK and development environment, and then integrate PhoneGap with each. You then build and deploy your app inside the platform build environment. However, you can pretty much write your mobile site inside your web dev environment of choice and then just import it into the platform environments when it’s done. Then it’s just a case of making small, platform-specific changes (there are small differences in the availability of methods for each platform).

Titanium is currently only available for iOS and Android, although there is a BlackBerry version in beta. You download Titanium and it installs Titanium Developer, a project creation and execution environment. You then use your own favourite editor to write JavaScript. This JavaScript again runs within WebKit, but instead of providing an HTML interface, it creates and uses native components, giving your app the look, feel and more of the speed of a native app. From within Titanium Developer you can then execute your app on iPhone devices or the simulator, and Android devices or the emulator, at the touch of a button. You can also either use Titanium Developer to deploy and package your app for the store, or generate a platform-specific project to import into the platform development environment for packaging.

With both PhoneGap and Titanium, you can write your own native functionality and provide an interface to call from Javascript inside the non-native part of your app.

I started reading Titanium’s documentation, hungover, at 1pm on a Sunday afternoon. I found the documentation pretty good and it was very easy to find examples of everything that I needed to do. The Kitchen Sink app that you can download from GitHub contains a demonstration of all the features of Titanium and a couple of quick Google searches gave me everything else I needed.

By 6pm I had the main bones of the app, and another couple of hours (including time to create some images) gave me the final thing. The whole app is in one file, app.js. Despite not knowing a lot of JavaScript, I found the whole experience easy and fairly enjoyable.

You can view the code here.

On Monday evening, at about 6pm, I started on the PhoneGap app. I had already worked through most of the examples that I could find, so there was no need for the initial documentation reading that I had needed for Titanium.

At this point I need to point out that I do not do web programming. I know no CSS, have very little HTML experience and no HTML5 knowledge. In fact, I have avoided as much front end web programming as I could throughout my whole career. I chose to use jQuery Mobile as I’d looked at it previously and found that it provided some very nice looking components which massively reduced the need for me to know any CSS at all. For me this was hugely important. It enabled me to get to the point of having most of my app within 2.5 hours of starting. It did, however, look utterly appalling. For this I needed to recruit some help from a friend of mine who knew CSS. I spent 30 minutes with him on Tuesday and we got it all looking fine.

However, it took me an awful lot longer than 3 hours to complete the app. It took 4 hours and 3 different people to solve the biggest problem I had – uploading the image to the webservice. In both the Objective-C and Titanium versions this had caused only small, easily solvable problems. However, all of the upload examples for PhoneGap used code that attached the image data as a URL query parameter on the POST. The webservice, however, needed the image attached as a file in a multipart message. Constructing the multipart header was no problem, but it took quite some time to discover that the image was being posted base64 encoded. The webservice only returned an HTTP status 500, so we had to debug for some time before we figured out what was wrong. However, even when I used a JavaScript base64 decode function before posting, the webservice still failed to understand the message.

In the end we created a new webservice that did the base64 decode after posting, but this was not an ideal solution. I have since, with help from Keiran Gutteridge, worked out how to add my own PhoneGap function to send the image base64 decoded, so I can use the original webservice. This headache sadly took up most of Monday night and parts of Tuesday and Wednesday, and involved 3 other people. PhoneGap total: 8 hours & 4 people.
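
For the curious, the shape of the fix can be sketched like this: decode the base64 image data first, then attach the raw bytes as a file part in the multipart body rather than as a URL query parameter. The boundary, field name and filename below are made up for illustration; the real webservice may well expect different ones.

```javascript
// Sketch of attaching an image as a multipart file part. The camera
// API hands over base64 text, but the webservice expects raw bytes,
// so decode before attaching. Boundary/field/filename are illustrative.
function buildMultipartBody(base64Image, boundary) {
  const imageBytes = Buffer.from(base64Image, "base64");

  const head = Buffer.from(
    `--${boundary}\r\n` +
    'Content-Disposition: form-data; name="image"; filename="photo.jpg"\r\n' +
    "Content-Type: image/jpeg\r\n\r\n"
  );
  const tail = Buffer.from(`\r\n--${boundary}--\r\n`);
  return Buffer.concat([head, imageBytes, tail]);
}
```

The resulting buffer would then be sent as the POST body with a Content-Type of multipart/form-data and the matching boundary on the request.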

You can view the PhoneGap code here

In conclusion, writing cross-platform code is far quicker and simpler than writing native. The results, for very simple apps, are almost as good. Titanium, building native components, gives a device experience almost as responsive as if you had written it natively. If I had been an experienced web developer, the look and feel of the PhoneGap app would have been almost as good as the real thing, and if I were to do PhoneGap development in future, I would have to ensure I had a CSS and HTML expert to handle the creation of CSS transitions and beautiful interfaces. Even then, the responsiveness of the result is somehow not quite good enough. It’s just slightly laggy and the transitions just don’t look perfect. If your app relies heavily on that last 5% of wow factor to set it apart, I would still choose native.

Also, even though Titanium gives access to far larger parts of the device than PhoneGap, to do really complex functionality you still need to write it yourself natively and then wire it into your chosen non-native environment. In my opinion you would need a really good reason not to go native, but if that reason were there, then I would choose Titanium. If I were an experienced web dev, perhaps I would have more time for PhoneGap, but for me it’s not really an option. I will probably use Titanium again, for rapid prototyping, but would write natively once the prototype was finalised.

Open Plaques iPhone App now in Store

Well, it’s done. The requisite number of months have passed and I am now officially a freelancer. I am also now officially a published iPhone app developer as, on October 5th 2010, the Open Plaques iPhone app got approved and went live. You can download the app from iTunes or directly from your phone. You can also find out more about the app from this page.

Rather disappointingly it was not in the app store in time for the Open Plaques Open Day on the 25th September which was the original goal, but I have learned a lot about the app store approval process in the mean time.

It took me two attempts to get the app into the store, and I thought it might be worth documenting the reasons here. The first rejection happened because the app crashed on iOS 4.1. Fair enough, you might think, and yet I submitted the app 6 days before 4.1 was released, and so was rather annoyed to find it rejected for that reason. A lesson learned there: if you are submitting around the time of an iOS release, submit your app compiled against the beta version. Even more annoyingly, all I had to do was recompile the app with the new API and resubmit, so it’s not as if I had actually done anything wrong.

The second rejection was for a more justified reason, I suppose. Google Maps has a logo at the bottom of the map. You, as a developer, are obliged to ensure that this logo is displayed at all times, otherwise the T&Cs that Apple have with Google are violated. My map was slightly too big, causing the logo not to appear on the page. This required a 2-minute change and resubmission. Rather delightfully, the app was accepted on the third submission.

I am currently working on the first update to Open Plaques. The update consists only of performance improvements and fixes for a couple of small memory issues in the first version. I do also have a loose roadmap for future versions. The ones displayed in black are implementable now; the ones in blue are awaiting the creation of a suitable API at

  • Display Plaques in an alternative Table View, each entry showing plaque description, location, distance from current location and direction that you should walk in to find it. Clicking on the table row will take you to the Plaque Details screen
  • There is often more than one image for each plaque in the Flickr pool. If so, all images will be displayed in a swipeable view on the Plaque Details screen.
  • Allowing the user to take a photo of any plaques found from within the app itself. This photo will then be uploaded to the openplaques Flickr group, machine tagged with the id of the plaque so that can pick it up.
  • Adding search to be able to view plaques in other locations around the world, or by subject, organization, name or role.
  • Displaying plaques based on the current view within the map, rather than just on the user’s current location
  • Allow the user to upload plaques currently not on the app directly to
  • The ability to access further information about the subject of plaques from the Plaque Details Screen

If you have any other suggestions or feature requests for what you would like to see in the app, please do contact me here, email me, or raise a feature request on the GitHub site.

What to build? A dilemma

How do you choose which of your many ideas to pursue? Do you choose the one with the interesting technologies that you want to explore, or do you go for the one that you think would be the most useful?

I am currently suffering from this dilemma in relation to which app I concentrate on as my ‘signature’ app. If you’ve got time to build one app, in this case an iPhone app, to demonstrate your skill as a programmer to future potential clients, how do you decide what to do? I have 3 choices in this regard:

  1. Hikers’ footpath planner. A really complex and difficult app that will take a long time to put together but that demonstrates all my skill as a back end developer while also showing an ability to do some nice things with the front end. This app, although sounding ideal, is a risk. I have limited time, and the app is complex enough that it might not meet the App Store requirements on stability within the time frame and, with my current dev time limited to evenings and weekends, might not be completed to its final, most impressive stage by the time my (admittedly self-imposed) deadline comes around. This app has, I believe, a wide audience, but only if done exactly right.
  2. OCR menu reader and translator. An app that allows me to explore some really interesting technologies and demonstrate some skills with both back end and front end work. The app, however, would be trying to compete directly with another app, Google Goggles. Do I really want to develop in my spare time an app that could be utterly eclipsed by a company with millions of dollars to spend on getting it right?
  3. Open Plaques. An app that demonstrates some back and front end skills and that solves a specific purpose for a group of passionate individuals. This app would provide a service that would not only ease the lives of the owners and developers of the Open Plaques site but also make it much easier for users of the site to add content. It would also use some of the technologies that would be used in the OCR menu reader.

So I guess the question is: is it better to make something that is popular and useful, or something that shows off my skills but has an uncertain audience or competes against a much more powerful opponent? I can’t help thinking that the popular app, if I add lots of bells and whistles, would be the way to go. I can always work on the hikers’ app at a later date, for if nothing else I want one even if it never makes it as far as the App Store. Maybe I should just take a well-learned lesson from software development: KISS (Keep It Simple, Stupid). If anyone has any opinions, I would really like to hear them.

BuildBrighton needs your help

Here at BuildBrighton we’ve been looking, for the last 8 months, for a suitable, affordable space for the group to move to. It has become clear to us that our current arrangement with The Skiff, although fantastic, is not enough to attract new members. Most potential new members do not see the benefit in paying a subscription for a meeting only one night a week, and are sometimes put off by the officey nature of the space and the lack of any decent machinery or equipment. It is a catch-22 situation, however: without subscribers we cannot afford to make commitments to long-term outgoings, as our earnings are sporadic, making some money here and there through workshops, council grants and the like.

We have long wanted to turn BuildBrighton into Britain’s answer to Noisebridge or NYC Resistor. We have found two fantastic spaces that are beyond our current monthly earnings by differing degrees. We also have 100 non-members on the hackspace mailing list and 250 non-members following the BuildBrighton Twitter account. How many of these potential members would convert into actual members if we had the right space? What would BuildBrighton have to provide for them to make that conversion, and how much would they be willing to pay to get it? We have no desire to price ourselves out of reach, and yet we need to charge an amount that can sustain the group financially.

In an attempt to find these things out, we have created a questionnaire and are asking people to fill it in to provide the answers that we need. If we have a good idea of how quickly and to what extent the group can grow, we can make decisions about how much risk we can take on with a new place. You can fill the questionnaire in here.

The first of the two spaces we are looking at is The Metalworks, approx 1000 sq ft in central Brighton, next door to the new space that The Skiff are moving to shortly, allowing us to maintain our ties with those who supported us in our early days. The other is a 4000 sq ft workshop space at New England House, which is not only enormous, but filled with already-created smaller spaces that could be converted into useful areas like dark rooms or craft rooms, or rented out to artists or commercial creative enterprises as studios or work spaces. Both spaces have enormous potential and we can see the group flourishing in such environments.

Please do fill in the questionnaire, comment or contact me if you wish to be involved in BuildBrighton, want to find out more about us or just want to offer us your support. Knowing that we are needed is a big reason why we do this.