
Archive for the ‘Mobile Dev’ Category

Playing movies with an alpha channel on the iPad

May 16th, 2012 21 comments

We hit a snag the other day with the development of Tiny Ears. Our strategy for saving space on the animation in each storybook was to create lightweight movies to play at the appropriate moments, rather than implementing large quantities of frame animation with all of the assets involved (now including retina support for the new iPad). To save further space we decided to animate only the things that actually moved, by creating video animations with an alpha channel so that the background shows through the video.

Problem was, when it came to implementing this strategy on the device itself, the transparent .mov file rendered with a black background. Why? AVPlayer and MPMoviePlayer do not have alpha channel support. Desperate not to change our strategy and recreate our animations complete with backgrounds, or return to frame animation, I spent some time researching a possible solution on the Internet. After a day of looking, the best I had come up with was a suggestion to use OpenGL to sample the video as it played and turn certain colours transparent, and I was ready to give it up for lost.

Then, at last, I came across AVAnimator. Hidden in the depths of Google lay this single-page site detailing a wonderful library that seems to do pretty much everything you would want to do with movies on iOS but can’t. There is little documentation and that single page is the only information that exists, but it was enough. Here was a native movie player for iOS with alpha channel support (and a lot more besides, but we won’t go into that now).

The code itself was really simple to implement, but in order to play the movies you have to do a little bit of transformation first to turn them into the .mvid format that AVAnimator requires. The tools you need are qtexportani and QTFileParser.

Unpack qtexportani. Open a terminal in that location and type the following:

./qtexportani <name>.mov

This will create you a file in the same directory called export_<name>.mov.

Now unzip QTFile112.zip. Go into QTFileParser and open the XCode project. Build & archive the app and select Distribute. Select Save Built Products and choose somewhere to save it. Then, with the terminal in the same location as the app you just built, run the following command:

./qtaniframes -mvid export_<name>.mov

This will save you a file called export_<name>.mvid.

At this point, don’t be afraid of the fact that your new .mvid file is substantially larger than the original .mov. We’re gonna 7Zip it to make it nice and small. The nice thing about AVAnimator is that you can 7Zip all of your .mvid media into one archive and use that in your app, giving all of your media a delightfully small footprint. I’m not gonna tell you how to 7Zip your files – you’re geeks, you should be able to handle that on your own. But at the end of it you should have something like <name>.7z that contains all of your .mvid media.
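(If you do want a starting point, and assuming you have the p7zip command-line tools installed, it should be something along the lines of:

7za a media.7z export_<name>.mvid

listing, or globbing, all of your .mvid files at the end.)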

Now comes the fun bit. From the AVAnimator site I could not find a download of just the source files, but you can grab them by downloading any of the example projects linked from the website. I grabbed StreetFighter cos that was the example app that did exactly what I wanted.

So, in your xcodeproj, import all of the files in the AVAnimator folder that you will find in your downloaded project. You will also need to import all the files inside the folder called LZMASDK. Then, in the UIViewController where you want to play your animation, add the following code:
// create the animator view
AVAnimatorView *animationView = [AVAnimatorView aVAnimatorViewWithFrame:CGRectMake(0, 0, 1024, 768)];

// create a new object to store your media in
AVAnimatorMedia *media = [AVAnimatorMedia aVAnimatorMedia];

// create a 7Zip resource loader
AV7zAppResourceLoader *resLoader = [AV7zAppResourceLoader aV7zAppResourceLoader];

// tell the resource loader the name of your 7Zip archive, and the name of the media file inside it that you want to play
NSString *animationFilename = @"export_video.mvid";
resLoader.archiveFilename = @"media.7z";
resLoader.movieFilename = animationFilename;
resLoader.outPath = [AVFileUtil getTmpDirPath:animationFilename];

// tell the media holder which resource loader to use
media.resourceLoader = resLoader;

// create the decoder that will generate frames from the Quicktime Animation encoded data
AVMvidFrameDecoder *frameDecoder = [AVMvidFrameDecoder aVMvidFrameDecoder];
media.frameDecoder = frameDecoder;

media.animatorFrameDuration = AVAnimator30FPS; // this is a constant I made for the frame rate

[media prepareToAnimate];

// request to be notified when the movie has finished playing
[[NSNotificationCenter defaultCenter] addObserver:self
                                          selector:@selector(animatorDoneNotification:)
                                              name:AVAnimatorDoneNotification
                                            object:media];

// you have to add the AVAnimatorView to the superview before you can attach the AVAnimatorMedia
[self.view addSubview:animationView];
[animationView attachMedia:media];

// play the movie
[media startAnimator];
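The snippet registers for AVAnimatorDoneNotification but doesn’t show the handler itself. As a hedged sketch (the method name just has to match the selector registered above, and the clean-up here is only illustrative), it could look something like:

// hedged sketch, not from the original snippet: tidy up once playback finishes
- (void)animatorDoneNotification:(NSNotification *)notification {
    AVAnimatorMedia *doneMedia = (AVAnimatorMedia *)notification.object;

    // stop listening for this media object
    [[NSNotificationCenter defaultCenter] removeObserver:self
                                                    name:AVAnimatorDoneNotification
                                                  object:doneMedia];

    // remove the animation view from the page here if you kept a reference to it
    // in a property (in the snippet above it is only a local variable)
}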
And that’s it. Everything you need to play a movie animation while preserving the alpha channel, giving you a transparent animation in a few short lines of code. Thank you AVAnimator, you wonderful thing you. It’s astounding that more people don’t know you exist.

Tiny Ears Update

January 25th, 2012 No comments

Well, the first week of Tiny Ears here in Chile has gone pretty well. We’ve had a meeting with Professor Nestor Becerra-Yoma and his team at the Speech Processing and Transmission Laboratory at Universidad de Chile. Their software seems to meet our needs well, and we are now trying to come up with a deal whereby we can use it whilst we find funding to allow them to enhance it with all of the extra bits that Tiny Ears requires.

There will, no doubt, be a post to come in which I detail all the fun and games to be had trying to find funding here in Chile, but I think what is there presently will be enough to provide a good enough prototype with which to secure funding. Yet again I am wishing that I had been able to find a co-founder to help me with these business matters, as I would far rather just concentrate on building the product and leave these other considerations to someone else, but I guess that’s what a startup entrepreneur is: someone willing to dirty their hands with all the jobs that need doing.

On that basis, and with that weight lifted from my mind, I am ready to start working on the storybook itself. I will be getting in touch with the various loosely organised Tiny Ears members who have agreed to work on the project and try to get everyone together on a Skype call to discuss what needs to be done. Now that the speech recognition part is at least partially resolved, all the team members I had been putting off getting overly involved, due to the uncertainty over the outcome of the meeting with the Universidad de Chile, can finally get properly stuck in. I really feel that Tiny Ears can start to move forward with purpose now.

Creating an Intelligent, Interactive Storybook for Kids

October 11th, 2011 No comments

Apparently, 70% of all parents with iPads let their kids use them. I’ve seen kids as young as 2 pick an iPad up and start using it with little or no direction. It seems that with this tablet there is a device that kids can use, regardless of their current educational state or coordination. iPad developers seem to have caught on to this and the kids’ app market is a steadily growing one, with educational and gaming apps appearing all the time.

I first encountered kids using iPads when I spent time with Ian’s sister and family in Canada earlier this year. I’d taken my iPad with me and downloaded a bunch of kids’ games for Olivia (3) and Chloe (1) to play with. One of the first things I noticed, however, was the poor quality of many of the apps, even those with big names like Peppa Pig behind them. Storybooks especially seemed to be nothing more than a digital version of the analogue book, with the added disadvantage of not being chewable or usable for hitting your little sister with.

I also looked at how the girls interacted with the educational apps that I downloaded. Often they were not engaging enough to retain interest for more than a minute or two at a time, or did not offer enough incentive to continue when things got hard. Apart from one game, a memory card game, the average time that Olivia engaged with an app was about 2 minutes. I started to wonder if I could do better and came up with the idea for an interactive storybook that helped kids between the ages of 4 and 7 learn to read.

I saw two problems with trying to create an educational storybook app: one of entertainment and one of feedback. While watching Olivia use the educational apps, I noticed that her engagement with an app increased if an adult was there with her as she used it, giving constant feedback and assistance when she needed it. However, the adults were not always available to give her the attention she needed, and so she would put the iPad down and do something else instead.

I wondered if we could create the sense of feedback and support that an adult would normally provide by using Speech Recognition to listen to the child as they read the storybook and provide the feedback, encouragement, rewards and assistance at the moment it is required. We could use face detection on the front-facing camera on the iPad 2 to determine whether or not the child was reading the book, and use that as a guide to whether to act on or ignore what the app heard. Combine this with fun interactions, animations, multiple paths through the story and embedded games, and I felt we could create an engaging, interactive learning experience.
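As a rough illustration (this is not code from the prototype, and capturedImage is just an assumed UIImage grabbed from the front camera), the face detection side could be handled with Core Image’s CIDetector, available from iOS 5:

// hedged sketch: decide whether anyone is in front of the iPad before acting on audio
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyLow
                                                                                  forKey:CIDetectorAccuracy]];
CIImage *frame = [[CIImage alloc] initWithCGImage:capturedImage.CGImage];
NSArray *faces = [faceDetector featuresInImage:frame];

// only treat what the microphone hears as reading if a face is present
BOOL childIsPresent = ([faces count] > 0);

If nobody is detected for a while, the app could simply stop feeding audio to the recogniser rather than mis-scoring background chatter.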

Working with the Strong Steam team we are creating the Speech Recognition and Face Detection technology to use within the app and are in the process of putting together a prototype app. We’re looking for an early years learning professional with an interest in technology to help us out with how we make the app as engaging as possible while retaining the educational element. If you are one, or you know of one who might be interested, please do get in touch.

Radical Robot is off to Santiago for Startup Chile

October 7th, 2011 No comments

Two weeks ago, Radical Robot received some fantastic news – we got accepted into the Startup Chile accelerator programme. This is very exciting, not only as it means that we get to spend 6 months in beautiful Chile, but it also means that we get a period of funded client-free time to work on our fantastic new product – an interactive educational children’s iPad storybook with speech recognition.

Since Radical Robot started a year ago we have become a successful mobile application agency, but we have always wanted to be more of a product house than a client-led agency. Social Ties was our first attempt at developing a product, but although we are very pleased with the app and are happy with our downloads since it went live in both the Android and iTunes app stores, we have realised that monetization of the app would lead us down a path that we were not that interested in. This realisation has allowed us to pursue a number of other product ideas, and we are happy that our favourite idea has been accepted into Startup Chile.

The idea is to create an interactive storybook for children aged 4-7, kids just learning to read sentences. This app will utilise Speech Recognition to listen to the child as they read the story out loud, enabling focussed feedback, assistance and support at the moment it is required. Along with story animations, positive feedback, reward systems and embedded games, this creates an educational and entertaining tool for early years language learning. The speech recognition can be disabled to provide a fun experience that parent and child can enjoy together, or left on to be used as a self directed learning tool for when mum and dad are busy.

Right now, the aim is to concentrate on developing and perfecting the speech recognition technology. We are using OpenEars / Pocket Sphinx as a basis and training it to deal with kids’ voices. Then we shall be looking for a story to form the basis of our first app. We would dearly love to meet someone with speech recognition experience to join our AI team as a consultant, and learning professionals who would like to get involved in creating this exciting, cutting edge learning experience. If you are interested, please contact me at Emily@radicalrobot.co.uk, on Twitter @fluffyemily or call on 07768646287.

Also, we need to come up with a cool name for this new product. If you have any suggestions, please do let me know via the email address or twitter handle above. We’re very excited here and are looking forward to our adventure and getting stuck into this fascinating challenge.

Adventures at Open MIC 8

November 7th, 2010 No comments

A couple of weeks ago, a friend of mine in Bath, Julian Cheal, pointed a friend of his, Chris Book, my way on Twitter. Chris runs OpenMIC (Mobile Innovation Camp) and was looking for Brighton-based mobile developers to meet up with to promote the upcoming OpenMIC 8, held at The Grand Hotel in Brighton on 4th November.

Naturally I invited him down to Brighton to attend the monthly meeting of the Brighton iPhone Creators group and get royally drunk with some fellow mobile devs in The Basketmakers, and equally naturally Chris agreed.

The next day, slightly worse for wear, Chris and I met for a coffee to chat about life, business and mobile development. During this time I mentioned that I was currently investigating PhoneGap and was intrigued by Titanium as alternatives to native development. I knew very little about them at the time and wanted to learn more. Obviously whatever I said must have inspired Chris, for later that day he asked whether I would like to present the results of my investigations at OpenMIC. After some consideration I decided that I would be a fool to pass up the opportunity to speak at a conference for the first time and accepted. I then realised that I had less than two weeks to pull everything together.

In reality, I had even less time than that. With other work commitments and the existing commitment of talking at the Five Pound App that Tuesday to present an app that Ian and I had been working on, it ended up boiling down to one full day and 2 evenings. I had decided to write the same app twice, once in each technology, and compare the results. I wrote the Titanium app on the Sunday afternoon, the PhoneGap app on the Monday evening after a full day’s work, and the presentation on the Wednesday.

Luckily for me it all came together (quite late on Wednesday night) and Ian and I rocked up on the Thursday morning at The Grand raring to go. There were 4 talks in the morning session, followed by a developer panel, then after lunch 2 ‘barcamp’ style sessions before the evening’s socialising (partly funded by Microsoft).

Up first was Tom Hume of Future Platforms talking about how he felt that using web technologies in mobile app dev was the future. He explained how HTML and CSS were mature technologies that had already solved a lot of the UI problems that were still incredibly difficult to achieve when developing natively and proposed that mixing the two, using HTML where most suitable, and native when tackling more difficult problems, was his ideal.

Then came Stuart Scott, CEO of infohand, who had a lot of commercial experience and knowledge in the mobile world. He talked about the economics of mobile experiences and of developing mobile applications, and how using the web in mobile could perhaps help reduce the costs of creating engaging applications.

Next up was Mike Ormond, a developer evangelist at Microsoft. Mike showed us Windows Phone 7, a handset and operating system that had so far gone unmentioned in the talk of cross platform development. Windows Phone 7 looks fantastic, and his argument was that in order to keep the experience fantastic you had to develop specifically for the device, and that meant natively. Despite the talk being centered purely around WP7, I felt that the point stood in relation to all devices.

Finally it was my turn. Mine was the only purely technical talk, showing code. I felt rather embarrassed when we got to the first page of code and discovered that the colours had washed out totally on the projector, making it very hard to read what it said, but subsequent feedback was that this made me explain the code in a level of detail that I might otherwise have skipped, and so made the talk more useful. I learned a valuable lesson about displaying code snippets on projectors. You can view my presentation on SlideShare, download the code from GitHub and read more about it in this blog post.

I got asked to stay up on ‘stage’ for the developer panel, which I gladly did. The panel was asked to explain why a client should pay multiple times for an application written natively for many devices, rather than paying once for a mobile website to cover all devices. The general conclusion was that it all came down to that last 5% of polish that makes a good mobile app a fantastic mobile app, and that last 5% is far harder to achieve using the mobile web than with native apps.

Somehow I got volunteered to run a barcamp session about PhoneGap and Titanium in the afternoon, and also managed to get a good play with a Galaxy Tab that one of the delegates brought along. After that it was beers and more beers and then curry with beers. All of the delegates that I met were fantastic. So many knowledgeable and friendly people, and I had lots and lots of fun.

I am now trying to organize my time so that I can make it to the next openMIC on 2nd December in Oxford. Chris, Pinar and co have developed a great conference and I highly recommend that you find the time to go if one happens to pop up in your area in future, or make the effort to attend one somewhere else in the country.

Experiments in PhoneGap and Titanium

November 7th, 2010 6 comments

I talked recently at OpenMIC8 at the Grand Hotel on 4th November 2010 and the theme was native vs web. I decided to talk about the investigations I had recently been doing in cross platform development tools such as Titanium and PhoneGap.

I decided to write a simple app twice, once in each technology, replicating the first proper iPhone app that I ever wrote. This was a simple app that I wrote for Ian for a presentation he was giving about the uses of Artificial Intelligence in the modern world. The app used a webservice he had created that used Optical Character Recognition to read text in images and return the transcribed text. This meant that the app needed to do 4 main things (a rough sketch of the upload step follows the list):

  1. Take a photo with the camera (or choose from library)
  2. Upload an image to an external API
  3. Parse JSON response from API and display
  4. Play streaming Audio of transcribed text
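Of those four steps, it was the image upload that caused the most pain later on. In the native version the image went up as a file part in a multipart POST, roughly like the sketch below (the endpoint URL and field names here are placeholders, not the real webservice):

// hedged sketch of the native upload; 'photo' is the UIImage from the camera or library
NSString *boundary = @"----RadicalRobotFormBoundary";
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/ocr"]];
[request setHTTPMethod:@"POST"];
[request setValue:[NSString stringWithFormat:@"multipart/form-data; boundary=%@", boundary]
 forHTTPHeaderField:@"Content-Type"];

NSMutableData *body = [NSMutableData data];
[body appendData:[[NSString stringWithFormat:@"--%@\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:[@"Content-Disposition: form-data; name=\"image\"; filename=\"photo.jpg\"\r\n" dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:[@"Content-Type: image/jpeg\r\n\r\n" dataUsingEncoding:NSUTF8StringEncoding]];
[body appendData:UIImageJPEGRepresentation(photo, 0.8)];
[body appendData:[[NSString stringWithFormat:@"\r\n--%@--\r\n", boundary] dataUsingEncoding:NSUTF8StringEncoding]];
[request setHTTPBody:body];

[NSURLConnection connectionWithRequest:request delegate:self];   // response handled in the delegate

Sending the raw bytes as a named file part like this is exactly what the PhoneGap examples of the time didn’t do, which is where the trouble described further down came from.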

As the first native iPhone app that I had written, only having implemented tutorial examples before, this app took me approx 14 hours to create. At this point all I had done in PhoneGap was some toy examples and I had not got further than downloading and installing Titanium. I thought this would make a good comparison point for the two techniques.

PhoneGap is the most versatile of the cross platform techniques. You write HTML and CSS which gets executed from inside an app using WebKit; it is basically a mobile website embedded inside your app. PhoneGap works on iOS, Android, Symbian, BlackBerry and Palm devices, but you must download and install each platform’s SDK and development environment, and then integrate PhoneGap with each. You then build and deploy your app inside the platform build environment. However, you can pretty much write your mobile site inside your web dev environment of choice and then just import it into the platform environments when it’s done. Then it’s just a case of making small, platform-specific changes (there are small differences in the availability of methods on each platform).

Titanium is currently only available for iOS and Android, although there is a Blackberry version in beta. You download Titanium and it installs Titanium Developer, a project creation and execution environment. You then use your own favourite editor to write Javascript. This Javascript again runs within WebKit, but instead of providing an HTML interface, it creates and uses native components, giving your app the look, feel and more of the speed of a native app. From within Titanium Developer you can then execute your app on iPhone devices or simulator, and Android devices or emulator, at the touch of a button. You can also either use Titanium Developer to deploy and package your app for the store, or generate a platform-specific project to import into the platform development environment for packaging.

With both PhoneGap and Titanium, you can write your own native functionality and provide an interface to call from Javascript inside the non-native part of your app.

I started reading Titanium’s documentation, hungover, at 1pm on a Sunday afternoon. I found the documentation pretty good and it was very easy to find examples of everything that I needed to do. The Kitchen Sink app that you can download from GitHub contains a demonstration of all the features of Titanium and a couple of quick Google searches gave me everything else I needed.

By 6pm I had the main bones of the app, but another couple of hours (including time to create some images) gave me the final thing. The whole app is in one file, app.js. Despite not knowing a lot of javascript I found the whole experience easy and fairly enjoyable.

You can view the code here.

On Monday evening, at about 6pm, I started on the PhoneGap app. I had already done most of the examples that I could find, so there was no need for the initial documentation reading that I had needed to go through for Titanium.

At this point I need to point out that I do not do web programming. I know no CSS and have very little HTML experience and no HTML5 knowledge. In fact, I have avoided as much front end web programming as I could throughout my whole career. I chose to use jQuery Mobile as I’d looked at it previously and found that it provided some very nice looking components which massively reduced the need for me to know any CSS at all. For me this was hugely important. It enabled me to get to the point of having most of my app within 2.5 hours of starting. It did, however, look utterly appalling. For this I needed to recruit some help from a friend of mine who knew CSS. I spent 30 minutes with him on Tuesday and we got it all looking fine.

However, it took me an awful lot longer than 3 hours to complete the app. It took 4 hours and 3 different people to solve the biggest problem I had – uploading the image to the webservice. In both the Objective C and Titanium versions this had caused only small, easily solvable problems. However, all of the upload examples for PhoneGap used code that attached the image data as a URL query parameter to the POST. The webservice, however, needed the image attached as a file on a multipart message. Constructing the multipart header was no problem, but it took quite some time to discover that the message was being posted base64 encoded. The webservice only returned an HTTP Status 500, so we had to debug for some time before we figured out what was wrong. However, even when I used a Javascript base64 decode function before posting, the webservice still failed to understand the message.

In the end we created a new webservice that did the base64 decode after posting, but this was not an ideal solution. I have since (with help from Keiran Gutteridge) worked out how to add my own PhoneGap function to send the image base64 decoded, so I can use the original webservice. This headache sadly took up most of Monday night and part of Tuesday and Wednesday, and took 3 people. PhoneGap total: 8 hours & 4 people.

You can view the PhoneGap code here

In conclusion, writing cross platform code is far quicker and simpler than writing native. The results, for very simple apps, are almost as good. Titanium, building native components, gives a device experience almost as responsive as if you had written it natively. If I had been an experienced web developer, the look and feel of the PhoneGap app would have been almost as good as the real thing, and if I were to do PhoneGap development in future, I would have to ensure I had a CSS and HTML expert to handle the creation of CSS transitions and beautiful interfaces. Even then, the responsiveness of the result is somehow not quite good enough. It’s just slightly laggy and the transitions just don’t look perfect. If your app relies heavily on that last 5% of wow factor to set it apart, I would still choose native.

Also, even though Titanium gives access to far larger parts of the device than PhoneGap, to do really complex functionality you still need to write it yourself natively and then you need to wire it in to your chosen non-native environment. In my opinion you would need a really good reason not to go native, but if that reason was there, then I would choose Titanium. If I were an experienced web dev, perhaps I would have more time for PhoneGap, but for me it’s not really an option. I will probably use Titanium again, for rapid prototyping, but would write natively once the prototype was finalised.

Open Plaques iPhone App now in Store

October 15th, 2010 No comments

Well, it’s done. The requisite number of months have passed and I am now officially a freelancer. I am also now officially a published iPhone app developer as, on October 5th 2010, the Open Plaques iPhone app got approved and went live. You can download the app from iTunes or directly from your phone. You can also find out more about the app from this page.

Rather disappointingly it was not in the app store in time for the Open Plaques Open Day on the 25th September, which was the original goal, but I have learned a lot about the app store approval process in the meantime.

It took me three submissions to get the app into the store, and I thought it might be worth documenting the rejection reasons here. The first rejection happened because the app crashed in iOS 4.1. Fair enough, you might think, and yet I submitted the app 6 days before 4.1 was released and so was rather annoyed to find it rejected for that reason. A lesson learned there: if you are submitting around the time of an iOS release, submit your app compiled against the beta version. Even more annoyingly, all I had to do was recompile the app with the new API and resubmit, so it’s not as if I had actually done anything wrong.

The second rejection was for a more justified reason, I suppose. Google Maps has a logo at the bottom of the map, and you, as a developer, are obliged to ensure that this logo is displayed at all times, otherwise the T&Cs that Apple have with Google are violated. My map was slightly too big, causing the logo not to appear on the page. This required a 2-minute change and resubmission. Rather delightfully, the app was accepted on the third submission.
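For anyone hitting the same thing: the fix amounts to making sure the map view’s frame stays inside its superview’s bounds so the attribution logo in the corner is never pushed off screen. A sketch, assuming the map is kept in a mapView property (the names are illustrative, not from my project):

// keep the map within the visible area so the Google logo stays on screen
self.mapView.frame = self.view.bounds;
self.mapView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;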

I am currently working on the first update to Open Plaques. The update consists only of performance improvements and fixes for a couple of small memory issues in the first version. I do also have a loose roadmap for future versions. Some of the items below are implementable now; others are awaiting the creation of a suitable API at openplaques.org.

  • Display plaques in an alternative table view, each entry showing plaque description, location, distance from the current location, and the direction you should walk in to find it. Clicking on the table row will take you to the Plaque Details screen.
  • There is often more than one image for each plaque in the Flickr pool. If there is more than one, all images will be displayed in a swipeable view on the Plaque Details screen.
  • Allowing the user to take a photo of any plaques found from within the app itself. This photo will then be uploaded to the openplaques Flickr group, machine-tagged with the id of the plaque so that openplaques.org can pick it up.
  • Adding search, to be able to view plaques in other locations around the world, or by subject, organization, name or role.
  • Displaying plaques based on the current view within the map, rather than just on the user’s current location.
  • Allowing the user to upload plaques currently not in the app directly to openplaques.org.
  • The ability to access further information about the subject of a plaque from the Plaque Details screen.

If you have any other suggestions or feature requests for what you would like to see in the app, please do contact me either here, email me, or raise a feature request on the GitHub site.


What to build? A dilemma

July 2nd, 2010 7 comments

How do you choose which of your many ideas to pursue? Do you choose the one with the interesting technologies that you want to explore, or do you go for the one that you think would be the most useful?

I am currently suffering from this dilemma in relation to which app I concentrate on as my ‘signature’ app.  If you’ve got time to build one app, in this case an iPhone app, to demonstrate your skill as a programmer to future potential clients, how do you decide what to do? I have 3 choices in this regard:

  1. Hikers’ footpath planner. A really complex and difficult app that will take a long time to put together but that demonstrates all my skill as a back end developer while also showing an ability to do some nice things with the front end. This app, although sounding ideal, is a risk. I have limited time and the app is complex enough that it might not meet the App Store requirements on stability within the time frame and, with my current dev time limited to evenings and weekends, might not be completed to its final, most impressive stage by the time my (admittedly self-imposed) deadline comes around. This app has, I believe, a wide audience, but only if done exactly right.
  2. OCR menu reader and translator. An app that allows me to explore some really interesting technologies and demonstrate some skills with both backend and front end work. The app, however, would be trying to compete directly with another app, Google Goggles. Do I really want to develop, in my spare time, an app that could potentially be utterly eclipsed by a company with millions of dollars to spend on getting it right?
  3. Open Plaques. An app that demonstrates some back and front end skills and that solves a specific purpose for a group of passionate individuals. This app would provide a service to this group of people that would not only ease the lives of the owners and developers of the Open Plaques site but also make it much easier for users of the site to add content. It would also use some of the technologies that would be used in the OCR menu reader.

So I guess the question is, is it better to make something that is popular and useful, or something that shows off my skills but has an uncertain audience or competes against a much more powerful opponent? I can’t help thinking that the popular app, if I add lots of bells and whistles, would be the way to go. I can always work on the hikers’ app at a later date, for if nothing else I want one even if it never makes it as far as the app store. Maybe I should just take a well learned lesson from Software Development: KISS (Keep It Simple, Stupid). If anyone has any opinions, I would really like to hear them.

Objective Flickr on the iPhone

June 27th, 2010 4 comments

I have been working towards building my very first iPhone app, and one of the challenges was to upload some images to Flickr. After much failure to create the signature that the Flickr API requires to authorise requests, I was pointed towards ObjectiveFlickr, an Objective C library designed to ease the pain of working with the Flickr API. This is a fantastic tool and has saved me endless headaches; however, getting the library to work when in Device mode in XCode is a right pain in the butt. In order to ease any development pains that others might experience when using this library, I will now detail what I did to get everything working.
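To give a flavour of what the library saves you from, an authenticated upload boils down to something like the sketch below. The method names are from my reading of the ObjectiveFlickr headers, and the key, secret and auth token are placeholders, so treat it as an approximation and check the sample project that ships with the library:

// hedged sketch: API key/secret are placeholders, and storedAuthToken must come from Flickr's auth flow
OFFlickrAPIContext *flickrContext = [[OFFlickrAPIContext alloc] initWithAPIKey:@"YOUR_API_KEY"
                                                                  sharedSecret:@"YOUR_SHARED_SECRET"];
flickrContext.authToken = storedAuthToken;

OFFlickrAPIRequest *flickrRequest = [[OFFlickrAPIRequest alloc] initWithAPIContext:flickrContext];
flickrRequest.delegate = self;   // progress and completion arrive via the request delegate callbacks

NSInputStream *imageStream = [NSInputStream inputStreamWithData:UIImageJPEGRepresentation(plaquePhoto, 0.8)];
[flickrRequest uploadImageStream:imageStream
               suggestedFilename:@"plaque.jpg"
                        MIMEType:@"image/jpeg"
                       arguments:[NSDictionary dictionaryWithObject:@"0" forKey:@"is_public"]];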

After downloading the ObjectiveFlickr library from GitHub, I followed the instructions for adding the library to an iPhone project as detailed in the documentation. I was most distressed when, after following the install instructions word for word, my app would not build. To check I was doing things correctly, I created a new project just to play around and test ObjectiveFlickr. Here the project built just fine and I could use the library with no problems. I went back to my original project, removed ObjectiveFlickr and reinstalled. Again, compilation errors.

"No architectures to compile for (ARCHS=x86_64, VALID_ARCHS=i386)."

As an iPhone newbie, the compilation error did not immediately raise any warning flags, so I hit our old friend Google and tried to find out more about it. After a bit of digging I found that this was actually the only bug raised against ObjectiveFlickr in the GitHub project. This is a problem that only raises its ugly head when building ObjectiveFlickr against the device. The reason my test project worked was that it was building against the simulator. If I changed my target to the simulator then the project compiled. However, I was using the camera in my app, which does not work in the simulator, so I needed it to work against the device. The workaround from the ObjectiveFlickr site was this:

Build the objectiveflickr.xcodeproj project with changed settings "Architectures : Standard(32-bit Universal)" and then rebuild the project using objectiveflicr.

Turns out it’s not quite as simple as all that, so here is what I had to do, step by step, to get ObjectiveFlickr working in XCode when building against the iPhone device SDK 3.1.3:

  1. Open objectiveflickr.xcodeproj in XCode
  2. Go to Project->Edit Project Settings and select the Build tab
  3. Change the Base SDK to iPhone Device 3.1.3
  4. Change the Architecture to ${ARCHS_STANDARD_32_64_BIT}
  5. Change the C/C++ compiler version to GCC 4.2
  6. Rebuild ObjectiveFlickr
  7. Return to your project
  8. Rebuild your project (probably best to Clean All Targets first, just to be on the safe side)

If ObjectiveFlickr does not compile after setting the Base SDK to iPhone Device 3.1.3, return the Base SDK to Current Mac OS. Changing the compiler version from the LLVM compiler to the GCC compiler was necessary because, once I had recompiled ObjectiveFlickr against the iPhone libraries, recompiling my project produced a complaint that the LLVM compiler was not available.

error: can't exec '/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/llvm-gcc-4.2' (No such file or directory)

Turns out that for iPhone SDKs 3.1.3 and below at least, the LLVM compiler is not available for the device. It is available for the simulator though, which is annoying to say the least.

This has highlighted one very annoying thing that I was not aware of before: things that compile, build and deploy successfully on the simulator may not compile, build and deploy successfully on the device. Why Apple have not created the simulator to exactly simulate behaviour on the device is beyond me, but now that I am aware of it I shall be careful not to be caught out like this again.